The advent of electronic health records has greatly benefited the healthcare industry in both the United States and Canada. With wider access to patients' medical files, physicians can call up critical information during a medical emergency and share those records with fellow doctors if a patient needs to seek treatment at another hospital. However, like any other digital medium, the move to a networked database has resulted in massive cybersecurity concerns as healthcare administrators struggle to protect sensitive patient data from being accessed by hackers.

One of the most effective drivers of cybersecurity best-practice adoption in the healthcare field has been the implementation of federal regulations stipulating what measures hospital personnel should be taking to secure patient data. In the United States, passage of the Health Insurance Portability and Accountability Act has influenced numerous medical facilities to bolster their network defenses. If an administration fails to do so, it may receive a substantial fine from the federal government. U.S. officials want to make those regulations even more stringent by shortening the window within which a facility can report a data breach. The U.S. Department of Health & Human Services recently proposed that the federally facilitated exchanges created through the Affordable Care Act, along with any organizations working in conjunction with them, should be given no more than an hour to report a data breach once it has been discovered.

The need for greater governance

The proposal calls for greater cybersecurity governance from members of the healthcare community. As has been witnessed on multiple occasions, failing to establish a culture of threat awareness and to ensure that employees adhere to data security best practices can have serious consequences. For instance, British Columbia's Health Ministry was reprimanded for failing to have the necessary controls and defenses in place to properly secure patient information. According to then-health minister Margaret McDiarmid, several ministry employees were discovered to have accessed millions of sensitive medical records and handed them over to contracted researchers last year. A report on the incidents released by British Columbia Privacy Commissioner Elizabeth Denham stated that the ministry lacked suitable security and privacy measures as outlined by Section 30 of the Freedom of Information and Protection of Privacy Act.

In addition to fostering cybersecurity governance, medical officials should ensure that they deploy a comprehensive suite of applications to address a range of potential threats. For example, application whitelisting software can be leveraged to prevent unknown and potentially malicious programs from running on a hospital workstation. These whitelisting utilities can block threats such as zero-day viruses from accessing the hospital servers that house sensitive medical information.
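To make the whitelisting idea concrete, here is a minimal, hypothetical sketch of a hash-based allowlist check; the file path and the allowlist contents are invented for illustration, and this is not a description of any particular vendor's product.

```python
import hashlib

# Hypothetical allowlist of SHA-256 hashes for approved executables.
# In a real deployment this list would be centrally managed and protected.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder (hash of an empty file)
}

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_approved(path: str) -> bool:
    """Allow execution only if the file's hash is on the allowlist."""
    return sha256_of(path) in APPROVED_HASHES

if __name__ == "__main__":
    candidate = "C:/apps/unknown_tool.exe"  # made-up path
    print("allowed" if is_approved(candidate) else "blocked")
```

In practice this decision is enforced by the operating system or an endpoint agent rather than a script, but the underlying check is the same: unknown binaries are blocked by default.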
As one of the initial vectors of the first known targeted attack against industrial control systems (Stuxnet), and more recently responsible for carrying an infected download of La La Land into a control room, USB has certainly gotten a bad reputation, leading to exclusive policies, device bans, and hot glue. But the reason so many people use USB devices every single day is because the USB protocol is amazingly beneficial, and condemning its use hurts us far more than it could help us. Instead of pushing USB away, we can control it, secure it, and make USB safe again.

Recognizing the Threat

It's not all about infected files on portable drives. There are USB tools, commercially available, that could cause serious harm to an industrial control system. "Rubber Duckies" and "Bash Bunnies" are great pen testing gadgets, but in the wrong hands they're also powerful hacking tools because they manipulate the USB protocol to purposefully and programmatically misbehave. In industrial environments, where production computers are likely to be more vulnerable and less protected than your average enterprise, this misbehavior can have severe consequences.

These "threats" work because the USB protocol was designed to be universal. To accommodate this, a degree of trust is required to allow USB devices of various types to load appropriate drivers. The problems start when devices announce themselves as things they are not — sometimes maliciously, but more commonly out of some desire for a device manufacturer to add value. Avoiding the technical details, the bottom line is that not all devices are what they seem; and not all devices behave as you might expect.

At the risk of letting our imaginations run wild, here's something else to think about: the firmware that USB devices use to present themselves to a USB host controller can be overwritten with malicious firmware. So, even benign, approved devices, acquired via an official and verified supply chain, can become infected and act badly (unless you only approve verified secure devices with signed firmware, which, in the wake of BadUSB, I highly recommend). It's a plight that still plagues about 50 percent of the USB devices out there, and it is appropriately named "BadUSB." So your mouse might start acting like a keyboard, or a network interface, or a serial connector, or a storage device, or all of the above at once. It's a serious vulnerability that's been known about for years, but there's absolutely no way to detect it.

Understand that the reason USB can be so tricky to defend against is because it's an amazingly sophisticated and flexible protocol. It doesn't mean that you can't secure USB, but it does mean that you're going to have to take a multi-faceted approach. To protect against USB threats, you have to understand how the USB protocol works and cover all vectors. You must think like the USB. This means you need more controls, and better ones. Keep using application whitelisting (AWL). It's a great anti-malware mechanism. Traditional anti-virus (AV) isn't terribly effective on its own anymore, but you should still use AV as well. These tenets of Defense-in-Depth haven't changed, because no single control is infallible. When securing a protocol like USB that is adaptable by design, strong Defense-in-Depth is even more important.
Human Authorization as a Mitigating Control

In our explorations of various USB threats, our Honeywell cybersecurity team had an epiphany: what the USB protocol needs is device authorization, something to ensure that your USB devices are legitimate. In fact, the USB standards are evolving in that very direction, for this very reason. Unfortunately, we can't wait — and if we did, it would be a long wait, because "industrial control systems" and "modern computers" don't typically go together. Instead, we teamed up with Open Systems Resources, experts in Windows driver technology, to develop an authorization approach to securing how USB is used in an industrial enterprise. We call it T.R.U.S.T. – Trusted Response User Substantiation Technology. It works like this:

- First, TRUST gets in the way of the normal USB protocol to quarantine new devices so that they can't connect and cause any harm on their own.
- Next, it determines what the device really is, by observing how the device presents itself and how the host computer responds.
- Then, it presents a Captcha to the human user. This Captcha tells you exactly what the device you are connecting really is, and requires a human response to authorize the device.

All three pieces are important because:

- Once a USB device connects, it's too late. You have to isolate that device first, and in such a way that only a secure service can interact with it.
- You need to interact with a USB device to determine what it is. You can't rely on whatever the device tells you it is, because USB Device Types, Device IDs, Serial Numbers, and other identifiers can easily be spoofed or manipulated.
- You have to be able to break any programmatic attempt by a smart, malicious USB device to circumvent the points noted above. Requiring a conscious authorization from an Administrative user is a sure-fire approach that has been proven extremely effective in other areas of privacy and security.

Used together with another modern industrial control system (ICS) security technique — cloud threat intelligence — your USB protection can also be automated and teach itself what to look for. But that's another technique we can explore further in another article.

I hope this article helps you once again benefit from the Universal Serial Bus — the protocol that freed us from the floppy drive, untethered us from a tangle of proprietary interfaces, and saved us from insufficient storage. USB is truly designed to be universal, and its success at this goal has made it ubiquitous, convenient and cost effective. Pairing it with defense-in-depth and cutting-edge human authorization techniques can help reduce the risks of USB exploitation for safe use, instead of sticking hot glue into your USB ports.

Eric Knapp is Chief Engineer and Director of the Strategic Innovation Group, Honeywell Industrial Cyber Security.
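The same default-deny-then-authorize idea can be approximated on an ordinary Linux host through the kernel's USB authorization interface in sysfs. The sketch below is only an illustration of that concept, assuming root access; it is not Honeywell's TRUST product, and the device strings it prints are still self-reported by the device, which is exactly why a human, out-of-band confirmation step matters.

```python
import os
import glob

SYSFS_USB = "/sys/bus/usb/devices"

def deny_new_devices_by_default():
    """Tell each USB host controller to leave newly attached devices unauthorized."""
    for ctrl in glob.glob(os.path.join(SYSFS_USB, "usb*")):
        with open(os.path.join(ctrl, "authorized_default"), "w") as f:
            f.write("0")

def read_attr(dev_path, name):
    try:
        with open(os.path.join(dev_path, name)) as f:
            return f.read().strip()
    except OSError:
        return "?"

def review_and_authorize():
    """Show what each quarantined device claims to be and ask a human to approve it."""
    for dev in glob.glob(os.path.join(SYSFS_USB, "[0-9]*")):
        if ":" in os.path.basename(dev):
            continue  # skip interface entries, keep whole devices
        auth_file = os.path.join(dev, "authorized")
        if not os.path.exists(auth_file) or read_attr(dev, "authorized") == "1":
            continue  # missing attribute or already allowed
        desc = "{} {} (VID:PID {}:{})".format(
            read_attr(dev, "manufacturer"), read_attr(dev, "product"),
            read_attr(dev, "idVendor"), read_attr(dev, "idProduct"))
        if input("Authorize " + desc + "? [y/N] ").lower() == "y":
            with open(auth_file, "w") as f:
                f.write("1")

if __name__ == "__main__":
    deny_new_devices_by_default()
    review_and_authorize()
```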
FTTX – Fiber to the X: Explained

September 25, 2018

FTTx stands for Fiber to the X (where X represents a particular name or object, such as 'home' or 'cabinet'). It is a telecommunications network architecture that is used within the local loop (the last section of the provider's network that spans between the end-user premises and the edge of the carrier network) and delivers broadband connections to homes, businesses and organisations all around the world. Many legacy copper-based networks are steadily being replaced with FTTx systems due to the benefits in speed and capacity that come with fiber optic cabling. This article looks at the various types of FTTx systems, how they work and how they differ from one another.

Different types of FTTx architectures

There are two main groups of FTTx architectures, FTTP (Fiber to the Premises) and FTTC (Fiber to the Cabinet). These two groups have multiple sub-groups and architectures.

Fiber to the Premises (FTTP)

With FTTP, an optical fiber is run in an optical distribution network (ODN) from the central office to the subscriber premises. FTTP can be categorized according to where the fiber ends:

FTTH – Fiber to the Home is when the fiber reaches the living or working space. The fiber runs from the central office all the way to the living or working space. The signal may then be conveyed throughout that space using any means, such as twisted pair, coaxial cable, wireless (Wi-Fi for example) or further optical fiber.

FTTB – Fiber to the Building is where the fiber reaches the subscriber's building, which may have multiple living or working spaces that are being served by the network provider. The optical fiber terminates before reaching the individual living or working space, and the signal is conveyed the final distance by any means.

Although FTTH and FTTB are the two main subsets of the FTTP architectures, there are also more which are fairly self-explanatory, including:
- FTTD – Fiber to the Desktop
- FTTR – Fiber to the Router
- FTTO – Fiber to the Office
- FTTF – Fiber to the Frontage

Fiber to the Cabinet (FTTC)

FTTC (also sometimes known as Fiber to the Curb or Fiber to the Node) is the other main type of fiber optic deployment. An FTTC system runs through an ODN from the provider's hub through to a central platform or node. Individual customers can then connect to this node using either twisted pair or coaxial cables. FTTC systems are defined as a platform that terminates its fiber optic cables within 1,000 ft of the customer premises. FTTC also has several sub-groups depending on the type of deployment:

FTTN – Fiber to the Neighbourhood is when customers connect to a cabinet that is generally located within a 1 mile radius of each subscriber. These cabinets can connect several hundred subscribers to the network.

FTTdp – Fiber to the Distribution Point is very similar to FTTC/FTTN but is located one step closer, moving the fiber to within meters of the boundary of the customer's premises, to a junction box known as the distribution point. This allows for near-gigabit speeds.

FTTx technology will continue to be implemented within telecommunication networks as more operators replace older copper infrastructure with fiber-based systems. Subscribers will see the benefit of these infrastructure improvements with enhanced speeds and greater connectivity.

Carritech supply, repair and support a range of telecommunication network products for both operators and end-users around the world.
For whom

Students and professionals with zero or minimal experience of information security and information technology.

What you will learn

The main goal of the course is to increase the level of cybersecurity awareness among students, professionals, teachers and ordinary users, as well as to teach how to protect against the most common cyber-incidents.

Kaspersky Academy Trainer

Main course objectives:
- Learn the basic concepts and fundamental technologies related to cybersecurity
- Gain an understanding of current cyberthreats and their types
- Learn the basic rules of corporate and personal cybersecurity
- Study the methods and techniques for protecting against random and targeted attacks
- Gain insight into the key areas of cybersecurity
- Study current trends in information security
Short Guide to RFID Tag Read Range — Eight FAQs

RFID tag read range is one of the first performance metrics organizations consider when selecting a tag for their new RFID-based business process. But beyond the number, what else do you need to know about tag performance?

Read range shouldn't be the only consideration when selecting an RFID tag. Read range is a vital component of RFID tag capabilities, but it alone cannot determine the quality of a tag's performance. To fully understand if a tag can actually work for a use case, factors like antenna, reader signal/power, mounting material, surrounding material, whether or not a tag is embedded, and more have to be considered. To understand why you need to consider more than just read range for your RFID application, explore the answers to these eight FAQs about read range.

Note: In this blog, we are talking strictly about passive RFID tags.

#1. What Is Read Range?

Read range is the distance from which an RFID tag can be detected. The read range expresses the distance from which the tag receives just enough power to be activated and send back a signal to the reader.

#2. How Is Read Range Determined?

Generally, the manufacturer spec sheet includes RFID read range information. It's important to realize that the manufacturer tests their tag in highly favorable, repeatable laboratory conditions: oriented properly in a low humidity, room temperature area with a high-powered antenna, etc. In other words, the environment is as ideal as it can be, with a fixed set of conditions. This is simply the nature of testing a tag's read range. But when you take the tag into the real world — in a different set of conditions — the tag will naturally perform differently. This is often a problem: if you need a tag that reads at 15 feet, for example, a tag with a 15-foot read range on the spec sheet may not suffice because of material, environmental and other conditions. In the real world, users often have handheld readers with lower power and a smaller antenna than a fixed reader, which means the energy they can send and collect is much smaller – thus, the tag has a shorter read range. So, it is often the case that a tag with a longer maximum read range will be needed than the product spec sheet might lead you to believe.

A shorter read range typically translates to a smaller footprint and lower cost. So, most buyers are inclined to choose the tag that exactly meets spec — but in the real world, it comes up short. It's much better to "round up" with your tag. This is because, in most cases, the real world degrades the performance of the tag in relation to perfect lab conditions.

#3. What Is the Maximum Read Range?

The maximum read range is the longest distance at which the tag will send a detectable response signal under ideal laboratory test conditions, which includes the maximum strength query signal from the reader allowed by regulations. Generally, the bigger the tag, the longer the read range. But there are practical limits to this. At Vizinex RFID (an HID Global company), one of our largest tags is 5.7" x 1.48" x 0.27" and reads at 100 feet. This is the upper limit for read range on commercially available tags at this time. In many applications, though, a tag with a smaller footprint is preferred — for example, instrument tags. For a specific RFID tag, the maximum read range is generally the read range listed on the manufacturer's spec sheet. As mentioned, the tag in real-world conditions will perform below the maximum read range on the spec sheet.
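To give a feel for how maximum read range relates to reader power and tag sensitivity, here is a rough free-space estimate based on the standard Friis link-budget approximation. The numbers (4 W EIRP, a -18 dBm tag chip sensitivity, a small dipole-like tag antenna gain) are illustrative assumptions, not figures from any vendor's spec sheet, and real-world ranges will be lower.

```python
import math

def max_read_range_m(eirp_w=4.0,           # US reader EIRP limit (~4 W / 36 dBm), assumed
                     freq_hz=915e6,         # middle of the US 902-928 MHz band
                     tag_gain=1.64,         # linear gain of a dipole-like tag antenna, assumed
                     chip_sensitivity_dbm=-18.0):  # power needed to wake the tag chip, assumed
    """Free-space (Friis) estimate of the forward-link limited read range."""
    wavelength = 3e8 / freq_hz
    p_threshold_w = 10 ** (chip_sensitivity_dbm / 10) / 1000.0  # dBm -> watts
    # Friis: power at the tag = EIRP * G_tag * (wavelength / (4*pi*r))**2
    return (wavelength / (4 * math.pi)) * math.sqrt(eirp_w * tag_gain / p_threshold_w)

r = max_read_range_m()
print(f"Estimated free-space read range: {r:.1f} m ({r * 3.28:.0f} ft)")
```

Dropping the reader power or using a lower-gain handheld antenna shrinks this figure quickly, which is the point above about lab numbers versus field performance.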
If you're not getting the maximum read range, you should also check what reader power you're using. Reader power is often reduced to help prevent reading unrelated tags that happen to be within range.

#4. How Important Is Read Range to Tag Selection?

It's essential you have enough read range to detect the tags you need to detect in the application. Read range is important to tag selection, but this factor is application specific and should not be relied on by itself. System performance is not exclusively based on read range. To determine the appropriate tag read range for a project, users will need to have an in-depth understanding of the application and the goals of the data collection task. Some experimentation with alternative tags will likely help in making the correct tag choice for the materials and environment in question.

#5. Does Long Read Range Equal Higher Performance?

For RFID tags, higher performance refers to higher reliability of reads. In other words, when an object is in the proximity of the reader, is it detected? Does the reader reliably detect every tag it's supposed to? Read range certainly contributes to this level of reliability. A longer read range means the tag is more powerful and more likely to be picked up. Because of this, many people use read range as a primary RFID performance metric — using it as a proxy for the performance of a tag and a guide to how a tag can be used. While this is helpful information, long read range doesn't always equal higher performance. It depends on the application. For example, in some cases, frequency response is more important to reliability than maximum read range. The environment you're in will affect the readability of the tag in a number of different ways. Longer read range can't always compensate for those other factors. These factors might prevent the tag from being picked up at all, in which case the read range doesn't matter. Read range is one factor that helps maximize the read reliability of a tag, but it doesn't give the complete picture of how well an RFID tag performs.

#6. Is Longer Read Range Always Better?

There is a trade-off between read range, tag footprint and cost. If you want or need more read range, the tag will likely be bigger and, as a result, more expensive. So, what's "better" for your application? It's all about determining an acceptable trade-off and ROI for your application. In some situations, long read range is helpful to identify certain objects within a certain area. However, if other tags with long read ranges are also near that area, they may be unintentionally detected. This issue calls for stray management, which is defined as controlling unrelated tag reads during a project. Detecting unrelated tags during a read can be detrimental and is a disadvantage of having a longer read range.

#7. What Factors Impact Read Range?

There are two main factors that impact read range:

1. Size of the Antenna

The read range is proportional to the amount of energy the tag can collect from the inbound signal. A passive tag has no internal power, so it gathers power to send a response from an incoming query signal. The bigger the antenna, the more power it can collect to send a strong response.

2. Material on Which the Tag Is Mounted

The type of material a tag is mounted on can affect the read range significantly.
Many tags are designed to be mounted on a particular material — a high dielectric plastic, paper or corrugated, metal or other conductive surfaces, hanging in free air, etc. RFID tag design can be engineered either to take advantage of material benefits, or to compensate for deficiencies that a material may cause in the read range process. If the tag is mounted on a material other than the one it was designed for, the read range can be substantially reduced. Users must be careful to choose a tag designed for the surface on which it will be mounted.

#8. What Else Should I Consider Beyond Read Range for Tag Selection?

1. Frequency Response

A tag may have a very long, strong nominal read range for a given application, but the frequency band over which it provides that long range may be extremely narrow. It might respond well in a limited frequency range but perform poorly on other frequency bands. The query signals sent by readers bounce from frequency to frequency within the RFID band (902-928 MHz in the US). Some tags have a very strong response to a very narrow range of query frequencies. They have a 'peaky' response curve, tall and narrow. Other tags have a flatter response curve, and those tags respond equally well to all query signals within the band. In some cases, tags with flatter response curves provide significantly better response rates than tags with better nominal read ranges and a response curve confined to a very narrow set of frequencies.

2. Directional Response

Some tags respond much better when oriented one way than if rotated 90 degrees from the broadcasting antenna. In some cases, it doesn't matter if the tag has a nonuniform directional response because it will automatically be oriented relative to the reader. However, you might not know for sure if the tag will always be oriented the right way.

3. Business Processes

The most important factor to consider in the tag selection process is whether the tag serves the needs of your unique business process. The tag you choose will need a read range suited to your specific application and project, which is why it's such a high priority. Then, you should consider other factors that impact the system's performance, like durability, footprint, material compatibility and the environment of the tag. You need a tag that has enough read range for your application but also weighs other business process considerations — otherwise, you might not even achieve the read range you need.

Explore other common RFID-related questions and answers.

Read range plays a key role in the life of an RFID tag and is one of the factors that determines a tag's overall use. However, relying on read range alone may direct you to choose a tag that doesn't actually work for your use case. It's important to look at the whole picture so you can choose an RFID tag based on your specific needs. To learn more about HID Global's breadth of asset tracking RFID tags, check out our latest IDT Asset Tracking & Logistics Tag Comparison Chart.

Nick Iandolo is an experienced Senior Marketing Strategist specializing in Content Marketing and Corporate Communications Writing, primarily for market-disrupting technology organizations. His work has been featured in publications such as Morning Consult, NewDesign Magazine UK, SmartCard Identity News, and Construction Outlook. Nick is also a Spartan Race athlete, and lives just outside of Boston, Mass., with his wife, daughter, and Golden Retriever.
Healthcare data networks carry information that is both vital to patient care and highly confidential. It should be protected at all costs.

Advances in healthcare technology have made paper-based records a thing of the past. Instead, healthcare organisations rely on high-speed, high-performance networks to enable the flow of sensitive information such as patient records and management information. However, without protection, these networks and the data flowing across them are at risk from cyber attack. The healthcare sector has also taken advantage of the growth in the Internet of Things (IoT) and 'big data' to fuel the rise of e-Health. While this trend has improved the efficiency of many healthcare organisations, it has also placed an increasing emphasis on systems security.

The coming age of the quantum computer places further requirements on healthcare organisations to encrypt their data to a standard resistant to quantum attacks. While today's encryption standards would take conventional computers many thousands of years to break, quantum computers will be able to achieve this in a fraction of the time. IDQ's range of quantum-safe security solutions ensures that healthcare data in motion is protected in both the current and future security landscape.

Since 2013, over 272 million healthcare records have been lost or stolen. Because this information contains not only treatment information but also names, addresses, dates of birth and so on, it is particularly valuable for cyber criminals to use in identity theft. Breaches themselves take two main forms. The first is to inject rogue data into systems to give, for example, false readings on remote monitors, interrupt CCTV or intercept ambulance communications – be this a malicious or 'nuisance' act. The second is to capture sensitive information such as patient records or personal and business information which can then be exploited. Such attacks affect both patients, leaving their personal information and health at risk, and healthcare organisations, which can experience hefty financial loss, potential fines and loss of confidence – be they public or private institutions.

Recent high-profile data breaches in the healthcare sector include:

Anthem: 2015 saw criminal hackers steal 80 million records from US healthcare provider Anthem in what was reported to be a state-sponsored attack.

Premera: The patient records of 11 million Premera Blue Cross customers were exposed as a result of a 2014 attack. Dates of birth, social security records, bank account details and clinical records were amongst the information that could have leaked.

Banner Health: The payment card details of 3.7m customers were compromised in 2016 after hackers gained access to the company's food and drinks payment system.

NHS: The UK's National Health Service had 239 data security incidents reported between June and October 2016. The organisation was also affected by a global cyber attack in 2017.

To ensure data remains secure both today and in the post-quantum era, healthcare organisations must act quickly. Implementing a quantum-safe security solution allows organisations to encrypt their data in motion to a level unparalleled by more traditional cryptographic methods. IDQ's Centauris network encryption platform offers "set & forget" functionality to ensure that the encryption does not place an additional burden on the network team. In addition, state-of-the-art security features meet even the most stringent regulatory requirements.
FIPS and Common Criteria security certifications ensure both physical protection of the appliance and best-practice encryption key management processes and access controls. IDQ's Cerberis quantum key distribution range is the world's first carrier-grade QKD platform that provides provably secure key exchange. The range exploits a fundamental principle of quantum physics to exchange cryptographic keys over networks, ensuring long-term protection and confidentiality.

Learn how ID Quantique, in partnership with fragmentiX, has applied QKD to secure data as it is transmitted between the Medical University Graz and the Landeskrankenhaus Graz II – West, enabling clinicians to securely access and exchange data across the network.
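For illustration only, the sketch below shows the kind of authenticated symmetric encryption that protects a record in transit once both ends share a key, however that key was delivered (for example over a QKD link). It uses the Python cryptography package's AES-GCM primitive; the key handling and the record contents are invented and this is not a description of IDQ's products.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Assume both sites already share this 256-bit key (e.g. delivered via a QKD link).
shared_key = AESGCM.generate_key(bit_length=256)

def encrypt_record(key: bytes, record: bytes, associated_data: bytes) -> bytes:
    """Encrypt and authenticate one record; prepend the random nonce to the ciphertext."""
    nonce = os.urandom(12)  # never reuse a nonce with the same key
    return nonce + AESGCM(key).encrypt(nonce, record, associated_data)

def decrypt_record(key: bytes, blob: bytes, associated_data: bytes) -> bytes:
    """Verify integrity and decrypt; raises an exception if the data was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, associated_data)

record = b'{"patient_id": "12345", "diagnosis": "example"}'  # made-up record
header = b"site-A->site-B"                                    # authenticated but not encrypted
blob = encrypt_record(shared_key, record, header)
assert decrypt_record(shared_key, blob, header) == record
```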
Sun Microsystems engineer James Gosling publicly announced the Java language on May 23, 1995, though the initial effort actually got started in June 1991. The early promise of Java was simple and yet profound, offering developers the opportunity to write once and then run anywhere. At the time, code was often tightly coupled with underlying hardware and the idea of code portability wasn't fully realized—until Java came along. Java has undergone significant technical and organizational challenges over the past 25 years. It moved from being entirely proprietary to open source in 2006, it transitioned from Sun to Oracle in 2010 as part of Oracle's acquisition of Sun, and, in 2017, its release cadence was accelerated.

Oracle Reflects on the Rise of Java

In an online event hosted by Oracle celebrating 25 years of Java, Georges Saab, vice president of the Java Platform Group at Oracle and chair of the OpenJDK Governing Board, noted that Java found initial success because it solved a problem that developers had. The problem was code portability and the ability to write code that can run on any system. Mark Reinhold, chief architect of the Java Platform Group at Oracle, also reflected on how promising Java was in the beginning, though he admitted that there have been many surprises. "Probably the thing that surprised me the most is how long it's lasted," he said. Beyond the promise of portability, key tenets of the Java language are also stability and backward compatibility, such that code written a decade ago can still run today. Reinhold noted that early on, Java development struck a balance between adding new features while still maintaining stability. Over the years, it's been critical that Java has continued to innovate, added Brian Goetz, Java language architect at Oracle. "If we don't innovate, we're going to become irrelevant, and Java is not going to be interesting to people to program with," he said.

What's Wrong with Java

While Java has many strengths, it isn't a perfect language. Pluralsight Java and Android development instructor Jim Wilson told ITPro Today that Java tends to be more verbose than other modern languages, which sometimes is difficult for beginners. That said, issues that beginning developers might have with Java can be overcome through a combination of education and tools, he added. "Education provides developers with the understanding to work effectively and enables developers to maximize the capabilities and power of the language," Wilson said. There is no one perfect language, he added, as every language has pluses and minuses. One language may be the one great language for performing Task A, but another may be the one great language for performing Task B, and yet another for performing Task C. A language alone does not constitute a solution. Wilson noted that ecosystem, APIs, tooling support, experience with a language, access to knowledgeable developers, available libraries of existing solutions, etc., all factor into determining the best language to work in when producing high-value, production software solutions. And when all of these issues are factored in, Java is a tough language to beat, Wilson said.

Java Success and Open Source

"It's hard to say there's a lot wrong with any language that has literally stood the test of time," Mark Little, vice president of software engineering, Runtimes, at Red Hat, told ITPro Today.
"In fact, there’s probably far more 'right' with Java today than wrong, including the vibrant OpenJDK community, the fact it remains in the top 2 ranking of developer languages globally, is still used as a teaching language in many schools and colleges throughout the world, and is at the heart of one of the most popular multiplayer games in the world, Minecraft." Little noted that Java was popular even before it was open sourced through efforts such as IcedTea and then OpenJDK, which is an open source Java development kit that is at the core of Java development today. "However, the fact that OpenJDK came along and has grown to be an incredibly vibrant community, with participation from Red Hat, Oracle, Azul, Microsoft and others, helped developers to recognize that no one vendor is in charge of their programming language destiny," he said. "There’s been an explosion of open source frameworks, toolkits, IDEs, etc., over the past 10+ years, and whilst there are many different causes for this, I think one of them is the fact Java was open sourced." Little added that open source software efforts have fundamentally empowered the Java language to maintain its top-ranked status for large-scale application development and production. The Next 25 Years of Java Java is likely to remain a cornerstone of enterprise application stacks for years to come. Although there have been many rivals, Java hasn't been displaced by any of them, according to Azul Systems CTO and co-founder Gil Tene. "When Java displaced C, C++ and 4GL as the predominant means of building enterprise applications, that disruptive shift happened in a handful of years," Tene told ITPro Today. "We've seen multiple waves of presumptive Java killers since, but none have taken." Among the many reasons why Tene expects Java to remain widely used is the simple fact that there is a large pool of trained Java developers already for enterprises to draw from, making it possible to maintain code years after it was initially written. In addition, the rapid release cadence for Java that began in 2018 means that there are now two releases of Java each year. In the past, it could take three or more years for a new release. This year has seen one release thus far, Java 14 , providing a series of incremental improvements. Oracle's Goetz said the six-month release cycle has helped Java innovate faster, while still maintaining stability. "With the six-month cadence, we've been able to rotate our balance back to a good mix of big and small features, work on them side by side and deliver them as they're ready," he said. "If you look at the response to Java 14, people are all excited about the new features, and you know, we're going to keep them coming."
In IT, two goals that, at first glance, contradict each other are data protection and the ecological protection of our planet. A lot is being done to keep the ecological footprint of data centers as small as possible: economical and energy-efficient server and storage solutions, and the relocation of entire data centers to climatically more suitable zones, are on the agenda. But with the growing threats from cyber criminals and higher data protection requirements, efforts to ensure a 'greener' IT environment can be thwarted.

Data protection is a must

But what exactly is the problem? The General Data Protection Regulation (GDPR) came into effect in May 2018, making it compulsory for all organizations collecting data relating to European citizens to put measures in place to make IT systems, and the data residing upon them, more reliable and secure. Its requirements for personal data protection are perhaps the most significant challenge. This is because the regulation stipulates that data must be in encrypted form at all times. Encryption can be done either by processing systems or - better - by appropriate coprocessors. However, the additional computing power needed to achieve this also means more power is consumed, more heat is generated and more cooling is needed. This has a knock-on effect in the data center, leading to increased energy consumption as a result of an increased need for air conditioning. However, this issue is by no means limited to the European Union. Other economic regions have used GDPR as a model, with the United States, Japan, South Africa and Israel - to name a few - developing similar plans with similar potential consequences.

Rising energy demand

The additional energy requirement for calculation and cooling is not the only challenge facing data center operators. For a long time, many have put their faith in a storage environment that relies exclusively, or to a large extent, on solid-state drive (SSD) storage. Although these storage media are faster than hard disk drives (HDD), they have a significant disadvantage: they require much more storage space to store encrypted content. These All-Flash Arrays (AFAs) work with an inherent compression, deduplication and pattern removal engine to be able to map as much data as possible onto their storage. However, if the data is encrypted, as required by law, these patterns no longer occur. It becomes almost impossible to further compress encrypted content. As a result, the effective capacity of an SSD can actually be less than its official nominal capacity. This means that more SSDs will need to be used for encrypted data than for the same amount of information in an unencrypted format. The consequence of all this? More SSD storage inevitably results in higher energy costs for its operation (and associated cooling), as well as higher initial acquisition costs.

Better use of technology

What could the potential solution to these challenges look like? It's clear that the exclusive use of SSDs is not valid from an environmental point of view. Instead, a better approach for many organizations is a lower dependency on flash, such as flash-optimized arrays which, through machine learning, minimize the amount of flash required and leverage hard disk drives within a single storage pool. In this scenario, flash can perform caching using its advantage in speed, whilst the personal data stored, which must be encrypted, will not inflate the cost or the environmental impact of the storage medium used.
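The compression point is easy to demonstrate. The short sketch below, included purely as an illustration, compresses the same highly repetitive data before and after AES-GCM encryption (using the Python cryptography package) and shows that the ciphertext no longer shrinks.

```python
import os
import zlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Highly repetitive plaintext, the kind of pattern a flash array's engine exploits.
plaintext = b"patient-record-template;" * 4000

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

print("plaintext:            ", len(plaintext), "bytes")
print("compressed plaintext: ", len(zlib.compress(plaintext)), "bytes")   # shrinks dramatically
print("compressed ciphertext:", len(zlib.compress(ciphertext)), "bytes")  # roughly as large as the plaintext
```

The encrypted copy compresses to essentially its full size, which is why deduplication and pattern-removal engines stop helping once data is encrypted before it reaches the array.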
New rules impact company structure

For some industries, the new rules on the protection of personal data pose a significant challenge. In the medical sector, for example in hospitals, where a huge volume of personal data exists, adhering to GDPR is much harder because these organizations typically have only a very limited opportunity to expand or improve their IT infrastructure. They need to focus not only on the acquisition costs and the structural accommodation of the IT systems required to achieve GDPR compliance, but also on energy costs and on ease of administration to suit leaner IT teams. They must also ensure that their own IT can be accommodated structurally in terms of energy supply and heat removal.

New awareness needed

While there is a very real need to balance the green agenda alongside being compliant with GDPR, what's really needed is a new way of thinking. The time has come to move away from the perception that IT exists in isolation to one where information technology and expenditure on energy are viewed as a single package. In many companies, these are often separate budget items with different responsibilities, leading IT leaders to prefer short-term capex savings over long-term energy (opex) savings. However, if we take a holistic view, in which technology and resources are not seen separately, we should be in a stronger position to take the first step towards a 'greener' design of IT across the board - even with higher data protection requirements. Ensuring that a solid and coherent plan for a future-proofed storage environment has been put in place will be paramount going forward.
In computer networks, tunneling protocols are often used for a variety of reasons: for example, to pass private data (perhaps encrypted) through an open public network such as the Internet, or to tunnel incompatible protocols (e.g. IPv6 or private IP addresses) over an IP network. In each case the protocol data of interest is embedded or "tunneled" inside the payload of an IP packet. The net result is that the outer network/transport addressing is not of real interest.

Each Accolade ANIC adapter contains a powerful packet parser. The Accolade parser doesn't just blindly separate the various header fields in the packet; it has a great deal of intelligence built in. For example, if a given packet is buried inside a tunneling protocol such as VLAN, VXLAN, MPLS, GTP or GRE, the parser is able to bypass the encapsulation and work on the inner header portion of the packet where the packet data is held. But at the same time, if some function requires the encapsulation fields (e.g. an MPLS label) as an input, the parser keeps track of that information as well.
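As a software analogy for what such a parser does (not Accolade's implementation, which runs in hardware), the sketch below peels an optional 802.1Q VLAN tag off an Ethernet frame and locates the inner IPv4 header, while remembering the encapsulation field it skipped.

```python
import struct

ETH_P_8021Q = 0x8100
ETH_P_IPV4 = 0x0800

def parse_frame(frame: bytes) -> dict:
    """Return encapsulation fields plus the location of the inner IPv4 header."""
    info = {}
    ethertype = struct.unpack_from("!H", frame, 12)[0]
    offset = 14
    if ethertype == ETH_P_8021Q:                      # 802.1Q VLAN tag present
        tci, ethertype = struct.unpack_from("!HH", frame, 14)
        info["vlan_id"] = tci & 0x0FFF                # keep the encapsulation field
        offset = 18
    if ethertype != ETH_P_IPV4:
        raise ValueError("not an IPv4 packet")
    info["ip_header_offset"] = offset
    info["ip_header_len"] = (frame[offset] & 0x0F) * 4
    info["protocol"] = frame[offset + 9]
    info["src"] = ".".join(str(b) for b in frame[offset + 12: offset + 16])
    info["dst"] = ".".join(str(b) for b in frame[offset + 16: offset + 20])
    return info

# Example: a hand-built VLAN-tagged IPv4/UDP frame header (payload omitted).
frame = (bytes(6) + bytes(6) + b"\x81\x00" + b"\x00\x64"      # VLAN 100
         + b"\x08\x00"                                         # inner ethertype: IPv4
         + bytes([0x45, 0]) + b"\x00\x14" + bytes(4)
         + bytes([64, 17]) + b"\x00\x00"
         + bytes([10, 0, 0, 1]) + bytes([10, 0, 0, 2]))
print(parse_frame(frame))
```

A hardware parser performs the same walk for deeper stacks (VXLAN, MPLS, GTP, GRE), handing downstream logic the inner headers while preserving values such as the MPLS label for functions that need them.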
This course provides an overview of several problems with popular risk management practices as well as how to fix them. The course includes slides as well as simple Monte Carlo spreadsheets for measuring risk. Learn the pitfalls of using qualitative methods for measuring risk and learn a set of quantitative measurement methods to replace them.

- Live 2 Hour Online Webinar
- 1 Online Review Quiz
- Monte Carlo Tool and Spreadsheets for Measuring Risk
- PDF copy of PowerPoint slides

Recommended Next Courses
Calibration, Creating Simulations in Excel Basic and Intermediate, Making Decisions Under Uncertainty, Measurement Methods in Excel Basic and Intermediate

Douglas Hubbard is the inventor of the Applied Information Economics (AIE) method and founder of Hubbard Decision Research (HDR). He is the author of How to Measure Anything: Finding the Value of Intangibles in Business, The Failure of Risk Management: Why It's Broken and How to Fix It, Pulse: The New Science of Harnessing Internet Buzz to Track Threats and Opportunities and his latest book, How to Measure Anything in Cybersecurity Risk (Wiley, 2016). He has sold over 100,000 copies of his books in eight different languages. Two of his books are required reading for the Society of Actuaries exam prep. In addition to his books, Mr. Hubbard has been published in several periodicals including Nature, The IBM Journal of Research and Development, OR/MS Today, Analytics, CIO, Information Week, and Architecture Boston.

Please download the spreadsheets and slide deck provided with the course to follow along with the examples and to answer the review questions.
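To give a sense of what a Monte Carlo risk model does, here is a generic sketch with made-up numbers (it is not the course's actual spreadsheet). It simulates one year of losses from a short register of risk events, each with an annual probability of occurring and a lognormal loss if it does, then reports the mean loss and the 90th percentile.

```python
import math
import random
import statistics

# Hypothetical risk register: (name, annual probability, median loss, spread factor).
RISKS = [
    ("data breach",        0.05, 500_000, 2.0),
    ("extended outage",    0.20, 100_000, 1.5),
    ("regulatory penalty", 0.02, 250_000, 2.5),
]

def simulate_year() -> float:
    """Total loss for one simulated year across all risks in the register."""
    total = 0.0
    for _name, prob, median, spread in RISKS:
        if random.random() < prob:                 # does this event occur this year?
            # Lognormal loss parameterized by a median and a multiplicative spread.
            total += random.lognormvariate(math.log(median), math.log(spread))
    return total

trials = sorted(simulate_year() for _ in range(10_000))
print("mean annual loss:", round(statistics.mean(trials)))
print("90th percentile: ", round(trials[int(0.9 * len(trials)) - 1]))
```

Plotting the full distribution of trials as a loss exceedance curve is the usual next step, and is the kind of output such Monte Carlo tools are typically built around.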
What is "End of Life" for software?

You may have heard the term "End of Life" or EOL used with some of your favorite business or personal software. Let's talk a little bit about what that means. From our friends at Wikipedia: ""End-of-life" (EOL) is a term used with respect to a product supplied to customers, indicating that the product is in the end of its useful life (from the vendor's point of view), and a vendor stops marketing, selling, or rework sustaining it."

More simply put, the software vendor has made a decision not to continue to sell or maintain the product past a specific date that they have published. Software vendors take a variety of criteria into account when deciding to EOL a product, and we will discuss a few of them and why they are good for you and your business.

Software products build on other software and hardware technology. Software is produced using specific programming languages and software development toolkits, and delivered on specific operating systems and database servers. The manufacturers of these underlying tools also make decisions on EOL timeframes for them. For example, Microsoft recently EOL'd Windows XP, SQL Server 2008, and Windows Server 2008. Some of these manufacturers may also go out of business and stop offering the tool altogether. Some technologies and tools may still be viable, but it may be hard or impossible to find developers who know them and are willing to work on them. Your software product vendor may not be able to efficiently or reasonably make the product work well on the latest versions of the tools, if there are new versions, and may decide to EOL the product.

Why is this good? Underlying tools must be maintained to continue to work and be secure. When these tools are EOLed, they rapidly become security and stability liabilities in your enterprise. Without critical security updates, hackers may take advantage of the unpatched vulnerabilities to attack your business.

The moment we are born, we begin to age. The same is true of a software product. As it ages, it accumulates "technical debt." Technical debt is a collection of maintenance tasks needed to keep the product up to date. Examples include updating tools, upgrading the programming language tools/version, updating software development kits, and applying code patches. It also may consist of the need to rewrite (refactor) code in the product that has become complex or inefficient as new features are added. Technical debt is often unseen by users, and the effort to "pay it down" often brings little visible value to users as well. Technical debt has a way of growing as resources are applied instead to features and functionality that users want to see in the product. Technical debt does slow programmers, increase the effort to maintain and enhance software, and sometimes block the ability to advance the software to meet the current needs of customers.

Why is this good? Software with high technical debt is hard to maintain and enhance. Developers either work around the limitations of the debt or invest significantly in paying it down. Your subscription or maintenance fees will maintain the software and not enhance it. Over time, it may be more advantageous and viable to rewrite the product or to migrate customers to a newer one in the portfolio that can deliver greater value and be enhanced more quickly.

When a product is past the prime of its life, and new generations are available in a vendor's portfolio, the customer base migrates away from the product over time.
At certain "tipping points," the number of paying customers dwindles below the level that supports the costs of maintaining the product. At this tipping point, the vendor may elect to set an EOL date for the product and incentivize customers to migrate to a new offering before that date.

Why is this good? Your business benefits from having current, well maintained, and supported software running your operations.

Software vendors often create a new "Next Generation" product to replace older products in their portfolios. These products make use of the latest technology, tools, and innovations. These new products do not carry as much technical debt and often make use of technologies that allow developers to build enhancements faster for customers.

Why is this good? Software vendors want you to take advantage of this investment and invest less in "yesterday's" product. You'll get more modern software, functionality, and integrations than you had previously.

So, what do you do when faced with an EOL date for your software product?
As technology has developed, humans have been able to create machines that have the computing power to do incredible things. We can get from place to place using navigation applications such as Google Maps and Waze and track calorie consumption, sleeping habits and exercise efficiency with tools like the Fitbit. We can even start our cars remotely from our cell phones. All of these advancements in smartphone and Internet of Things technology have led tech advocates to the same question: What's the next big thing? The answer could be artificial intelligence, or AI. According to CB Insights, a venture capital database, to this point in 2016, over 200 AI companies have raised over $1.5 billion in funding, marking a 65 percent increase from 2015. This trend shows that tech giants and venture capital firms are on board with developing computers that have the ability to think like humans. This year, tech giants such as Apple, Google, Microsoft and IBM have invested heavily in AI companies to improve their platforms, developing technologies that can learn patterns and better understand their users. In April, Apple acquired Emotient, a leader in emotion detection and sentiment analysis. Apple wants to focus on facial recognition technology and consumer reaction to advertisements. With this technology, Apple could read people's facial expressions and use the data to influence future ads in an attempt to sell more product. Google has focused its AI investments on machine learning, investing $400 million in DeepMind. Machine learning can be found in image and speech recognition technology and translation. This type of software mimics the way that the human brain works and recognizes patterns. Microsoft is developing application program interfaces (APIs) to better understand users by recognizing their faces, emotions and speech. Skype, which is owned by Microsoft, is developing real-time language translations. This technology, already developed for six languages, recognizes speech and converts it to text as the user speaks. This technology has the ability to bridge the language gap between businesses and individuals. IBM's Watson has proven to be an incredibly powerful tool, besting human competitors on Jeopardy. IBM aims to use computers to extract meaning from photos, videos, text and speech. The company is also developing a teaching assistant application that will plan lessons based on approved material, possibly revolutionizing the way that people learn. There is no way around it, Artificial Intelligence is coming, and coming fast. This upward trend of investment shows that the tech world is motivated to teach computers to learn in an attempt to improve human life and drive the human race forward. Jeff Hawkins, co-founder of Numenta puts it simply: "Imagine you can build a brain that is a million times faster than a human, never gets tired, and it's really tuned to be a mathematician. We could advance mathematical theories extremely rapidly, faster than we could otherwise. This machine isn't gonna do anything else!" With each passing day, it becomes increasingly clear that we are moving in the direction of a world powered by artificially intelligent computers, computers with the ability to learn. Industry leaders have a vision of accelerated understanding, and they see artificial intelligence as a platform to achieve that vision.
A groundbreaking Loyola Medicine study suggests that a simple 15-minute electrocardiogram could help a physician determine whether a patient has major depression or bipolar disorder. Bipolar disorder often is misdiagnosed as major depression. But while the symptoms of the depressive phase of bipolar disorder are similar to that of major depression, the treatments are different and often challenging for the physician. In bipolar disorder, formerly called manic depression, a patient swings between an emotional high (manic episode) and severe depression. Treatment for the depressed phase includes an antidepressant along with a safeguard such as a mood stabilizer or antipsychotic drug to prevent a switch to a manic episode. A physician who misdiagnoses bipolar disorder as major depression could inadvertently trigger a manic episode by prescribing an antidepressant without a safeguard mood stabilizing drug. The study found that heart rate variability, as measured by an electrocardiogram, indicated whether subjects had major depression or bipolar disorder. (Heart rate variability is a variation in the time interval between heartbeats.) The study, by senior author Angelos Halaris, MD, PhD and colleagues, was published in the World Journal of Biological Psychiatry. “Having a noninvasive, easy-to-use and affordable test to differentiate between major depression and bipolar disorder would be a major breakthrough in both psychiatric and primary care practices,” Dr. Halaris said. Dr. Halaris said further research is needed to confirm the study’s findings and determine their clinical significance. Dr. Halaris is a professor in Loyola’s department of psychiatry and behavioral neurosciences and medical director of adult psychiatry. Major depression is among the most common and severe health problems in the world. In the United States, at least 8 to 10 percent of the population suffers from major depression at any given time. While less common than major depression, bipolar disorder is a significant mental health problem, affecting an estimated 50 million people worldwide. The Loyola study enrolled 64 adults with major depression and 37 adults with bipolar disorder. All subjects underwent electrocardiograms at the start of the study. Each participant rested comfortably on an exam table while a three-lead electrocardiogram was attached to the chest. After the patient rested for 15 minutes, the electrocardiographic data were collected for 15 minutes. Using a special software package, researchers converted the electrocardiographic data into the components of heart rate variability. These data were further corrected with specialized software programs developed by study co-author Stephen W. Porges, PhD, of Indiana University’s Kinsey Institute. In measuring heart rate variability, researchers computed what is known to cardiologists as respiratory sinus arrhythmia (RSA). At the baseline (beginning of the study), the subjects with major depression had significantly higher RSA than those with bipolar disorder. In a secondary finding, researchers found that patients with bipolar disorder had higher blood levels of inflammation biomarkers than patients with major depression. Inflammation occurs when the immune system revs up in response to a stressful condition such as bipolar disorder. The study is titled “Low cardiac vagal tone index by heart rate variability differentiates bipolar from major depression.” In addition to Drs. 
Halaris and Porges, other co-authors are Brandon Hage, MD, a graduate of Loyola University Chicago Stritch School of Medicine now at the University of Pittsburgh (first author); Stritch student Briana Britton; Loyola psychiatric resident David Daniels, MD; and Keri Heilman, PhD of the University of North Carolina. Source: Jim Ritter – Loyola University Health System Original Research: Abstract for “Distinguishing bipolar II depression from unipolar major depressive disorder: Differences in heart rate variability” by Hsin-An Chang Department of Psychiatry, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, Chuan-Chia Chang, Terry B. J. Kuo & San-Yuan Huang in World Journal of Biological Psychiatry. Published online November 14 2017 doi:10.3109/15622975.2015.1017606
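To give a flavour of the kind of processing described above — converting beat-to-beat electrocardiographic data into heart rate variability measures — here is a deliberately simple sketch. The study itself used specialized software and a respiratory sinus arrhythmia (RSA) index developed by Porges; the snippet below only computes two generic time-domain HRV measures (SDNN and RMSSD) from a short, made-up list of R-R intervals, so treat it as an illustration rather than the study's actual method.

```python
import math

def hrv_time_domain(rr_intervals_ms):
    """Compute two common time-domain HRV measures from R-R intervals (in ms).

    SDNN  - standard deviation of all R-R intervals.
    RMSSD - root mean square of successive differences, which (like RSA)
            mainly reflects vagally mediated beat-to-beat variability.
    """
    n = len(rr_intervals_ms)
    mean_rr = sum(rr_intervals_ms) / n
    sdnn = math.sqrt(sum((rr - mean_rr) ** 2 for rr in rr_intervals_ms) / (n - 1))
    diffs = [rr_intervals_ms[i + 1] - rr_intervals_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))
    return sdnn, rmssd

# Hypothetical recording reduced to a handful of beats for brevity.
rr = [812, 845, 790, 830, 815, 860, 805, 825]
print(hrv_time_domain(rr))
```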
<urn:uuid:d4b4875e-5fce-4ba2-bf3b-197a429d5c69>
CC-MAIN-2022-40
https://debuglies.com/2017/11/21/simple-ekg-can-determine-whether-patient-has-depression-or-bipolar-disorder/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00227.warc.gz
en
0.924354
858
2.890625
3
LRN (Location Routing Number) A Location Routing Number (LRN) is an identifier for a telephone switch used to route telephone calls through the PSTN (public switched telephone network) in the United States. When telephone numbers port over from another carrier, Bandwidth assigns an LRN to the number to ensure routing is correct. An LRN is part of how calls are routed: calls to ported or pooled numbers are routed based on the NPA-NXX of the number’s associated LRN. If a customer ports their number to another provider, the current telephone number can be retained and only the LRN needs to be changed. Each carrier has at least one LRN per LATA. LRNs were created to provide local number portability by allowing numbers to route successfully when moved between carriers.
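A minimal sketch of the routing idea described above: look the dialed number up in a (hypothetical) portability table and, if it has been ported, route on the NPA-NXX of its LRN instead of on the dialed digits. The table and function names are illustrative assumptions, not any carrier's actual interface.

```python
# Hypothetical ported-number database: dialed number -> LRN of the recipient switch.
PORTED_NUMBERS = {
    "9195551234": "9194200000",  # ported; route on the LRN's NPA-NXX
}

def routing_prefix(dialed_number: str) -> str:
    """Return the NPA-NXX actually used to route the call."""
    lrn = PORTED_NUMBERS.get(dialed_number)
    target = lrn if lrn else dialed_number  # non-ported numbers route on their own digits
    return target[:6]  # NPA (area code) + NXX (exchange)

print(routing_prefix("9195551234"))  # -> "919420" (the LRN's prefix)
print(routing_prefix("9195559876"))  # -> "919555" (not ported)
```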
<urn:uuid:e5f94c54-7565-4946-a3e4-995ba6bb3420>
CC-MAIN-2022-40
https://www.bandwidth.com/glossary/local-routing-number-lrn/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00227.warc.gz
en
0.943933
171
3.078125
3
The philanthropic ventures of Oprah Winfrey From a childhood in poverty to becoming the first Black woman billionaire, Oprah has become a historic figure, touching the lives of many across the globe. Through her philanthropic ventures, Oprah made it her mission “to lead, to educate, to uplift, to inspire and to empower women and children throughout the world”, which she has successfully accomplished through donating approximately US$72mn to worthy organisations. In a bid to encourage people to make a difference around the world, Oprah launched Oprah's Angel Network in 1998. Her vision was to inspire individuals by creating new opportunities to enable underserved women and children to rise to their potential. All funds went straight to the charity programmes, and Oprah herself covered all administrative costs. The organisation had raised a whopping US$80mn by 2010, which went towards various causes, such as helping women’s shelters, before the organisation stopped taking donations and eventually dissolved. “Through her foundation, Oprah Winfrey has taken her ability to convene and highlight issues close to her heart and translate that into action,” says Caroline Underwood, CEO of Philanthropy Company. “She has used the power of her personality, celebrity and reach to tackle issues affecting millions.” As a fierce advocate for girls and women, it’s no wonder that Oprah has donated to the Time’s Up campaign, which aims to create a society free of gender-based discrimination. Another venture that Oprah is in favour of is N Street Village, a non-profit providing housing and services for homeless and low-income women. But Oprah’s philanthropic efforts don’t stop in the US; the Oprah Winfrey Leadership Academy for Girls provides just one example. Since founding the academy in 2007, Oprah is said to have spent over US$140mn on the school, providing a private education for underprivileged South African girls in grades eight to 12.
<urn:uuid:b4eab948-2863-4bb5-ba92-06b6badd8c1d>
CC-MAIN-2022-40
https://march8.com/articles/the-philanthropic-ventures-of-oprah-winfrey
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00427.warc.gz
en
0.960141
404
2.515625
3
How Big Data and AI Will Work Together? By John Chan Big Data and Artificial Intelligence (AI) applications have a mutually beneficial relationship. The success of AI applications is contingent on the big data input. AI now helps organizations use their data to inform organizational decision-making with methodologies previously thought to be unfeasible. Glenn Gruber, a Senior Digital Strategist at Anexinet, states: “The more data we put through the machine learning models, the better they get. It’s a virtuous cycle.” There are three key ways that AI can deliver better insights with big data: 1. AI is creating new and enhanced methods for analyzing data Deriving insight from data previously required much manual effort from an organization’s staff. Historically, engineers had to use a SQL query or a list of SQL queries to analyze data. With AI, an array of new and enhanced methods to obtain data insights has become available. Thus, AI and machine learning are now creating new and more efficient methods for analyzing an immense quantity of data. 2. AI can be used to alleviate common data problems The value of big data sets is intricately tied to data quality. Data that is of deficient quality is of little or no worth for the organizational decision-making process. The dirty secret of many big data projects is that 80% of the effort is spent on cleansing and preparing the data for analytics. Machine learning algorithms in AI applications can discover outliers, missing values and duplicate records, and standardize data for big data analytics. 3. Analytics become more predictive and prescriptive In the past, data analytics was primarily backward-looking, with a post-analysis of what happened. Predictions and forecasts were essentially historical analyses. Big data decisions were therefore based on past and present data points with a linear ROI. AI is now creating new opportunities for enhanced predictions and forecasts. An AI algorithm can be set up to make a decision or take an action based upon forward-looking insights. In essence, big data analytics can become more predictive and prescriptive. The Future of Big Data and AI In the future, there will be greater availability of intelligent enterprise software that can leverage big data to solve problems. New techniques will develop for analyzing data for real-time insights. AI-created reports will provide more context, with proposed solutions for organizational problems, than was previously available. Consequently, organizations will begin to realize more significant ROI from all their stored data. At present, as the world continues to fight the spread of the Coronavirus, many data scientists are utilizing various Big Data and AI applications to assist in their fight against this deadly virus. Big Data and AI are thus having a transformative effect on how authorities respond to and get a handle on the Coronavirus outbreak. Moreover, it can be foreseen that both applications will play a critical role in any future effort to prevent and respond to similar epidemics. ISM would be happy to discuss the data analytics developments impacting your business in more detail with you over a call. Contact ISM to schedule a time that works for you: [email protected].
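To make point 2 above (alleviating common data problems) a little more concrete, here is a small, generic pandas sketch that drops duplicates, reports missing values, and flags simple statistical outliers. The column names and figures are invented for illustration and are not tied to any particular AI product.

```python
import pandas as pd

def basic_data_quality_pass(df: pd.DataFrame, numeric_col: str) -> pd.DataFrame:
    """Drop exact duplicate rows, report missing values, and flag z-score outliers."""
    df = df.drop_duplicates()
    print("Missing values per column:\n", df.isna().sum())

    # Flag values more than 3 standard deviations from the column mean.
    mean, std = df[numeric_col].mean(), df[numeric_col].std()
    df["is_outlier"] = (df[numeric_col] - mean).abs() > 3 * std
    return df

# Hypothetical sales data with a duplicate row and one extreme value.
data = pd.DataFrame({"region": ["N", "S", "S", "N"],
                     "revenue": [120.0, 95.0, 95.0, 9000.0]})
print(basic_data_quality_pass(data, "revenue"))
```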
<urn:uuid:b8eab3a2-28db-4d14-8e09-511a0062e49f>
CC-MAIN-2022-40
https://ismguide.com/how-big-data-and-ai-will-work-together/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00427.warc.gz
en
0.936588
648
2.96875
3
Malware Attribute Enumeration and Characterization (MAEC) is a structured language used for encoding and sharing high-fidelity information about malware. MAEC is sponsored by the U.S. Department of Homeland Security (DHS) Office of Cybersecurity and Communications, and it is managed by the MITRE Corporation, which also provides technical guidance to the members of the MAEC Community. The first version of MAEC was released on January 14, 2011; it was then updated in the following versions: MAEC 2.0 (April 2012), MAEC 3.0 (April 2013), and MAEC 4.0 (September 2013). The most recent version, MAEC 5.0, was released in October 2017. As of May 2019, MAEC is not pursued in a formal standards body. However, once an appropriate level of stability, maturity, and use is achieved, international standardization may be sought for MAEC. What are the key elements of MAEC? There are two key elements of MAEC. The first is the Core Specification document, which introduces MAEC, provides high-level use cases, and defines MAEC data types and top-level objects. The second is the Vocabularies Specification document, which provides explicit values for each of the open vocabularies referenced in the core concepts document. Since MAEC provides a common grammar and vocabulary for the malware domain, it follows that most use cases for MAEC are motivated by the accurate and unambiguous communication of malware attributes that MAEC enables. What are the key use cases of MAEC? One of the key use cases of MAEC is static, dynamic, and visual malware analysis, which mitigates the threat through a better understanding of the malware's nature and propagation. To perform static analysis, MAEC can be used to capture the detailed attributes of a malware instance, like information about instance packaging, interesting code snippets obtained by reverse engineering the malware code, etc. As for dynamic analysis, MAEC can help capture details of any particular action or event that occurs when malicious code is executed. With MAEC, this can be done at multiple levels of abstraction. At the lowest level, some form of native system API calls can be captured, while at higher levels, any particular unit of malicious functionality, like keylogging, can be described. Besides these, other common use cases include Cyber Threat Analysis (like a Malware Threat Scoring System, Malware Provenance and Attribution) and Incident Management (like having a Uniform Malware Reporting Format, Malware Repositories, and Malware Remediation). Why should you care about MAEC? The absence of a widely accepted standard for characterizing malware means that there is no precise technique for communicating particular malware attributes, nor for enumerating malware's fundamental makeup. The MAEC framework solves these problems, as the characterization of malware using abstract patterns offers a wide range of benefits over the use of physical signatures. It allows accurate encoding of how the malware operates and the particular actions that it performs. Such information can be used for malware detection, but also for assessing the malware's end goal. Overall, it provides a set of modern tools and techniques for combating and detecting malware. What is the MAEC Community? MAEC is a community-developed project, which involves representatives from antivirus, operating system, and software vendors, security services providers, IT users, and others from across the international cybersecurity communities. What are the benefits of MAEC?
By adopting MAEC for encoding malware-related information in a structured way, organizations can eliminate the ambiguity and inaccuracy in malware descriptions and improve the general awareness of malware. This can also help in reducing the duplication of malware analysis efforts and decrease the overall response time to malware threats. In this community-developed project, the information is shared based on attributes such as artifacts, behaviors, and relationships between malware samples. MAEC enables faster development of countermeasures and provides the ability to leverage responses to previously observed malware instances. What is the relationship between MAEC and TAXII? TAXII (Trusted Automated eXchange of Indicator Information) uses STIX (Structured Threat Information eXpression) to represent cyber threat information. Where STIX characterizes ‘what’ is being shared, TAXII defines ‘how’ the STIX payload is shared. However, it is also feasible that TAXII could use MAEC as its payload instead of STIX. MAEC provides a comprehensive, structured way of capturing detailed information about malware, targeting malware analysts, while STIX targets a more diverse audience by capturing a broad spectrum of cyber-threat-related information, including basic malware information.
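To make the idea of structured malware characterization concrete, the sketch below records attributes, behaviors, and relationships for a sample as plain JSON-style data. It mimics the spirit of MAEC — capturing attributes and behaviors at different levels of abstraction — but it is not valid against the actual MAEC 5.0 schema; every field name here is invented for illustration.

```python
import json
import uuid

def characterize_sample(sha256: str, behaviors, related_to=None):
    """Build a simplified, MAEC-inspired record for one malware instance.

    This is NOT the real MAEC 5.0 vocabulary -- just an illustration of
    encoding attributes, behaviors, and relationships in structured form.
    """
    return {
        "id": f"malware-instance--{uuid.uuid4()}",
        "hashes": {"sha256": sha256},
        "behaviors": behaviors,             # capabilities/actions observed during analysis
        "relationships": related_to or [],  # links to other characterized samples
    }

sample = characterize_sample(
    sha256="<placeholder sample hash>",
    behaviors=[{"name": "keylogging", "level": "high"},
               {"name": "api-call", "value": "CreateRemoteThread", "level": "low"}],
)
print(json.dumps(sample, indent=2))
```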
<urn:uuid:5777e5c1-7957-4ccd-b08f-9fa057706983>
CC-MAIN-2022-40
https://cyware.com/educational-guides/cyber-threat-intelligence/what-is-malware-attribute-enumeration-and-characterization-maec-81e2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00427.warc.gz
en
0.902069
977
2.609375
3
Critical infrastructure, when loosely used as a term, includes a number of assets including electricity generation, telecommunication, financial and security services and shipping & transportation. It may even include IT infrastructure and computer logistics, which contain sensitive information and data. With most of these services, whether public or private, being handled by third-party agencies with the help of futuristic technologies, security and protection of these critical infrastructural components become paramount. Threats to critical infrastructure can arise from a variety of sources. Mischief mongers, disgruntled employees, spies, hackers, spammers, and even criminals can be potential sources of threats. Moreover, critical infrastructure, both at private and public sectors, has moved towards cloud computing because of the ease and affordability. While cloud computing helps clients to run critical infrastructure components with respect to IT in an accessible and affordable manner, certain privacy and security threats need to be addressed. In order to ensure that both public and private critical infrastructure are protected, certain steps need to be undertaken in a methodical manner. This begins with protecting the aforementioned critical infrastructure in the following ways: It is important to describe what critical infrastructure entails. Often times, it is confused with nuclear, power and transportation facilities that are the prerogative of the government or public bodies. However, corporations and organizations have their set of critical infrastructure too. This could range from electrical supply to IT department to actual cloud computing technology. It is important to evaluate what a certain company or body’s critical infrastructure entails. This identification helps in the process of protecting it. Threats to critical infrastructure can come in a number of ways. Most common threats are hackers, spies, cyber criminals and various bots. However, administrators must also identify extraneous threats such as power disruption, loss of critical data, irreversible loss to existing human resources and other possibilities which normally do not come under the category of critical infrastructural threats. Once critical infrastructure and their possible threats are identified, the next step is to monitor for signs of hacking and attacks. Even an innocuous loss of an unimportant file may suggest something more sinister at work. Thus, hiring professionals who are specialized in critical infrastructure monitoring can be a good step to follow. There are also tools and programs that help to automate the monitoring of possible threats emerging. As and when threats are monitored, it is also important to have a risk mitigation plan in place. This may include having access to specialized security professionals who understand how to deal with malicious attempts to takeover data. Resolving issues should be a constant process, as opposed to beginning to resolve only after the damage is done. Constant monitoring usually helps in risk mitigation but there needs to be an agreed framework to resolve issues as and when they occur. As companies move towards cloud computing, much of their infrastructure is left vulnerable to external and internal attempts to hack and attack. It is important to be prepared for future attacks and have plans in place which reduce the damage that could take place and also identify new potential threats. 
Being prepared for future attacks is the key to protecting critical infrastructure, whether in a public or a private organization. Importance of having a protection plan in place A step-by-step critical infrastructure protection plan must be put in place. If an organization is unable to come up with a framework on its own to mitigate the risks, it is better to hire professional agencies that are specialized in identifying critical infrastructure and the threats that lurk around them. Even when an organization comes up with an infrastructure protection plan, it may not be the most effective strategy with respect to IT security. With this in mind, it makes sense to consult with agencies that specialize in the protection of critical infrastructure against hacks and attacks. Most importantly, being prepared from the outset to deal with hacks and attacks and following a methodical plan to avert catastrophic events ensures safety of data, infrastructure and technology.
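As a toy illustration of the monitoring step discussed above, the sketch below runs simple TCP reachability checks against a list of hypothetical critical services and reports anything unreachable. Real critical-infrastructure monitoring goes far beyond reachability checks, so treat this only as a starting point.

```python
import socket

# Hypothetical critical services to watch: (name, host, port).
SERVICES = [("database", "10.0.0.5", 5432), ("historian", "10.0.0.9", 443)]

def check_services(services, timeout=3.0):
    """Return (name, reachable) pairs based on a simple TCP connection attempt."""
    results = []
    for name, host, port in services:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results.append((name, True))
        except OSError:
            results.append((name, False))
    return results

for name, ok in check_services(SERVICES):
    print(f"{name}: {'OK' if ok else 'UNREACHABLE - investigate'}")
```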
<urn:uuid:d3254e81-08ed-401d-bb5b-3c7f68b01eae>
CC-MAIN-2022-40
https://www.itexchangeweb.com/blog/how-would-you-protect-critical-infrastructure-from-hacks-and-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00427.warc.gz
en
0.937637
1,069
2.828125
3
When people talk about ‘big data’, there is an oft-quoted example: a proposed public health tool called Google Flu Trends. It has become something of a pin-up for the big data movement, but it might not be as effective as many claim. The idea behind big data is that large amounts of information can help us do things which smaller volumes cannot. Google first outlined the Flu Trends approach in a 2008 paper in the journal Nature. Rather than relying on disease surveillance used by the US Centers for Disease Control and Prevention (CDC) – such as visits to doctors and lab tests – the authors suggested it would be possible to predict epidemics through Google searches. When suffering from flu, many Americans will search for information related to their condition. The Google team collected more than 50 million potential search terms – all sorts of phrases, not just the word “flu” – and compared the frequency with which people searched for these words with the number of reported influenza-like cases between 2003 and 2006. This data revealed that out of the millions of phrases, there were 45 that provided the best fit to the observed data. The team then tested their model against disease reports from the subsequent 2007 epidemic. The predictions appeared to be pretty close to real-life disease levels. Because Flu Trends would be able to predict an increase in cases before the CDC, it was trumpeted as the arrival of the big data age. Between 2003 and 2008, flu epidemics in the US had been strongly seasonal, appearing each winter. However, in 2009, the first cases (as reported by the CDC) started around Easter. Flu Trends had already made its predictions when the CDC data was published, but it turned out that the Google model didn’t match reality. It had substantially underestimated the size of the initial outbreak.
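The Flu Trends approach boils down to fitting a model that maps search-term frequencies to reported influenza-like-illness (ILI) rates on historical data, then applying it to later weeks. A toy version of that idea, with fabricated numbers standing in for the real query and CDC data, might look like this:

```python
import numpy as np

# Rows = weeks, columns = normalized frequencies of a few flu-related search terms
# (made-up numbers; the real model used 45 terms chosen from ~50 million candidates).
X_train = np.array([[0.2, 0.1, 0.05],
                    [0.4, 0.3, 0.10],
                    [0.8, 0.6, 0.30],
                    [0.3, 0.2, 0.08]])
y_train = np.array([1.1, 2.0, 4.5, 1.6])   # CDC-reported ILI rate for the same weeks

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X_train)), X_train])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

X_new = np.array([[0.6, 0.5, 0.2]])         # search activity in a later week
prediction = np.column_stack([np.ones(1), X_new]) @ coef
print(f"Predicted ILI rate: {prediction[0]:.2f}")
```

The 2009 failure described above is exactly the risk of such a model: it is only as good as the historical seasons it was fitted on, and an off-season outbreak falls outside that pattern.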
<urn:uuid:64a81ece-7347-4f65-8a42-c643faadb83c>
CC-MAIN-2022-40
https://www.crayondata.com/googles-flu-fail-shows-the-problem-with-big-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00427.warc.gz
en
0.969072
368
3.578125
4
Physical distance has lost its value in today’s virtual world. You’re holding the vast world in your hand now, and all of this is possible only because of the computer networks designed by network architects. What do they do? How can you become one? And what are the career opportunities? If you have ever had these questions in your mind, they are all discussed in today’s article. Who is a Network Architect? A network architect is the person responsible for building communication networks and other internal networks, which can be Local Area Networks (LAN), Wide Area Networks (WAN) and intranets. He is also involved in the update and maintenance of software and hardware like adapters, routers, etc. People often confuse the network architect with the network engineer, who builds and repairs the networks, or with the network administrator, who manages the networks. It is true that in some startups and small and midsized businesses the network architect is involved in the above functions. But the network architect covers more areas than that. A network architect is responsible for planning and designing the network connections according to the company’s goals and needs. Along with the other technical skills, he should know the recent trends and future opportunities in networking. He often works with the CTO (Chief Technology Officer) to evaluate and modify the organization’s requirements and goals. In short, a network architect is a computer network technical expert with strategic and decision-making functions. Roles and Responsibility The roles and responsibilities of a network architect overlap with the duties of various other personnel in the organization. Here is a rough framework of the roles and responsibilities of a network architect. The main duty of the network architect is to design and plan the network structure. He should consider factors like bandwidth, infrastructure, cost, distance, and other technical factors while planning the network layout. In most cases he plans and network engineers execute those plans. The network architect should have the profound technical knowledge needed to list the requirements, and he is responsible for resolving problems before they arise. Network Modelling and Maintenance: In the case of an already existing network, he should evaluate the performance and the state of the network and make suggestions to the network administrator. Network architects play an important role in upgrading the network and optimizing its performance. They should make minor repairs to ensure the proper working of the network, take care of scheduled maintenance, keep up with the latest trends and advise the organization to adopt them. If a security breach appears it will affect the whole function of the network, so it is an important responsibility of the network architect to check and ensure that the network is not vulnerable to attacks. The professional should stay aware of the factors and new threats that might affect network security. The network architect also has to keep records of the network he designed. This documentation will help him in the future if he plans to upgrade the network, and if any problem arises it will come in handy in disaster management. The records should be detailed and precise enough to help a successor understand the existing networks. Basic Skills a network architect should have Not everyone with technical knowledge can become a network architect.
A network architect should have the leadership qualities to lead a team of software and computer engineers who have knowledge equal to his own. They need to work with different teams and personnel in an organization to design the best computer network, so he should have good interpersonal skills and team spirit. The network architect job places great importance on precise information, so he should have a keen eye for details and specifications. The analytical skills of the network architect should be top-notch, to evaluate the network and identify errors in it. Good analytical skills help the network architect make the right decisions. Other than the above, he or she should have the following skills – - Recording and documentation skills - Quick response and the ability to work under high pressure. - Fast learning and understanding of new network trends and technologies. - Good communication skills to advise organizational leaders in making decisions. Qualifications and Requirements: The qualifications can differ based on the organization’s size and policies; here are the commonly demanded qualifications – - Bachelor’s degree in computer science, engineering, mathematics, or any other related field. - Experienced and master’s degree graduates have a higher chance of securing a job and a large salary package. Other than these formal qualifications, some IT certifications can also help you secure the job; here are a few of them – - Cisco Certified Design Associate (CCDA) - Cisco Certified Design Professional (CCDP) - Cisco Certified Design Expert (CCDE) - Cisco Certified Architect (CCAr) - Red Hat Certified Architect (RHCA) - Zachman Certified – Enterprise Architect - Salesforce Certified Technical Architect (CTA) - ITIL Master Are you a person who has always been fascinated by computer networks and wants to pursue network architecture as your future path? Then it is the best choice for you. The average annual salary of a network architect is $101,210, and the U.S. Bureau of Labor expects job opportunities for network architects to increase by 6 percent in the next ten years.
<urn:uuid:48867571-88c6-48b8-b0f6-def46d76c76d>
CC-MAIN-2022-40
https://networkinterview.com/network-architect-roles-and-responsibility/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00427.warc.gz
en
0.93509
1,106
3.046875
3
Watching our favorite television shows or movies is getting easier as the platforms we watch them on evolve. Gone are the days of the VHS, DVD and almost the Blu-ray as we’ve entered the entirely digital era of programming. So the real questions are: what are the differences, and which one is better for you? That’s what we hope to break down for you here in this easy-to-understand guide. What’s the difference in the process? These two terms often get mixed up by individuals and even organizations that don’t really understand the methodology, whether it be with video on demand or podcasts. So what is really the difference between them and what do they mean? Streaming is the process and technology used to carry content between computers and mobile devices through the internet. As it transmits this data, which is typically audio and video, it transfers it as a constant flow and allows the user on the other end to watch or listen virtually instantaneously. This entertainment data is delivered by a provider to you, the viewer or listener. The verb “stream” means to transfer digital data, such as audio or video material, in a continuous stream, especially for immediate processing or playback. Downloading is the process of transmitting a file from one computer to another. The verb “download” means to transfer data from a distant to a nearby computer, from a larger to a smaller computer, or from a computer to a peripheral device. Or in plainer terms, you download a file by requesting it from another computer or website somewhere on the internet and then receiving it on your end. The best part here is that once you’ve downloaded it, the data is on your device and you no longer have to worry about whether there’s an internet connection or not. Cases where downloading comes in handy include downloading a movie or television show from your streaming service, or downloading a podcast or music playlist to your device for driving to work or working out. Which is better, streaming or downloading? There are actually benefits to each of these data transfer approaches. Streaming gives you content on demand, but it depends on your internet connection’s speed and availability. Downloading gives you that nice portability for on-the-go digital consumption without the tether of being online. So which is better? Let’s break it down for the streaming side of things, with the benefits and pitfalls. - Determined based on your internet provider’s bandwidth and data caps - This can be good or bad depending on whether you watch your content in SD, HD or 4K - Provides your device with the freedom of not having to store the data on the device - More programs are available on the streaming service as opposed to downloading - You’re unable to stream if the streaming service or internet connection goes offline Now that we’ve seen what streaming has to offer, let’s see the download benefits and pitfalls.
- With downloading, you have the perks of offline viewing - Being able to watch the content whenever and wherever you want - Need to have sufficient storage space on the device - Can choose between Standard Definition (SD) or High Definition (HD) video quality - Again this depends on the amount of space on the device - Fewer programs are available for downloading - Some providers such as Hulu don’t even offer to download for their programming The best way to watch your shows and movies Regardless of your choice, the battery will be the main contender for how long you get to actually absorb the content unless you’re plugged in. This also really depends on your preferred methods of watching or where you’re at whether it makes sense. If you’re more into the quality of the video, want a larger selection, and have a great internet connection, then streaming may be your preferred method of viewing. However, if you’re on the go without an internet connection and just want a handful of programs to watch where quality isn’t really a concern for you, then downloading would be your best bet. My personal preferences lie with streaming because I’m almost always connected, but when I’m not, I own the digital copy too so I have that option. Either direction you decide to go, there really are plenty of options for you and your devices.
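A rough back-of-the-envelope sketch of the data-cap side of this trade-off: roughly how much data an hour of streaming consumes at a few quality levels. The bitrates are ballpark assumptions, not any particular service's specifications.

```python
# Approximate average bitrates in megabits per second (assumptions for illustration).
BITRATES_MBPS = {"SD": 3, "HD": 5, "4K": 25}

def data_used_gb(quality: str, hours: float) -> float:
    """Estimate data consumed by streaming at the given quality for `hours`."""
    megabits = BITRATES_MBPS[quality] * hours * 3600
    return megabits / 8 / 1000  # megabits -> megabytes -> gigabytes (decimal)

for quality in BITRATES_MBPS:
    print(f"{quality}: ~{data_used_gb(quality, 1):.1f} GB per hour")
```

The same numbers apply to a download of equivalent quality, of course — the difference is simply whether the data lands on your device once, up front, or flows continuously while you watch.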
<urn:uuid:4a6291b3-a82f-482c-8f49-aa8cbbd146ba>
CC-MAIN-2022-40
https://www.komando.com/tech-tips/streaming-vs-downloading-which-is-better/557559/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00427.warc.gz
en
0.934036
922
2.59375
3
A report by a research group from the University of Southampton says that the bandwidth capacity of current fibre optic broadband may soon be reached. Published in the journal Science, the report entitled ‘Filling the Light Pipe’ is the work of the Optoelectronics Research Centre at the University of Southampton and cites a “growing realisation” within the telecommunications industry that the end of the period of rapid growth in optical fibre capacity is now within sight. The report points to news coming from the Optical Fibre Communication Conference earlier this year in which several stakeholders had “reported results within a factor of 2 of the ultimate capacity limits of existing optical fibre technology.” The group calls for “radical innovation in our physical network infrastructure,” saying that research was needed to improve the physical properties of fibre optic cables and the amplifiers used to relay data over long distances, such as the submarine cables linking countries around the world. Ultimately the research improvements may come too late to avoid hitting what the group called the “capacity crunch”, adding that the future of broadband could result in an “increasing need to get used to the idea that bandwidth – just like water and energy – is a valuable commodity to be used wisely.” Image credit: Sandia National Laboratory.
<urn:uuid:e1617ea7-c3d7-4933-b63a-6af0c1a7cc56>
CC-MAIN-2022-40
https://www.pcr-online.biz/2010/10/18/fibre-could-hit-capacity-crunch-before-technology-improves/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00427.warc.gz
en
0.923872
261
2.546875
3
Welcome to this first article in a series of three, where I will cover the history of hacking. This first article will cover the years 1970 to 1990. Initially, the term “hacker” was used as a term of honour for someone who was able to come up with creative solutions to programming problems. However, it was during this period that the term hacker changed from something positive to something negative. This happened when a journalist got the term changed after an interview. There is a lot of disagreement about when exactly this happened, therefore I will refrain from giving a precise year. This is also the period when you start hearing the name Kevin Mitnick for the first time. He has been described far and wide in many places, including in a movie. 1971 is the year when the concept of phreaking, hacking the telephone system, is born in its modern form. It happens when a Vietnam veteran named John Draper, also called Captain Crunch, discovers that a small whistle that comes with the Captain Crunch breakfast can be used to get free long-distance calls. The whistle emits a tone of 2600 hertz, and when it is sent down the phone line, all long-distance calls are free! Later, Draper builds what becomes known as the Blue Box, a small piece of electronics that can automate the trick of getting free phone calls. The Hacker magazine that appears in 1984 under the name 2600 – The Hacker Quarterly got its name from this frequency. The founders of Apple Computer, Steve Jobs and Steve Wozniak, earn a little extra money during their studies by selling these ‘Blue Boxes’. The blueprints for this Blue Box are made public in a magazine called Esquire Magazine in an article called Secrets of the little Blue Box. After this is done, cheating with the telephone system becomes very common in the United States. One of the reasons is of course that it is very easy, but also because at the time there was an additional tax on long distance calls that was used to finance the Vietnam war. Therefore, it was seen as a sacred duty to cheat as much as possible to bring an end to the war. In 1972, the first computer underground magazine is founded, it is called YIPL, which stands for Youth International Party Line. It is later renamed TAP magazines for Technical Assistance Program. The magazine was founded by Abbie Hoffman and existed until 1984. Hoffman was a mildly colourful character who for a long period of his life lived on the run from the authorities under an alias. However, it was not because of computer or phone fraud, but because of a conviction for trafficking in Cocaine. It is also here in the 70s that the first Bulletin Board Systems (BBS’s) appear. Some of the best known of the kind from the underground were Catch-22 and Sherwood. You may be surprised that we have only talked about phone hacking so far, but there is a good reason for that. Hacking as we recognize it today only appeared in the early 80s. In 1982, the first hackers as we know them today are arrested. This is the group that was known as the 414 Gang. During nine days in 1982, the six members break into 60 computer systems, these systems extend from a computer in a cancer research centre to military computers in Los Alamos. Unfortunately for them, their skills are not good enough to camouflage where the calls to these computers are coming from and they are arrested. The hacker concept and methodology came to the surface in 1983 with the film War Games. 
How many people have been inspired by that film is unknown, but there are at least quite a few who got their first modem after seeing it, including myself. It is also in 1983 that the responsibility for fraud with credit cards and computers is given to the Secret Service. One of the most active BBS networks was Plovernet. Two people who were active on this network must be mentioned right away, one is Emmanuel Goldstein, the founder of 2600 magazine, and Lex Luthor who founded the Legion of Doom (LoD) in 1984. The name came from the Superman series where it is the name of the group of supervillains. There is another group of interest that is founded in the same year, the Chaos Computer Club (CCC) in Germany. We’ll come back to CCC later, for now we’ll stick with LoD. One of the reasons LoD became as strong as it did was that the members each had their own area of expertise. So, if someone got stuck in a hack, he was sure to find someone in the group who had answers to the challenges. In the beginning, most of the members of the group were exclusively Phreakers, phone hackers, but as the systems that controlled the phone network became more and more advanced and computerized, they gradually became more like the hackers as we know them. Lex Luthor himself was an expert in COSMOS, i.e. Central System for Mainframe Operations. LoD was so active in the 80s that the authorities were of the belief that virtually everyone in the computer underground had a connection to LoD. The LoD then published their own magazine, the LoD Technical Journal. One of the hackers in LoD, The Mentor, wrote a new version of hacker ethics in 1986, and a part of it was used in the movie Hackers from 1995 with Angelina Jolie in one of the main roles. It was called ‘The Hacker Manifesto’, it is too long to reproduce here, but a link to it can be found at the end of this article. There are many stories and legends surrounding LoD, and the later spin off, Masters of Deception (MoD). A detailed history can be found in the book by Bruce Sterling The Hacker Crackdown. We will dwell here on one of them, namely the story of the E911 document. As the name suggests, it is about the American emergency phone number. One of the most ELITE of the hackers in LoD was Prophet. Prophet was the author of the file ‘UNIX use and Security from the ground up’. In 1986 he had been convicted of illegal intrusion into computer networks, and in that connection, he had gotten rid of most of the material he had about hacking and phreaking. Unfortunately for him, he didn’t resist the temptations of Cyberspace, and in the fall of 1988, he was once again at work with some of the most acidic systems on the web, along with two other LoD members Leftist and Urville. At the beginning of September ’88, he breaks into one of BellSouth’s computer systems AIMSX. AIMSX was an internal business network for BellSouth where employees could store e-mail documents and other goodies. Since AIMSX had no dialup lines connected, the system was considered invisible and there was therefore no security worth talking about. There were in effect no requirement for passwords for the individual users on the system. Prophet broke into the system by posing as one of the recognized users. He was on the system approximately ten times and one of the times he copied one of the documents on the system to have a trophy from his intrusion. 
This document turned out to be about how the administration of the 911 system was done and was not a document that BellSouth was interested in making public. And it wasn’t, at all, information that they thought hackers should have access to! Prophet’s copying of this document would prove to be the beginning of the end for the LoD, becoming the very heart of the lawsuits that resulted from the investigation launched by the authorities in the late 80’s and culminating in 1990 after AT&T’s long-distance network goes down on January 15th. January 15 is one of the most sensitive holidays in the United States, it is Martin Luther King Day. That the network goes down on this exact day is a coincidence, and is due, it turns out, to a software error. Back to Prophet. We have now reached February 1989, and Prophet now believes himself safe enough after his AIMSX hack that he sends a copy of the document to Knight Lightning who is then the editor of the underground magazine Phrack. It is made public in Phrack No. 24 on 25 February 1989 under the synonym The Evesdropper. In the 80s, there was not widespread knowledge of computers in police circles, saying words like Ram, CPU, UNIX and the like to police people did not get a big reaction. But saying hackers tampered with the 911 emergency call system was a sure-fire way to get a reaction. Although that was not the point of this article, we quickly slip into 1990. Three days after the breakdown on January 15, four police agents are stationed in Knight Lightning’s room. One of them named Timothy Foley accuses him of being responsible for the crash three days earlier. Knight Lightning was horrified at the accusation. Although he was not the cause of the crash, he was familiar with hackers who boasted that it was something they could do with their arms behind their backs. After being confronted with the E911 document, he began to cooperate with the authorities. Knight Lightning was convicted at a trial in July of the same year. His was just one in a series of cases targeting members of the LoD. Now we jump from the hackers in the US to a group of hackers in Australia known as The Realm. A long more detailed story about them, as well as some hackers from the USA and England, can be found in the book Underground. The Australian hackers’ specialty was X.25 networks. They got their access to the international X.25 network through the then government-owned Overseas Telecommunications Commission (OTC). It required that the hackers gain access to an account on the computer that controlled the settlement of the traffic that went out of Australia and out onto the network. This happened in several ways but one of the most widespread was to call a company that had access to the system and trick the password out of them. The policy on the system was that usernames consisted of the first three letters of the company name and three numbers, NOR001, NOR002 etc. So, everyone was able to guess multiple usernames on the system, then it was just a matter of calling the company and coming up with one song from the warm countries, and voila, global network access. One of the things that signalled ELITE in the underground at that time was getting on ALTOS. ALTOS was a computer in Germany that had an early form of live chat of the same kind as IRC. Also, the hackers from LoD, CCC and hackers not affiliated with any particular group came there. 
The reason the Australian hackers had such a high profile was not only their expertise with X.25 networking, but also a tool they had called DEFCON. It was a small program designed to automate the scanning for interesting connections on the X.25 network. DEFCON was guarded very closely by the Australians. It was not given to anyone outside of The Realm. Not even Erik Bloodaxe, who was one of the stars of LoD, got it when he asked. One of the hackers in The Realm was Force. Force was a very methodical hacker; all his discoveries on the X.25 network were very carefully put into plastic folders and binders in his room. One day while he is scanning with DEFCON, he connects with a new computer that starts broadcasting numbers on his screen. He has done nothing but establish a connection, yet the computer happily starts typing what turns out to be credit card numbers onto his screen. For three hours the computer continues to send out credit card numbers, and he could see from the header on some of the cards that they were registered to CitiSaudi, the Arab branch of one of the world’s largest banks, namely Citibank. When he goes through the data that the computer has spewed out, he can see that the last part of the data not only contains credit card numbers, but also the names of some of the card holders. In addition, there was an overview of which card holders had which credit limits; one of them had a limit of up to 5 million dollars! There was also an overview of the transactions made on these cards: restaurant visits, a Mercedes bought in cash, and one person who had visited a brothel. All Force could think at that time was that he would have free network connections for the rest of his life. At that time in the 80s, connections to networks were quite expensive, therefore many of the hackers were involved in what is called carding, i.e. abusing credit cards to avoid having to pay for their network connections themselves. It was considered a little beneath them, but OK when used for nothing more than connecting to the international BBSs. Unfortunately for them, it is also something that attracts the attention of the authorities, especially in the United States. In January ’89 there is an article in The Australian, where the headline reads: Citibank hackers score $500,000. It was something that attracted attention, both among the ‘normals’ but especially in the underground, who thought they were above this kind of crime. There is doubt as to whether it was the hackers from The Realm who did it, but a lot of money actually disappeared from accounts at Citibank. You can hardly expect anything else when their computer spits out accounts when you connect to it. At the time, Australia did not have a law against hacking, but this hack and several others caused the authorities in the United States to put a lot of pressure on Australia to get them to put an end to the hackers’ ravages of American computer systems. This caused the Australian government to introduce laws to be used against hacking in general and the Realm hackers in particular. After playing cat and mouse for a little over a year, they are all eventually put behind bars, only to be given very lenient sentences at the beginning of ’91. Some of them get away with it by saying that they are addicted to hacking. Be that as it may, there has not been such an active underground in Australia since. Another one of the hackers in The Realm was known as Mendax, better known today as Julian Assange, founder of Wikileaks.
Here at the end, we slip right back to Germany to just quickly talk about an alliance between German hackers and the Russian KGB in 1988. The most famous of these hackers were Pengo and Hagbard who were loosely associated with the CCC. They both came from West Berlin and were part of a group that broke into computers at the Lawrence Berkeley Laboratory (LBL) in the USA, and from there moved on to the ARPAnet. From there they moved on to what was known as MILNET which was the military branch of ARPAnet. From MILNET, they get access to parts of the US Department of Defence’s network. The system administrator at LBL is a certain Clifford Stoll, who will be known to some. He spends months pursuing these hackers, and thereby discovers that what they are looking for on the military networks is classified information. At that time, they managed to download the blueprints for the space shuttle and sell them on to the Russian intelligence service. They are eventually caught by putting some fake files on the network, it made them ask for the information via regular mail, and even though the recipient address was not directly to themselves, it was enough to bring them down. The sentences they received ranged from 14 months up to 2 years. Later, Hagbard committed suicide in a forest, or he was murdered – there is no agreement on that. In any case, it has become an integral part of the legends of the underground. The whole story can be read in the book written by Clifford Stoll, it is called The Cuckoo’s Egg and is by any standard an exciting read. The Hacker Manifesto: http://www.mithral.com/~beberg/manifesto.html Underground Book: https://www.wikiwand.com/en/Underground_(Dreyfus_book)
<urn:uuid:b8bf9f33-271a-4093-b005-d28ad6d275d7>
CC-MAIN-2022-40
https://cybersecurity-magazine.com/hackers-history-part-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00627.warc.gz
en
0.98478
3,348
2.703125
3
Welcome back to our Security 101 series. We talk time and time again about making 2016 the year of multi-factor authentication. It’s really one of the best things you can do to help secure user logons. But, since it will require change and budget, and we all know those are two things the company hates, let’s talk about passwords and what makes the difference between good ones and bad ones. Good passwords don’t exist in dictionaries (of any language). Passwords should appear random, or at least not easily mapped to a common word, proper name, or anything else that might exist on a list used by dictionary attacks. “P@ssw0rd” may map to most organizations’ password requirements, as it includes all four types of character (upper and lower case letters, numbers, and punctuation) and is at least eight characters long, but it appears on practically every brute force tool’s dictionary list too. Simple substitutions like swapping in numbers or punctuation for letters can help make a password more complex, but you have to balance that with what is common. Purely random strings are more complex and difficult to brute force, but of course they are also more likely to be written down. You should also not include anything that indicates the system for which the password is set, so don’t use the word money in your password for your bank. Without running passwords through a checker, the best way to prevent dictionary words is to require complexity in the password policy. See below for more details on that. Longer is better. That’s pretty straightforward. The longer the password minimums, the greater the number of possible combinations of characters an attacker must cycle through to find a match. If you use only letters in a non-case-sensitive password, an 8-character password has roughly 209 billion possible combinations (26^8) and would take the most powerful supercomputer or distributed attackers less than four minutes to crack. A single modern machine might need 35 minutes to do the same. But if you made that same password, with only 26 possible characters to choose from, 15 characters long, you would have 1.6 sextillion possible combinations. A supercomputer would need 53K years to crack that, and a single computer would need almost half a million years to do the same. The password policy should set a minimum based on what meets the security needs of your organization and the sensitivity of the data, without being too onerous to users. 12 characters is a good compromise for most needs. One tip from Edward Snowden himself? Think of passphrases rather than passwords. But of course, passwords are case sensitive, and there are far more characters available on a standard QWERTY keyboard than just letters. If you include upper case and lower case letters, numbers, and punctuation, there are 96 possible characters on a keyboard that can be entered just using a standard key with or without SHIFT. Using repeated characters compromises complexity, so don’t use the same character or even consecutive characters in a password. Passwords should use at least three of the four possible character sets, and the password policy should enforce that. Passwords should be changed with some regularity and frequency. 30 to 60 days is a pretty good range for most needs, but if you are in a higher security setting, you may want to force changes even more frequently. For customers, you need to find a good balance between security and usability.
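A quick sketch of the keyspace arithmetic behind those numbers: combinations grow as (alphabet size) to the power of (length), and the time to exhaust them scales with the attacker's guess rate. The guess rate below is an arbitrary assumption for illustration.

```python
def keyspace(alphabet_size: int, length: int) -> int:
    """Number of possible passwords for a given alphabet and length."""
    return alphabet_size ** length

def years_to_exhaust(combinations: int, guesses_per_second: float) -> float:
    """Worst-case time to try every combination, in years."""
    return combinations / guesses_per_second / (3600 * 24 * 365)

# 26 lower-case letters vs. the ~96 characters typable on a standard keyboard.
for chars, length in [(26, 8), (26, 15), (96, 12)]:
    total = keyspace(chars, length)
    # Assume a hypothetical attacker testing one billion guesses per second.
    print(f"{chars}^{length} = {total:.2e} combinations, "
          f"~{years_to_exhaust(total, 1e9):.2e} years at 1e9 guesses/s")
```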
A customer who shops with you once every couple of months and has to change their password every time may soon decide to shop elsewhere. You may want to enforce once a year for them, or at least suggest that they change their password but not require it. Passwords need to be unique, both on the system they are set within, and across systems. You should not use the same password on more than one system, application, or social network, and you should not use the same password on the same system when prompted to change it. The password policy should require a new password with the change interval and remember at least the previous ten to ensure users are not cycling through the same password again and again. Passwords must never be shared, ever. Administrators and support personnel must understand that there is never a situation where they should ask a user for their password, and end users must be trained that they should never give out their password to anyone, ever. They should also ensure that they never write passwords down. Of course, a long and complex password that must be changed regularly and cannot be used on more than one system begs to be forgotten, or worse, written down, so ensuring users can remember passwords will help minimize that. Teach them to use passphrases that might mean something to them that makes it easier to remember, but won’t be readily guessable by someone who has access to their social networking information. For example, if you have an account at Amazon, think about something you only get there, or the first thing you ever got there, and use that as the basis for your password. I always buy my coffee there, so I create a password based on that-“IBuyc0ffeeHere.” Not including the quotes, that is a password that includes all four possible character types, is 15 characters long, and memorable. Of course, now that I shared that with you, I have to change it! While using multi-factor authentication is the better way to go, when you just don’t have that option, creating, using, and enforcing good password practices can help with security. Use the guidelines above to help create a good password policy in your network, and to teach your users good password practices.
<urn:uuid:7bdd62ff-c187-49aa-8e98-bf15e97ba76a>
CC-MAIN-2022-40
https://techtalk.gfi.com/what-makes-a-good-password/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00627.warc.gz
en
0.950384
1,191
3.3125
3
Wi-Fi Spatial Streaming Explained Wi-Fi spatial streaming or multiplexing (often shortened to SM or SMX) is a transmission technique used in MIMO wireless communication to transmit independent and separately coded data signals, so-called streams, from each of the multiple transmit antennas. This results in the space being reused, or multiplexed, more than once. On each band, the Wireless-N standard is available in three primary configurations, depending on the number of spatial streams being used: single-stream (1x1), dual-stream (2x2) and three-stream (3x3), offering cap speeds of 150Mbps, 300Mbps and 450Mbps, respectively. This in turn creates three types of true dual-band routers: N600 (each of the two bands offers a 300Mbps speed cap), N750 (one band has a 300Mbps speed cap while the other caps at 450Mbps) and N900 (each of the two bands allows up to 450Mbps cap speed). 802.11ac: Sometimes referred to as 5G Wi-Fi, this latest Wi-Fi standard operates only on the 5GHz frequency band and currently offers Wi-Fi speeds of up to 2167Mbps (or even faster with the latest processors) when used with a quad-stream (4x4) setup. The standard also comes in 3x3, 2x2 and 1x1 configurations that cap at 1,300Mbps, 900Mbps and 450Mbps, respectively. Technically, each spatial stream of the 802.11ac standard is about four times faster than that of the 802.11n standard, and therefore is much better for battery life since it doesn’t need to work as hard to deliver the same amount of data throughput. In real-world testing so far, with the same number of streams, 802.11ac has been found to be about three times the speed of Wireless-N, which is still significant. The real-world sustained speeds for wireless standards are always much lower than the theoretical speeds. This is partly because the testing had been conducted in what are essentially lab conditions, where the environments are clean and completely free from interference. On the same 5GHz band, 802.11ac devices are backward-compatible with Wireless-N and 802.11a devices. While 802.11ac is not available on the 2.4GHz band, for compatibility purposes, an 802.11ac router can also serve as a Wireless-N access point. All 802.11ac chips on the market support both the 802.11ac and 802.11n Wi-Fi standards.
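The per-band caps quoted above are essentially the per-stream rate multiplied by the number of spatial streams. A small sketch of that arithmetic, assuming 150Mbps per 802.11n stream and roughly 433Mbps per 802.11ac stream at 80MHz (which marketing materials round up to 450/900/1300):

```python
# Per-spatial-stream rate caps in Mbps (802.11ac figure assumes an 80 MHz channel).
PER_STREAM_MBPS = {"802.11n": 150, "802.11ac": 433}

def cap_speed(standard: str, streams: int) -> int:
    """Aggregate PHY-rate cap for a given number of spatial streams."""
    return PER_STREAM_MBPS[standard] * streams

for streams in (1, 2, 3):
    print(f"802.11n {streams}x{streams}: {cap_speed('802.11n', streams)} Mbps")

# A true dual-band N900 router is simply 450 Mbps (3x3) on each of its two bands.
print("N900 combined:", cap_speed("802.11n", 3) * 2, "Mbps")
print("802.11ac 3x3:", cap_speed("802.11ac", 3), "Mbps")  # ~1300 Mbps after rounding
```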
<urn:uuid:ed0fe0a7-ff61-41e3-b997-fab206b4db87>
CC-MAIN-2022-40
https://www.digitalairwireless.com/articles/blog/wi-fi-spatial-streaming-explained
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00627.warc.gz
en
0.928527
537
3.078125
3
After every fiber optic cable installation or repair, you need to test for continuity and end-to-end loss. You may also need to troubleshoot installed fiber that’s not performing up to expectations. To check fiber, you need to test it with a light source and a power meter, then compare your results with an estimate of what a reasonable loss for that cable or link is. This estimate is called a loss budget and is calculated using typical losses expected for each part of the cable: the fiber itself, connectors, and splices, if any. If the measured loss exceeds the loss budget by a significant amount, there is a problem with the cable, most often at the connectors or a splice rather than with the cable itself. A fiber light source is used to inject light into a fiber optic cable for the purpose of testing it. They come in two basic varieties: light emitting diodes (LEDs) and laser diodes. They’re further differentiated by the wavelength they produce and the type of cable they test. LEDs are low cost, slower speed, easy to use, multimode-only, and have a wide output pattern. Because LEDs produce a less concentrated light than lasers and have a much lower power output than lasers, they’re difficult to couple into fibers, limiting them to multimode fibers. LEDs have less bandwidth than lasers and can achieve a maximum throughput of 1 Gbps. Laser diodes are higher cost and faster speed, allow single-mode or multimode, and have a narrow output pattern. Lasers can achieve throughput up to and beyond 10 Gbps. The three kinds of lasers in use for fiber optic transmission are Fabry-Perot lasers, distributed feedback (DFB) lasers, and vertical cavity surface-emitting lasers (VCSELs). Fabry-Perot lasers are the most versatile, operating over both multimode and single-mode cable. DFB lasers are used for very long-distance applications over single-mode fiber. VCSELs can carry very high speeds. They’re usually used only for multimode fiber, although they can also support 1310 single-mode fiber. Because the light source used for testing should work with the fiber being tested, as well as the power meter, it’s important to read the light source’s specifications to ensure that it works with the cable you have (multimode or single-mode) and the wavelength you’re using. Although fiber optic light sources are usually too low in power to cause much eye damage, some high-powered sources can cause retina damage and blind spots. Never look directly into a light source or into the end of a fiber cable unless you’re sure it’s dark. Always check fiber with a power meter or traffic identifier before looking into it.
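A loss budget is simply the sum of the expected losses of each component in the link. The sketch below shows the calculation; the per-component loss values are typical figures used purely for illustration, so substitute the values from your own cable specifications and the standards you work to.

```python
# Illustrative typical losses -- check your own component specs and standards
FIBER_LOSS_DB_PER_KM = {"multimode_850nm": 3.5, "multimode_1300nm": 1.5, "singlemode_1310nm": 0.4}
CONNECTOR_PAIR_LOSS_DB = 0.75   # per mated connector pair
SPLICE_LOSS_DB = 0.3            # per splice

def loss_budget(length_km, fiber_type, connector_pairs, splices):
    """Estimate end-to-end loss for a fiber link in dB."""
    fiber_loss = length_km * FIBER_LOSS_DB_PER_KM[fiber_type]
    return fiber_loss + connector_pairs * CONNECTOR_PAIR_LOSS_DB + splices * SPLICE_LOSS_DB

# Example: 1.5 km of multimode fiber tested at 850 nm, 2 connector pairs, 1 splice
budget = loss_budget(1.5, "multimode_850nm", connector_pairs=2, splices=1)
print(f"Loss budget: {budget:.2f} dB")

# Compare the measured loss from your power meter against the budget
measured_db = 8.6
if measured_db > budget + 1.0:   # allow some margin before flagging
    print("Measured loss exceeds budget -- suspect a connector or splice")
```

If the measured value lands well above the budget, the article's advice applies: look first at the connectors and splices rather than the fiber itself.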
<urn:uuid:de8a49d8-8482-46cb-bdbb-3651f46be344>
CC-MAIN-2022-40
https://www.blackbox.com/en-ca/insights/blackbox-explains/inner/detail/fiber-optic-cable/installation-of-fiber-optic-cables/fiber-light-sources
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00627.warc.gz
en
0.923918
588
3.078125
3
In this example, Entuity identifies the true cause of a problem as being the failing network element (IP address) closest to the Entuity server. Entuity identifies the upstream point by first recognizing the traceroute path taken to a device, but this only includes the inbound IP addresses, e.g.:
- hop 1 - 10.44.1.1
- hop 2 - 10.45.1.2
- hop 3 - 10.46.1.1
To derive the outbound IP addresses:
- Entuity identifies the IP addresses upstream of the switch, starting from 10.46.1.1.
- Entuity identifies its upstream node by finding the device associated with the IP address of the preceding hop (i.e. 10.45.1.2 on router-2).
- Entuity then searches through the list of all other IP addresses on that device to find the one that is in the same subnet as the downstream hop (i.e. 10.46.1.2 on router-2 is in the same subnet as 10.46.1.1 on switch-1). This IP address is then taken as the one to fill the gap between hop 2 and hop 3.
A similar procedure is applied to fill the gap between hop 1 and hop 2.
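The gap-filling step is essentially a subnet match. Here is a minimal Python sketch of that logic; the device inventory, interface addresses, and /24 prefix lengths are illustrative assumptions, not Entuity's actual data model.

```python
import ipaddress

# Hypothetical inventory: device -> interface addresses (prefix lengths assumed /24)
devices = {
    "router-1": ["10.44.1.1/24", "10.45.1.1/24"],
    "router-2": ["10.45.1.2/24", "10.46.1.2/24"],
    "switch-1": ["10.46.1.1/24"],
}

# Inbound addresses seen on the traceroute path, in hop order
inbound_hops = ["10.44.1.1", "10.45.1.2", "10.46.1.1"]

def owner_of(ip):
    """Return the device that owns a given inbound IP address."""
    for device, addrs in devices.items():
        if any(ipaddress.ip_interface(a).ip == ipaddress.ip_address(ip) for a in addrs):
            return device
    return None

def outbound_toward(upstream_device, downstream_ip):
    """On the upstream device, find the interface in the same subnet as the downstream hop."""
    target = ipaddress.ip_address(downstream_ip)
    for a in devices[upstream_device]:
        iface = ipaddress.ip_interface(a)
        if target in iface.network and iface.ip != target:
            return str(iface.ip)
    return None

# Fill the gap between each pair of consecutive hops
for prev_ip, next_ip in zip(inbound_hops, inbound_hops[1:]):
    upstream = owner_of(prev_ip)               # e.g. 10.45.1.2 belongs to router-2
    outbound = outbound_toward(upstream, next_ip)
    print(f"{prev_ip} ({upstream}) -> outbound {outbound} -> {next_ip}")
```

Running this with the example data reproduces the article's result: 10.45.1.1 fills the gap between hop 1 and hop 2, and 10.46.1.2 fills the gap between hop 2 and hop 3.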
<urn:uuid:9c4c0752-1f30-4735-b6b7-7f8cd6bfd72d>
CC-MAIN-2022-40
https://support.entuity.com/hc/en-us/articles/360004562958-Identifying-upstream-availability
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00627.warc.gz
en
0.909159
273
2.703125
3
A database group is a logical entity that contains the databases that need to be backed up. You can back up specific databases at a particular time or frequency by adding the databases to a user-defined database group for DumpBasedBackupSet. You cannot create a user-defined database group for FSBasedBackupSet; it can only contain a default database group.

Before You Begin
- If you have not done so, create a server plan. This plan determines when the software automatically backs up the data files. For more information, see Creating a Server Plan.
- If you have not already done so, add an instance for the PostgreSQL database.

1. From the navigation pane, go to Protect > Databases. The Instances page appears.
2. Click an instance. The instance page appears.
3. In the Backup sets section, click DumpBasedBackupSet. The DumpBasedBackupSet page appears.
4. Click Add database group. The Add database group dialog box appears.
5. In the Database group name box, type the database group name.
6. In the Number of data streams box, type the number of streams that the software uses for backups.
7. To include the object list in the backup content for dump-based backups, select the Collect Object List During Backup check box.
8. From the Plan list, select a server plan.
9. To add the databases that you want to include in the database group, select the check boxes of the databases.

Note: For a dump-based backup set, associating a database group with a backup plan will only associate a storage policy, not a schedule policy.
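When deciding which databases to place in a dump-based group, it can help to list the non-template databases on the instance first. The snippet below is a small convenience outside of Commvault itself, using the well-known psycopg2 driver; the connection details are placeholders.

```python
import psycopg2

# Placeholder connection details for the PostgreSQL instance being protected
conn = psycopg2.connect(host="pg-server.example.com", port=5432,
                        dbname="postgres", user="backup_admin", password="secret")

with conn, conn.cursor() as cur:
    # Template databases are normally excluded from dump-based groups
    cur.execute("SELECT datname, pg_database_size(datname) "
                "FROM pg_database WHERE NOT datistemplate ORDER BY datname;")
    for name, size_bytes in cur.fetchall():
        print(f"{name}: {size_bytes / 1024 / 1024:.1f} MiB")

conn.close()
```

Listing the databases and their sizes in this way also gives a rough feel for how many data streams are worth configuring for the group.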
<urn:uuid:c3ee52af-1b02-4e30-b8e2-2c2d64259655>
CC-MAIN-2022-40
https://documentation.commvault.com/v11/essential/87532_adding_postgresql_database_group.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00627.warc.gz
en
0.735699
340
2.546875
3
Quiz reveals awareness of the growing e-waste problem worldwide People's perceptions of e-waste are changing, and almost all consumers are aware that e-waste poses a danger to human health, according to results from a global quiz. Almost 2,000 people from 50 countries participated in the quiz ahead of International E-Waste Day on 14 October. IT sustainability organisation TCO Development created the quiz to understand what people think about e-waste. It found that 95% of those who participated in the quiz understand that e-waste is a risk to human health. TCO Certified adds that e-waste contains hazardous substances that can leak into the natural environment, which in turn can affect human health if e-waste is not properly recycled. However, 18% believe that e-waste is growing because people are finally starting to recycle. This, TCO Certified says, is not correct. “In fact, the increased e-waste volumes is mainly fuelled by higher consumption rates of electronics, short life cycles, and few repair options,” the company states. People do, however, understand the global cost of e-waste, with almost half of respondents correctly stating that the estimated annual value of raw materials contained in e-waste amounts to EUR 55 billion (SGD $86.9 billion). Further statistics from TCO Certified show that in 2019, the world generated 53.6 million metric tonnes of e-waste - that could reach 74 million tonnes by 2030. TCO Development marketing and communication director Gabriella Mellstrand says, “Talking about e-waste is really important to help buyers take a more holistic approach to their electronic goods. “The single most important thing you can do for the environment, both in terms of e-waste and reducing greenhouse gas emissions, is keeping IT products in use longer. For example, adding two years to a notebook's life can cut total emissions by up to 30 percent.” Gabriella Mellstrand continues. TCO Certified says that the way e-waste is being dealt with now is actually having a negative economic impact on the world. Electronic products actually have many valuable and scarce parts that can meet future product needs. WEEE Forum director-general Pascal Leroy adds, “With International E-Waste Day we hope that more people will understand that it is important to reuse, repair, resell, or dispose of their used products responsibly. International E-Waste Day aims to raise awareness about promoting the correct disposal of e-waste throughout the world. Last year, more than 112 organisations from 48 countries backed the initiative to encourage e-waste recycling. “Be part of the solution. Keep your IT products in circulation for as long as possible by use, re-use, repair or reselling the product. If none of this is possible, make sure the product is responsibly recycled,” concludes Mellstrand.
<urn:uuid:66194913-17ed-482a-a690-cb1763d5a251>
CC-MAIN-2022-40
https://itbrief.asia/story/quiz-reveals-awareness-of-the-growing-e-waste-problem-worldwide
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00627.warc.gz
en
0.939736
618
3.265625
3
I'm sure we've all seen the commercials about cars with built-in wireless networks. Well, this is just the first step toward an even bigger integration of technologies to make the roads safer for everyone. Recently, General Motors started testing a technology that will allow cars to "talk" to each other and provide feedback, both visually and physically, to the driver.

Have you ever come to a blind turn and checked as best you could for traffic? It looks clear, but as soon as you start pulling out to turn, you are just missed by a vehicle going much too fast. This is the main reason for this new technology. In an article in MIT's Technology Review, this exact scenario was staged in the GM parking lot. The test driver would weave through an obstacle course and then speed through a straightaway with a blind 2-way stop obstructed by hedges on both sides. Coming up to the 2-way stop, the car detected another vehicle approaching, going much too fast to stop, and immediately began providing feedback to the driver. The dashboard flashed a collision light, the seats began to vibrate, and finally an audio cue warned of an impending collision. The test driver stopped just in time to see the speeding vehicle become a blur in his vision and run through the 2-way stop. This is an all-too-common occurrence on today's roadways. The adoption of this technology could prevent a huge number of crashes and save many lives once it is fully integrated.

The U.S. Department of Transportation is so impressed with the prototypes that it is already drafting rules to eventually mandate this technology in all new cars. The technology is still a few years out, but GM is committed to making it a standard feature in its 2017 models. This is a great leap forward in protecting the roadways of the world, and it will create a great opportunity for IT support companies to expand into more than just computer networks. There may come a day when you won't just need a mechanic to work on your car but also an IT specialist, and we hope that when that day comes you keep Frankenstein Computers in mind.

"More than five million crashes occur on U.S. roads alone every year, and more than 30,000 of those are fatal. The prospect of preventing many such accidents will provide significant impetus for networking technology." - MIT Technology Review article

"NHTSA researchers concluded that the technology could prevent more than half a million accidents and more than a thousand fatalities in the United States every year." - MIT Technology Review article

"The technology stands to revolutionize the way we drive" - John Maddox
<urn:uuid:d23d3bf5-0bdf-4191-88b6-ceb4a6349b95>
CC-MAIN-2022-40
https://www.fcnaustin.com/car-to-car-communication/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00627.warc.gz
en
0.964518
551
2.515625
3
With global business having been shut down due to the pandemic and lockdown, normal sources of supply have been disrupted, and sourcing has become unreliable. Many firms are struggling to meet baseline demand due to insufficient supply, and others are trying to streamline to meet unprecedented spikes in demand for certain products. Supply chains have never been more complex, and transparency and agility within them have never been more vital. We need new ways to analyse and manage highly complicated inter-dependencies in order to ensure resilience and continuity of supply chains.

Manufacturers and shippers of goods have always needed a highly scalable way to manage the vast volumes of serial numbers, supplier and facility details, certifications, documents and complex regulatory requirements. However, as The Big Reset begins, organisations must also adapt to unprecedented amounts of change that look to be a new normal. In parallel, businesses will need to assure consumers that they can continue to deliver products that meet standards and maintain full compliance with international regulations as well as sustainability, social responsibility, and quality targets.

As Frédéric Daniel, CTO of supply chain technology experts Transparency-One, has noted, "The more complicated a supply chain is — the more components, suppliers, and facilities involved — the more vulnerable it is, regardless of vertical or sector. Supply chains that have multiple tiers, are heavily globalised, and/or involve several components or stages of transformation are inherently more complicated and at greater risk of being impacted by a pandemic or other crises."

Supply chains are complex networks of interdependent processes and components that must work in concert to meet demand. As businesses are forced to make real-time adjustments, they may source components from suppliers who are difficult to vet or take risks that could put their entire operation in jeopardy. For example, in closely regulated industries such as pharmaceuticals, suppliers must be able to identify where any individual medicine item is at any given time. In the event of a safety issue, it is imperative that items or batches can be quickly removed from the market to minimise the risk to consumers, and that the removal is intelligently targeted to minimise the cost of redress or a widespread product recall. The technical challenge of meeting these targets can be onerous.

That being said, Daniel continues, "a supply chain's vulnerability largely depends on how prepared the business is to deal with a crisis. Businesses who have visibility into their supply chains and know who is involved, where they are located, how their products are potentially impacted, and what alternate sources are available to them are much better equipped. Those who lack this knowledge are more vulnerable because they do not have the information needed to make informed decisions."

To have the flexibility to deal with a crisis requires visibility into complicated relationships spanning thousands of product lines containing even more subcomponents, produced across multiple sites and then sold into hundreds of markets to millions of consumers. This is not well represented in tables and rows: keeping track of all these items, let alone analysing them, exceeds the scope of the old standard ways businesses have organised supply chain data, specifically relational database systems (think Oracle or Microsoft SQL Server).
Consider the numbers of unique serial codes, which alone can run into billions; CIOs need not only a highly scalable way to manage the vast volumes of serial numbers but, more importantly, the ability to quickly analyse all relationships between them and everything else in their supply chain.

Graph technology is used to manage and analyse complex networks like supply chains because of its ability to record data interdependencies at scale and use relationships to find patterns. Graphs offer a tremendous advantage over traditional relational databases, maintaining high performance even with vast volumes of data. Instead of using relational tables, graph databases are purpose-built to analyse interconnections in data, and are closely aligned with the way humans think about information. Graph algorithms use relationships to understand the structure of data and uncover meaningful shapes that can be used to infer behaviour. Graphs are practically unmatched at analysing the relationships between large numbers of data points. Such a relationship-centric approach enables the manufacturer to better manage, read and visualise their data, giving them a truly trackable and in-depth picture of all products, suppliers and facilities and the relationships between them. Using a graph database, manufacturers can typically demonstrate query response speeds 100 times faster than those enabled by relational databases. Graph analytics can answer questions that are intractable without the use of relationship-based algorithms. That sort of response time and insight is critical during this crisis and will continue to be relevant as organisations work to mitigate the supply chain risk that has been exposed.

Supply chains need to build resilience to comply with the latest global regulations on traceability and to manage time-critical product recalls, as well as to manage the surges and drops in demand that the pandemic has heralded. Graph database and analytics technology is a great enabler and an effective solution for organisations that need to work with complex supply chains and provide the level of highly granular governance and sourcing capability our global economy demands.
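To make the relationship-centric idea concrete, here is a minimal sketch using the open-source NetworkX library rather than a production graph database; the suppliers, facilities, and batch identifiers are invented purely for illustration.

```python
import networkx as nx

# Directed graph: edges point downstream, from source material to finished product
g = nx.DiGraph()
g.add_edge("supplier-A", "facility-1", item="raw material lot RM-881")
g.add_edge("supplier-B", "facility-1", item="raw material lot RM-992")
g.add_edge("facility-1", "facility-2", item="component batch C-17")
g.add_edge("facility-2", "distributor-EU", item="finished batch F-03")
g.add_edge("facility-2", "distributor-US", item="finished batch F-04")

# Recall scenario: a problem is found at supplier-B.
# Which downstream facilities, distributors and batches are potentially affected?
affected = nx.descendants(g, "supplier-B")
print("Potentially affected nodes:", sorted(affected))

# Trace every path from the suspect supplier to each distributor
for target in ("distributor-EU", "distributor-US"):
    for path in nx.all_simple_paths(g, "supplier-B", target):
        print(" -> ".join(path))
```

In a relational schema the same recall question would require repeated self-joins across supplier, facility, and batch tables; in a graph it is a single traversal from the suspect node, which is the performance advantage the article is pointing at.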
<urn:uuid:3d77eb6a-5620-431d-b4c9-5e6b595579da>
CC-MAIN-2022-40
https://itsupplychain.com/lets-restart-our-broken-global-supply-chains-with-graph-technology/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00027.warc.gz
en
0.953426
1,014
2.515625
3
Delve into the world of smart data security using machine learning algorithms and Python libraries. AI is the broader concept of creating intelligent machines that can simulate human thinking and behavior, whereas machine learning is an application or subset of AI that allows machines to learn from data without being explicitly programmed. DeepLocker is an AI-powered malware. It was developed as a proof of concept by IBM Research in order to understand how several AI and malware techniques already being seen in the wild could be combined to create a highly evasive new breed of malware, one that conceals its malicious intent until it reaches a specific victim. It achieves this by using a Deep Neural Network (DNN) model to hide its attack payload in benign carrier applications; the payload is unlocked if, and only if, the intended target is reached. Deep learning can be either supervised or unsupervised. In either case, it requires an analysis of big data. Deep learning uses neural networks and requires a large amount of computing power. One application of deep learning is image recognition.
<urn:uuid:62fd298e-cea1-4cc8-bea6-929969bb91ae>
CC-MAIN-2022-40
https://cybermaterial.com/ai/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00027.warc.gz
en
0.92935
216
3.09375
3
Published Sep 14, 2020, by Karim Husami

5G connections in Latin America are expected to increase from 0.3 million devices in 2020 to 61.9 million in 2025, according to Statista. This growth requires significantly enhanced broadband communications and, according to Omdia and Nokia's 'Why 5G in Latin America' report, the adoption of 5G could add $3.3 trillion of value by 2035 and a $9 trillion improvement in productivity.

Latin American and Caribbean (LAC) countries will accelerate adoption of 5G, but it is important to resolve the challenges they will face to successfully deploy the networks in the region, among them high implementation costs, securing spectrum, and issues concerning activation.

"Latin American countries must diversify their sources of income and jobs into higher value-added activities. Activities including mining and manufacturing must become more productive and 5G will play an important role on this," said Wally Swain, Principal Consultant for Omdia Latin America.

The fifth-generation technology will lead the 4th Industrial Revolution and transform the society and economy of the future. On a continent where 4G only reaches 50% of mobile connections, the steps needed to pave the way for 5G include finishing the allocation of 4G spectrum, upgrading 4G networks toward 5G, and pushing fiber deeper into the network. For example, Claro, part of the Mexican telecommunications company América Móvil, announced on July 2 the implementation of Brazil's first commercial 5G network. The carrier also said that users who buy new smartphones with 5G DSS technology will have connections 12 times faster than 4G.

How will 5G impact Brazil? It will see the largest total gain, with $1.216 trillion of 5G economic impact and an increase in productivity of $3.084 trillion. The ICT industry will be most affected in the country, with a $241 billion economic impact, according to the report.

Another major problem is underdeveloped infrastructure: low broadband penetration, costly Internet connections, low usage, and sporadic adoption of mobile technology. One of the major goals for governments is to universalize access to and usage of broadband, which plays a key role in society, GDP, productivity, and employment. This macro-economic problem has caused a gap with developed nations in broadband penetration which is not going away. Thus, governments must take a number of actions, including regulatory improvement, establishing institutions, and providing financial support related to investment in the 5G network.
<urn:uuid:01d0482a-26d9-4caa-bf45-9a36527f5132>
CC-MAIN-2022-40
https://insidetelecom.com/5g-to-boost-productivity-and-connections-in-latin-america/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00027.warc.gz
en
0.934676
650
2.578125
3
Published Sep 25, 2020, by Yehia El Amine

As the clock ticks closer and closer toward the rollout of 5G worldwide, a new wave of entrepreneurs is frantically thinking of the next multi-billion dollar idea that will drive this rapidly changing era in telecom. In parallel, many experts from all professions are studying and prepping themselves to integrate 5G within their core business models to stay ahead of the competition. While there may be an array of exciting new opportunities and ideas being brought to the forefront, humanity needs to stop and ask itself, "what's the catch?"

This massive level of accessibility and connectivity that 5G offers is perceived to be a double-edged sword that could either prove to be a valuable weapon in the fight against climate change, or one that quickens its consequences.

First things first: 5G stands for the fifth generation of wireless technology. It is the wave of wireless technology surpassing the 4G network that is currently used. Previous generations brought the first cell phones (1G), text messaging (2G), online capabilities (3G), and faster speed (4G). The fifth generation aims to increase the speed of data movement, be more responsive, and allow for greater connectivity of devices simultaneously. This means that 5G will allow for the near-instantaneous downloading of data that, with current networks, would take hours. This new era of telecoms will bring with it more precise and accurate self-driving cars, the ability to perform remote surgeries thanks to shortened latency, a massive expansion of the Internet of Things (IoT), and enhanced technological capabilities across the board.

Naturally, such high levels of connectivity and computing will demand an increased supply of energy to power this online revolution, creating an even bigger need for burning fossil fuels in parallel. A faster 5G rollout will accelerate this, and it cannot be swept under the rug for much longer. The telecoms industry, alongside governments, has a pivotal role to play in fighting global warming, where indirectly supporting customers to reduce CO2 emissions is no longer considered enough. While 5G will aid in making other industries much more efficient by enabling a broader horizon toward renewable energy generation and reducing travel, telcos urgently need to look inward at solving their own direct greenhouse gas emissions. In layman's terms, the pure electrical energy required to power networks is costly for operators, whether it comes directly from local generators or through power grids. In turn, 5G's emissions increase in parallel with data growth.

Everything you see online, from your social media feeds to websites, podcasts, videos and the like, is linked to thousands upon thousands of data centers around the globe; consider that these data centers are essentially massive warehouses filled to the brim with wall-to-wall computer systems. These warehouses need power, air conditioning, lighting, uninterruptible power supplies (UPS), generators, fire suppression and alarm systems. Now, just take a moment to consider how much electricity is needed to power the world's infinite stream of online information. Data centers are known to be one of climate change's biggest enemies, accounting for 3 percent of globally generated power, and that number will increase in parallel with the skyrocketing demand for data.
An article published by the Yale School of Forestry & Environmental Studies reported that if the global IT industry was a country, only China and the United States would have a worse impact on climate change. However, a lot of work is being done by industry giants to kick-start a greener approach for sustainability. Back in 2014, Microsoft started working on an underwater data center, called Project Natick. After 4 years of excessive research and testing, their research team was successfully able to drown a data center 100 feet under the surface of the North Sea by UK’s Orkney Islands. As time passes, renewable energy sources (such as wind, solar, and tidal) are being used for eco-friendly data centers, especially as they are less expensive than burning fossil fuels. Apple followed suit in 2018, where they were able to transfer their entire global enterprise to 100 percent renewable energy. This includes all of their data centers. In the same year, Google took a colossal step of handing the keys of its cooling controls for several of its data centers to an AI algorithm it has been developing for years. The algorithm is able to teach itself the most efficient way to adjust cooling systems, from fans, ventilation, as well as many other aspects, in an effort to lower power consumption. The AI would also make recommendations, based on its calculations, to data center managers in which they would decide whether to implement them or not. This algorithm has led to saving around 40 percent of cooling systems’ power usage across the board. “It’s the first time that an autonomous industrial control system will be deployed at this scale, to the best of our knowledge,” Mustafa Suleyman, Head of applied AI at DeepMind, the London-based artificial-intelligence company – Google acquired in 2014 – was quoted as saying. This project is considered a prime example of how AI systems can work alongside humans to extract the best results possible. While Google’s algorithm works independently, a person manages it and has the ability to intervene if they notice any risky behavior. According to a report done by Huawei, a faster 5G rollout could reduce cumulative carbon emissions by 0.5 billion tonnes of CO2 by 2030, due to its quickness of extracting and delivering data, thus lowering computing and processing needs. Let’s take a moment to consider that 0.5 billion tonnes of CO2 is almost equivalent to the annual carbon emission of all international aviation in 2018. “Our analysis shows that rapidly rolling out 5G networks could reduce the cumulative CO2 footprint of mobile networks globally by over a third, compared with a slower rollout,” the report stated. There is still a lot of work to be done from all aspects, especially from national authorities to simplify and ease the work happening on the ground to support the switch to 5G. This aligns with global ambitions to reduce greenhouse gas emissions; thus 5G network deployment should not only be seen as an investment, but also a solid strategy to contain increased energy demands of escalating data growth. Huawei’s report tackled a number of recommendations that local governments must consider, to enable a smooth transition to 5G, such as incentivizing its accelerated rollout through policy, licensing, and tax cuts. 
“Establishing and enforcing rights of way, access to ducting, and nationwide frameworks for use of power/lighting poles and streamlining other planning processes; in addition to reducing or eliminating import duty on 5G infrastructure,” the report added. More importantly, there needs to be an incentivized migration from 2G/3G to 5G across the board. As the countdown to 5G rollout edges closer to reality, a lot of work needs to be done to ensure that its launch will work toward the betterment of human life; preventing further damage to an already fragile environment.
<urn:uuid:97c5569e-4582-43d0-bef1-599dd0ba91c1>
CC-MAIN-2022-40
https://insidetelecom.com/5gs-double-edged-sword-impact-on-climate-change/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00027.warc.gz
en
0.946628
1,598
2.984375
3
This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. The short excerpt below from the 1938 film La Femme du Boulanger (The Baker’s Wife) ingeniously depicts how the human mind can extract deep meaning from life experiences and perceived situations. In the movie, directed by Marcel Pagnol, the baker Aimable welcomes his wife Aurelie, who has just come back after running off with a shepherd days earlier. While Aimable treats Aurelie with sweet words and a heart-shaped bread (which he had baked for himself), he shows no kindness toward Pomponette, his female cat who coincidentally returns home at the same time as Aurelie, after abandoning her mate Pompon for a chat de gouttière (alley cat). Aimable calls Pomponette ordur (junk) and a salope (a rude term) who has run off with un inconnu (a nobody) and bon-a-rien (good for nothing) while the poor Pompon has been miserably searching for her everywhere. While Aimable cuts the cat down to size with his tongue-lashing, Aurelie cringes in her seat and starts to sob. “What does he have better than [Pompon]?” Aimable asks. “Rien (nothing),” answers Aurelie in a shaky voice, barely above a whisper. It’s not clear whether she’s talking about the stray alley cat or the shepherd boy. “You say rien,” Aimable tells his wife in a sweet and soft voice. “But if she could talk,” he says, his voice becoming stern again as he returns his gaze back to Pomponette, “if she had no shame, if she didn’t fear to pain pauvre Pompon, she would say, ‘He is prettier.’” Again, there are many hidden meanings and accusations in his words. As Aimable is rambling on, apparently oblivious to his wife’s reaction, Pomponette starts drinking milk from Pompon’s bowl. And that’s where he drives the dagger. “Look there,” he says. “This is why she returned. She was cold and hungry.” Meanwhile, Aurelie is holding the heart-shaped bread that Aimable had baked—for himself. Most humans can extract the deep meanings, metaphors, and intricate nuances hidden in the flaky grayscale image frames and noisy sound waves of this video sequence. We can empathize with Aimable and Aurelie (and map them to our own previous life experiences). But the most advanced artificial intelligence technology we have today—our best imitation of the brain—can at best see people and faces, detect genders and objects, and provide very basic descriptions such as “a couple dining at a table.” This is just a glimpse the human mind’s ability to understand the world—and how great a challenge its replication remains after six decades of artificial intelligence research. “Humans are able to ‘actually understand’ the situations they encounter, whereas even the most advanced of today’s AI systems do not yet have a human-like understanding of the concepts that we are trying to teach them,” writes computer scientist and AI researcher Melanie Mitchell in her latest paper for AI Magazine. In her paper, Mitchell, who is also a professor at Santa Fe Institute and the author of a recent book on artificial intelligence, discusses the struggles of current AI systems, namely deep learning, in extracting meaning from the information they process. Deep learning is very good at ferreting out correlations between tons of data points, but when it comes to digging deeper into the data and forming abstractions and concepts, they barely scratch the surface (even that might be an overstatement). 
We have AI systems that can locate objects in images and convert audio to text, but none that can empathize with Aurelie and appreciate her unease when her husband attacks Pomponette. In fact, our AI systems start to break as soon as they face situations that are slightly different from the data they’ve been trained on. Some scientists believe that such limits will be overcome as we scale deep learning systems with larger neural networks and bigger datasets. But, Mitchell suspects, something more fundamental might be missing. In 2018, Mitchell helped organize a three-day workshop at the Santa Fe Institute titled “Artificial Intelligence and the Barrier of Meaning.” The workshop explored concepts such as what is “meaning” and “understanding,” how to extract meaning from data and experience, and how understanding situations can help create AI systems that can generalize their abilities and are more robust to changes in their environment. The result of the workshop, which Mitchell shares in her paper, gives some directions on how we can make more reliable AI systems in the future. AI lacks innate abilities Like the term “artificial intelligence,” the notions of “meaning” and “understanding” are hard to define and measure. Therefore, instead of trying to give the terms a formal definition, the participants in the workshop defined a list of “correlates,” abilities and skills closely tied to our capacity to understand situations. They also examined to what extent current AI systems enjoy these capacities. “Understanding is built on a foundation of innate core knowledge,” Mitchell writes. Our basic understanding of physics, gravity, object persistence, and causality enable us to trace the relations between objects and their parts, think about counterfactuals and what-if scenarios, and act in the world with consistency. Recent research indicates that intuitive physics and causal models play a key role in our understanding of visual scenes, and scientists have described it as one of the key components of the “dark matter” of computer vision. Beyond physics, humans also have “innate or early-developed intuitive psychology,” Mitchell writes, which gives us the ability to analyze, empathize, and communicate with other social beings. Mitchell also speaks of “metacognition,” the ability to “explain and predict our own thought processes and decisions, and map them onto the thought processes of others.” These capabilities are essential for us to develop an idea of the scope of information we have and how relevant it is to solving problems. It also allows us to put ourselves in Aurelie’s shoes and imagine her feelings as she watches Aimable lash out at Pomponette. Neural networks can’t extrapolate Compared to humans, deep neural networks need much more data to learn new things. This is because, while neural networks are efficient at interpolating between data points they’ve seen during training, they’re terrible at dealing with situations not covered by their training data. Humans, on the other hand, are good at extrapolating their knowledge and experience to previously unseen situations because they “build abstract representations,” Mitchell writes. Abstraction is a powerful tool of the human mind. It’s what allows us to extract the high-level meanings of the movie excerpt we saw at the beginning of this article and compare them with things we already know. 
And unlike neural networks, which have a different training and deployment process, the human brain is an active learning machine that continues to adjust its knowledge throughout its entire life. “Perception, learning, and inference are active processes that unfold dynamically over time, involve continual feedback from context and prior knowledge, and are largely unsupervised,” Mitchell writes. The AI and neuroscience community is divided on how the human mind acquires knowledge efficiently. Many scientists believe that the brain comes prewired with many capabilities. These innate capabilities, which we mostly take for granted, enable us to make sense of situations we’ve never seen before and to learn things with very few examples. Others researchers assert that like artificial neural networks, the brain is a large interpolation machine that learns to fill the gaps between known data, and we need to discover the secret algorithm that makes us efficient at extracting meaning from the world. “I don’t think anyone knows the answer to this,” Mitchell told TechTalks in written comments. “I’m not even sure it’s an either/or—we likely have prewired capabilities in the brain that guide our early self-supervised learning. We also probably have some prewired ‘facts’ about the world, such as how to identify that something is an ‘object.’” Another area explored at the Santa Fe workshop was the need for AI systems to have a body to experience the world. “Understanding in living systems arises not from an isolated brain but rather from the inseparable combination of brain and body interacting in the world,” Mitchell writes, adding that the supporters of this hypothesis believe that a disembodied brain will not achieve human-like understanding. “I think if you asked the people at the workshop, there would have been a lot of difference in opinion on what ‘embodiment’ means,” Mitchell told me. “But it certainly includes the ability to actively ‘sense’ the world in some form or another, emphasis on the ‘actively.’ I don’t think anyone can say that there is a single kind of ‘embodiment’ that is necessary for general intelligence.” Evolution has also played a key role in shaping the mind of every living being to serve its physical needs. “Over the last decades evidence has emerged from neuroscience, psychology, and linguistics that supports the essential role of the body in virtually all aspects of thinking,” Mitchell writes. For instance, while chimpanzees are obviously less intelligent than humans, they have a much better short-term memory. Likewise, the minds of squirrels have evolved to remember thousands of food hideouts. These are cognitive abilities that have developed over thousands and millions of generations and repeated interactions with the environment. “Perhaps the particular underlying structure of the brain is not as central to understanding as the evolutionary process itself,” Mitchell observes in her paper, adding that an evolutionary approach might open a path forward toward integrating meaning and understanding in AI systems. In this respect, one of the benefit of artificial intelligence is that, where simulated environments allow, it can play evolutionary cycles in fast forward. Understanding is not a loss function or a benchmark Machine learning algorithms are designed to optimize for a cost or loss function. 
For instance, when a neural network undergoes training, it tunes its parameters to reduce the difference between its predictions and the human-provided labels, which represent the ground truth. This simplistic approach to solving problems is not what “understanding” is about, the participants at the Santa Fe Institute workshop argued. There’s no single metric to measure the level of understanding. It’s unclear what should be “optimized” to achieve the correlates of understanding or “even if optimization itself is the right framework to be using,” Mitchell writes in her paper. Another problem that plagues the AI community is the narrow focus on optimizing algorithms for specific benchmarks and datasets. In the past decade, many datasets have emerged that contain millions of examples in areas such as computer vision and natural language processing. These datasets allow AI researchers to train their algorithms and test their accuracy and performance. But while the hard work that have gone into curating these datasets is commendable and has contributed much to many advances we’ve seen in AI in the past years, they have also ushered in a culture that creates a false impression of achievement. “Due to the incentives the field puts on successful performance on specific benchmarks, sometimes research becomes too focused on a particular benchmark rather than the more general underlying task,” Mitchell writes in AI Magazine. When scoring higher on dataset becomes the goal, it can lead to detrimental results. For instance, in 2015, a team of AI researchers from Baidu cheated to score higher than other competitors at ImageNet, a yearly computer vision competition. Instead of finding a novel algorithm that could classify images more accurately, the team managed to find a way to game the benchmark in violation of the contest’s rules. The shortcomings of narrowly curated datasets have also become the highlight of more recent research. For instance, at the NeurIPS 2019 conference, a team of researchers at the MIT-IBM Watson AI Lab showed that algorithms trained on the ImageNet dataset performed poorly in real-world situations where objects are found in uncommon positions and lighting conditions. “Many of the papers published using ImageNet focused on incremental improvement on the all Important ‘state of the art’ rather than giving any insight into what these networks were actually recognizing or how robust they were,” Mitchell writes. Recently, there’s been a push to develop benchmarks and datasets that can better measure the general problem-solving capabilities of AI algorithms. A notable effort in this respect is the Abstract Reasoning Corpus developed by Keras founder Francois Chollet. ARC challenges AI researchers to develop AI algorithms that can extract abstract meaning from data and learn to perform tasks with very few examples. “I agree with Chollet that abstraction and analogy—of the kind required in solving the ARC problems—are core aspects of intelligence that are under-studied in today’s AI research community, and that to make progress on the issues I outline in the ‘Crashing the Barrier of Meaning’ paper, we’ll have to figure out how to get machines to be able to do this kind of task,” Mitchell said in her comments to TechTalks. 
“But even if a machine could solve the ARC problems, it remains to be seen if it could use the same mechanisms to deal with abstraction and analogy in the real world, especially where language is concerned.” Finding meaning is an interdisciplinary challenge “Our limited conception of what understanding actually involves makes it hard to answer basic questions: How do we know if a system is ‘actually understanding’? What metrics can we use? Could machines be said to ‘understand’ differently from humans?” Mitchell writes in her paper. What made this specific study interesting was the broad range of perspectives brought together to tackle this complicated topic. Participants in the workshop came from various disciplines, including AI, robotics, cognitive and developmental psychology, animal behavior, information theory, and philosophy, among others. “When I first got into AI, there was a real interdisciplinary feel to it. AI people attended cognitive science conferences, and vice versa. Then statistics took over AI, and the field got less diverse,” Mitchell said. “But I see a trend now in the field returning to its interdisciplinary roots, which I think is a very positive development.” The paper includes many examples from studies in fields other than computer science and robotics, which help appreciate the depth of meaning in living beings. “For me the perspectives from people outside AI (in psychology, neuroscience, philosophy, etc.) helped show how these issues of ‘understanding’ and ‘meaning’ are simultaneously key to intelligence, but also very hard to study,” Mitchell told me. “Listening to people from psychology and neuroscience really drove home how complex intelligence is, not only in humans but also in other animals ranging from jumping spiders to grey parrots to our primate cousins. And also that we really don’t understand natural intelligence very well at all.”
<urn:uuid:09538d01-eb65-40c8-85a5-f62acb840cc4>
CC-MAIN-2022-40
https://bdtechtalks.com/2020/07/13/ai-barrier-meaning-understanding/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00027.warc.gz
en
0.955115
3,276
3.125
3
Cybercriminals use innovative ways to hack devices and networks, and hence organizations must examine every data packet that enters their system to avoid ransomware and malware attacks. At the same time, hackers are also learning to work around these defenses. In this cat-and-mouse game, encryption has become an important strategy for cybercriminals, as they can encrypt malicious content in data packets and send them into the network. To stay one step ahead, organizations are using different strategies to decrypt these data packets and understand their contents. One such strategy is SSL decryption. In this article, we'll talk a bit about SSL decryption, followed by how you can decrypt data packets using a tool called Wireshark.

What is SSL Decryption?
All web traffic today is encrypted using the Secure Sockets Layer protocol that sits on the application layer (Layer 7) of the OSI model. This protocol is known to reduce the chances of breaches unless cybercriminals take explicit steps to work around this encryption. As mentioned earlier, organizations must decrypt this SSL traffic to examine the contents of the incoming data packets for malware and other threats. The process of decrypting the encrypted data packets is called SSL decryption. Also known as SSL visibility, the process of SSL decryption starts by routing the data packets to various inspection tools that examine the packets for threats. The routing decisions depend on the configuration and the tools available in your infrastructure. Before going into how you can decrypt SSL using tools like Wireshark, let's understand the possible challenges that come with this decryption process.

Challenges of SSL Decryption
Decrypting SSL traffic is not easy and depends greatly on how your infrastructure is set up, the available tools, the volume of traffic that passes through your network, and more. Here are some common challenges that come with decrypting SSL traffic.

- Complex Architecture: Large organizations tend to have multiple security layers and tools to detect and stop different types of security threats. Not all the security tools in your infrastructure can decrypt SSL traffic, and these variations create a sense of security chaos. It's hard to route traffic without knowing which tool will intercept a given data packet. Such a complex setup is one of the biggest roadblocks to implementing SSL decryption.
- Cryptography Limitations: Sometimes, the limitations of cryptography also add to the complexity of implementing SSL decryption. For example, let's say your organization uses forward secrecy ciphers. In such a case, how can you send the encryption key to devices that are outside the inspection band? Such nuances of cryptography require extensive planning and implementation.
- Privacy Concerns: Privacy and security regulations restrict how you can handle data packets, especially if doing so violates users' privacy. This means you can't decrypt data packets from some applications, as this can expose users' sensitive information. Viewing such information is a violation of privacy laws and can also put your organization at risk of data breaches. In turn, this reduces your visibility into some data packets, and cybercriminals can use these packets as a conduit for transferring malware to your system.
- Performance Degradation: Another potential downside of SSL decryption is its negative impact on performance.
Undoubtedly, decryption and inspection will take a few extra seconds, and this can degrade network performance, especially if you're grappling with issues such as low bandwidth and heavy traffic surges.

In all, implementing SSL decryption is not easy and can require additional time and effort. Nevertheless, it is important and can save your organization from ransomware attacks. One way to strike a balance between the challenges and benefits of SSL decryption is to use tools like Wireshark that are built for decrypting data packets and examining their contents. Though this tool doesn't address all problems, especially those related to privacy, it's still a good option to consider. Next, let's talk about how you can decrypt SSL packets with Wireshark.

Decrypting SSL Using Wireshark
Wireshark is a popular packet and protocol analyzer tool that enables you to examine the contents of every data packet that enters your network. It captures as many details as possible about every data packet and, in the process, reduces the chances of a cyber attack. This free and open-source tool works well on anything from individual connections to large networks. Here's a step-by-step guide on using Wireshark to decrypt SSL packets.

Create an Environment Variable
To set up Wireshark on your Windows 10 device:
- Go to Windows 10 Settings and click on "Advanced Settings".
- A new window will open. Look for a button called "Environment Variables" and click it.
- Create a new environment variable.
- Give the variable the name SSLKEYLOGFILE and choose the location where you want to store the log file. The location can be, for example, C:\Users\Admin\sslkeylogfile.log.
Once you create the environment variable, click OK and exit from Windows settings. To check, navigate to the path you entered for your log file and see if sslkeylogfile.log exists. Note that you can give any name to the log file, just make sure it ends with the "log" file extension.

Next, download and install Wireshark if you don't already have it. To configure Wireshark:
- Open Wireshark and navigate to Edit > Preferences.
- In the Preferences window, search for SSL on the left-hand pane, and click it.
- On the corresponding right-hand pane, set the "(Pre)-Master-Secret log filename" to the location you used earlier to create the log file. In the above example, this filename must be C:\Users\Admin\sslkeylogfile.log.
With this, you're all set to use Wireshark.

Test the Settings
Head to your browser and visit any website. Open Wireshark in parallel and navigate to Capture > Start. Soon, you'll see the data packets and the information they contain. At any time, click the Stop (red square) button on the toolbar to stop the data capture. You'll notice a bunch of data packets, possibly from different websites. You can filter them using IP addresses as well. Right-click on any data packet and navigate to Follow > SSL Stream. A new window opens and here, you'll see the header and contents of the data packet. Also, open your log file and you'll see a bunch of encryption keys that Wireshark is using to decrypt these data packets.

That's really it, and now you can send this log data to a central logging mechanism for further processing and analytics. You can also pipeline this information to a monitoring platform that can raise alerts if the decrypted content matches flagging patterns. Undoubtedly, Wireshark is efficient and easy to use, and this is why it's so popular.
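The key log file only fills up because TLS-aware applications such as Chrome and Firefox check the SSLKEYLOGFILE variable and append their session secrets to it. Below is a small Python sketch of that workflow for testing purposes; the browser path and file locations are examples, not fixed requirements.

```python
import os
import subprocess
import time

# Example locations -- adjust to match your own environment
keylog_path = r"C:\Users\Admin\sslkeylogfile.log"
browser = r"C:\Program Files\Google\Chrome\Application\chrome.exe"

# Launch the browser with SSLKEYLOGFILE set so it appends TLS session secrets
env = dict(os.environ, SSLKEYLOGFILE=keylog_path)
proc = subprocess.Popen([browser, "https://example.com"], env=env)

# Give the page time to load, then confirm the key log is growing
time.sleep(10)
size = os.path.getsize(keylog_path) if os.path.exists(keylog_path) else 0
print(f"Key log size: {size} bytes")
if size == 0:
    print("No keys captured yet -- check that the browser honours SSLKEYLOGFILE")
```

If the file stays empty, Wireshark will have nothing to decrypt with, which is the most common reason the Follow > SSL Stream view still shows ciphertext.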
That said, there are also other tools that you can use to get the same results, and next, let's talk about one such tool. The SolarWinds Deep Packet Inspection tool comes as a part of Network Performance Monitor and is used to decrypt SSL messages, so you can get instant insights into the traffic passing through your network. It:
- Determines latency and its underlying cause.
- Analyzes more than 1200 different applications.
- Identifies risk levels and restricts access as needed.
- Improves end-user experience.
- Sends customizable alerts.
- Comes with intuitive dashboards for easy understanding of data points.
It examines the contents of every packet and provides context for them. Besides the content, you also get other information such as the source of the packet, its level of risk, and more. The biggest advantage of the SolarWinds Deep Packet Inspection tool is the insight you get into applications, their use of your network resources, their impact on user experience, and other relevant details that, in turn, can help you better understand what's going on in your network. More importantly, you can make appropriate and well-informed decisions that positively impact your business. In all, SolarWinds Deep Packet Inspection and Analysis can be a handy addition to your monitoring and analytics arsenal.

Download: Click here for a fully functional 30-day free trial

To summarize, SSL packets are encrypted, and cybercriminals use these packets as a conduit to transport malware and ransomware. This is why organizations prefer to decrypt these data packets to ensure that there is no malicious content in them. A popular tool used for decrypting these SSL packets is Wireshark. This free and open-source tool can be configured in just a few steps to capture and decrypt SSL packets, and in this article, we saw how to do this configuration. Besides Wireshark, other tools also help to inspect and decrypt these data packets. One such tool that we talked about in this article is the SolarWinds Deep Packet Inspection and Analysis tool, which comes as part of the Network Performance Monitor suite. It comes with many advanced features that can inspect and analyze data packets from more than 1200 different applications and, undoubtedly, adds an extra layer of security to your infrastructure. We hope this was an interesting read. Browse through our site for similar articles and reviews.
<urn:uuid:9113ce85-6686-49c9-8402-012023ad4550>
CC-MAIN-2022-40
https://www.ittsystems.com/decrypt-ssl-with-wireshark/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00027.warc.gz
en
0.923922
1,929
2.9375
3
Domain Controllers (DCs) are critical to protecting network security, centralising user data, and rolling out standard system security protocols. In this article we take you through DCs' main functions, why they're so important, and when MSPs need them.

What is a domain controller?
A domain controller is a server that runs Active Directory Domain Services (AD DS). The DC is responsible for authentication requests within a certain domain. Organizations typically have a number of DCs, each of which has a copy of the Active Directory (AD). All login credentials from across the network are consolidated and held in the DC's active directory service. For this reason, the DC is critical in helping manage the network's security and maintain user identity security. The most common examples of Active Directory (of which your DC is a part) are Microsoft Active Directory (on-premises), Microsoft Azure AD (cloud-based) for Windows, and Samba for Linux.

What does a domain controller do?
Think of a domain controller as a gatekeeper that handles user authentication, entitlements, user authorization, and security protocols within your domain. It does this using an active directory.

Authentication and Validation
The DC is responsible for authenticating a user's right to access your network when they attempt to log in. It will usually validate a user's identity by cross-referencing the account information, like a username and password, against the logged information in its active directory. Based on this, it will either permit or deny entry into your network.

Regulates Permissions and Access
The DC facilitates a hierarchical organization of your users based on different levels of entitlement. The DC oversees a user's access rights within your domain. Using its active directory, it first determines whether a user is permitted to access domain resources and then identifies a user's entitlements and what they should be able to see or access within your network.

Implement Group Policies
Using the DC, you can also implement network-wide rules and security protocols. For example:
- Set requirements for unique or complex passwords
- Set minimum length or other requirements for passwords
- Set requirements for how often passwords need to be changed
- Configure your network so that user settings follow them wherever they log in
- Grant access to specific services across the network to certain users
- Configure all computers in your domain to lock their screens after a certain period of inactivity
In addition, some MSPs implement AD groups. This eases administrative workload, since permissions don't need to be assigned individually. Instead, if a user is placed in a specific AD group, they gain access to the relevant resources automatically.

Why are Domain Controllers important?
MSPs should not overlook the importance of domain controllers. As the gatekeepers to your clients' networks and computers, your DCs determine who can gain access and to what.

What is an AD?
Active Directory (AD) is essentially a database that holds all the information about your network's users and devices. Think of it as a log book that contains critical network information such as user accounts, groups, contacts, and computer data. The DC is the server that runs the AD. The AD facilitates the DC's operations, enabling it to carry out authentication and validation and to grant access to resources within the network. An AD has 3 sections: domains, trees, and forests.
A domain refers to a group of users, computers, and other objects that are related. A domain is a section of a network where all the objects it contains can be managed together. A tree is where numerous domains are combined. A forest is a collection of trees (or groups of domains). It also constitutes a restricted security area, because inter-forest objects cannot interact unless a 'trust' is created. When do MSPs need a domain controller? As MSPs, one of your key roles is running and managing your clients' networks. Here's when and why MSPs need a DC: - Simplifies your administrative workload - Centralizes your control over user settings and entitlements - Ensures and maximizes the security of your clients' network and data - Provides a centralised database of user credentials - Increases collaborative possibilities within the domain Why can domain controllers be a risk? Given that DCs are critical gatekeepers to your domain, they're also a prime target for cyber attacks. Their crucial role in authenticating users and granting access to your networks makes them especially attractive to attackers. A successful breach of your DC can lead to serious damage to your AD DS database, security leaks, and compromised user credentials and data. Should your AD forest be compromised, you'll be unable to use it again unless you have a good and reliable backup. That's why MSPs should ensure that they implement sturdy cybersecurity measures to protect their domain controllers. Another issue to consider is just how dependent your networks are on your DC's uptime. For this reason, it's advisable that your DCs are dedicated solely to domain services. This is because running any other services risks slowing down or crashing the system. How MSPs should ensure the security of their DCs Given the high value and high risk of DCs, and therefore of the AD, it's absolutely critical that you take the necessary steps to protect them. Here are some strategies you can use: - Limit physical and remote access to your DCs - For virtual domain controllers, run them on dedicated physical hosts - Continually monitor and audit your DC - Implement robust security protocols, including stringent authentication processes like multi-factor authentication (MFA) and unique or complex password requirements - Minimize the vulnerability of your DCs and AD by granting domain admin status to only a select few users - Ensure your DCs always have free disk space and limit the other services that your DC is running - Block internet access on your DCs - Run all DCs on the most up-to-date OS Sounds difficult? It doesn't have to be, as Atera's Network Discovery makes it easy to proactively maintain and audit your clients' networks by scanning your customers' workgroups and DC networks. Try it with our free trial. What is a primary domain controller? Given the importance of the DC, it's always advisable to have at least two domain controllers. This is a backup mechanism for occasions when one DC goes down. In this context, you may hear the terms 'Primary Domain Controller (PDC)' as well as 'Backup Domain Controllers (BDCs)' being used. In fact, since 2008, the hierarchical arrangement between PDCs and BDCs has been redundant. Instead, all DCs are now considered to be 'equal', with the active directory synchronised across all of them. MSPs will know both the value and the vulnerability of DCs.
Though DCs help to make your life easier and give you the peace of mind that comes with a secure network, they're also the first port of call for potential attackers. As with anything, it's better to be safe than sorry. Always be sure to implement robust security measures for your DCs so that you can reap all the benefits without the risk.
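To make the 'Authentication and Validation' step described earlier concrete, here is a minimal sketch of how a script might check a set of credentials against a DC over LDAP. It is an illustration only, not a feature of any product mentioned here: it assumes the third-party Python ldap3 library is installed, and the server name, NetBIOS domain, and account are placeholders.

```python
# Minimal sketch: validate a username/password against a domain controller via LDAP.
# Assumes `pip install ldap3`; "dc.example.local" and "EXAMPLE" are placeholder values.
from ldap3 import Server, Connection, NTLM
from ldap3.core.exceptions import LDAPException

def authenticate(username: str, password: str) -> bool:
    server = Server("dc.example.local", use_ssl=True)   # the DC to query
    try:
        # Bind as the user; a successful bind means the DC accepted the credentials.
        conn = Connection(server,
                          user=f"EXAMPLE\\{username}",
                          password=password,
                          authentication=NTLM)
        ok = conn.bind()
        conn.unbind()
        return ok
    except LDAPException:
        return False

if __name__ == "__main__":
    print(authenticate("jsmith", "correct horse battery staple"))
```

A successful bind means the DC accepted the credentials; a production tool would also distinguish bad credentials from an unreachable server and would never print or log the password.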
<urn:uuid:067a060c-9e04-4b13-80c8-692dea6eb3cd>
CC-MAIN-2022-40
https://www.atera.com/blog/what-is-a-domain-controller/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00027.warc.gz
en
0.913567
1,569
3.671875
4
Undermining citizens' confidence in the election outcome has been a side effect of conspiracy theorists, campaigns and recent headlines, Georgia Secretary of State Brian Kemp told lawmakers. American election systems face threats, but the most vulnerable part isn't technical, electoral and cybersecurity experts told a House subcommittee. "The biggest threats to the integrity of this November's election and our democratic system are attempts to undermine public confidence in the reliability of that system," Lawrence Norden, deputy director of the Brennan Center for Justice at the New York University School of Law, testified Sept. 28 before the House Oversight and Government Committee's IT subcommittee. Rep. Will Hurd, R-Texas, convened the hearing to determine what cyber threats election systems face and directly asked whether a cyberattack would affect the outcome of the November presidential election. All five panelists—a Homeland Security Department official, a secretary of state, an Election Assistance Commission official and two academics—agreed the answer is no. But undermining citizens' confidence in the election outcome has been a side effect of conspiracy theorists, campaigns and recent headlines, Kemp told lawmakers. As an example, Kemp named Sen. Dianne Feinstein's recent letter stating that Russian officials are trying to influence U.S. elections. Fears that votes wouldn't count could keep voters from the polls, according to a recent Carbon Black survey. The survey found 56 percent of respondents are concerned the presidential election will be affected by a cyberattack. "The foundation of our republic rests on the trust that Americans have in the way we elect representatives to the government," Kemp said. "If that trust is eroded, our enemies know they have created fissures in the bedrock of American democracy." Experts clarified the differences between the three primary parts of election systems: campaign systems, which are not maintained by state governments; registration and reporting systems, which are maintained by states and often connected to the internet; and voting machines, which are not connected to the internet. "Headlines are not representative of our voting machines," said Thomas Hicks, commissioner of the U.S. Election Assistance Commission. Anyone interested in manipulating voting machines would need to do it in person, he explained. Andrew Appel, a computer science professor at Princeton University, suggested eliminating direct-recording machines for the 2020 election and instead encouraging auditing. He suggested using optical-scan paper ballots, in which voters fill in a bubble on a paper ballot that is then scanned. Forty states already use this system, he said. The variety of the systems states use, and the fact they're dispersed throughout the country, helps keep the voting system secure, according to Andy Ozment, DHS assistant secretary for cybersecurity and communications. The department has also offered a variety of assistance to state and local governments, including cyber hygiene scans for internet-facing systems, and on-site risk and vulnerability assessments. He emphasized that all help is voluntary on the part of the states and that 18 have accepted assistance. "I want to reiterate that we have confidence in the overall integrity of our electoral system," Ozment said.
“Our voting infrastructure is diverse, subject to local control, and has many checks and balances built in.”
<urn:uuid:ccd211f5-d195-4e5d-a3b9-59761cd22bab>
CC-MAIN-2022-40
https://www.nextgov.com/cybersecurity/2016/09/election-systems-are-vulnerable-not-how-you-think/131978/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00027.warc.gz
en
0.949785
693
2.578125
3
Lychee contains quercetin, a powerful antioxidant with anti-inflammatory properties; cancer-fighting, heart-healthy kaempferol; and more. Antioxidants are nature's way of providing your cells with adequate defense against attack by reactive oxygen species, or free radicals. The flavonoids, fiber and antioxidants in lychees may support heart health. In addition, oligonol derived from lychee fruit has been shown to increase nitric oxide levels in animal studies. Increasing nitric oxide in your blood may open constricted blood vessels and lower your blood pressure. The nutrients in lychee, including magnesium, copper, iron, vitamin C, manganese and folate, are required for blood circulation and blood formation. Lychees have one of the highest concentrations of polyphenols among fruits. Among them is rutin, a bioflavonoid known to strengthen blood vessels. One of the most prominent nutrients in lychee fruit is vitamin C. Vitamin C is considered an anti-aging vitamin and actually reversed age-related abnormalities in mice with a premature aging disorder, restoring healthy aging. For more tips, follow our daily health tip listing.
<urn:uuid:eec102e4-c0bb-4192-ad5a-0c931f82516b>
CC-MAIN-2022-40
https://areflect.com/2019/05/30/todays-tech-news-benefits-of-lychee/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00027.warc.gz
en
0.90645
253
2.609375
3
In semiconductor device fabrication, etching refers to any technology that will selectively remove material from a thin film on a substrate (with or without prior structures on its surface) and by this removal create a pattern of that material on the substrate. The pattern is defined by a mask that is resistant to the etching process, the creation of which was described in detail in Photolithography. Once the mask is in place, etching of the material that is not protected by the mask can occur, by either wet chemical or by "dry" physical methods. Figure 1 shows a schematic representation of this process. Historically, wet chemical methods played a significant role in etching for pattern definition, up until the advent of VLSI and ULSI technology. However, as device feature sizes were reduced and surface topographies grew more critical, wet chemical etching gave way to dry etching technologies. This shift was due, primarily, to the isotropic nature of wet etching. Wet etching produces material removal in all directions, as shown in Figure 2, which results in a discrepancy between the feature size defined by the mask and that which is replicated on the substrate. VLSI and ULSI designs demand much more precise mask to pattern feature size correlation than was needed at larger feature sizes. In addition, aspect ratios (depth to width ratios) in advanced devices increased and achieving these ratios required an ability to anisotropically etch material using directional etching technologies. Figure 3 provides a schematic to help in understanding isotropic vs. anisotropic feature generation and directional etching. The final blow to wet etching's utility in advanced processing may have been the fact that many of the newer materials being used for device fabrication did not have accessible wet chemistries that could be employed for etching. These issues combined to relegate wet etch technologies to nearly exclusive use for cleaning rather than in etching applications. Only devices that have relatively large feature sizes (such as some MEMS structures) continue to employ wet methods for etching. Surface cleaning has been discussed in detail in Wafer Surface Cleaning. Anisotropic etching uses a suite of technologies cumulatively known as "dry" etch. These technologies are universally used for etching in VLSI and ULSI device fabrication and they will be the only methods discussed in detail in this Section. Dry etching can remove material through physical means such as ion impact accompanied by ejection of material from the substrate or by chemical reactions that convert substrate material to volatile reaction products that can be pumped away. Dry etching technologies include the following commonly used methods (whether the etch process occurs through chemical etching, physical etching, or a combination as noted in parenthesis): All dry etching technologies are conducted under vacuum conditions with the pressure dictating, to some extent, the nature of the etch phenomenon. Table 1 is taken from Wolf and Tauber and shows the relative pressure regime and generalized characteristics for the different etch methods. While there are a number of specific variants on the equipment and process characteristics used for etching, we will limit our discussion to a brief description of the process basics and the three primary etching methodologies identified in Table 1. 
| Etch method | Pressure regime | Characteristics |
| Physical sputtering and ion beam milling | < 100 mTorr | Higher excitation energy |
| Reactive ion etch | ~ 100 mTorr | |
| Isotropic radial etching | | Lower excitation energy |

Table 1. Pressure regimes and characteristics for different etch methods.

In-depth discussions of plasma etching fundamentals are available in a number of texts (Wolf and Tauber, Sze) and the interested reader is referred to these sources. Here we provide only the briefest description of the basic fundamentals of plasma generation. Within a plasma etching process, a number of physical phenomena are at work. When a strong electrical field is created in a plasma chamber using either electrodes (in the case of a DC potential or RF excitation) or a waveguide (in the case of microwaves), the field accelerates any available free electrons raising their internal energy (there are always a few free electrons in any environment resulting from cosmic rays, etc.). Free electrons collide with atoms or molecules in the gas phase and, if the electron transfers enough energy to the atom/molecule in the collision, an ionization event will occur producing a positive ion and another free electron. Collisions that transfer insufficient energy for ionization can nevertheless transfer sufficient energy to create a stable but reactive neutral species (i.e., molecular radicals). When sufficient energy is fed to the system, a stable, gas-phase plasma containing free electrons, positive ions and reactive neutrals is produced. In plasma etching processes, the atomic and molecular ions and/or reactive neutrals from a plasma can be used to remove material from the substrate by either physical or chemical pathways, or by mechanisms that employ both. Purely physical etching (Figure 4) is accomplished by using strong electric fields to accelerate positive atomic ions (usually an ion of a heavy inert element such as argon) towards the substrate. This acceleration imparts energy to the ions and when they impact the substrate surface, their internal energy is transferred to the atoms in the substrate. If sufficient energy is transferred, a substrate atom will be ejected into the gas phase to be pumped away by the vacuum system. The incident ion is neutralized in the collision and, since it is a gas, it desorbs into the gas phase to be re-ionized or pumped out of the system. Chemical etching differs from physical etching in that it employs a chemical reaction between reactive neutral species created within the plasma and the substrate material. The most common type of chemical etching involves halide chemistries in which chlorine or fluorine atoms are the active agent in the etching process. A representative chemistry for etch processes is the use of NF3 for silicon etching. The chemical reaction sequence in this etch process is:

NF3 + e- → •NF2 + F• + e-
Si(s) + 4F• → SiF4↑

NF3 is dissociated in the plasma to produce highly reactive atomic fluorine radicals. These radicals react with silicon in the substrate to produce silicon tetrafluoride, SiF4, which is a volatile gas that can be pumped away. In this manner silicon is etched from the substrate. Chemical etching, like wet etching, is an isotropic process without directionality (Figure 5). The reason for this is that the sticking coefficient of reactive neutrals is relatively low, so most impacts with the substrate surface do not result in etching, but rather in simple desorption of the reactive neutral back into the gas phase.
This phenomenon results in an evening-out of the etch process within the feature being etched and ultimately isotropic character in the etch. Most of the etching technologies used in modern device fabrication incorporate aspects of both physical and chemical etching. In processes such as reactive ion etching (RIE), directional etching is achieved by biasing the substrate so that ionic species from the plasma are accelerated towards the substrate surface. There they interact with the surface and the reactive neutrals to produce volatile products that can be pumped away (Figure 6). The ion energy in RIE is much lower than that employed for physical etching technologies and ion bombardment effects are negligible. The transfer of ion energies to the surface can enhance the directionality through improved adsorption of reactant species on the bombarded surface (incoming ions create high energy defects where adsorption and reaction preferentially occur) and through enhanced by-product desorption (incoming ion energies are transferred to reaction products causing them to desorb from the surface).
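The isotropic/anisotropic trade-off discussed above is often quantified with a simple figure of merit. The sketch below is not from the article; it uses the common textbook definition of the degree of anisotropy, A = 1 − (lateral etch rate)/(vertical etch rate), together with purely illustrative rate values, to show how lateral etching translates into mask undercut.

```python
# Illustrative sketch (assumed numbers, not from the article): anisotropy and undercut.
# A = 1 means perfectly directional etching; A = 0 means fully isotropic etching.

def degree_of_anisotropy(r_lateral: float, r_vertical: float) -> float:
    return 1.0 - r_lateral / r_vertical

def undercut_per_side(etch_depth_nm: float, r_lateral: float, r_vertical: float) -> float:
    # Lateral material removed while etching down to the target depth.
    time_min = etch_depth_nm / r_vertical
    return r_lateral * time_min

# Hypothetical rates in nm/min for two processes etching a 500 nm deep feature.
for label, r_lat, r_vert in [("isotropic chemical etch", 50.0, 50.0),
                             ("ion-assisted RIE", 2.0, 50.0)]:
    a = degree_of_anisotropy(r_lat, r_vert)
    uc = undercut_per_side(500.0, r_lat, r_vert)
    print(f"{label}: A = {a:.2f}, undercut per side = {uc:.0f} nm")
```

With these illustrative numbers, the ion-assisted process keeps the undercut to a small fraction of the feature depth, which is the practical reason directional dry etching displaced wet etching as feature sizes shrank.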
<urn:uuid:89c41c76-73ab-4f5f-a6f1-f2d6b29f9c5d>
CC-MAIN-2022-40
https://www.mks.com/n/etch-overview
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00227.warc.gz
en
0.918933
1,642
3.765625
4
A whitelist is a list of administrator-approved entities, including IP addresses, email addresses and applications. Items on a whitelist are granted access to the system, allowing them to be installed, altered, and communicated with over the private network. The goal of having a whitelist is to protect a private network and its devices from outside attacks. Whitelisting is the direct opposite of blacklisting. The two are cybersecurity strategies that manifest as policies in which administrators explicitly sanction or prohibit the domains and locations they have deemed safe or unsafe. Whitelisted locations would be subject to normal visitation and usage. A blacklisted location or service would be impossible to access, because admins technically enforce the prohibition. Rather than take an exhaustive approach to adding items to a whitelist, the default approach of granting access to everything is generally applied, and when there is evidence or suspicion that an IP address, domain, service, or application is unsafe, admins blacklist it. "Our university admin keeps a long whitelist of students and faculty users who are able to access systems after hours."
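As a hedged illustration of the concept (not part of the glossary entry), here is a minimal sketch of how a service might enforce an IP whitelist with a default-deny rule; the addresses and network ranges are placeholders.

```python
# Minimal whitelist check (illustrative; addresses are placeholders).
from ipaddress import ip_address, ip_network

WHITELIST = [
    ip_network("10.20.0.0/16"),      # campus network
    ip_network("203.0.113.42/32"),   # a single approved host
]

def is_allowed(client_ip: str) -> bool:
    addr = ip_address(client_ip)
    # Default-deny: anything not explicitly whitelisted is rejected.
    return any(addr in net for net in WHITELIST)

print(is_allowed("10.20.5.7"))     # True
print(is_allowed("198.51.100.9"))  # False
```

The default-deny return value is what makes this a whitelist; a blacklist would invert the logic and allow anything not explicitly listed.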
<urn:uuid:cd9b34bb-1bc9-44df-8ed3-768199aed328>
CC-MAIN-2022-40
https://www.hypr.com/security-encyclopedia/whitelist
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00227.warc.gz
en
0.945933
221
2.640625
3
Most IT departments are aiming to optimize performance, cost-efficiency and security for a growing range of applications and workloads. As a result, interest in hybrid environments is growing quickly. Recognizing this opportunity, service providers are rushing to define what "hybrid" means and basing offerings on their own interpretations. Amid that noise, several definitions for the term stand out, but each has limitations. For example, many experts, including the National Institute of Standards and Technology (NIST), focus exclusively on the cloud when describing hybrid environments. NIST defines a "hybrid cloud" as a combination of public, private and community clouds "bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds)." While this definition is sufficient when describing a combination of disparate clouds, it fails to address common scenarios where technologies enable data and applications to be easily managed and moved across both cloud and non-cloud infrastructure environments. Hybridization is often also defined as a mixture of on-premises and hosted cloud — but this presents a number of problems. First, the importance of the demarcation line between on- and off-premises infrastructures is diminishing. Logical and physical networks and other infrastructure elements, like WAN acceleration appliances, firewalls, storage gateways and application-delivery controllers, are routinely extended across companies' on-premises data centers and third-party sites to improve network performance, security and ease of use. This blurring of the cloud "edge" renders the physical location of an application running across environments less important. Split application architectures may be deployed in a single data center but have certain workloads reside within different hosting environments.
<urn:uuid:2c0b50a9-58be-41b3-8837-6dd7f02c1d56>
CC-MAIN-2022-40
https://4atc.com/hybrid-it/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00227.warc.gz
en
0.92289
369
2.703125
3
Many organizations depend on Amazon Web Services for critical pieces of their infrastructure, including storing large amounts of sensitive data. To keep this information safe, AWS provides users with a wide variety of security services that work together to limit access to authorized users. Security on Amazon Web Services (AWS) is the customizable collection of protections built to provide AWS customers with a safe space to control their accounts. The AWS Shared Security Model In general, AWS sees its responsibility as ensuring the security 'of' the cloud, while customers are responsible for ensuring their own security 'in' the cloud. In practice, this means that customers can rely on AWS global infrastructure in general, and on the safety of their data when used together with properly-configured compute and storage resources. However, areas like content, identity and access management, encryption, and OS configuration are the responsibility of the customer. AWS Security Features - Identity and Access Management: a framework for managing digital identities. Exclusively cloud-centric, IAM gives IT managers control over users' access to sensitive data by defining 'access roles', then placing users in said roles based on their security privileges. - Elastic Load Balancer: built and provided by AWS, an ELB can help mitigate DDoS-style attacks. An ELB can protect applications by moving traffic to multiple server instances during high traffic loads. - AWS VPC: a virtual private cloud service, which provides a fully customizable and secure connection between client and server. - AWS Monitoring: through tools like AWS CloudWatch and EC2 Scripted Monitoring, both of which serve as fully featured monitoring services. Constant monitoring will help to catch any security breach immediately. With Amazon's monitoring services, this process can be automated to avoid any delays in catching a serious breach. - Certificate Management: a service customers can use to provision, manage, and deploy Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet. - Client/Server Side Encryption Tools: Client-side encryption refers to encrypting data before sending it to Amazon S3. You have two options for the data encryption keys: use an AWS KMS-managed customer master key, or use a master key that you store and manage within your own application. Server-side encryption is about data encryption at rest—that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it (a minimal server-side encryption sketch appears at the end of this entry). - Hardware Security Modules: AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. - Web Application Firewalls: help protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. - Data Encryption: capabilities available in AWS storage and database services, such as EBS, S3, Glacier, Oracle RDS, SQL Server RDS, and Redshift.
- Key Management: AWS Key Management Service allows the user to choose to have AWS manage the encryption keys or to maintain independent control of them. - Encrypted Message Queues: for transmitting sensitive data using server-side encryption. - Integration APIs: integrate encryption and data protection with any of the services in an AWS environment. AWS is built to provide scalable security solutions. With over 1800 security controls, AWS can often provide a much stronger level of protection, especially for smaller businesses, than could be built in house. An advantage of the AWS cloud is that it lets its users expand and innovate while being guaranteed a safe and secure cloud environment. Customers only have to pay for the services they actually use, which reduces upfront expenses while keeping costs lower than an on-premises environment. AWS Security is built around giving the user as much or as little power as they want. - Blog: Barracuda Achieves AWS Security Competency - Datasheet: Barracuda Email Security Gateway for AWS - Datasheet: Barracuda Firewall Control Center for AWS How Barracuda Can Help Barracuda offers two products to secure your AWS environment: The Barracuda CloudGen Firewall for AWS provides native network protection to AWS and hybrid networks. It helps ensure reliable access to applications and data running in AWS with full support for auto-scaling and metered billing. The Barracuda CloudGen WAF for AWS protects AWS-hosted websites and web-facing applications from thousands of types of cyber-attacks, automatically integrates security into your application deployments, and accelerates application delivery. Do you have more questions about AWS Security? Contact us today.
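To make the server-side encryption option described in the feature list concrete, here is a minimal, hedged sketch using the boto3 SDK; the bucket name, object key, and data are placeholders, and it assumes AWS credentials are already configured in the environment.

```python
# Minimal sketch (illustrative; bucket and key names are placeholders):
# upload an object to S3 with server-side encryption, then confirm the setting.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-sensitive-data",           # placeholder bucket
    Key="reports/2022/q3.csv",
    Body=b"account_id,balance\n42,100.00\n",
    ServerSideEncryption="aws:kms",             # or "AES256" for S3-managed keys
)

head = s3.head_object(Bucket="example-sensitive-data", Key="reports/2022/q3.csv")
print(head.get("ServerSideEncryption"))         # expect "aws:kms"
```

Buckets can also enforce encryption at rest by default (for example with S3 default bucket encryption), so objects uploaded without the parameter are still protected.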
<urn:uuid:f50393b9-9952-4d6d-801c-f329b8aa8cd6>
CC-MAIN-2022-40
https://www.barracuda.com/glossary/aws-security?switch_lang_code=en
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00227.warc.gz
en
0.91273
1,044
2.609375
3
Types of Data Analytics Table of Contents: - Understanding Data Analytics - History of Data Analytics - Types of Data Analytics - Data Analytics Technologies Here you can Watch Intellipaat’s Data Analytics Full Course Understanding Data Analytics Data analytics can be defined as the science of analyzing unprocessed data to get conclusions from it. Data analytic strategies help you collect raw data and identify patterns to draw practical insights. Data analytics is now a standard of the primary research of data experts. Many businesses also employ data analytics to help them make wise choices. The phrase “data analytics” is very broad and encompasses many different types of data analysis. This can be applied to any form of data to gain information that can be utilized to improve things. For instance, gaming companies employ data analytics to create prize schedules for players which keep the majority of players active in the game. Similarly, various sorts of businesses use data analytics to meet their specific demands. History of Data Analytics Spreadsheets were traditionally the preferred tool for manually comparing statistics and evaluating data for business insights. Beginning in the 1970s, organizations started utilizing electronic technologies, such as relational databases, data warehouses, machine learning (ML) algorithms, web search engines, data visualization, and other tools with the ability to facilitate, speed, and automate the analytics process. Modern data sources have also put a load on traditional relational databases and other tools’ abilities to input, search, and modify enormous amounts of data. These tools were created to manage structured data like names, dates, and addresses. Modern data sources that produce unstructured data include email, text, video, audio, word processing, and satellite imagery. These types of data cannot be handled and evaluated using traditional methods. With advancements in technology, new tools started getting into the picture and the whole process of Data Analytics now is fairly simplified. Master Data Analytics with our Big Data Training Course Types of Data Analytics Data Analytics is generally of four types. In this article we’ll be broadly discussing all four of them: Suppose you want to find out yearly cost changes, monthly sales growth, the total number of customers, and revenue generated per customer. These all measure what your business has faced in the past, to prepare a report for all of them you’ll be using Descriptive Analytics. Descriptive analytics is the use of various types of past data to make comparisons. Let’s discuss a few cases where you can apply descriptive analytics: - Engagement and Traffic Reports: Reporting is one type of descriptive analytics. If your company tracks engagement through social media analytics or online traffic, you’re probably using descriptive analytics. These reports are developed by comparing current metrics to previous metrics and visualizing trends using raw data generated when people visit your website, adverts, or social media content. - Demand Trends: Additionally, descriptive analytics can be used to figure out patterns in customer choice and behavior and use them to predict demand for particular goods or services. Music streaming giant Spotify is a good example of how descriptive analytics can be used by a business. Spotify analysts track user behavior and their streaming patterns to determine which tracks are in high demand and accordingly use that data to prepare their trending list. 
- Aggregated Survey Results: Descriptive analytics can also be used for market research. When it comes to gaining information from the survey and focus group data, descriptive analytics can assist in identifying links between variables and patterns. For instance, you might carry out a poll and discover that as users’ ages rise, so does their propensity to buy your goods. If you have repeated this survey over several years, descriptive analytics would reveal if the age-purchase connection has always existed or whether it was a trend that only happened this year. Diagnostic analytics is a subset of analytics that seeks to answer the question, “Why did this happen?” Diagnostic analytics could also be used for data drilling and data mining. Companies may need to analyze various data sources, maybe including external data, to understand the core cause of trends. Let’s discuss a few of the examples: - Explaining customer Behavior: For businesses that collect user data, Diagnostic analytics is a secret key to gathering customer data and understanding why customers behave the way they do. These observations can further be used to improve brand messaging, user experience, and product-audience fit. - Identifying Technology Issues: Running tests to identify the root of a technology issue is one example of diagnostic analytics that needs the usage of a software program. You may have done this before when having computer problems; it is commonly known as “running diagnostics.” - Examining Market Demand: Determining the reasons behind product demand is another application of diagnostic analytics. As the name suggests, Predictive analytics alludes to a future prediction. It combines diagnostic and descriptive analytics for identifying special cases and predicting future trends, making it an important device for estimation. Predictive analytics sits alongside advanced analytics types, bringing several benefits such as complicated analysis based on machine or deep learning. Predictive analytics examples include: - Healthcare: Predictive analytics makes sure patients with urgent medical needs can receive care more quickly by identifying which patients are at high-risk. Simultaneously, healthcare practitioners can make better use of their time and resources. - Finance: Financial institutions can manage cash flow more effectively by predicting which customers or companies would likely forget to make their next payment. By reminding potential late payers, they can also take action to reduce the issue. - Manufacturing: Manufacturing managers can track the performance and condition of equipment and detect failures by integrating predictive analytics into their systems. To minimize any effects on production, they might prepare ahead of time and shift the burden to other equipment. The goal of prescriptive analytics is to advise on how to avoid a future problem or benefit from a potential trend. Prescriptive analytics uses cutting-edge tools and technology like machine learning, and algorithms, making it easy to implement and administer. Examples of prescriptive analytics: - Marketing and Sales: Large volumes of customer data are available to marketing and sales organizations, and this information may be used to develop the best possible marketing strategies. For example, knowing how to price things and what kinds of products go well together is one of these tactics. 
Since they are no longer limited to acting purely on instinct and experience, prescriptive analytics enables marketers and salespeople to be more exact with their campaigns and client outreach. - Transportation industry: Cost-effective delivery is essential to achieving growth and profit in the package delivery and transport sectors. Time and money can be saved by reducing energy use through better route selection and resolving logistical problems like inaccurate delivery destinations. Data Analytics Technologies Analyzing various kinds of data sets to derive useful information is known as data analytics. For organizational decision-making, data analytics is utilized to find hidden patterns, market trends, and consumer preferences. Data analytics involves some technologies such as: - Machine learning: Data analytics depend on machine learning, a branch of artificial intelligence that contains algorithms with self-memorizing capabilities. - Data Mining: Data mining is the act of extracting vast amounts of data to identify patterns and discover relationships. It allows you to search through large datasets and identify relevant information. - Data Management: The first step in data analysis is to understand how data enters and leaves your system. Then you must maintain that data organized, as well as examine the quality of the data and store it in a secure location. So, putting together a data management program helps ensure that everyone in your firm is on the same page when it comes to data governance and management. Data is critical for any organization because it allows them to better understand their customers, improve their advertising strategies, and extend their bottom lines. There are many benefits to data, but you can’t make use of them without the right tools, therefore data analytics procedures and tools are very important. While raw data is quite powerful, data analytics is what unlocks the potential to grow your business. As a result, we can state that data analytics is highly crucial in the growth of any business because it assists the organization in maximizing its performance. Check out this Amazing Data Analytics Course by IIT Madras!
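The descriptive analytics measures mentioned at the start of this piece (monthly sales growth, total customers, revenue per customer) reduce to simple aggregations over historical records. Below is a minimal, hedged sketch of that idea using the pandas library and made-up figures; it is an illustration, not part of the original article.

```python
# Illustrative descriptive analytics: month-over-month growth from made-up data.
import pandas as pd

sales = pd.DataFrame({
    "month":     ["2022-01", "2022-02", "2022-03", "2022-04"],
    "revenue":   [120_000,   135_000,   128_000,   150_000],
    "customers": [800,       860,       845,       910],
})

# Revenue generated per customer and month-over-month revenue growth.
sales["revenue_per_customer"] = sales["revenue"] / sales["customers"]
sales["mom_growth_pct"] = sales["revenue"].pct_change() * 100

print(sales.round(2))
```

Diagnostic, predictive, and prescriptive work typically starts from the same kind of table, layering statistical tests, forecasting models, and optimization on top of these basic aggregates.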
<urn:uuid:f7e0cfc1-3d11-4be5-bac2-ec4441cefef4>
CC-MAIN-2022-40
https://www.businessprocessincubator.com/content/types-of-data-analytics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00227.warc.gz
en
0.92565
1,729
3.375
3
2021 marks 75 years since Electronic Numerical Integrator and Computer (ENIAC) was first revealed to the public. An important part of computing history, ENIAC’s development was a collection of important milestones. ENIAC may have been the first electronic general-purpose machine that was Turing complete - i.e. theoretically able to handle any computational problem. Its development was key to the founding of the commercial computing industry, providing many of the early ideas and principles that underpin computers of all shapes and sizes. A ‘revolution in the mathematics of engineering’ Built between 1943-1945 at the University of Pennsylvania by engineers John Presper Eckert and John William Mauchly, ENIAC was created to calculate artillery tables - the projectile trajectories of explosive shells - for the US Army Ballistics Research Laboratory. Mauchly proposed using a general-purpose electronic computer to calculate ballistic trajectories in 1942 in a five-page memo called The Use of Vacuum Tube Devices in Calculating. After getting wind of the idea, the US Army commissioned the university to build the machine, known as the time as Project PX. The system was completed and brought online towards the end of 1945, and moved to the Aberdeen Proving Ground in Maryland in 1947. Taking up a 1,500-square-foot room at UPenn’s Moore School of Electrical Engineering, ENIAC comprised 40 nine-foot cabinets. Weighing 30 tons, the machine contained over 18,000 vacuum tubes and 1,500 relays, as well as hundreds of thousands of resistors, capacitors, and inductors. “It was a bunch of guys bending metal in the basement of a building in UPenn,” says Jim Thompson, CTO of the ClearPath Forward product at Unisys, a company that through acquisitions can trace its lineage back to ENIAC, and the Eckert-Mauchly Corporation, founded in 1946. “There was nobody building computer parts; these guys made ENIAC literally out of radios and televisions and everything else they could find, taking vacuum tubes that were designed for a different purpose and then turning them into logic devices and repurposing them.” After the end of WW2, ENIAC was donated to the University of Pennsylvania on February 15, 1946. According to the Smithsonian, where parts of the machine now reside, an Army press release at the time described ENIAC as “a new machine that is expected to revolutionize the mathematics of engineering and change many of our industrial design methods. “Begun in 1943 at the request of the Ordnance Department to break a mathematical bottleneck in ballistic research, its peacetime uses extend to all branches of scientific and engineering work.” Prior to ENIAC, human ‘computers,’ mostly teams of women, performed calculations by hand with mechanical calculators. Predicting a shell’s path used calculations that took into account air density, temperature, and wind. A single trajectory took a person around 20 to 40 hours to ‘compute.’ With ENIAC, the same calculations were now possible in 30 seconds. Input was done through an IBM card reader and an IBM card punch was used for output. While ENIAC had no system to store memory at first, the punch cards could be used for external memory storage. A 100-word magnetic-core memory built by the Burroughs Corporation was added to ENIAC in 1953. Capable of around 5,000 calculations a second, ENIAC was a thousand times faster than any other machine of the time and had modules to multiply, divide, and square root. 
“This was a machine that straddled the point in history where we went from mechanical calculators and adding machines to electronic computers,” says Thompson. While a massive step-change in terms of capability compared to any other computer in the world at the time, it also had various challenges in operation. With minimal cooling technology – two 20-horsepower blowers – ENIAC raised the room temperature to 50ºC when in operation and its 160kW energy consumption caused blackouts in the city of Philadelphia. Reliability was also a constant challenge. Before speciality tubes became available in 1948, the machine used standard radio tubes, which burnt out on a near-daily basis. At first it took hours to work out which tube had actually blown, but the team eventually developed a system to reduce this down to around 15 minutes thanks to some ‘predictive maintenance’ and careful monitoring of equipment. ENIAC was difficult and complicated to use. Initially, it used patch cables and switches for its programming, and reprogramming the machine was a physically taxing task requiring a lot of preplanning and often took days to do. “The first electronic digital computers, including ENIAC, had to be programmed by wiring using patch cords,” explains David Taylor, co-founder of coding tutorial site Prooffreader and a computing history enthusiast. “Once a program was written, the program-specific logic had to be literally wired into the machine, meaning programmers had to physically move the cables on a plugboard and change the switches that controlled the response to inputs. “ENIAC was then able to solve only that particular problem. To change the program, the machine’s data paths had to be hand-wired again. This was a quite laborious process, taking a few days to make the necessary physical changes and weeks to design and write new programs.” Improvements in 1948 made it possible to execute stored programs set in function table memory, speeding up the ‘programming’ process. “The three different kinds of memory used in ENIAC were replaced with a single, erasable high-speed memory, allowing programs to be stored in the form of read-only memory,” says Taylor. “This conversion immensely sped up the reprogramming process, taking only a few hours instead of days.” ENIAC’s legacy: an important part of history The late 1930s to 1940s were rife with computing pioneers developing historically significant machines, all to help the war effort. IBM’s Harvard Mark I - another general purpose machine of the era - was capable of just three additions or subtractions per second, while multiplications each took six seconds, divisions 15 seconds, and a logarithm or a trigonometric function more than a minute. The UK’s Colossus, built at Bletchley Park, was incredibly important to deciphering Nazi cryptography, but was built for one specific purpose and government secrecy meant its learnings couldn’t be shared at the time. The Atanasoff–Berry computer, built in 1942 by John V. Atanasoff, was neither programmable nor Turing-complete. The Konrad Zuse-built Z3 was completed in Berlin in 1941 but government funding was denied as it was not viewed as important. The machine never put into everyday use, and was destroyed during Allied bombing of Berlin in 1943. 
“It’s easy to focus on how slow, massive, power-hungry and memory-poor these early computers were, rather than recognizing the exponential leaps in technology that they represented compared to the previous state of the art,” says Charlie Ashton, senior director of business development at SmartNIC provider Napatech. “One of the most impressive statistics about ENIAC is the multiple-orders-of-magnitude improvement in performance compared to the electro-mechanical machines that it replaced. While historians may argue over the importance of certain machines and which ones have the honor of being ‘first’ in various categories, few can argue over whether ENIAC was a leap forward and had a real-world impact. Its computing power was a massive leap in comparison to its peers at the time, and its general purpose nature led the way for computers being re-programmable for any number of potential use cases. “There were other computers (mechanical and electro-mechanical) before it. And we can over-index on the fact that it was the first working general purpose digital computer, but the impact was far greater,” says Charles Edge, CTO of startup investment firm Bootstrappers.mn and host of The History Of Computing Podcast. At a cost of around $400,000 at the time - equivalent to around $7 million today - ENIAC was a relative bargain, even if the original project budget was just $61,700. The machine was retired in October 1955 after a lightning strike, but it had already made a lasting mark on the nascent computer industry. ENIAC a failure and a success By the time ENIAC was ready for service in 1945, the war was coming to an end. And as a result, it was never actually used for its intended purpose of calculating ballistic trajectories. “The difference for ENIAC was it really was a general-purpose problem solver,” says Thompson. “Even though it showed up late in the war and didn't really contribute to its original design purpose, it was immediately adapted to help with the US effort around nuclear weapons, to do agriculture work, and anything where we had to do a lot of computations quickly.” During its lifetime, the machine performed calculations for the design of a hydrogen bomb, weather predictions, cosmic-ray studies, random-number studies, and even wind-tunnel design. ENIAC’s work to investigate the distance that neutrons would likely travel through various materials helped popularize Monte Carlo methods of calculations. “ENIAC certainly goes down as a pivotal moment in computing,” says Edge. “It was important in the mathematical modeling that led to the hydrogen bomb. In part out of that work, we got the Monte-Carlo simulation and von Neumann’s legacy coming from that early work. We got the concept of stored programs.” Mauchly’s interest in computers reportedly stemmed from hopes of forecasting the weather through computers and using electronics to aid statistical analysis of weather phenomena. And ENIAC did in fact conduct the first 24-hour weather forecast. But, as with most computers of the time, John von Neumann and the specter of nuclear weapons research was an important part of ENIAC’s output. While working on the hydrogen bomb at Los Alamos National Laboratory, von Neumann grew aware of ENIAC’s development, and after becoming involved in its development, the machine’s first program was not ballistics tables but a study of the feasibility of a thermonuclear weapon. 
von Neumann also worked with IBM’s Harvard Mark I, and essentially created the von Neumann architecture when documenting his thoughts on the ENIAC’s successor machine, the EDVAC. Many ENIAC researchers gave the first computer education talks in Philadelphia, Pennsylvania in 1946 that are collectively known as The Theory and Techniques for Design of Digital Computers and often referred to as the Moore School Lectures. The talks were highly influential in the future development of computers. “The Moore School Lectures helped produce Claude Shannon of Bell Labs, Jay Forrester at MIT, mainframe developers from GE which would go on to be a substantial player in the mainframe industry, and engineers, researchers, and the future of the still-nascent computer industry,” says Edge. The Pentagon invited experts from Britain as well as the US to jumpstart research in the field. The Lectures, and von Neumann’s memo, First Draft of a Report on the EDVAC, sparked off a race to create truly general-purpose systems which could run from stored programs. Mauchly and Eckert were quick to build on their ideas, delivering EDVAC (Electronic Discrete Variable Automatic Computer) to the US Army’s Ballistic Research Laboratory in 1949. Design work began before ENIAC was even fully operational, implementing architectural and logical improvements conceived during the ENIAC's construction. However, they were beaten by the Manchester Baby system, the first stored-program computer, and narrowly by the EDSAC in Cambridge, which is regarded as the first practical computer, and the origin of business computing via the Lyons Leo. “ENIAC also heralded a variety of build techniques that persist to this day,” says Thompson. “It was a modular design, it was a scalable site so you can add more capability to it, you can change capability.” “Over its lifetime, which was just shy of a decade, they added all kinds of technology to it that changed from how it was at the beginning to how it was at the end. That becomes sort of a blueprint for computing as you and I know it.” The start of commercial computing Today, technology names are some of the most valuable brands in the world. But even in the 1940s, technology was able to captivate the media. With Bletchley Park’s Colossus still a state secret, ENIAC was free to grab the headlines. A NY Times headline from February 1946 read ‘Electronic Computer Flashes Answers, May Speed Engineering.’ In other publications, ENIAC was described by the press at the time as the “mechanical brain” and “electronic Einstein.” ENIAC had an early technology marketing campaign. Javier García, academic director of the engineering and sciences area at the U-tad University Center, Spain, explains that as funding dried up after the end of the war, the creators of ENIAC produced and exhibited a film about its operation to drive interest. In one marketing trick, the team incorporated panels with light and ping-pong balls with painted numbers that lit up while carrying out an operation to impress viewers. “They were useless. Mere aesthetics. But in the popular imagination, it has remained as the image of the first computers. Just look at science fiction films. In fact, to reach more people, these bulbs are defined as an electronic brain. An absolute marketing success,” Garcia told El Pais. While neither ENIAC nor EDVAC, nor the British EDSAC, were sold on the market, Mauchly and Eckert’s work was an important leap forward for the commercialization of computing. 
The two launched the first computer company, Electronic Control Co., in 1946 after leaving UPenn over patent disputes. Unable to find a bank or investor that would lend them money, the two borrowed $25,000 from Eckert's father to get the business off the ground. Due to financial issues, the renamed Eckert-Mauchly Computer Co. was sold in 1950 to Remington-Rand. Eckert-Mauchly developed two successor machines: BINAC was the US’s first stored program computer in 1949, while UNIVAC didn’t launch till 1951, after acquisition by Remington-Rand. With varying degrees of success, these helped kickstart the sale and use of computers for dedicated business purposes in the US. “At that point in time IBM was in the business,” says Thompson. “IBM did a better job of making that pivot and commercializing, but Eckert and Mauchly were there at the beginning and they started producing standardized computers that could be built in volume for the day – that is more than one – from a standard set of designs and sold to customers to do a general set of tasks. So that's a pretty big switch from a machine that's built for a purpose. “Within 10 years of its birth and five after having been shut down, we were building general-purpose commercial computers. It created a whole new industry. We would not have programmers, we would not have system designers, we wouldn't have all kinds of things that we have [today]. Not 15 years out from ENIAC we're building computers to go to the Moon.” Northrop accepted the first BINAC in September 1949, but it reportedly never worked properly after it was delivered and was never used as a production machine. Northrop blamed EMCC for packing the machine incorrectly, while Eckert and Mauchly said Northrop had assembled it poorly and wouldn't let its engineers on-site to fix it. However, after the Rand acquisition, the first UNIVAC was successfully delivered to the United States Census Bureau in March 1951. The fifth machine – built for the US Atomic Energy Commission – was famously used by CBS to correctly predict the result of the 1952 presidential election in a marketing scheme cooked up by the pair. Though not in name, Eckert and Mauchly’s company lives on today. Rand merged with Sperry Corp. in 1955 and then Burroughs in 1986 and became Unisys. Eckert retired from Unisys in 1989.
<urn:uuid:a7901adc-b7d7-4a84-9d74-b8852f771da5>
CC-MAIN-2022-40
https://www.datacenterdynamics.com/en/analysis/eniac-at-75-a-pioneer-of-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00227.warc.gz
en
0.965902
3,552
3.75
4
By 2035, the uptake of robotics and autonomous systems (RAS) could mean a boost of £6.4 billion ($8.7 billion) in value added to the UK economy. That's according to the government's own estimates in a new, 110-page report from the department for Business, Energy and Industrial Strategy (BEIS). Despite the modest economic promise, the document makes odd, disjointed, and depressing reading - for several reasons. Not least of these is the recent McKinsey forecast (quoted in the report) that the global annual gain from advanced robotics could be as high as $1.7 trillion to $4.5 trillion of added economic value by 2025. That feels like an overestimate, but McKinsey's ‘hitting an elephant with a dart' forecast is endorsed by the government in this report. So, if the actual figure falls in the middle of McKinsey's range come 2025, robotics will be adding roughly the same amount to the world economy as current UK GDP ($2.83 trillion). Whatever the accuracy of those figures may turn out to be, the BEIS report emphasises that the government is aware of robotics' huge global potential: According to more recent estimates, boosting robot installations 30% above the baseline could add an extra $4.9 trillion per year to the global economy by 2030 (Oxford Economics, 2019). In contrast to previous waves of robotics, which were mostly focused on industrial applications, RAS has the potential to impact a much wider range of sectors, with use-cases and opportunities emerging across the economy. These include, for instance, automated guided vehicles, mobile retail robots, and humanoid customer service robots. That's true. But leaving aside the probability that humanoid service robots will remain a pointless damp squib for the foreseeable future, big questions arise from the government's report. The biggest of these is obvious. Why is the UK only poised to gain a paltry 0.19% to 0.5% of the predicted added global value from robotics ($8.7 billion expressed as a percentage of McKinsey's $4.5 trillion and $1.7 trillion estimates)? And in a 14-year rather than four-year timescale? That's hardly the good news the government thinks it is, when the UK is, by 2021 nominal GDP estimates, still the world's fifth largest economy. Common sense suggests that any modern, top-10 industrial economy ought to be seeing at least a single-digit economic boost from RAS, not a fractional one, as its percentage share of McKinsey's predicted global gains. But that isn't happening in the UK's case - and the report admits it. The context couldn't be clearer in competitive terms: South Korea is the world's most automated country (it has the highest robot density), China is automating faster than any other nation, and the US, Japan, and many countries in Europe and Scandinavia are ahead of the UK in terms of adopting Industry 4.0 technologies - aside from AI, in which Britain has been making good progress. The most recent data from the International Federation of Robotics (IFR.org) shows that industrial robot installations in the UK remain stubbornly low compared to its peers. Only around 2,500 industrial robots were installed in the UK in 2020 (0.5% of an estimated world total of 520,900). This compares to estimates of around 6,000 (1.2% of the world total) in France, 8,500 (1.6%) in Italy, 25,000 (4.8%) in Germany, 55,000 (10.6%) in the US, and 210,000 (40.3%) in China. 
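The installation shares quoted above are simple ratios against the estimated world total. The short sketch below is not part of the BEIS report or the IFR data; it just reproduces that arithmetic from the 2020 figures given in the article.

```python
# Reproduce the quoted 2020 installation shares from the article's figures.
world_total = 520_900  # estimated world industrial robot installations, 2020

installs = {
    "UK": 2_500, "France": 6_000, "Italy": 8_500,
    "Germany": 25_000, "US": 55_000, "China": 210_000,
}

for country, units in installs.items():
    print(f"{country}: {units / world_total:.1%} of world installations")
```

The printed shares match the percentages quoted in the article to one decimal place.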
To BEIS' credit, the report acknowledges the UK's lamentable RAS gains to date, which are rooted in a widespread reluctance or inability to modernize across many industries. It states: This [£6.4 billion/$8.7 billion] growth should be seen in light of a relatively low base, with robots in UK industry historically lagging behind other nations. RAS offers a potential solution to key challenges to the UK's continued economic growth. In the UK, productivity is lower than in many peer economies such as the United States, France, and Germany. Moreover, productivity growth in the UK has been sluggish since the 2008/09 recession. [It has been flatlining - CM.] However, unlocking the potential economic benefits of RAS is not straightforward. The UK failed to capitalise on the opportunities presented by the previous wave of industrial robotics, with the use of industrial robots in the UK lagging behind other nations. Moreover, recent research by Boston Consulting Group (BCG) suggests that there is a significant gap between companies' ambitions to implement advanced robots and actual implementation. The report adds: More than 90% of the companies surveyed by BCG reported that at least one of three key enablers - including a complete vision of their future operations, sufficient knowledge of and training with RAS and related issues, and the development of a system architecture to support future operations - was not fully present in their company. Remember: this is an 11-year-old government observing the nation as though these issues are nothing to do with its own management. Similar problems can be seen in other technology sectors, including AI and cybersecurity, where lack of skills and tens of thousands of unfilled vacancies reveal that the UK workforce is poorly placed to capitalise on the economic potential created by its own innovators. Digital skills gaps have been observed since the 1990s. These are alarming findings after more than a decade of the same government (albeit one with a constantly changing face). The Industrial Strategy's aims were first aired in 2016 and published in 2017, while RAS was identified as far back as 2013 as one of the ‘Eight Great Technologies' that were key to the UK's future prosperity. Getting in its own way Britain had the right idea eight years ago. So, what has happened since? Can you guess? The government talked the talk, but it didn't walk the walk; it failed to persuade British industry to modernize and invest in new technology. Arguably, this was partly due to every Whitehall department being "impaled on Brexit" (to quote a Conservative peer at a recent Westminster eForum). Put another way, leaving the EU has not only taken up much of the government's time for the past six years, but it has also produced almost no practical action or help for British industry. With one exception: publication of a new Industrial Strategy. A cynic might ask what the hell the Cabinet has been doing in all that time. But wait, there's more. To compound the problems arising from Brexit and the COVID-19 pandemic, in March this year the Prime Minister (unbelievably) scrapped the Industrial Strategy. This was the document that had at least underpinned government support of, and investment in, new technologies since 2016. The move was in support of launching a new ‘Plan for Growth' (which in July 2021 was measured at 0.1%). 
It's hard to avoid the suspicion that it was really an expression of Boris Johnson's animosity towards his two Conservative predecessors, Theresa May and David Cameron. Why else pile unnecessary problems onto the UK economy by undoing a strategy that, if nothing else, had identified the right technologies and galvanised some investment in them? It was a dumb, knee-jerk decision, even considering Covid's unplanned-for impact on the economy. To deepen innovators' frustrations, this new report's predictions of modest UK gains from robotics are based on uptake of RAS continuing at its current slow rate - for the next 14 years. These are hardly the bold, sunlit uplands of bluster that we have come to expect from this administration, making this report a rare dose of honesty and realpolitik. It says progress has been poor and, in most cases, will remain so until 2035. Gaps in thinking But will it? The problem is the document remains bafflingly odd. For one thing, BEIS' national forecasts come from an analysis of just seven sectors, which the government has chosen from a shortlist of 14. The chosen seven are: agriculture, construction, energy, food & drink, health & social care, infrastructure, and logistics. All promising areas with real robotics applications, but why not look at all 14 - or the whole economy - and thus present a more comprehensive picture of the technology's potential? What purpose is served by drawing holistic conclusions from just seven markets? Aside from food & drink, the shortlist leaves out manufacturing as a core focus of the report, in areas such as automotive, electronics, and pharmaceuticals, for example. That's a bizarre omission. It also largely ignores the boost from emergent sectors, such as space technology, deep-sea engineering, nuclear decommissioning (principally of weapons rather than spent fuel), and others. Many of these are so-called ‘extreme environments', alongside the likes of deep mining and defence. Any sector where humans fear to tread (because of hazardous or lethal conditions) represents a huge opportunity for robotics, and yet these issues are largely (but not completely) omitted from the BEIS report. The omission of most extreme-environment robotics from detailed discussion is both odd and unhelpful, as there is enormous potential there, according to a 2019 Innovate UK report produced for the government. And I should know: I wrote it. For example, the UK's two academic nuclear hubs, RAIN and the National Centre for Nuclear Robotics, have estimated that the cost of decommissioning nuclear materials by hand - via workers in hazmat suits wielding power tools - is likely to be $2 billion a year for 100 years. On the face of it, that one sector alone could be a £200 billion robotics opportunity for the UK, and the technologies to capitalize on it are in active development. Elephant(s) in the room So, Brexit aside, why else has the UK been sinking in a sea of global opportunity? One reason is that the nation has long been hindered by ill-informed news media that persist in equating robotics with job losses and existential terror, rather than employment gains, increased productivity, and economic growth. Pre-Covid, human unemployment was low in most highly automated economies. In 2018, for example, the World Economic Forum predicted there would be a net gain of 58 million jobs worldwide from adopting Industry 4.0 technologies, but that was pre-pandemic. Even so, that message never reached the ears of the British public or most businesses. 
As such, it represents a failure of local leadership to counter dystopian narratives about job-killing robots. Which brings us to another elephant in the room - and there's a herd of them. Whatever your political beliefs and party allegiances may be, it must be obvious - even to his most ardent supporters - that Boris Johnson has the wrong mindset and personality to inspire the nation with technology-infused visions of the future. Yet that's precisely what the UK needs. This is the man who joked about Kermit the Frog, Alexa "stamping her foot", cheese, "limbless chickens" and "pink-eyed Terminators" when addressing the United Nations about new technologies - speeches that were met with embarrassed silence by the UK's peers and allies. But politics aside, how is the UK doing across the sectors BEIS has chosen for its analysis of this critical technology? Based on estimates of unit shipments, the total UK market for robotics and autonomous systems will grow at a compound annual growth rate of more than 40% between 2020 and 2030, says the report. Good news, but it will reach a value of just £3.5 billion in that timescale, given the current low installed base. And as we've seen, the total value added to the economy five years after that will be modest. A significant proportion of this growth is expected to come from a rise in demand for mobile robots, says the report. If UK shipments follow a similar trend to the wider European market (local data is missing), mobile robots will grow from an estimated 1,500 shipments in 2020 to over 90,000 shipments a year by 2030. The report adds: Estimates of current robot density (robots per million hours worked) for the sectors selected highlight significantly stronger uptake in the warehouse logistics and food & drink manufacturing sectors compared to other industries. Estimates of future uptake also suggest that this concentration is expected to continue over the period being assessed. That's fine, but the statement is not without its own frustrations: the IFR defines robot density as the number of robots per 10,000 human workers, not per million hours worked (does each of 10,000 workers work 100 hours?). The government seems determined to use different measures to the ones that other organisations use, typifying its "doing things differently just because we can" attitude. None of this helps the UK, as it makes like-for-like comparisons impossible; perhaps that's the point. That aside, the figures are interesting: the warehouse logistics sector - companies like Ocado automating their operations to compete with Amazon - is expected to see the biggest gains, with robotics adding 14% (£4.4 billion/$5.98 billion) to the sector's baseline value by 2035. Meanwhile, the impact of RAS in food & drink manufacturing is predicted to add 3% (£0.9 billion/$1.22 billion) of baseline added value by 2035. More good news. However, the other five chosen sectors remain almost flat in terms of modernisation and automation, despite the many opportunities for RAS that are set out in the report. Their robot density is low and is likely to remain so, says the government, though agriculture is forecast to see a slight uptick in adoption, partly in response to seasonal labour shortages. A very British document: a government observing how badly the UK is doing, while ignoring its own role in that failure for more than a decade. The result? No sunlit uplands, but isolated bright spells in the rain - thanks to the UK's technology innovators, rather than to the government itself. 
BEIS has identified the UK's deep-seated industrial problems. So, where's the vision and investment to solve them - and soon? What's urgently needed is modern, competent, forward-looking leadership that can inspire both industry and the public about new technology's potential. Meanwhile, ‘government by jolly japes' is nothing more than an embarrassing distraction from the serious challenges ahead.
<urn:uuid:6613385e-0564-42ad-a33f-220ff887ca31>
CC-MAIN-2022-40
https://diginomica.com/uk-robotics-government-forecasts-small-gains-massive-inaction
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00227.warc.gz
en
0.956203
3,023
2.5625
3
If you're a PC user, you may have encountered this hard drive boot error at least once in your life:

Info: An error occurred while attempting to read the boot configuration data.

When you get this hard drive boot error message, you've clearly run into a problem. The problem lies with a very important part of Windows: the boot configuration data file. The BCD, or boot configuration data, has played a critical role in your PC's startup process from Windows Vista all the way to Windows 10. The BCD database stores boot-time configuration data, which Windows Boot Manager accesses at startup. It's a key portion of your Windows operating system. But just like any other portion of the O/S, it can run into problems of its own. Unfortunately, these problems can stop your computer from successfully booting up.

If the BCD becomes corrupted, you can encounter a hard drive boot error message telling you either that "the boot configuration data file is missing required information" or that "an error occurred while attempting to read the BCD". You may also encounter an error telling you that the BCD cannot be found, or that the BCD is missing, corrupt, or improperly configured. In some cases, your computer can find the BCD and recognizes that it's gone bad. In other cases—especially due to filesystem or boot sector corruption, wear on the SATA cables connecting your hard drive to your PC, or deterioration of your hard drive's inner workings—your computer can't find the BCD at all, and spits out one of these error messages because it doesn't know what else to say. If your computer has difficulty accessing your hard drive due to a physical failure-in-progress, it can spit out these hard drive boot errors.

How to Fix a BCD Error 0xc000000f

There are ways to fix the BCD when it goes bad. In some situations, if you tap F8 upon powering on your computer and select "Use last known good configuration" from the Windows boot menu, the issue will resolve itself. Using a bootable USB stick with Windows installation or repair files on it can also resolve the issue. After creating a recovery USB for Windows and booting from it, you can use the recovery USB to repair the BCD. In some cases, Windows can repair the BCD automatically. But in other situations, you may have to get a bit more hands-on. In the System Recovery Options menu, select the Command Prompt option and run bootrec.exe with a repair option such as /rebuildbcd, then press "Enter".

Unfortunately, the client in this case study couldn't fix the problem on their own. After the client's attempt to repair the BCD on their own left them empty-handed, they brought their computer to a local repair shop. The repair technician ended up removing the hard drive and hooking it up to another machine in an attempt to recover data from the drive, but couldn't salvage any data from the drive on their own. But there was still a ray of hope. The technician referred the computer's owner to Gillware Data Recovery's specialists to get their lost files back.

Salvaging Data After a Hard Drive Boot Error

In this data recovery case, the client's hard drive had suffered from filesystem corruption. Filesystem corruption can happen for many reasons. We often see file and filesystem corruption affect clients' lost data after an accidental reformat or system restore. Filesystem corruption can occur naturally as your hard drive ages and sectors on its platters wear out and go bad. Filesystem corruption can also sometimes occur if your hard drive loses power while it's trying to write to filesystem sectors.
This can happen in the event of, say, a sudden power outage or a forced shutdown by the user. Your hard drive's filesystem is the trail of breadcrumbs your computer follows to avoid getting lost in the woods. When corruption gobbles up even a small part of the breadcrumb trail, your computer gets lost—and you can't find any of your files!

Drive Model: Seagate Momentus Thin ST500LT012
Drive Capacity: 500 GB
Operating/File System: Windows NTFS
Data Loss Situation: Received Windows startup error upon booting up and could not read data from the hard drive when connected to another computer via USB-SATA adapter cable
Type of Data Recovered: Documents, photos
Binary Read: 29.2%
Gillware Data Recovery Case Rating: 10

With the help of our advanced data recovery tools, our logical hard drive failure recovery experts can sniff out the remaining filesystem breadcrumbs and salvage data, even when filesystem corruption renders a hard drive inaccessible. After imaging the entire used area of the client's hard drive, our engineers successfully salvaged 100% of the client's photos and documents from the drive. We rated this data recovery case a perfect 10 on our ten-point rating scale.

You can often fix many a hard drive boot error, including the 0xc000000f error, with a little elbow grease. These problems often lie within the operating system, and usually don't put your important personal files in jeopardy. But you can also encounter these boot errors due to a real failure of your hard disk drive itself. In those situations, you'll want to call on the data recovery experts at Gillware to retrieve your data.

Data Recovery Software to recover lost or deleted data on Windows

If you've lost or deleted any crucial files or folders from your PC, hard disk drive, or USB drive and need to recover it instantly, try our recommended data recovery tool.
- Retrieve deleted or lost documents, videos, email files, photos, and more
- Restore data from PCs, laptops, HDDs, SSDs, USB drives, etc.
- Recover data lost due to deletion, formatting, or corruption
<urn:uuid:6822a73d-5fd1-4d1a-abba-df8f56c052c6>
CC-MAIN-2022-40
https://www.gillware.com/hard-drive-data-recovery/hard-drive-boot-error-0xc000000f/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00227.warc.gz
en
0.897969
1,265
2.578125
3
As more and more U.S. schools and businesses shutter their doors, the rapidly evolving coronavirus pandemic is helping to expose society’s dependence – good and bad – on the digital world. Entire swaths of society, including classes we teach at American University, have moved online until the coast is clear. As vast segments of society are temporarily forced into isolation to achieve social distancing, the internet is their window into the world. Online social events like virtual happy hours foster a sense of connectedness amid social distancing. While the online world is often portrayed as a societal ill, this pandemic is a reminder of how much the digital world has to offer. The pandemic also lays bare the many vulnerabilities created by society’s dependence on the internet. These include the dangerous consequences of censorship, the constantly morphing spread of disinformation, supply chain vulnerabilities and the risks of weak cybersecurity. 1. China’s censorship affects us all The global pandemic reminds us that even local censorship can have global ramifications. China’s early suppression of coronavirus information likely contributed to what is now a worldwide pandemic. Had the doctor in Wuhan who spotted the outbreak been able to speak freely, public health authorities might have been able to do more to contain it early. China is not alone. Much of the world lives in countries that impose controls on what can and cannot be said about their governments online. Such censorship is not just a free speech issue, but a public health issue as well. Technologies that circumvent censorship are increasingly a matter of life and death. Disinformation online isn’t just speech – it’s also a matter of health and safety During a public health emergency, sharing accurate information rapidly is critical. Social media can be an effective tool for doing just that. But it’s also a source of disinformation and manipulation in ways that can threaten global health and personal safety – something tech companies are desperately, yet imperfectly, trying to combat. Facebook, for example, has banned ads selling face masks or promising false preventions or cures, while giving the World Health Organization unlimited ad space. Twitter is placing links to the Centers for Disease Control and Prevention and other reliable information sources atop search returns. Meanwhile, Russia and others reportedly are spreading rumors about the coronavirus’s origins. Others are using the coronavirus to spread racist vitriol, in ways that put individuals at risk. Not only does COVID-19 warn us of the costs – and geopolitics – of disinformation, it highlights the roles and responsibilities of the private sector in confronting these risks. Figuring out how to do so effectively, without suppressing legitimate critics, is one of the greatest challenges for the next decade. Cyber resiliency and security matter more than ever Our university has moved our work online. We are holding meetings by video chat and conducting virtual courses. While many don’t have this luxury, including those on the front lines of health and public safety or newly unemployed, thousands of other universities, businesses and other institutions also moved online – a testament to the benefits of technological innovation. At the same time, these moves remind us of the importance of strong encryption, reliable networks and effective cyber defenses. Today network outages are not just about losing access to Netflix but about losing livelihoods. 
Cyber insecurity is also a threat to public health, such as when ransomware attacks disrupt entire medical facilities. Smart technologies as a lifeline The virus also exposes the promise and risks of the “internet of things,” the globe-spanning web of always-on, always-connected cameras, thermostats, alarm systems and other physical objects. Smart thermometers, blood pressure monitors and other medical devices are increasingly connected to the web. This makes it easier for people with pre-existing conditions to manage their health at home, rather than having to seek treatment in a medical facility where they are at much greater risk of exposure to the disease. Yet this reliance on the internet of things carries risks. Insecure smart devices can be co-opted to disrupt democracy and society, such as when the Mirai botnet hijacked home appliances to disrupt critical news and information sites in the fall of 2016. When digitally interconnected devices are attacked, their benefits suddenly disappear – adding to the sense of crisis and sending those dependent on connected home diagnostic tools into already overcrowded hospitals. Tech supply chain is a point of vulnerability The shutdown of Chinese factories in the wake of the pandemic interrupted the supply of critical parts to many industries, including the U.S. tech sector. Even Apple had to temporarily halt production of the iPhone. Had China not begun to recover, the toll on the global economy could have been even greater than it is now. This interdependence of our supply chain is neither new nor tech-specific. Manufacturing – medical and otherwise – has long depended on parts from all over the world. The crisis serves as a reminder of the global, complex interactions of the many companies that produce gadgets, phones, computers and many other products on which the economy and society as a whole depend. Even if the virus had never traveled outside of China, the effects would have reverberated – highlighting ways in which even local crises have global ramifications. As the next phase of the pandemic response unfolds, society will be grappling with more and more difficult questions. Among the many challenges are complex choices about how to curb the spread of the disease while preserving core freedoms. How much tracking and surveillance are people willing to accept as a means of protecting public health? As Laura explains in “The Internet in Everything,” cyber policy is now entangled with everything, including health, the environment and consumer safety. Choices that we make now, about cybersecurity, speech online, encryption policies and product design will have dramatic ramifications for health, security and basic human flourishing. • Laura DeNardis is Professor of Communication Studies, American University School of Communication; Jennifer Daskal is Professor of Law and Faculty Director, Technology, Law & Security Program, American University. This article originally appeared on TheConversation.
<urn:uuid:7620cd62-6c1a-4888-be89-5e0759eb4e13>
CC-MAIN-2022-40
https://news.networktigers.com/network-news/five-cyber-issues-laid-bare-by-covid-19/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00227.warc.gz
en
0.944667
1,251
3
3
One of the key trends emerging in the Global Navigation Satellite System (GNSS) industry is the advancement of high-precision GNSS, which is capable of delivering centimeter-level accuracy. Some of the applications where these solutions are being used include automotive, commercial Unmanned Aerial Vehicles (UAVs), precision agriculture, surveying, robotics, and heavy machinery navigation. To accomplish high-precision positioning, implementers and solution providers must pay serious attention to chipset diversification, the application of corrective services like Real-Time Kinematic (RTK), and sensor fusion.

GNSS versus GPS

Global Navigation Satellite System (GNSS) is a general term covering the various satellite systems that provide positioning and timing data from orbit. The global satellite systems that make up GNSS include the United States' Global Positioning System (GPS), Russia's GLONASS, the European Union's (EU) Galileo, China's BeiDou, and two regional systems (Japan's Quasi-Zenith Satellite System (QZSS) and the Indian Regional Navigation Satellite System (IRNSS)/NavIC). In other words, GPS is just one of the systems that fall under the broader GNSS umbrella.

Multi-Frequency Chipsets Are the Building Blocks of High-Precision GNSS

Multi-frequency chipsets are crucial for delivering highly precise GNSS signals because they mitigate the effects of two common error sources: multipath errors and ionosphere errors. Often, a satellite signal is delayed because it reflects off objects, such as a building. So, if a GNSS deployment is used for an autonomous vehicle within a dense city center, which is a common scenario, the constant delay will result in an inaccurate position calculation. Adding to multipath, the ionosphere sometimes interrupts and delays the GNSS receiver signal—reducing the accuracy by as much as 5 meters (m) or more.

How Do Multi-Frequency Chips Help GNSS?

Instead of relying on a single frequency, multi-frequency GNSS receivers leverage multiple satellite signals at various frequencies, including the superior L5/E5a band. A chipset that supports multiple frequencies brings a stronger signal structure and wider bandwidth to detect possible reflections in the received signal, resulting in enhanced multipath mitigation. Moreover, multi-frequency GNSS mitigates the adverse effects of ionospheric interference because it can compare the delay observed on the two different frequencies and correct for it. To take things further, if the L5/E5a band is used, the signal will be cleaner and more resilient to band interference. The L5 wideband signals are so much better than L1 that some vendors, such as oneNAV, are providing L5-only GNSS receivers.

Figure 1: Advantages of Using the L5/E5a Band for GNSS

High-Precision GNSS Uses Multi-Constellation Chipsets, Too

Just like supporting multiple frequencies makes a GNSS receiver more reliable, a wider range of constellation support improves GNSS accuracy. A multi-constellation receiver has the ability to accept signals from a number of global constellations like GPS, GLONASS, BeiDou, and Galileo, in conjunction with regional constellations such as NavIC and QZSS. Multi-constellation supports accurate global positioning by:
- Increasing signal availability, which decreases the signal acquisition time and Time to First Fix (TTFF)
- Strengthening signal visibility, because it helps deter the influence of physical obstructions like buildings and foliage
- Improving redundancy, as the receiver can use signals from several constellations; in turn, failures can be prevented and spoofing is not as simple as it would be with a single-constellation setup
- Enhancing accuracy, due to the higher likelihood of obtaining an ideal signal and multipath correction capabilities

Multi-constellation has been fully embraced in the GNSS market, with 87% of the market having supported three or more constellations in 2021. By 2026, that number is expected to climb to 95%.

Real-Time Kinematic as a Means to Correction

RTK is a GNSS positioning technique that uses carrier-phase measurements and corrections from a nearby base station to deliver centimeter-level tracking accuracy. In this high-precision process, a static base station, with a known location, transmits corrections to a moving rover. As a result, the rover can provide centimeter-level precision in real time. Increasingly, many GNSS solution providers like u-blox are focused on integration between RTK and Precise Point Positioning (PPP) to deliver even greater error mitigation.

GNSS Correction Services
- HERE Technologies offers the cloud-based correction service High Definition Global Navigation Satellite System (HD GNSS), which leverages both PPP and RTK. For mobile use cases, accuracy is sub-meter, and for automotive, that number is as low as 20 cm. At some point in the future, HERE plans to reach widespread centimeter-level accuracy. Autonomous driving in poor weather conditions and assisted driving are key use cases for HD GNSS.
- PointPerfect, a PPP-RTK correction service from u-blox, targets myriad mass-market deployments. That includes UAVs, robots, machinery automation, micro-mobility, and automotive applications.
- Swift Navigation's Skylark product collects data from hundreds of GNSS reference stations and provides correction data via the Internet to the end user. In recent years, the company has struck multiple alliances with ecosystem players, such as Arm (for simplifying positioning solutions for autonomous and connected vehicle applications), Deutsche Telekom (DT) (to improve DT's Precise Positioning service), and KDDI (to penetrate the Japanese market). In May 2022, Swift Navigation announced that it will deliver centimeter-level accuracy for automotive use cases by utilizing the STMicroelectronics ASM330LHH System-in-Package (SiP).
- Public GNSS correction services will also have a profound role to play, and this is expected to be a competitive market. Galileo's High Accuracy Service, which will launch by 2024, will use a PPP technique to provide a free correction service (sub-20 cm accuracy). Moreover, Japan's Quasi-Zenith Satellite System (QZSS) provides a nationwide PPP-RTK Centimeter-Level Augmentation Service (CLAS).

As the market evolves, expect considerable consolidation among high-precision GNSS players through partnerships and acquisitions.

Maintaining GNSS Precision with Sensor Fusion and Dead Reckoning

When location accuracy is difficult to calculate, such as under an overpass, dead reckoning capability is imperative. Dead reckoning, which is very common in the automotive industry, leverages sensor data (via Inertial Measurement Units (IMUs)) to provide highly precise positioning when the GNSS satellite signal is interrupted. As an example, the SL869-ADR module, developed by Telit, ensures continuous tracking when GNSS coverage is missing or degraded. The receiver combines speed and heading data that come from internal sensors with a car's odometer data. A more advanced solution comes from ACEINNA's RTK-GNSS receiver OpenRTK330L.
This solution uses a triple-redundant three-axis Microelectromechanical Systems (MEMS) gyroscope and three-axis MEMS accelerometer for dead reckoning. Embedding this means that the OpenRTK330L receiver only obtains credible sensor data, and faulty sensor outputs are disregarded. Impressively, the receiver can still provide accurate positioning for 10 to 30 seconds in the event that the satellite signal is completely lost.

High-Precision GNSS Reduces Downtime

Unless conditions are absolutely perfect, a satellite signal is likely to get lost from time to time. In critical use cases, such as surveying or driver safety, GNSS adopters can't afford to have location intelligence downtime. However, signal accuracy and reliability can be greatly enhanced with the high-precision GNSS approaches described here. It starts with choosing a chipset that can support multiple frequencies and constellations. But it's also essential to leverage corrective services like PPP-RTK and to ensure the use of dead reckoning to keep location tracking operational when the signal is disrupted. By hitting all these marks, adopters can be more confident that their GNSS solution will be safer from jamming and spoofing, and more resistant to signal-hindering obstructions like skyscrapers or trees.

To learn more, download the report: The Future of GNSS: Opportunities and Challenges for Mass Market Positioning.
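As a concrete footnote to the dead-reckoning discussion above, here is a deliberately simplified Python sketch of how a receiver might propagate its last fix from speed and heading while the satellite signal is lost. This is a toy illustration rather than how the modules named above work internally (production receivers fuse IMU, odometer, and GNSS data with far more sophisticated filtering, typically a Kalman filter), and the sample coordinates, speed, and heading are invented for the example.

```python
import math

def dead_reckon(lat, lon, speed_mps, heading_deg, dt_s):
    """Propagate a position estimate using speed and heading only.

    Flat-earth approximation: acceptable for a few seconds of GNSS outage,
    not for navigation-grade use.
    """
    meters_per_deg_lat = 111_320.0
    meters_per_deg_lon = 111_320.0 * math.cos(math.radians(lat))

    distance = speed_mps * dt_s
    d_north = distance * math.cos(math.radians(heading_deg))
    d_east = distance * math.sin(math.radians(heading_deg))

    return lat + d_north / meters_per_deg_lat, lon + d_east / meters_per_deg_lon

# Example: last GNSS fix, then ten one-second updates from wheel speed + IMU heading.
lat, lon = 43.0731, -89.4012          # invented starting fix
for _ in range(10):
    lat, lon = dead_reckon(lat, lon, speed_mps=13.9, heading_deg=45.0, dt_s=1.0)
print(f"Estimated position after outage: {lat:.6f}, {lon:.6f}")
```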
<urn:uuid:7e5c1d89-82f9-4629-aafb-9197a2c3915f>
CC-MAIN-2022-40
https://www.abiresearch.com/blogs/2022/09/08/the-journey-to-high-precision-gnss/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00227.warc.gz
en
0.913854
1,805
3.15625
3
LTE, LTE-A & LTE-A Pro: Explained

LTE, or Long Term Evolution, is a series of 4G network standards that were agreed in 2008. The architecture used in LTE was designed to surpass the mobile data rates that were available using 3G technologies. In 3G networks, the radio network controller, or RNC, controlled what were called NodeB base stations in the network. With LTE networks, however, the base stations have embedded control functionality, called eNB (evolved NodeB), removing the need for an RNC altogether. This simplified, flatter network architecture means response times are much quicker, and therefore users of the network realise much better data rates. LTE-A (Advanced) improved on the architecture of LTE, before being superseded by LTE-A Pro, which aimed not only to improve the existing network but to prepare it for the introduction of 5G in the next few years.

What are the key differences between LTE, LTE-A and LTE-A Pro?

The main aim for LTE-A Pro is to increase the data speeds and bandwidth that are currently available for mobile communications. Data speeds are set to be three times faster than LTE-A (in excess of 3Gbps, whereas LTE-A was just 1Gbps). User experience will be significantly improved as a result of optimising the capacity, performance and functionality of existing LTE-A networks. For example, carriers will have 640MHz bandwidth with LTE-A Pro (compared to 100MHz with LTE-A). Latency will also decrease, allowing for much quicker response times; vital for the development of IoT (Internet of Things) technology. It will drop to just 2ms (compared with 10ms for LTE-A).

How will these improvements be achieved?

A number of different technologies will be incorporated into LTE-A Pro, many of which are advanced and evolved versions of things that are already present in existing LTE-A and LTE networks. Data speeds will be increased by using an improved version of Carrier Aggregation technology, the process in which larger amounts of bandwidth are made available by using more than one carrier. It is already used in LTE-A, but with LTE-A Pro the number of different carriers that can be simultaneously supported will increase from just five all the way up to 32. A huge advance, again, for IoT devices that rely on constant connectivity, often when moving. Greater demand for data transfer means that small cells are being deployed within range of macro cell coverage to provide dual connectivity, which significantly improves per-user throughput and mobility robustness; again, this is an area which will continue to be improved upon with LTE-A Pro. A new dynamic uplink and downlink aggregation will mean that operators are able to adjust the configuration of their networks depending on traffic needs.

MIMO and Massive MIMO

MIMO (multiple-input, multiple-output) is one of the key technologies which will address future capacity demand. It is a wireless technology that, when deployed, uses multiple antennas at both the source (transmitter) and the destination (receiver). This allows for more data to be sent and received at the same time, unlike in conventional wireless communications where only a single antenna is used. LTE-A Pro will exploit this technology to provide much-improved connectivity. Eventually, this drive in the usage of MIMO will evolve towards Massive MIMO, a key enabler for 5G.
Existing technology supports 8, 12 and 16 antenna elements, whereas Massive MIMO will go beyond, with up to 64 antenna ports at the eNB.

When will LTE-A Pro be commercially available?

Currently, despite the progress that has been made in the standards, no commercial operators have yet achieved the 3Gbps data speeds that are defined for LTE-A Pro. There still aren't any phones on the market that support download speeds of 3Gbps, and those that are moving forwards are still only just achieving rates that are marginally better than the LTE-A defined speeds. There is a lot of talk about 5G and its current development, but without the drive in LTE-A Pro (what many see as the stepping stone or foundation for 5G), it could be some time before the technology is realised commercially.
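As a footnote to the carrier aggregation figures mentioned earlier, the 100 MHz and 640 MHz totals follow directly from the number of component carriers if one assumes the usual 20 MHz maximum bandwidth per LTE carrier. That per-carrier figure is our assumption, but it is the one such totals are normally derived from; a quick sketch:

```python
CARRIER_BANDWIDTH_MHZ = 20   # assumed maximum bandwidth of one LTE component carrier

for label, carriers in [("LTE-A", 5), ("LTE-A Pro", 32)]:
    total = carriers * CARRIER_BANDWIDTH_MHZ
    print(f"{label}: {carriers} carriers x {CARRIER_BANDWIDTH_MHZ} MHz = {total} MHz aggregated")

# Expected output:
# LTE-A: 5 carriers x 20 MHz = 100 MHz aggregated
# LTE-A Pro: 32 carriers x 20 MHz = 640 MHz aggregated
```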
<urn:uuid:cd5dc7d1-574b-4b4f-9556-68edc08605a8>
CC-MAIN-2022-40
https://www.carritech.com/news/lte-lte-lte-pro-explained/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00227.warc.gz
en
0.948345
958
2.890625
3
AI adoption is growing faster than many had predicted. Research from a recent Global AI Survey by Morning Consult and commissioned by IBM indicates that 34 percent of businesses surveyed across the U.S., Europe and China have adopted AI.

Copyright by www.forbes.com

That number far exceeds estimates from market watchers last year, which put adoption rates in the low teens. And the examples of AI at work in the business world are vast and varied. For example, a major European bank was able to reduce costs while enhancing productivity at its customer call center with an AI-powered virtual assistant. A healthcare provider in the Midwest used AI to create a program that could help it better predict which patients were most likely to develop sepsis. Some industry analysts may attribute the rise in AI adoption to the surge of new tools and services designed to help lower the barriers to AI entry. Those include new ways to fight data complexity, improve data integration and management and ensure privacy. While all true, I think even bigger forces are at work. In fact, I'd suggest that the major drivers of this revolution are the same ones that helped propel the original Industrial Revolution: language, automation and trust. Forged in factories of the mid-18th century, all three forces are playing a unique role in tempering AI for widespread use today. Organizations like the World Economic Forum include AI, along with other technologies like mobile, robotics and IoT, in what is referred to as the 4th Industrial Revolution. But we at IBM believe AI itself is the heart of the new revolution—the AI Revolution. One difference this time around, compared with the 18th-century Industrial Revolution, is that the infusion of language, automation and trust into AI is deliberate—not the byproducts of trial and error, abuse and remedy. In the AI Revolution, language, automation and trust serve as guideposts for AI providers and practitioners to follow as they design, build, procure and deploy the technologies.

Critical to the Industrial Revolution was the construction of quasi-universal languages. Vocabularies formed that included words to describe new parts, new products and new processes to enable producers, traders and distributors to facilitate trade and commerce at home and internationally. In fact, the idea of a shared commercial vocabulary can be traced even further back, to the Middle Ages when the term lingua franca arose to describe a pidgin language used between Italian and French traders. But with the Industrial Revolution came terminology around such life-changing innovations as steam-powered machines, processes like assembly lines and new modes of transportation, like "train," that would remain relevant two centuries later. In the AI Revolution, though, it's not necessary to create languages to adapt to the technology. Instead, the technology can adapt to human language. The AI technology known as natural language processing (NLP) uses computational linguistics to provide parsing and semantic interpretation of human-language text. Whether the AI system accepts audio and converts it to text or takes text directly from a chatbot, for example, NLP enables computer systems to learn, analyze and understand human language with great accuracy, as it understands sentiment, dialects, intonations and more. This language capability advances AI from the realm of numerical data to understanding and predicting human behaviors.
With NLP, data scientists can build human language into AI models to begin improving everything from customer care and transportation to finance and education. The keys to widespread adoption are in the technology’s ability to be customized for particular projects, to support more languages than just English and to understand the intentions of a user’s query or command. NLP, for example, can leverage advanced “intent classification” that automatically discerns the intention of a question or comment to quickly give chatbot users accurate results. […]
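To make the intent-classification idea described above a little more concrete, here is a minimal sketch using scikit-learn. It is not IBM's implementation (production NLP systems rely on far larger training sets and far more capable models), and the utterances and intent labels below are invented purely for the demonstration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: (utterance, intent label)
utterances = [
    "where is my package", "track my order please",
    "I want my money back", "please refund this purchase",
    "what time do you open", "are you open on sunday",
]
intents = ["track_order", "track_order", "refund", "refund", "hours", "hours"]

# Bag-of-words features plus a linear classifier is enough for a toy example.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(utterances, intents)

print(model.predict(["when does the store open"]))   # likely: ['hours']
print(model.predict(["has my order shipped yet"]))   # likely: ['track_order']
```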
<urn:uuid:3a941857-fd5e-4de9-b58f-911cc2b0f8dd>
CC-MAIN-2022-40
https://swisscognitive.ch/2020/03/06/how-ai-is-driving-the-new-industrial-revolution/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00427.warc.gz
en
0.937231
781
2.53125
3
Nowadays, different people have different views on the hacking scene. Often times people of similar skill level have similar opinions. There is no official definition of a hacker, rather a vague idea amongst the masses. In addition, the media loves to add false information to draw audiences’ attention across the nation, for the pure sake of money. It all began in the 1960s at MIT, origin of the term “hacker”, where extremely skilled individuals practiced hardcore programming in FORTRAN and other older languages. Some may ignorantly dub them “nerds” or “geeks” but these individuals were, by far, the most intelligent, individual, and intellectually advanced people who happen to be the pioneers and forefathers of the talented individuals that are today the true hackers. The true hackers amongst our societies have an unquenchable thirst for knowledge. Boredom is never an object of challenge for hackers. They have an almost anomalous ability to absorb, retain, and exert vast amounts of knowledge with regard to intricate details. In 1969, Bell Labs employee Ken Thompson invented UNIX and permanently changed the future of the computer industry. Then in the very early 1970s, Dennis Ritchie invented the computer programming language “C” which was specifically invented to be used with UNIX. Programmers ceased to use assembler, while developing an appreciation for the portability of “C.” Hackers used to be viewed as people who sat locked in a room all day programming nonstop, hours on end. No one seemed to mind hackers back in the 1960s when this was the most widely excepted reputation. In fact, most people had no idea what hacking was. The term hacker was accepted as a positive label slapped onto computer gurus who could push computer systems beyond the defined limits. Hackers emerged out of the artificial intelligence labs at MIT in the 1960s. A network known as ARPANET was founded by the Department of Defense as a means to link government offices. In time, ARPANET evolved into what is today known as the Internet. In the 1970s, “Captain Crunch” devised a way to make free long distance calls and groups of phone hackers, later dubbed “phreakers” emerged. Throughout the 1970s and halfway into the 1980s, XEROX’s Palo Alto Research Center (PARC) spit out fresh new innovations such as the laser printer and LANs. During the early 1980s, the term “cyberspace” is coined from a novel called “Neuromancer.” A group called the “414s” is one of the earliest hacker groups to ever get raided by the FBI and they get charged with 60 computer intrusions. Usenets began to pop up around the nation at this time and hackers exchanged thoughts using their UNIX based machines. While all of this was going on, the Secret Service was granted jurisdiction over credit card and computer fraud. During the 1980s, hacking was not known amongst the masses as it is presently. To be a hacker was to be a part of a very exclusive and secluded group. The infamous hacker groups the “Legion of Doom,” based in the USA and the “Chaos Computer Club,” based in Germany, were founded and are still two of the most widely recognized and respected hacker groups ever founded. Another significant foundation is that of “2600: The Hacker Quarterly,” an old school hacker magazine or “zine.” 2600 Magazine still continues to play a role in today’s hacker community. As the end of the decade approached, Kevin Mitnick was arrested and sentenced to a year in prison on convictions of stealing software and damaging computers. 
In addition, federal officials raided Atlanta, where some members of the Legion of Doom were residing, at the time. The LOD, CCC, and 2600 Magazine have become known as old school hackers and are still widely respected and recognized. During the 1990s, Kevin Mitnick is arrested after being tracked down by Tsutomu Shimomura. The trials of Kevin Mitnick were of the most publicized hacker trials in hacker history. As hackers and time progressed, hackers found ways to exploit holes in operating systems of local and remote machines. Hackers have developed methods to exploit security holes in various computer systems. As protocols become updated, hackers probe them on a neverending mission to make computing more secure. In fact, due to the tendency hackers have of exploiting society, there have been spinoff categories such as “cracking” which deals with cracking software, “phreaking” which deals with exploiting phone systems, and “social engineering” which is the practice of exploiting human resources. When hacking first originated, the urge to hack into computer systems was based purely on curiosity. Curiosity of what the system did, how the system could be used, HOW the system did what did, and WHY it did what it did. Some modern day hackers archive exploit upon exploit on their machines, but archiving and using exploits is definitely not what modern hackers do. All too often, media figures and the general public mistake those who deface webpages, steal credit card numbers and/or money, and otherwise constantly wreak havoc upon the masses as hackers. You must be thinking, “Well, isn’t that what hackers DO? They gain unauthorized access to computers,” and technically you would be correct. HOWEVER, that’s not all they do. Hackers find and release the vulnerabilities in computer systems which, if not found, could remain secret and one day lead to the downfall of our increasingly computer dependant civilization. In a way, hackers are the regulators of electronic communication. Hackers come up with useful new computer systems and solutions to make life easier for all of humanity. Whether you know it or not, I know from personal experience that ANYBODY you know could very well lead an unexposed life as a hacker. Hackers live amongst us all. They work in all of our major corporations, as well as in many small companies. Some choose to use their skills and help our government, others choose to use their skills in a more malicious and negative way. If you look around you, ANY INDIVIDUAL you see is a potential hacker. Often, it’s the people who you would suspect the least that are the hackers in our society. People in our modern day society tend to stereotype hackers as well. All hackers aren’t 31337lbs, 5’5, wearing glasses and suspenders, scrawny, pale skinned, with a comical Steve Urkel resemblance and no social life. If you think this, you are WRONG. Hackers are black, white, asian, european, tall, short, socially active (and not), cool, nerdy, and a bunch of other miscellaneous categories. Just like you can’t make an assumption that if someone is from “Clique X” than they must be really [whatever], you can’t apply a stereotype to genres of hackers. Although there are people running around saying, “Look, I defaced a website, I did it, and therefore I’m a hacker,” doesn’t mean that they’re a hacker. Nevertheless, nor does it mean that ALL people claiming to be hackers are fakes and wannabes. It’s the same in the digital underground as it is with any other realm of society. 
Currently, we see the commercialization of hacking. If you were to take a trip to a respectable bookstore with a good selection of books, you would find books with flat out hacking techniques. Whether these techniques can truly be classified as hacking by the classic definition of hacking is debatable. They claim to teach you hacking methods, how to become a hacker, and supposedly reveal hacker tricks to the common man. Another common misconception is that people who distribute and deal with illegal software, which is commonly known as “warez” are hackers. “Warez kings,” as they are commonly known, are not necessarily hackers, however that doesn’t mean that they are NOT hackers. You cannot determine the intellectual content of people by what they say or have. Moreover, hackers are not people who go around using programs in Windows such as “WinNuke” and various ICMP bombers and other miscellaneous Denial of Service programs designed to crash remote party’s machines. Hackers don’t distribute remote administration tools and use them as trojan horse viruses to wreak havoc on the general public and make other people’s lives miserable. Real hackers want to know as much as they can and are more helpful than wreckless. While it is true that there ARE hackers that DO commit malicious acts against users, they are not to be used as a model of the norm of hackers.
<urn:uuid:0d6bf642-e382-4f58-8144-d424baaff01d>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2002/04/08/the-history-of-hacking/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00427.warc.gz
en
0.965317
1,857
3.046875
3
In a DNS poisoning attack, also known as a DNS cache poisoning or DNS spoofing attack, an attacker takes advantage of known vulnerabilities within the Domain Name System (DNS) to insert a false entry into the DNS records. The attacker then uses the false entry inserted into the DNS cache to route traffic intended for a legitimate domain to a fake one. Most of the attacks on DNS are aimed at spoofing the response to a DNS query. When an attacker is able to add a false entry to a DNS client or server's records, the DNS record is said to be poisoned. The term "poisoned" is used because the false entry (the poison) is injected into the system at a single point and can spread throughout the system, affecting other points.

To better understand how attackers execute a DNS poisoning attack, it's important to first understand how DNS works. If you already have a strong understanding of DNS, its purpose, and how it operates, you can skip to the section "How is DNS poisoning done?".

Computers work with numbers and are uniquely identified on an interconnected network using special numeric identifiers known as IP (Internet Protocol) addresses. Humans, on the other hand, are better at identifying objects with names. For humans to effectively use the internet to identify and communicate with computers hosting the resources they need to access, there has to be a way to map human-readable names to the IP addresses used to identify computers on the network. Enter the Domain Name System (DNS), often referred to as the phonebook of the internet. A phonebook helps you get a person's phone number when you know the person's name. DNS works similarly, by letting humans use names that map directly to the IP addresses of computers on the internet to locate the servers they want to communicate with. DNS is the backbone of the internet, as it makes the internet usable for humans.

Let's simplify how DNS works even further. DNS essentially consists of two main parts: a distributed database that maps domain names to IP addresses, and a protocol that defines how that database is queried. A computer that hosts this database and supports the communication protocol is known as a DNS server. An entity that communicates with the DNS server by sending a name to be resolved to an IP address is known as a DNS client. DNS servers and clients often cache resolved queries to improve the speed of the name resolution process. A typical DNS operation works roughly like this: the client first checks its own cache for the name; if there is no cached answer, it queries a DNS server, which either answers from its own records or cache, or asks other DNS servers on the client's behalf; the answer is then returned to the client and cached for future lookups. As I said earlier, this is a highly simplified look at DNS and how it works; however, with this understanding, you will better comprehend how a DNS poisoning attack is implemented and how it can be very dangerous to end users.

How is DNS poisoning done?

DNS poisoning or spoofing is done when an attacker intercepts a DNS request and sends a fabricated (poisoned) response to the client. Imagine you're at the airport waiting for your flight. The flight is in a couple of hours, so you decide to burn some time by checking up on your Twitter feeds. You see that the airport has free WiFi, and you naively connect to it. Unknown to you, the WiFi is insecure, and an attacker has taken advantage of its loopholes to poison the DNS cache and redirect all requests for twitter.com made on the network to their machine. You visit the domain twitter.com and the attacker redirects you to a spoofed version of Twitter that looks exactly like the original. You're taken to the login page and you submit your credentials, giving away your Twitter account to the attacker. Once the attacker harvests your login credentials, they redirect you to the original Twitter website to continue your session.
And just like that, you're a victim of DNS poisoning.

The Domain Name System's (DNS) design favors speed and scale over security, which is why it uses the User Datagram Protocol (UDP) and encourages caching. This design makes it attractive to attackers, as there isn't any validation or verification system built in. With no validation or verification, it's difficult to verify the source and the integrity of data moving through the network during a DNS resolution process.

DNS poisoning can be done on both the client side and the server side. Typically, DNS cache poisoning is done on the client side. The attacker poisons the client's DNS cache with false entries and can spoof any DNS entries they want. A DNS client in this case can either be a standard DNS client or a recursive DNS server. On the server side, DNS poisoning can be done in two ways. First, an attacker might intercept all requests at the DNS server and spoof the DNS responses. A second strategy is to poison the server itself by changing the records (DNS cache poisoning); the server will then automatically direct the user to the illegitimate IP address—even after the issue is resolved. Attackers can also perform DNS poisoning on a large scale by poisoning the authoritative DNS server for a domain. Every time a request is made for the domain from anywhere around the world, the poisoned record is returned to the client.

Once a DNS poisoning attack is in full effect, the attacker can use the opportunity for nefarious activities like stealing sensitive information (bank logins, social logins, etc.), redirecting traffic to gambling or advertising sites to promote whatever they want, or simply denying access to the original domain, causing a Denial of Service (DoS) situation. The most effectively implemented form of a DNS poisoning attack displays a spoofed site to the end user with no visible difference from the original. This allows the attacker to take advantage of the user's ignorance to steal sensitive information. When a user visits a well-spoofed domain, they see no difference between it and the original. This trust makes them supply any type of information the spoofed site requests, no matter how sensitive the information is. This is the perfect scenario for hackers to steal as much information as they want.

Another way a DNS poisoning attack can appear to an end user is when the domain refuses to load. This is done by attackers to frustrate the users of a service or cause harm to the business of that service. The attacker can substitute the IP address of the original domain with one that is not publicly accessible or simply spoof a "Not Found" page. Governments can also use DNS spoofing to implement censorship laws. For example, countries like China that do not allow the use of Facebook use DNS poisoning techniques to display a censorship page or redirect citizens to a different domain when they try to visit Facebook. In most of these cases, the end user will likely never know they were the victim of DNS cache poisoning.

Address Resolution Protocol (ARP) poisoning and DNS poisoning are both man-in-the-middle attacks. The main difference between the two is the addressing format used and the scale on which they occur. While DNS poisoning spoofs the IP addresses of legitimate sites and its effect can spread across multiple networks and servers, ARP poisoning spoofs physical addresses (MAC addresses) within the same network segment (subnet).
An attacker uses ARP cache poisoning (or ARP poison routing) to trick the network into thinking that their MAC address is the one associated with an IP address so that traffic sent to that IP address is incorrectly routed to the attacker. This enables the attacker to eavesdrop on all network traffic between its victims. DNS poisoning is a very dangerous form of attack because it can be extremely difficult for users to detect. Attackers are getting better at making fake websites identical to the original, making it easy to steal sensitive information from unsuspecting users. According to experts, 33% of data breaches can be prevented by ensuring that the DNS layer is well protected. To learn more about how DNSFilter uses DNSSEC to protect you against DNS poisoning, check out this article on our Knowledge Base.
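To make the cache-poisoning mechanics described above concrete, here is a toy, self-contained Python sketch of a caching resolver that blindly trusts whatever answer it receives first. It is purely illustrative: real resolvers speak the DNS wire protocol, and real attackers race or spoof UDP responses rather than calling a function, but the sketch shows why a single forged answer keeps doing damage on every later lookup until the cache entry expires. The domain names and IP addresses used are placeholders from documentation ranges.

```python
class ToyResolver:
    """A caching stub resolver with no answer validation (the core weakness)."""

    def __init__(self, upstream):
        self.upstream = upstream      # pretend network: name -> legitimate IP
        self.cache = {}

    def resolve(self, name, forged_response=None):
        if name in self.cache:                    # cached answers are trusted blindly
            return self.cache[name]
        # In a real attack the forged UDP reply simply arrives before the real one.
        answer = forged_response or self.upstream[name]
        self.cache[name] = answer                 # the poison is now persistent
        return answer


legit_dns = {"example-bank.com": "203.0.113.10"}
resolver = ToyResolver(legit_dns)

# The attacker wins the race once with a spoofed reply...
resolver.resolve("example-bank.com", forged_response="198.51.100.66")

# ...and every subsequent, perfectly normal lookup is now redirected.
print(resolver.resolve("example-bank.com"))   # 198.51.100.66, not 203.0.113.10
```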
<urn:uuid:0c7f3ffe-6cc4-4dc5-ac1f-9ec5dbf0018e>
CC-MAIN-2022-40
https://www.dnsfilter.com/blog/dns-poisoning
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00427.warc.gz
en
0.92678
1,604
3.015625
3
There is a wide variety of security tools that help pentesters, developers, and analysts when it comes to services and applications, whether they need to find weaknesses, better understand in-depth behaviour, or simply monitor usage. At their core, security tools are simply the means by which a user can act in accordance with their intention, whether that's attacking or defending. As you might expect, the boundary between these two territories is often rather blurry.

Tools of the offensive and defensive side

Nowadays, tools on the offensive side can mostly be taken care of by operating systems such as Kali Linux or Parrot OS. Fortunately, blue teams are also armoured with the right options to defend against threats. But in order to use these tools properly, one needs not only the right mindset, but field expertise as well.

What are some examples of security tools?

Without getting lost in the massive amount of opportunities available today, we've provided some categories below that may help you learn more about these tools. As contemporary tools offer several different services, the categories may overlap.

Tools for static code analysis

Static Application Security Testing (SAST) is designed for white-box source code analysis and can range from a package-based approach (e.g. bandit) designed for a specific language to more complex scanners such as Contrast Security's SAST tool. Other Software Composition Analysers (e.g. Snyk) are designed to catch known vulnerabilities in third-party components. However, when it comes to understanding the internal logic of compiled native binaries, we need proper tools like IDA Pro, Binary Ninja, or Radare2. These help reveal all the code paths that an application may take without executing the application itself.

Tools for dynamic application analysis

Dynamic Application Security Testing (DAST) checks the runtime behaviour of applications without accessing their source code. Veracode is an example of a vendor which offers such solutions. Modern approaches include Runtime Application Self-Protection (RASP), which provides runtime protection to keep applications from being exploited (e.g. Contrast Security's RASP solution). Fuzzers such as AFL help manipulate executable inputs to trigger potential security bugs. Others, such as the Unicorn CPU emulator, allow for emulating binaries across platforms.

Tools for the web

In order to protect your code against the most critical web vulnerabilities, you first need to use proper frameworks such as Angular, Django, or Laravel that help eliminate the most obvious security issues. Other tools, such as Bleach or the CSP Evaluator, can add an extra layer of security against XSS as long as they are bug-free. However, it's still best to be prepared against all the vectors that the OWASP Top 10 collects.

Tools related to networks

The main objective of these tools is to intercept or record network traffic and allow analysts to monitor, analyse, or modify the requests on the go. This category includes Wireshark to capture packets, Burp Suite to scan for web vulnerabilities, nmap to discover networks, and SIEMs (Security Information and Event Management systems) such as OSSIM to collect network traffic from different sources to detect malicious activity.
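As a small illustration of the output-sanitisation tools mentioned under "Tools for the web", here is a minimal sketch using the Bleach package. Sanitisation like this is one mitigation layer rather than a complete XSS defence, and the allow-list of tags below is an arbitrary choice made for the example.

```python
import bleach

untrusted = '<b>Hi</b><script>document.location="https://evil.example"</script>'

# Strip everything except an explicit allow-list of harmless formatting tags.
safe = bleach.clean(untrusted, tags=["b", "i", "em", "strong"], strip=True)

print(safe)
# <b>Hi</b>document.location="https://evil.example"
# The <script> element's tags are removed; the leftover text is rendered as inert text.
```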
<urn:uuid:27971229-4e9c-4733-a7f7-33005336385a>
CC-MAIN-2022-40
https://avatao.com/security-tools/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00427.warc.gz
en
0.934099
699
2.71875
3
EPC (Engineering, Procurement, and Construction) companies are responsible for designing and constructing some of the most complex bridges, buildings, energy grids, and power plants in the world. Whether it is offshore, at high altitude, or on uncertain terrain like sand, EPCs are a different breed of problem solver and continue to push the boundary when developing new structures. However, as the complexity of these builds increases, so too does the complexity of operating and maintaining them. This is leading EPCs to expand their offering with the additions of 'O' (Operate) and 'M' (Maintain). So how has this transition to operation and maintenance provider impacted the industry?
Extinction or Evolution?
EPC companies are a vital part of the world economy. According to Statista, in the US they are responsible for 4.3% of the nation's GDP a year, and a McKinsey report noted that to keep up with the rate of global GDP growth, about $57 trillion in infrastructure spending is required by 2030 for the EPC industry to sustain itself. The stark reality is that EPCs employ hundreds of thousands of workers, develop economies, and improve connectivity. So why are EPCs always under scrutiny? With only 1 out of 4 EPC projects delivered within 10% of the original deadline and only 31% of projects delivered within 10% of the given budget, a clear picture starts to emerge when you look at EPC project performance. While EPC companies themselves are quick to blame low labor rates and productivity, a lack of collaboration between contractors, or overstretched deadlines, it is hard to ignore that 30% of the work performed by EPCs is actually rework from previous projects, which suggests the work itself was not up to standard or that designs were not accurately mapped out. So is it time for the existing EPC model to go extinct or to evolve?
EPCM – A step to success
The one thing we can all likely agree on is that EPCs are a critical part of the world economy. Hence, suggesting that they simply go extinct wouldn't be the smartest route. However, they can't continue to operate as they do today. Projects are going over budget, not being completed on time, and needing constant rework due to quality and design issues. EPCs will therefore have to evolve, and some early pioneers are already on this journey. The first and most logical step for EPCs is to offer service and maintenance contracts for what they construct. The rationale is that over the past 20 years there has been only a 1% increase in the overall productivity of EPCs, and with profit margins at around 4% there hasn't been the scope to invest in operations to improve efficiency. Offering service and maintenance contracts on what they build has the potential to increase lifetime revenues on a project by 120-200%. This increase in revenue will allow EPCMs to actively invest in technology, people, and equipment to increase productivity and further improve profit margins. Service and maintenance contracts also incentivize EPCMs to ensure projects are completed on time, since that is the only way they can start to earn the new revenue stream. And finally, they encourage completing projects to a high quality, which reduces the amount of maintenance required and further increases the profitability of the service and maintenance contracts.
It is key to note that to successfully provide service and maintenance contracts you'll need a business system with specific functionality. The advantage is that IFS Cloud can help you significantly simplify your business system architecture by combining best-of-breed, industry-specific functionality with service and maintenance processes on one platform. Not only does this ensure you work from one source of truth, but it also allows you to accurately monitor a project over its complete lifespan.
EPCOM – The Future is Servitization
While the addition of service and maintenance contracts will certainly help remedy a number of the industry's challenges, those that really want to be pioneers will need to go a step further and offer operation services. This may seem like a daunting prospect but in reality could be a simple transition. For example, if you're a specialist in power plant construction, is it unrealistic to assemble a team that can operate that plant as a service? In the big picture, EPCOMs are the future because they can offer a complete end-to-end service. This provides a full solution for the asset owner, whether under a private or a government contract. In the above example, the idea is that you are no longer just selling a power plant; you are selling energy. You're powering people's homes, charging their vehicles, and empowering individuals.
The evolving complexities of EPCs
Whatever journey you're on as an EPC, the industry must change. While the complexity doesn't show signs of going away, a business system like IFS Cloud can reduce it and support your path to success. To find out more, speak to one of our industry experts today. Do you have questions or comments? We'd love to hear them, so please leave us a message below.
<urn:uuid:056ade18-676d-4683-9df6-2e0b969458c4>
CC-MAIN-2022-40
https://blog.ifs.com/2022/09/epc-epcm-or-epcom-the-evolving-complexity-of-epcs/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00427.warc.gz
en
0.960358
1,085
2.5625
3
Dangerous objects such as weapons, bombs, and chemicals hidden in baggage can be detected with Wi-Fi signals from ordinary Wi-Fi devices, without opening the baggage. Researchers from Rutgers University demonstrated that readily available Wi-Fi signals from low-cost devices can penetrate baggage without any dedicated devices or signals. Their model uses channel state information (CSI) from ordinary Wi-Fi devices to detect suspicious objects. CSI is a channel property of a communication link that describes how the signal propagates from the transmitter to the receiver.
Object Classification – Wi-Fi devices
The system is divided into two major components. It first detects the existence of dangerous objects and identifies their type based on the reconstructed CSI complex values. Then it calculates the risk level of the object by estimating the object's dimensions from the reconstructed CSI complex values of the signals reflected from it. The researchers said: "Our system only requires a WiFi device with 2 to 3 antennas and can be integrated into existing WiFi networks with low costs and deployment efforts, making it more scalable and practical than the approaches using dedicated instruments." They reported that the system can detect over 95% of suspicious objects in different types of bags and successfully identify 90% of dangerous material types. The researchers evaluated 15 metal and liquid objects and 6 types of bags over a 6-month period; each experiment was repeated 5 times while changing the object's position and orientation. They added: "Our system can be easily deployed to many places that still have no pre-installed security check infrastructures and require high manpower to conduct security checks, such as theme parks, museums, stadiums, metro/train stations, and scenic locations." Professor Yingying Chen said in an interview with CBS Local that the new detection prototype is a game changer: "We feel really excited and we realized this could be very useful." More technical details can be found in the researchers' paper, "Towards In-baggage Suspicious Object Detection Using Commodity WiFi".
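The paper describes the full detection pipeline; purely as an illustration of how raw CSI readings might be reduced to a simple anomaly score, the toy sketch below computes per-subcarrier amplitude variance from a complex CSI matrix and applies a threshold. The array shapes, the variance feature, and the threshold are all assumptions made for this example and are not the method used by the Rutgers researchers.

```python
import numpy as np

def csi_anomaly_score(csi):
    """csi: complex matrix of shape (n_packets, n_subcarriers).

    Returns a single score: the mean variance of per-subcarrier amplitude
    over time. An object perturbing the signal path would typically change
    these amplitudes; the exact feature here is only an illustration.
    """
    amplitude = np.abs(csi)                  # drop phase, keep magnitude
    per_subcarrier_var = amplitude.var(axis=0)
    return float(per_subcarrier_var.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated "empty bag" CSI: stable amplitudes with small noise.
    baseline = (1.0 + 0.05 * rng.standard_normal((200, 30))) * np.exp(
        1j * rng.uniform(0, 2 * np.pi, (200, 30)))
    # Simulated "object present" CSI: larger amplitude fluctuation.
    perturbed = (1.0 + 0.30 * rng.standard_normal((200, 30))) * np.exp(
        1j * rng.uniform(0, 2 * np.pi, (200, 30)))

    threshold = 0.02  # illustrative cut-off; a real system would calibrate this
    for label, sample in [("baseline", baseline), ("perturbed", perturbed)]:
        score = csi_anomaly_score(sample)
        verdict = "suspicious" if score > threshold else "clear"
        print(f"{label}: score={score:.4f} -> {verdict}")
```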
<urn:uuid:cc2e7ff5-33f1-4ee0-a120-f67efed04772>
CC-MAIN-2022-40
https://gbhackers.com/wi-fi-devices-weapons-and-bombs/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00427.warc.gz
en
0.928673
413
2.859375
3
Quantum computing is transitioning from scientific curiosity to technical reality. There's so much hype around it because it comes with a promise to solve problems that were previously unsolvable. It may take years before we will be able to take advantage of a quantum computer. Nonetheless, it's not too early to get in and start seeing what the roadmap looks like, MIT professor William Oliver said during the MIT Tech Review conference Future Compute. "We have the first small-scale quantum computers available in the cloud to be used by people worldwide. And also Google's recent demonstration of quantum advantage. Quantum computing is transitioning from scientific curiosity to technical reality. That's happening around us right now," he said. Yet, as we have learned from the history of the development of the classical computer, advancing from discovery to useful machines is going to take time, development, and engineering. "If you want to take advantage of this in the future, you need to be in the game to play today," Oliver said. The advent of quantum computers comes with a threat to current cryptographic schemes. Quantum cryptography comes with a promise of being unbreakable, while quantum computing is met with fear that it might break the classical cryptography that powers, for example, cryptocurrencies. "We believe that there exist post-quantum cryptography schemes that - as we know it today - would not be susceptible to quantum attacks. Many researchers are working towards those today," Oliver told CyberNews during a brief discussion before the event.
The hype around quantum computers
Many things have to be done before we can use quantum computers at scale and, in particular, for commercial purposes. But that doesn't mean we have to wait 20 years before we can start generating benefits from these computers. "In the near term, what we are finding is that companies that already put quantum computers online, and more are doing so each year, at the 50 qubit scale, and we are going to see that increased to hundreds of qubits in the next couple of years. And what this enables people and companies to do, is to play with them and work with them and develop algorithms on them," Oliver said. Those algorithms may not affect company bottom lines at the moment, but the technology is going to scale over time, so businesses need to be ready for it, Oliver reckons. "I think there is a lot of work going on right now to getting these quantum computers online so more people can use them, not just the physicists and the engineers at universities, but more people around the world," he said. There's certainly a lot of hype today around quantum computing as it holds a tremendous and exciting promise for commercial impact in the future. "On the other hand, we risk, of course, being on a hype cycle where there's disappointment if we make promises too soon and then don't deliver on those promises. I would encourage everybody to look at this for what it is. It is a very promising technology at a very early stage of development," Oliver said. It's up to each company to decide what's best for it in the future. However, the MIT professor encourages everyone to start looking at quantum computing and decide on the right time to get in. "You may want to get in and start developing algorithms that are related to your business now, even though it may be a few years before you have a quantum computer that's large enough to implement that at a scale that exceeds what your classical computers can do.
Nonetheless, it is not too early to get in and start seeing what the roadmap looks like," he said. The investment in quantum computing is growing, as is the number of companies that are jumping in. "It's still very early on. We need to continue this fundamental research, fundamental science, as well as the underlying fundamental engineering to be successful," Oliver said.
How is a quantum computer different from a classical one?
"A classical computer is based on bits, classical bits of information, for example, a transistor, and it can be in state zero or state one. It's deterministic, seldom has an error," Oliver said. Quantum bits, or qubits, are different. They are, as Oliver explained, any two-level systems. You can think of them as existing anywhere on planet Earth. If you are at the North Pole, you are in state 0, and if you are at the South Pole, you are in state 1. "But you can be anywhere on Earth. When you are anywhere that is not the North or South Pole, you are in a superposition state of zero and one. It is a manifestly quantum mechanical state, and it has certain properties. One of which is that measurement becomes probabilistic. For example, if you are pointed along the equator, and you make a measurement, half of the time you are going to get a 0, half of the time you are going to get a 1. The measurement would randomly put you in the North or South Pole. That's how it works," Oliver explained. Quantum computers rely on coding information in a fundamentally different way and with different behavior than classical computers. As Visual Capitalist puts it, the consequence of this superposition is that quantum computers can test every solution of a problem at once. For a more detailed explanation of how quantum computers work, we suggest watching Oliver's lecture from 2019.
Types of quantum computers
Oliver highlighted three applications and approaches of quantum computing that we have today. The first one is what we call the universal fault-tolerant quantum computer. "This is a quantum computer that can run any quantum algorithm, and it can run any classical algorithm as well. Although, it often does no better than a classical computer at running those classical algorithms. But quantum algorithms can vastly outperform a classical computer," the professor said. Then there are digital and analog quantum simulators. "Here we are talking about applications in quantum chemistry, drug development, material science. We are using a quantum computer to simulate a quantum system. Here quantum advantage exists over known classical algorithms. But we need to build a quantum computer large enough to be practical," Oliver said. The third type of quantum computer is different from the former two. "It's called the quantum annealer, and this focuses on optimization problems. So, for example, supply transport, pattern recognition, tasking problems," Oliver said.
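Oliver's globe analogy maps directly onto how measurement probabilities are computed for a single qubit. The short simulation below is an illustrative sketch (not anything from the interview): a qubit "pointing" at polar angle theta on the sphere is measured, the outcome 0 occurs with probability cos²(theta/2), and a qubit on the equator therefore comes out 0 about half the time.

```python
import math
import random

def measure_qubit(theta, shots=10_000, seed=1):
    """Simulate measuring a qubit at polar angle theta (radians).

    theta = 0      -> North Pole, state |0>, always measures 0
    theta = pi     -> South Pole, state |1>, always measures 1
    theta = pi / 2 -> equator, an equal superposition
    Returns the observed fraction of 0 outcomes over `shots` measurements.
    """
    p_zero = math.cos(theta / 2) ** 2   # Born rule for a single qubit
    rng = random.Random(seed)
    zeros = sum(1 for _ in range(shots) if rng.random() < p_zero)
    return zeros / shots

if __name__ == "__main__":
    points = [("north pole", 0.0), ("equator", math.pi / 2), ("south pole", math.pi)]
    for name, theta in points:
        print(f"{name:10s}: fraction of 0s = {measure_qubit(theta):.3f}")
```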
<urn:uuid:d0f69732-108e-4b1e-a1ea-fc7e9b0e0ba6>
CC-MAIN-2022-40
https://cybernews.com/editorial/the-hype-around-quantum-computing-its-not-too-early-to-get-in/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00427.warc.gz
en
0.958087
1,372
3.03125
3
What Is A Nanosecond, And Why Do We Need To Understand That?
By: Bruce G. Kreeger
To explain a nanosecond I am including a clip, below, of Admiral Grace Brewster Murray Hopper (1906-1992), a computer pioneer and U.S. Navy officer (biographical details from Wikipedia, the free encyclopedia). She earned a master's degree (1930) and a Ph.D. (1934) in mathematics from Yale. Admiral Hopper is best known for her trailblazing contributions to computer programming, software development, and the design and implementation of programming languages. A maverick and an innovator, she enjoyed long and influential careers in the U.S. Navy and the computer industry. As the President of the Admiral Farragut Academy Alumni Association (Pine Beach, NJ) and a member of the Board of Trustees during my tenure, I had the honor of meeting Admiral Hopper and enjoying a meal with her. I was captivated. At Clarity, we frequently hear the recurring question of why it takes so long to retrieve data from the server, the Internet, or the local computer. While there are many contributing factors, such as malware, internet speed, network inefficiency, a slow processor, or a slow hard drive, often it comes down to the nanosecond. At one point in time, I used to think higher speeds were attainable simply with higher degrees of bandwidth. This may be why the idea of "low latency" seems so counter-intuitive. As you hopefully understand at this point, there are limitations to how fast data can move, and real gains in this area can only be achieved through efficiency improvements; in other words, the elimination (as much as possible) of latency. Ethernet speed and Internet speed really are about latency. Ethernet switch latency is defined as the time it takes for a switch to forward a packet from its ingress port to its egress port. The lower the latency, the faster the device can transmit packets to their final destination. Also crucial within this "need for speed" is avoiding packet loss. The magic is in the balancing act: speed and accuracy that challenge our understanding of traditional physics. The next time you wonder why it takes so long, remember it's all about the network, latency, and the time (in nanoseconds) your data takes to go from point A to point B and back. As an example of these measures, a CPU clock speed of 3.5 MHz (megahertz) corresponds to a clock period of roughly 285.7 ns (nanoseconds); a modern 3.5 GHz processor completes a cycle in only about 0.29 ns. Ipso facto, the higher the clock speed, the faster the computer. Both the nanosecond and the picosecond (one trillionth of a second) are now in common use as units of measure. Clarity provides support to end-users, and our PSAs include Help Desk and Team Management. These modules include self-service utilities to cut down recourse to human assistance. Our Help Desk platforms also provide several channels for problem reporting, such as web form, email, phone, and online chat. Clarity is an MSP (Managed Services Provider), a CSP (Cloud Service Provider), a VoIP trunking service provider, a Microsoft 365 CSP, and a 3CX PBX Titanium Partner, along with other IT support offerings such as UCaaS (Unified Communications as a Service) and SaaS (Security as a Service). Clarity's Dotman Tech division provides further services in Software Development, Content Creation, and Marketing Automation. Call Clarity at 800-354-4160 today or email us at [email protected]. We have partners around the globe and we are open seven days a week, 8:30 AM to 5:00 PM EST/EDT. https://claritytg.com and https://dotmantech.com.
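The frequency-to-period arithmetic above is easy to check: the period in nanoseconds is 1 divided by the frequency, times 10⁹. The short sketch below is a small illustrative calculation; the frequencies chosen are just examples.

```python
def clock_period_ns(frequency_hz):
    """Return the duration of one clock cycle in nanoseconds."""
    return 1.0 / frequency_hz * 1e9

if __name__ == "__main__":
    examples = {
        "3.5 MHz": 3.5e6,   # the article's example: ~285.7 ns per cycle
        "3.5 GHz": 3.5e9,   # a modern desktop CPU: ~0.286 ns per cycle
        "1 GHz":   1.0e9,   # exactly 1 ns per cycle
    }
    for label, hz in examples.items():
        print(f"{label:8s} -> {clock_period_ns(hz):.3f} ns per cycle")
```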
<urn:uuid:b4e2f205-aecf-40bc-8442-a695d09f6e80>
CC-MAIN-2022-40
https://claritytg.com/index.php/what-is-a-nanosecond-and-why-do-we-need-to-understand-that/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00627.warc.gz
en
0.945809
833
3.265625
3
Access control in cyber security is a crucial mechanism that helps mitigate the risk of a malicious actor retrieving data or viewing resources without proper authorization. Besides controlling access to data, access control techniques also enable seamless logging of data and resource access events. An Access Control List (ACL) includes a set of rules that define permissions and maintain different levels of access to organizational data and network traffic. Access Control Lists are critical for network traffic control and security since they describe access rights and permissions. ACLs also offer a high level of granularity for controlling network traffic flow since they can be placed on any routing device that enables communication between two entities. In this article, we discuss what an ACL is, why it is essential for securing modern systems, and the different types of control lists, and we address commonly asked questions.
What is an Access Control List?
An Access Control List is a table that informs the host operating system of a user's authorization rights and the level of permissions the user possesses to access data and system objects. Each resource or data file has a security property with which the ACL associates it. An ACL also has an entry for users with privileges to read, execute, or write data to these files. When a user requests access to data or a resource object, the operating system reads the ACL for the user's entry and determines whether they have access rights and the authority to perform the requested operation. ACLs are also installed in network devices such as routers and switches to act as packet filters for incoming traffic. A networking ACL includes preset rules that define which routing updates and packets can enter the private networks. To achieve this, the ACL defines the filtering criteria used to allow or deny packet forwarding.
What Does An Access List Entry Contain?
Access Control Entries define access to files/directories/system objects and the flow of packets between the public internet and private networks. Contents of access control list entries include:
- Sequence number – the identification code for the access list entry
- ACL Name – the name used to identify the ACL entry in a non-numbered access list
- Remark – a comment or detailed description that can be used to share information across different categories of users
- Network protocol – some access control lists use protocol-specific parameters to define access rights. This specification is used to grant or deny access to different networking protocols.
- Log – ACLs that have logging enabled provide extensive insights into outgoing and incoming packets for the entire access list.
- Access list statement – a deny or permit statement that defines the actions of users.
- Destination or source address – determines the access policy permissions and access rights for the destination, the source IP address range, or individual IPs.
Features of An Access Control List
Key features of access control lists include:
- The defined ACL rule set is ordered using sequential identification
- Incoming packets are evaluated against the defined rules in order until a match is found
- Each access control list ends with an implicit deny, so the packet is discarded if no rule matches it
- ACLs lack innate monitoring and regulation, making it difficult to share knowledge and communication across key user groups
A short sketch of this first-match, implicit-deny evaluation logic follows.
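The sketch below is illustrative only: the rule format (sequence number, action, source network) and the sample rules are assumptions chosen to show first-match semantics and the implicit deny, and are not tied to any vendor's ACL syntax.

```python
import ipaddress

# Each rule: (sequence number, action, source network). Evaluated in order.
RULES = [
    (10, "permit", ipaddress.ip_network("192.168.12.0/24")),
    (20, "deny",   ipaddress.ip_network("10.0.0.0/8")),
    (30, "permit", ipaddress.ip_network("172.16.0.0/12")),
]

def evaluate(source_ip):
    """Return 'permit' or 'deny' for a packet with the given source address."""
    addr = ipaddress.ip_address(source_ip)
    for seq, action, network in sorted(RULES):
        if addr in network:           # first matching rule wins
            return action
    return "deny"                     # implicit deny at the end of the list

if __name__ == "__main__":
    for ip in ["192.168.12.7", "10.1.2.3", "8.8.8.8"]:
        print(f"{ip:15s} -> {evaluate(ip)}")
```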
Types of Access Control Lists
ACLs are primarily categorized into four types. These include:
Standard Access Lists
Standard access lists only allow for the evaluation of a packet's source IP address. These lists permit or deny an entire protocol suite and do not distinguish between IP traffic transmitted over different network protocols such as TCP, UDP, or HTTP. Features of a standard ACL include:
- This form of ACL is typically applied close to the destination
- A standard access list permits or denies the whole network or sub-network
- To identify the access list entry, a numbered standard ACL uses the range 1-99 and the expanded range 1300-1999
- In a numbered standard access list, individual ACL rules cannot be deleted; deleting a single access rule results in the deletion of the entire ACL
- Named standard ACLs permit the flexibility to delete specific rules from the ACL
Extended Access Lists
Extended access control lists act as the gatekeepers of internal networks, denying or permitting traffic based on destination address, source address, destination port, source port, network protocol, and time range. Features of the extended ACL include:
- Extended ACLs are typically applied close to the source
- An extended ACL implements packet filtering based on port numbers, source/destination IP addresses, and network protocol
- The extended ACL uses the range 100-199 and the expanded range 2000-2699 for entries
- In numbered extended ACLs, the whole list is deleted if one rule is deleted
- Named extended ACLs provide the flexibility to delete a single rule from the access list
Dynamic Access Lists
Dynamic ACLs (Lock-and-Key security) extend the capabilities of standard and static extended ACLs by tying them to a DNS, LDAP, or Active Directory server for dynamically filtering network traffic. This access list dynamically creates ACL rules based on authentication, authorization, and accounting (AAA) service attributes. Key features of dynamic access lists include:
- Each application/subnet mask requires only one dynamic access list. If more than one list is created, the network device will refer only to the first defined dynamic access list
- All dynamic name entries should be globally unique within the setup
- The dynamic ACL relies on a Cisco controller and the Telnet protocol for user authentication
- Only the destination and source addresses are placed in the temporary entry; the ACL inherits other attributes, such as ports, from the primary dynamic ACL
- All additional rules for the dynamic ACL are inserted at the beginning of the dynamic list
Reflexive Access Control Lists
As access control lists do not keep track of any connections by default, a reflexive access list is purpose-built to make routing devices keep track of outgoing connections so that they can automatically allow the corresponding incoming packets. Reflexive access control lists are triggered when a session is initiated within the network and exits through the router's outbound interface. While doing so, the reflexive ACL creates a temporary entry that only allows inbound traffic from external connections that are part of the session. This temporary entry is discarded from the control list when the session terminates. Key characteristics of reflexive access lists include:
- Each reflexive access list is nested inside an extended ACL
- A reflexive ACL cannot be applied directly to network interfaces
- Reflexive entries are transient; they are created when a session is established and removed once the session's packets no longer need to be monitored
- They lack an implicit deny at the end of the list
- It is impossible to define reflexive ACLs with a numbered access list
- They cannot be defined with standard access lists
Access Control List – Common Examples
Standard Access List
Consider two routers, each with a loopback interface. The following steps outline how to configure access control lists that allow inbound traffic through the interface of R2. Assume the loopback addresses of router1 (R1) and router2 (R2) are 1.1.1.1/24 and 2.2.2.2/24, respectively. Assuming they are connected over the network 192.168.12.0/24, we build two static routes so the loopback networks can reach each other:
R1(config)#ip route 2.2.2.0 255.255.255.0 192.168.12.2
R2(config)#ip route 1.1.1.0 255.255.255.0 192.168.12.1
A single permit entry on R2 that only permits traffic from network 192.168.12.0/24 would look similar to:
R2(config)#access-list 1 permit 192.168.12.0 0.0.0.255
Apply this inbound access list on R2:
R2(config)#interface fastEthernet 0/0
R2(config-if)#ip access-group 1 in
Extended Access List
Assume we want to create an inbound access list that gives an administrator's machine (IP 10.0.0.1/24) full access to a print server (IP 192.168.0.1/24) and denies any access to a user machine (IP 10.0.0.2/24). Create a permit statement that gives the admin machine access to the print server:
R1(config)#access-list 100 permit ip 10.0.0.1 0.0.0.0 192.168.0.1 0.0.0.0
Create an ACL statement that denies the user's machine access to the print server:
R1(config)#access-list 100 deny ip 10.0.0.2 0.0.0.0 192.168.0.1 0.0.0.0
Apply the access list to the interface:
R1(config)#interface fa0/0
R1(config-if)#ip access-group 100 in
What is the difference between a network access control list and a file access control list?
Filesystem ACLs define user permissions and access to files, while network ACLs act as packet-filtering controls that decide which traffic can cross network interfaces. Filesystem ACLs mostly live in the host operating system, while network ACLs mainly reside in network and routing devices.
What are the types of access control for network access?
Administering access control on the network is typically done in the following two stages:
- Pre-admission: This access control comes into play before access is granted and is used to evaluate initiated access requests to a network and grant permission to compliant and authorized users or endpoint devices.
- Post-admission: This stage applies when a user or endpoint device already within the organization's network tries to access another part of the organization's network.
Apart from the stages above, the types of controls used to manage access rights include:
Mandatory access control – A strict, secure model primarily designed for government and official operations. It is mainly used in conjunction with other access control techniques for cost savings and ease of use.
Role-based access control (RBAC) – Used to grant access privileges based on job functions.
Rule-based access control – Permits or denies access based on preset conditions that other categories of users can't change.
Attribute-based access control – Uses particular policies that combine attributes of executable files, client details, and resource objects, among others.
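To contrast these models with the packet-filtering examples above, here is a minimal, illustrative sketch of a role-based access control (RBAC) check. The roles, permissions, and user assignments are invented for the example and are not drawn from any specific product.

```python
# Role-based access control: permissions attach to roles, roles attach to users.
ROLE_PERMISSIONS = {
    "admin":    {"read", "write", "delete"},
    "operator": {"read", "write"},
    "auditor":  {"read"},
}

USER_ROLES = {
    "alice": {"admin"},
    "bob":   {"operator", "auditor"},
    "carol": {"auditor"},
}

def is_allowed(user, permission):
    """A user is allowed an action if any of their roles grants it."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

if __name__ == "__main__":
    checks = [("alice", "delete"), ("bob", "write"), ("carol", "write")]
    for user, permission in checks:
        verdict = "allowed" if is_allowed(user, permission) else "denied"
        print(f"{user} -> {permission}: {verdict}")
```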
<urn:uuid:b2db7702-02d5-4b67-a754-9db8dc883313>
CC-MAIN-2022-40
https://crashtest-security.com/acl-access-control-list/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00627.warc.gz
en
0.864179
2,199
3.65625
4
4 Scary Facts About Malware Variants That Should Concern You
When a piece of malware successfully targets a vulnerability, two significant things occur:
- Security experts race to pinpoint what vulnerabilities are being targeted and how they can be patched.
- Hackers around the world take note of the success and start investigating the malware for their own use.
Most of the time, the second one happens faster. Since malware code is typically available from sources like the Dark Web, hackers can easily tweak the original program into a more threatening one. Hackers do that by patching the malware the same way it's done with other forms of software: by updating it, they remove weaknesses and old mistakes that may have hindered them from getting as much mileage as possible out of the initial virus. Each version that's created and unleashed is a variant of the original attack. As a result, an updated malware program can and will attack different types of data, protect itself more effectively, find new openings to exploit, and more. Today, new malware variants are on the rise, posing new threats to the security infrastructure of your business network. To ensure that your cybersecurity can stand against these attacks, you must first have a clear understanding of what these new variants are and how they can impact your business. As a Managed IT Service Provider (MSP), ITS understands that there is no one concrete solution to these cyber threats, as they will only keep evolving in the coming years. This is why our goal is to educate clients and help them navigate the changes. In this article, we list the four things you need to know about malware variants.
4 Scary Facts You Must Know About Malware Variants
Malware doesn't die when security vendors provide patches to stop it. Variants continue to live on for years into the future, and some of those will be even more dangerous than the original attack. Knowing what you're up against can help you alleviate the risk. Here are four things you should know about these malware variants:
1. Most malware is a mishmash of techniques formed over the years
A malicious type of crowdsourcing quickly emerges where hackers use their own approaches and knowledge to create a variant and sell it as a new and improved version of the older malware. This is why we see a number of variants in the months following a famous hack: multiple hackers work to improve it and capitalize on the improvements.
2. Variants help old malware re-emerge
Another problem with variants is that they can lie dormant for some time and then spring back to life right before your eyes. This is an unpleasant surprise for security experts: malware that they haven't seen in years suddenly comes back, ready to wreak havoc again, this time loaded with the latest tricks and updates. These variants allow some ancient malware to pose a new threat. An example of this is Locky, an infamous ransomware strain that first struck in 2016. The attack was put down, and subsequent variants didn't do much damage. However, new variants have emerged. The latest variant used a new method of infection via clever phishing emails that encouraged the spread of Locky through a suspect download. As the name suggests, this ransomware steals access to sensitive files and locks them down until the victim pays the ransom. It's a good example of what a long-term headache malware can become.
3. There are a lot of variants
Variants aren't like singular sequels; they are more like an ant queen giving birth to a new colony. Any small change is enough to create a new variant, and with hackers working around the world to enhance their attacks, the stream of variants is more or less unending. According to AV-Test, there were over a hundred million new malware samples over the previous year and 12 million new variants per month. To put that into perspective, roughly 400,000 new variants emerge every day! Many of these variants are relatively harmless; however, some are much more dangerous. Security experts must find out which ones those are before the hackers beat them to it.
4. Variants exploit new vulnerabilities
The worst types of variants are those that develop new tricks to bypass the latest security measures. Remember the Locky ransomware resurgence we mentioned? It was retooled to show up as an unknown file. This wasn't a problem for security filters that operated on a default-deny basis, where any unrecognized file is blocked. However, many businesses didn't have this stringent protection, so it posed a threat, even though they were protected from the older version.
Boost your network security against malware variants now
If you've been in business for years, you've probably heard of the different threats to your cybersecurity and have already implemented an appropriate solution to mitigate them. Do you think that's enough to protect your business? Well, you should know by now that it's not. Cyberthreats evolve just as fast as technology. If you settle for the solution you already have because it worked in the past, there's a high chance you won't survive the next cyberattack. For this reason, experts suggest always updating devices and upgrading your system's cybersecurity to ensure that all security flaws are covered. Quick tips like these can help you protect your business from cyberattacks. Here at ITS, we've helped hundreds of clients bolster their cybersecurity by providing and updating solutions as necessary for their businesses. If you want to learn more, this free e-book is full of practical information for business owners, such as yourself, who want to improve their data security.
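The default-deny filtering described above is straightforward to express in code. Here is a minimal, illustrative sketch of a hash-based application whitelist check: only files whose SHA-256 digest appears in an approved set are allowed to run, and anything unknown, including a never-before-seen variant, is blocked by default. The approved-hash set and file path are placeholders, and real application-whitelisting products are considerably more sophisticated.

```python
import hashlib
import sys

# Hashes of approved executables. In practice this set would be managed
# centrally; the single placeholder value below (the digest of an empty
# file) is only for illustration.
APPROVED_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def allowed_to_run(path):
    """Default-deny: unknown files are blocked, even brand-new variants."""
    return sha256_of(path) in APPROVED_SHA256

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else sys.argv[0]
    verdict = "allowed" if allowed_to_run(target) else "blocked (not on the whitelist)"
    print(f"{target}: {verdict}")
```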
<urn:uuid:60f5b45b-3d9b-4bd1-a613-980d1feee74b>
CC-MAIN-2022-40
https://www.itsasap.com/blog/malware-variants-facts
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00627.warc.gz
en
0.945759
1,185
2.578125
3
I was recently sent a fantastic infographic called "The Secret Life of Garbage" which explains what happens in the end-to-end process of garbage collection, recycling and disposal. I've embedded the infographic below. Coincidentally, I've just finished an assignment with a waste management client, so I thought I'd share a few insights into the process.
- Recycling does happen (and it's big business). I had a feeling that recycling was a myth and that everything got tipped into a big hole in the ground, but as landfill is so expensive it's in the best interests of the waste management company to recycle as much as they can.
- Although there is some automation, the recycling process still remains highly manual, with staff required to sort recyclables into categories. How manual? One of the staff members had no fingerprints as they had been worn away…
- Almost anything can be recycled. Plastics, glass, paper, cars, scrap metal, plastic bags, commercial food waste, liquid waste, hazardous waste, oily rags! It's recycled and sold.
- The next big push is to recycle domestic food waste (combining it with domestic green waste). This will be big business as it makes for lovely fertiliser. If it hasn't come your way already it will soon. So if you can't be bothered separating your food scraps it's time to buy a waste disposal unit!
- Garbage trucks really go through the wringer – a lot of the expense is in truck maintenance, repairs and tyres (which need replacing every 6 weeks @ $1,000 a tyre!)
- Paper, plastics and glass are recycled by companies like Visy and Amcor and become products on the supermarket shelves again. Plastic bags are also recycled, contrary to popular myth.
- Even "contaminated" (dirty) items can be recycled, but it's a more costly process.
- Garbage trucks have sophisticated data capture – every bin lift is captured on video and timings measured. Truck speed down to every bin lift is captured. This gives a huge amount of data for analysis purposes.
- Costing new routes requires experience – the terrain, street layouts and distances all define the profitability of routes.
As a "process person" recycling makes perfect sense to me, and since I've worked in the industry I've become a recycling zealot! The one main lesson I've learned from my experiences is that nothing is garbage – if we think enough about it we can find a way to re-use or recycle anything.
<urn:uuid:d3f35c38-2e1e-442e-a8a6-cb9299dacc70>
CC-MAIN-2022-40
https://www.bpmleader.com/2012/11/08/the-secret-life-of-garbage/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00627.warc.gz
en
0.951605
555
2.515625
3
Going "green" is not only good for the environment; it is also good for the wallet in the long run, and it helps boost the social capital of a company. According to environmental news website Clean Technica, there are two primary ways to be considered "green" in the manufacturing industry. One way is to create eco-friendly buildings and products by using cutting-edge technology to minimize waste and increase savings. The second way is to reduce pollution and waste in conventional manufacturing. The following is an overview of some of the most recent green practices in the manufacturing industry.
Weatherization
Weatherization involves constructing or modifying a building so that it is protected from the elements. This in turn reduces energy consumption, thereby enabling that building to save tens of thousands of dollars. Companies can save money and go green by weatherizing their manufacturing plants. There are various components that go into weatherizing a commercial building. For newly constructed buildings, weatherization involves determining which direction the sun will shine from at a particular time and then placing windows in strategic locations so that the building will receive natural light, but not too much heat. Weatherizing also involves insulating the building properly and choosing an eco-friendly heating/cooling system. Existing manufacturing plants can be weatherized by sealing up leaks, replacing old appliances and machines that use up too much energy, and installing energy-efficient windows. Given the potential energy savings that weatherization has to offer, it is not surprising that President Obama made plans as early as 2009 to invest billions of dollars into weatherization initiatives, writes Tulsa Welding School. Additionally, the Department of Energy has developed weatherization assistance development and maintenance plans to promote weatherization.
Alternative Forms of Energy Used in Manufacturing Plants
Companies that are concerned not only about the environment but also their bottom line are turning to wind and solar to power their factories and manufacturing plants. For example, Honda has led the way in this field by using wind turbines to power two of its manufacturing plants, one in Ohio and the other in Brazil. Apel Steel Corporation in Alabama has taken another renewable approach. The company has installed enough solar paneling on one of its manufacturing plants to provide nearly all the electricity the plant needs. Renewable energy may have initial set-up costs, but the savings are easily seen after a short period of time.
Energy Efficient Batteries
R&D Magazine tells us that improvements in battery efficiency could influence the direction of some manufacturing industries. New, energy-efficient batteries could impact the electronics manufacturing industry because they could allow electronic devices such as smartphones and tablets to become even smaller than they are now. Such batteries could also have a dramatic impact on the car manufacturing industry, making it increasingly easy and affordable to manufacture energy-efficient electric cars. SolidEnergy Systems, which is made up of a team of chemists, materials scientists, physicists and engineers, has created an energy-efficient battery with a lot of potential. Known as the Solid Polymer Ionic Liquid rechargeable lithium battery (or simply SPIL), this battery is also safer than conventional batteries. APEI Inc. and Novinda are just two of the many other companies that are conducting extensive research in this field.
Since a number of companies are investing time, money and effort into creating efficient yet affordable green batteries, it might not be long before such batteries appear on the market and become as commonplace as the batteries being used now. It is clear that certain companies and manufacturers show great interest in finding new, innovative ways to preserve the environment and manufacture their goods in an eco-friendly manner. This leads to a win-win situation for all involved, as corporations are able to save large sums of money, their environmental footprint is lessened, and workers with green collar job training are able to take advantage of new job opportunities not previously available. About the Author: Monica Gomez is a freelance writer in the green industry. She has written many articles on how companies can become more environmentally friendly.
<urn:uuid:3da7c6f2-aba5-4e06-adb3-540d2798ad3e>
CC-MAIN-2022-40
https://www.mbtmag.com/global/article/13215308/greening-the-manufacturing-industry
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00627.warc.gz
en
0.965672
803
3.28125
3
Co-authored with Dani Schrakamp
Highways, bridges, railways, mass transit, ports, airports, and their cyber networks are all a part of critical transportation infrastructure, which is essential to the daily function of 21st century society. As urban population centers grow, so does the demand on transportation infrastructure. More and more commuters are shifting from using roadways to rail, bus, and other means of public transportation. This shift is changing the role of public transportation systems and the station hubs that support them. Commuters demand full connectivity – transportation operators must assume that everything and everyone needs to be connected to a network. But the growth of connected devices within transportation ecosystems dramatically increases the number of potential attack vectors. And as we open our transportation networks – both physical and digital – to more points of connectivity, concerns of vulnerability to increasingly sophisticated direct and indirect cyber attacks are on the rise. The Internet of Everything is forming the foundation of the digital transformation of connected roads, rails, buses, airports, and ports being built all around the world. Improving global transportation systems increases mobility and improves safety and security for millions of people, in an environmentally conscious manner. Transportation agencies and organizations are approaching digital strategies that are changing the overall passenger experience, improving productivity, and generating new revenue streams – changing the traditional model of the industry. In today's post our digital citizen is really on the move. It's that time of year when everyone, their mother, and your brother are all hitting the roads, the airports, and the trains, heading for vacation. Rushed packing is complete; our digital citizen is ready to start the journey toward work-free liberation. But travel – by car, train, and plane – is challenging, especially during the holiday rush, right? Wrong. Not in a world where digitized transportation systems are enabling a seamless passenger experience. First, our citizen must head to the airport by car. Despite the time of year, traffic is efficient because the transportation authorities are on high alert to keep traffic and transit running smoothly. Much like Utah's Department of Transportation, which has implemented digital initiatives that have created a highly granular understanding of transportation systems and improved both daily management of the roadways and the infrastructure planning process. But a smartphone alert just interrupted the citizen's smooth sailing: there's a pattern of slow traffic up ahead due to the conveniently planned construction project. Not to worry, a slight reroute around the alert's mapped view of the construction zone and we're cruising once again. Like over 80 percent of Texas drivers, our citizen was pleased with preemptive notifications that allowed for proactive traffic avoidance. Our digital citizen has made it to the airport. And this is typically about the time that one would spot it, enemy number one: the dreaded lines that snake around the airline counters and the security checkpoints. Not today and not for our digital citizen. Travel hubs, like Copenhagen Airport, are adopting technology to monitor and prevent passenger congestion, allowing for less time in queue and more time to relax, shop, and find some pre-flight snacks. A smooth flight and a catnap later, and our citizen has arrived at their almost-final destination.
The citizen then hops on the next mode of transportation, the train. Cities like Dubai are adopting digital strategies to fit the growing transit needs as more and more community members are turning to public transportation for means of everyday travel. Snoozing again upon feeling comfortable with the safety of the journey, because like Metrolink in Los Angeles, this railway is using technology to avoid accidents and complications. The onslaught of train riders awoke our citizen. With the public transit system providing reliable and plentiful timetables, clean trains, and simple ticketing procedures, why wouldn’t the train be packed? At the completion of the journey, like 85-90 percent of Seoul travelers, our citizen was very satisfied with the overall public transportation experience. Communities are learning how to adjust the critical needs for their transportation networks as they evolve into centers of human coalescence and interaction. This trend is setting up public transportation and mass transit to be a main artery of the world’s smartest cities, which aim to improve the quality of life of citizens, residents, and visitors, while supporting economic development and attracting business and workforce talent. Stay tuned for next Wednesday’s post to experience a day on the bay with our digital citizen. And be sure to check back each week as we explore new themes, challenges and observations. Additionally, you can click here and register now to get your IoE questions answered on how to become the next digital community. Finally, we invite you to be a part of the conversation by using the hashtag #WednesdayWalkabout and by following @CiscoGovt on Twitter.
<urn:uuid:f70969dd-4a0d-4faf-a420-d094e5127262>
CC-MAIN-2022-40
https://blogs.cisco.com/government/wednesdaywalkabout-series-trains-planes-and-automobiles
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00627.warc.gz
en
0.940507
978
2.65625
3
President Biden's infrastructure plan in part seeks to ensure safe access to the Internet for all Americans, addressing the digital divide. What that means is bridging the gap between those with access to modern technology and communications infrastructure (such as high-speed Internet), and those lacking that access. That gap holds back prosperity for people around the world, including a large number of Americans. As horrible as the COVID-19 pandemic has been, it gave us a clear demonstration of the digital divide. The health care crisis forced many to work from home, or to take classes from home, causing many to regularly use online conferencing (Zoom et al) for meetings. That's fine if you have computers and suitable Internet access, but what about those who lack sufficient computers or Internet access? They're being left behind. The infrastructure plan fact sheet pointed to the 35 percent of Americans who live in rural areas that have spotty access to broadband Internet. Even in areas where it exists, the cost is often beyond the budget of many Americans. The plan focuses on "future-proof" broadband internet infrastructure that's expanded to reach every American. This will of course require improvements to telecommunications systems in every corner of the country. Improving Internet access in underserved areas like rural America is very ambitious. It's one thing for a politician to present a bold plan, and yet another to implement it. Some interests, especially the telecommunications industry, argue that this should not be a government effort. They, of course, would prefer to own the resulting expanded Internet infrastructure. Others point to the many years folks have decried the digital divide, during which time the telecommunications industry has done little to solve that problem. What's more important is to talk about what "future proof" means for Internet technology. A future-proofed Internet does not use older IPv4 technology. Instead, it requires adopting IPv6 in a big way. It also means thinking beyond dual-stack (IPv4/IPv6) and exploring what tools and platforms can help your infrastructure support IPv6-only environments. There are several ideas to consider:
- IPv4 address space depletion (we've run out of IPv4 addresses) is beyond critical, while IPv6 offers essentially limitless IP addresses
- The increasing cost of obtaining IPv4 addresses
- Removing restrictions to organization scalability – especially with the shift to remote/WFH
- IPv6 is considered to be inherently more secure
IPv4 addresses are 32-bit numbers routinely presented as four decimal numbers such as 220.127.116.11. Therefore the IPv4 Internet can contain at most 4 billion or so devices, which must have seemed enormous in the 1970s, but is minuscule today. There may be 3 billion or more smartphones in people's hands today, for example. As a result, numerous strategies have been employed over the years to extend the life of IPv4 space (NAT, CGN, etc.). By contrast, IPv6 addresses are 128-bit numbers, allowing for a theoretical maximum of 3.4×10^38 devices. Today that seems enormous, but one supposes that in 50 years it may seem minuscule. Clearly IPv6 will solve that particular problem, even if in 50 years our descendants will see it as a limitation. For the time being, IPv6 adoption is the key to ensuring the continued growth of the Internet.
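The difference in scale between the two address families is easy to demonstrate with Python's standard ipaddress module; the specific addresses below are just documentation-style examples, not real hosts.

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")       # an IPv4 address (32 bits)
v6 = ipaddress.ip_address("2001:db8::1")     # an IPv6 address (128 bits)

print(f"{v4} is IPv{v4.version}, address space = 2**32  = {2**32:,} addresses")
print(f"{v6} is IPv{v6.version}, address space = 2**128 = {2**128:.3e} addresses")

# A single /64 IPv6 subnet already dwarfs the entire IPv4 Internet.
subnet = ipaddress.ip_network("2001:db8::/64")
print(f"one /{subnet.prefixlen} subnet holds {subnet.num_addresses:.3e} addresses")
```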
Cybersecurity is a big issue for which recent events demonstrate the need for more attention. An attack on the SolarWinds platform disclosed in December 2020 poses a grave risk to government infrastructure at all levels, as well as critical infrastructure systems and private organizations. It is a massive and complex security intrusion into thousands of systems, which is still not fully understood. More recently, the company running the Colonial Pipeline, which delivers oil to the Eastern USA, suffered a ransomware attack during which the company shut down oil deliveries, causing havoc. Another recent ransomware attack, against QNAP NAS devices, left tens of thousands of people and small businesses unable to access their files because their NAS devices had been encrypted by the attack. IPv6 was designed with a security mindset from the beginning, whereas the IPv4 stack was designed with almost nonexistent security. Some examples are:
- End-to-end encryption, IPSec, which was originally a hard requirement for IPv6 networks, but was later downgraded to a strong recommendation. IPSec also has features for authentication, integrity, replay detection, confidentiality, and access control.
- The Neighbor Discovery Protocol (NDP) and Secure Neighbor Discovery (SEND) replace the Address Resolution Protocol (ARP) of IPv4 systems. ARP is susceptible to man-in-the-middle attacks, while NDP and SEND are not. Further, SEND uses a degree of encryption to further raise the bar against attackers.
But adopting IPv6 is not as simple as declaring "Make it so" to the crew. If it were that simple, IPv6 adoption would be further along than it currently is. For example, in November 2020 the US Federal Government issued a memorandum about completing the transition to IPv6 for federal networks. If the Biden administration Internet infrastructure proposal ends up being a federally owned network, this memorandum should affect it. In any case, the memo serves as an example of what's required for IPv6 adoption. The Office of Management and Budget (OMB) first mandated that federal agencies enable IPv6 on their backbone networks in August 2005. Several other mandates were made by OMB over the years, but none of the deadlines were ever met. That tells us something about the difficulty of migrating from IPv4 to IPv6. Previous OMB policy statements recognized the need for so-called "dual stack" networks supporting both IPv4 and IPv6. The new memorandum says "in recent years it has become clear that this approach is overly complex to maintain and unnecessary." Further, there are technical, economic, and security benefits to shifting to IPv6-only systems. To address this, Federal Agencies are required to:
- Designate an agency-wide IPv6 transition team
- Issue an agency-wide IPv6 policy on its public website
- Identify potential IPv6-only pilot projects during 2021
- Develop an IPv6-only transition plan by the end of 2021
- Target 80% adoption of IPv6-only systems by 2025
- Work with external partners to identify systems that interface with Federal networks, and shift to IPv6-only network interfaces
- Shift all externally facing systems to IPv6-only
The Biden administration has set a number of large, ambitious goals for America, one of which is improving broadband Internet connectivity for all Americans. Closing the digital divide can give rural Americans access to Internet-based services, giving them more opportunities. For example, the low cost of living in hundreds of small towns in America could prove attractive to digital nomads. Another side effect of the COVID pandemic is policies allowing for remote work.
Instead of concentrating workers in high-cost-of-living zones, there could be hundreds of benefits from a shift to permanent remote working arrangements enabled by high-speed Internet access. Small town America could see an influx of people who can now work remotely. To make that work requires the sort of Internet infrastructure improvements envisioned by the Biden plan. It would be a shame to address these goals without fully embracing IPv6 on the resulting network.
<urn:uuid:bacc0ea9-25e7-4d22-85f6-f1606951decf>
CC-MAIN-2022-40
https://www.6connect.com/blog/ipv6-government-mandate-what-it-means-for-you/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00627.warc.gz
en
0.942401
1,543
2.53125
3
Bullying can be defined as any unwanted behavior in which a child/teen misuses certain aspects – such as physical strength, access to embarrassing information, or popularity – to control or harm other kids. It can include anything from spreading rumors to name-calling to physical aggression. Essentially, bullying is an abuse of power and can have a lasting impact on a person's day-to-day life, mental health, and decision-making prowess. Faronics products are extensively used in educational environments, and we believe in empowering victims of bullying with means of reporting incidents. If there are multiple communication channels available for students, bullying incidents can be nipped in the bud, preventing any damaging consequences in the future. Faronics Deep Freeze Cloud's Anti-Bullying service is designed to be one such communication mechanism. The service has incident reporting features that allow kids to anonymously report bullying incidents via a PC or a custom mobile campus app. The Anti-Bullying service allows students to anonymously report bullying incidents on computers managed with Deep Freeze Cloud. To add Anti-Bullying to the Policy, go to Add Policy > Anti-Bullying > Select Enable (install and inherit settings from Faronics Default policy) or Enable (Install and use below settings). Selecting either Enable option installs Anti-Bullying on all computers using this Policy, whenever the computers check in. The computers check in based on the heartbeat specified in Cloud Agent Settings.
- Enable (install and inherit settings from Faronics Default policy) – installs the service and inherits settings from the Faronics Default policy. Selecting this option saves time in configuring all the policy settings, and it makes the settings for the current policy read-only.
- Enable (Install and use below settings) – installs the service and uses the settings configured below. Selecting this option allows you to customize the settings for this service in the current policy.
- Disable – will not install the service, or will uninstall the service from the computers whenever the computers check in.
The following configuration options are available:
- Anonymous Reporting – select this option to ensure all Anti-Bullying reports are anonymous. User name and computer name will not be collected.
- Non-anonymous Reporting (Collects User Name and Computer Name along with the incident) – select this option to collect the user name and computer name.
- Also allow anonymous reporting – select this option to optionally allow anonymous reporting.
Display Incident Reporting Form
The Incident Reporting Form can be displayed in the following ways:
- Open the form as user logs in – select this option to display the form immediately when the user logs in.
- Display the form x minutes after the user logs in – specify the value for x. The maximum value is 60 minutes.
The user at the computer can still launch the Incident Reporting pop-up from the system tray.
Incident Reporting Form Content
- Add Content – add a description that will pop up on the workstation.
- Add Help Text – add an additional description, such as a help text or a warning.
- Email submitted forms to – specify the email address of the user who will receive Anti-Bullying reports. You can specify multiple email addresses by adding a comma between them.
- Tag Reported Incidents with – specify a tag. You can specify a tag for non-anonymous reports.
You can specify a tag for non-anonymous reports. Additionally, Faronics' custom mobile app offering – Campus Affairs – has a similar mechanism called 'Report It'. It is primarily designed to enable the reporting of campus-related incidents via the branded school, college or campus app. Such reporting mechanisms, combined with timely intervention, can go a long way in curbing bullying in educational environments.
<urn:uuid:10341874-1100-4698-b65a-4db3dc237b0b>
CC-MAIN-2022-40
https://www.faronics.com/news/blog/anti-bullying-awareness-empowering-students-instant-reporting-mechanisms
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00627.warc.gz
en
0.882892
824
2.5625
3
Machine Learning has been one of the most popular topics to study in recent years, thanks to the availability of massive amounts of data and powerful processing equipment. It is natural to wonder what it is about machine learning that has made it so popular. Machine learning is an area of research that enables a machine to learn automatically from its experience and data, without the need for human involvement. Nowadays, thanks to machine learning, computers do not require every model to be specified by hand; they improve as they acquire experience.
Machine Learning is a versatile field. It uses the knowledge of statistics and probability in different algorithms to study the available data and create innovative applications. Statistics is one of the core components of machine learning and data analytics. There are two types of statistics used in the field of machine learning, namely:
Descriptive statistics: These are used to summarise data. Measures such as the mean and standard deviation are useful for continuous data types like age, whereas frequency and percentage are useful for categorical data types like gender.
Inferential statistics: Instead of gathering all of the data and deriving conclusions about the full population, inferential statistics works from a selection of data points, referred to as a sample. Hypothesis testing, estimation of numerical characteristics, data correlation, and other techniques are used to draw these inferences.
In this article, we will learn about the different elements of statistics used in machine learning and the major differences between statistics and machine learning.
Statistical Terminologies used in ML
Let's begin with the fundamental concept of statistics. It is a field of mathematics concerned with the gathering, analysis, interpretation, and visualization of empirical data. Statistics is used in machine learning to study and ask questions about the data, to preprocess and clean it, to select the right features, and to evaluate models and their predictions.
Statistical models are a class of mathematical models, often described by mathematical equations, that connect one or more variables to an approximate representation of reality. The assumptions made by these statistical models usually take the form of a set of probability distributions, which is what distinguishes them from mathematical, non-statistical, or machine learning models.
There are a few statistical concepts that are required to study descriptive statistics. Let us look at an example. The following table lists the characteristics of ten individuals who have applied for a home loan:
Characteristics of 10 loan applicants (source)
In a data set, elements are the entities or subjects for which the information is collected. The elements in the table above are the 10 applicants. A characteristic of the elements is called a variable. A variable takes different values for different elements. These are also known as attributes. The variables in the table are marital status, mortgage, income, rank, year, and risk.
In the above example, the qualities of the elements are described by variables like marital status, mortgage, rank and risk; these are therefore called qualitative variables. The numerical variables in the table are income and year. Here, year is the discrete variable and income is the continuous variable. (Must read: Types of data in statistics)
A population is the collection of all components of interest in a given topic. 
A parameter is a population characteristic. A subset of a population is called a sample. A characteristic of a sample is called a statistic.
The arithmetic average of a data set is called the mean of the set. To calculate the mean value, we add all the values of the set and divide by the total number of values. The mean of the incomes of the 10 applicants is:
(38,000 + 32,000 + 25,000 + 36,000 + 33,000 + 24,000 + 25,000 + 48,000 + 32,100 + 32,200) / 10 = $32,530
When there is an odd number of data values sorted in ascending order, the middle value of the data is the median. For an even number of values, the mean of the two middle values is the median. Here, the number of elements is even, so after arranging them in ascending order the two middle values are $32,100 and $32,200. Hence, the median income is $32,150.
The mode is the data value with the highest frequency of occurrence. Modes can exist for both quantitative and categorical variables, although only quantitative variables can have means or medians. In this data set the income of $25,000 occurs twice while every other income occurs once, so the mode of the income is $25,000.
The difference between the maximum and minimum value of a variable is known as the range of that variable.
Range of the income = Max income - Min income = $48,000 - $24,000 = $24,000
The mid-range is the average of the highest and the lowest values in a data set.
Mid-range of the income = (Max income + Min income) / 2 = ($48,000 + $24,000) / 2 = $36,000
The variance of a population is defined as the average of the squared deviations from the mean, written as 𝜎². The standard deviation of a set of numbers indicates how far the individual numbers deviate from the mean.
The pth percentile of a data set is the data value at or below which p percent of the values in the data set fall. The 50th percentile is the median of the income. We have already calculated the median income, $32,150: 50% of the data lie at or below this value.
In the given diagram, the first quartile (Q1) of a data set is the 25th percentile, the second quartile (Q2) is the median, and the third quartile (Q3) is the 75th percentile. The interquartile range is the difference between the third and first quartiles: IQR = Q3 - Q1.
Graph of Percentile range and Interquartile range (source)
The Z-score for a specific data item indicates how many standard deviations the data value is above or below the mean. A positive value of Z implies that the value is above the average. (Similar read: Z-test vs T-test)
Univariate Descriptive Statistics & Bivariate Descriptive Statistics
Patterns seen in univariate data can be described in a variety of ways, including central tendency (mean, mode, and median) and dispersion (range, variance, maximum, minimum, quartiles, and standard deviation). Bivariate analysis, on the other hand, is the examination of two variables in order to determine the empirical relationship between them. The most common plots used to show bivariate data are scatter plots and box plots.
A scatter plot is a popular graph for two continuous variables. Scatter plots are also known as correlation plots since they demonstrate how two variables are linked. The strength and direction of a linear relationship between two quantitative variables are expressed by the correlation coefficient r, which equals the covariance of the two variables divided by the product of their standard deviations and always lies between -1 and +1.
A box plot, often known as a box and whisker plot, is used to depict the distribution of data. 
A box plot is often used when one variable is categorical and the other is continuous. (Suggested blog: Descriptive Analysis overview)
Statistics vs Machine Learning
Even though machine learning and statistics are very similar, there are some differences between them. The major difference lies in their purpose. Statistics is grounded in mathematics and cannot function without data. A statistical model is a model of the data that may be used to infer something about the relationships within the data, or to construct a model that can predict future values. Machine learning is all about outcomes; you are probably working in a firm where your worth is determined largely by the performance of your models. Statistical modelling, on the other hand, is more concerned with discovering relationships between variables and the significance of those relationships, while also allowing for prediction. For a statistical model we might construct a line that minimizes the mean squared error across all of the data, assuming the data follow a linear relationship with some random noise added, which is generally Gaussian in nature. (Also read: Importance of statistics for data science)
Probability and statistics are integral parts of machine learning. In this article we have talked about the statistical concepts that are required for statistical modelling, and we have also mentioned some of the basic differences between statistics and machine learning. (Must catch: Importance of Statistics and Probability in Data Science)
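To make the descriptive measures above concrete, here is a minimal sketch that recomputes them for the ten example incomes using only Python's standard library; the variable names are ours and the snippet is purely illustrative.

```python
# Illustrative only: descriptive statistics for the ten example incomes,
# computed with nothing but the Python standard library.
import statistics

incomes = [38000, 32000, 25000, 36000, 33000, 24000,
           25000, 48000, 32100, 32200]

mean = statistics.mean(incomes)                 # 32530
median = statistics.median(incomes)             # 32150.0 (average of the two middle values)
mode = statistics.mode(incomes)                 # 25000, because it occurs twice
value_range = max(incomes) - min(incomes)       # 24000
mid_range = (max(incomes) + min(incomes)) / 2   # 36000.0

# Population variance and standard deviation (divide by N, not N - 1).
pop_variance = statistics.pvariance(incomes)
pop_stdev = statistics.pstdev(incomes)

# Quartiles and interquartile range: quantiles() with n=4 returns Q1, Q2, Q3.
q1, q2, q3 = statistics.quantiles(incomes, n=4)
iqr = q3 - q1

def z_score(x, mu, sigma):
    """How many standard deviations x lies above or below the mean."""
    return (x - mu) / sigma

print(mean, median, mode, value_range, mid_range)
print(pop_variance, pop_stdev, q1, q3, iqr)
print(z_score(48000, mean, pop_stdev))  # positive, so above the mean
```

Running it reproduces the figures worked out by hand above ($32,530 mean, $32,150 median, $24,000 range, and so on).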
<urn:uuid:6c993b6e-f733-454a-ac26-90a66b146fcd>
CC-MAIN-2022-40
https://www.analyticssteps.com/blogs/15-statistical-terms-machine-learning
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00627.warc.gz
en
0.925389
1,886
3.28125
3
August 6 is the 218th day of the year (219th in leap years) in the Gregorian calendar. One hundred forty-seven days remain until the end of the year. Over time, many significant events have occurred on this date, including the end of the Holy Roman Empire when Emperor Francis II abdicated, the atomic bombing of Hiroshima, Fanny Blankers-Koen (Netherlands) becoming the first woman to win three Olympic gold medals, Jamaica gaining independence from Britain, a suicide bombing in Israel, and the landing of NASA's Mars Curiosity rover. Let's discuss a few major historical events in today's history, i.e., August 6.
What happened today in history in the year 1945?
The United States drops an atomic bomb on Hiroshima
On August 6, 1945, the United States dropped an atomic bomb on Hiroshima, killing tens of thousands of people instantly and many more in the months that followed. Another bomb was dropped on the Japanese city of Nagasaki on August 9, 1945. Together, the two attacks are estimated to have killed around 200,000 people, and many more were injured. During the Second World War, Japan fought against America and its allies, including Britain and the Soviet Union. The Allies were winning the war, and Japan had been pushed back in several places. Japan had been at war for many years, and soldiers were dying every day. Japan had invaded neighbouring countries such as China and had attacked America, and Japanese troops treated British and American soldiers who had surrendered with great cruelty. The president of the United States, Harry S. Truman, wanted Japan to surrender as quickly as possible to save lives, and he ordered the use of the atomic bomb in the belief that, on seeing the destruction, the Japanese would surrender.
What was invented today in history?
The World's First Web Site
On August 6, 1991, without fanfare, British computer scientist Tim Berners-Lee published the first-ever website while working at CERN, the huge particle physics lab in Switzerland. Fittingly, the website was about the World Wide Web project, describing the Web and how to use it. Hosted at CERN on Berners-Lee's NeXT computer, the site's URL was http://info.cern.ch.
This day in sports history in the year 1926.
Gertrude Ederle becomes the first woman to swim the English Channel.
On August 6, 1926, on her second attempt, 19-year-old Gertrude Ederle became the first woman to swim the 21 miles from Dover, England, to Cape Gris-Nez across the English Channel, which separates Great Britain from the north-western tip of France. Only five men had ever swum the waterway before. The challenges included quickly changing tides, 6-foot waves, frigid temperatures, and lots of jellyfish. That day, Ederle not only made it across, she also beat all of the previous men's times, covering roughly 35 miles because of the currents in 14 and a half hours.
<urn:uuid:92bea0f5-bdbc-421c-9b7b-26883d31498a>
CC-MAIN-2022-40
https://areflect.com/2020/08/06/today-in-history-august-6/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00627.warc.gz
en
0.970708
659
3.234375
3
Password Protection: Outsourcing Systems Security Services
Just like large businesses, small and medium scale businesses can be targeted by cybercriminals. Small businesses are often more prone to attacks than larger establishments due to the presence of insecure networks. Using passwords well can be an effective way of protecting your systems from attacks. Let's dig deeper into the basics of password protection and the need to outsource IT services, including password protection services.
Understanding Password Protection
Passwords are a set of characters, words, or phrases aimed at differentiating authorized users from unauthorized ones. They are one of the most widely used approaches to safeguarding the integrity of data in organizations. However, passwords are prone to security threats when mishandled. There have been cases of password hacking leading to data access by unauthorized users. That is why you need to pay attention to password management. Password management refers to the best practices and principles that users must follow to ensure the efficiency and effectiveness of passwords.
Securing passwords in the digital era can be challenging. With the increase in the number of online services accessed by individuals, cybercrimes are also rising. Some of the threats to password protection include:
- Data Breach. Confidential data and login credentials are stolen from a website database.
- Sniffing Attack. Passwords are stolen through illegal network access.
- Login Spoofing. Cybercriminals use a fake login page to collect passwords illegally.
- Brute-Force Attack. Automated tools are used to guess passwords, allowing attackers to access data.
How Do I Remember Passwords?
With so many sites to log into daily, it is possible to forget passwords. Here are some tips to help you build memorable and unique passwords to enhance your information security:
Using a website's name or the color of its logo can help you create a secure and memorable password. For example, with Twitter, you can use TW or T as the last or first letters of your password.
Create a Unique Code
If you want to make the password harder to compromise, replace some letters with numbers or deliberately misspell some words. Since a password is a secret, no one will be bothered by the spelling. You could also use abbreviations or acronyms as passwords.
Develop a Tip Sheet
A tip sheet can give you some clues about your password. Even so, avoid recording passwords in a way that exposes them to others. Instead of writing down a password, it is better to write a cryptic clue that is known only to you.
Key Cybersecurity Practices for Businesses
Cyber attacks on many businesses succeed because of a lack of security expertise, failure to update security programs, lack of employee training, and more. You can protect your business from cyberattacks by adopting the following practices:
A lot of data breaches happen due to weak, stolen, or lost passwords. Encourage your employees to use passwords consisting of numbers, letters, and symbols; a short sketch of generating such a password follows below. Additionally, change passwords frequently.
Setting up a firewall for your business can protect your data against cyber attacks. In addition to the internal firewall, you can install an external firewall for additional protection. Encourage employees working from home to have a firewall on their networks as well. For compliance, home networks require support and firewall software. 
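Before moving on to employee training, here is the small illustrative sketch tied to the password guidance above. It uses Python's standard secrets module; the 16-character length and the character classes are example choices, not a policy requirement from this article.

```python
# Illustrative sketch: generating a random password that mixes letters,
# digits, and symbols, using Python's standard "secrets" module.
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password containing letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep drawing until at least one character from each class is present.
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password

print(generate_password())
```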
Train Your Employees Training your employees about security policies and the best security practices can be essential in protecting your network. Have regular updates concerning new protocols as they emerge. Hold your employees accountable by letting them sign a document agreeing that they are aware of the laid down security policies. Benefits of Outsourcing Data Protection Services In addition to maintaining a functional IT infrastructure, you are responsible for protecting your business against cyber attacks. If you don’t have adequate IT personnel, you can outsource services to a third party. This strategy is beneficial in the following ways: When you outsource data protection services, you get an opportunity to use the best tools without directly investing in those tools. The IT companies are equipped with the systems and tools to ensure excellent protection. Enhanced Security Options Security companies can offer layered protection with extensive security procedures. For data protection, adopt risk management practices and security standards. The security professionals can guide you in meeting these requirements. Training an internal security team can be costly and time-consuming. Investing in systems security hardware and software is also expensive. A systems security expert can let you access security services cost-effectively. You can also get shared access to tools, techniques, and knowledge of expert security professionals. Choosing an Outsourcing Partner After deciding to outsource IT services, and more so security services, the next thing is to look for a partner. Choosing the right company can help you to remain focused on your business and maintain your IT infrastructure. One of the most important factors to consider when choosing a partner is experience. Find out if the company you want to hire has completed other projects in the past. Communication is also an important aspect of your relationship with the company. The team members from the company should be able to coordinate and collaborate with your team effectively. Communication can have a big impact on the success of a business relationship with your partner. Also, ensure that the organization you want to choose has been in the market for a long time to prove its reliability. Check the number of employees the company has and whether they are expanding or shrinking. Help Is Available Technology advances can offer an avenue for cybercriminals to attack your systems. The Good news is that even if you don’t have the internal expertise to manage security, you can get help from the 4BIS.COM Inc team. Indeed, our professionals are committed to giving you IT support and IT security services to protect your business in Cincinnati. As you partner with us, you will enjoy peace of mind and do business with no interruptions. Call us on (513) 469-7887 to get a free quote today. 4BIS.COM, Inc is a complete IT Support and Managed IT Services Provider, Computer Reseller, Network Integrator & IT Consultant located in Cincinnati, Ohio focusing on customer satisfaction and corporate productivity. Our mission is to develop long-term partnerships with our customers and ensure they stay up-to-date with the evolution of business processes and information technology.
<urn:uuid:01070d3a-9511-49c0-a54c-d8046b592416>
CC-MAIN-2022-40
https://www.4bis.com/password-protection-outsourcing-systems-security-services/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00627.warc.gz
en
0.931882
1,299
2.71875
3
Cloud computing has recently gained popularity thanks to the flexibility of its services and its security measures. Before it, on-premise computing reigned, owing to its clear benefits of data authority and security. On the surface, the critical difference between the two is where the data is hosted. In on-premise computing, the company hosts the data with software installed on its own servers, behind its own firewall, while with cloud computing the data is hosted on a third-party server. However, this is only the surface difference; the deeper we dig, the larger the differences become.
On-Premises: On-premise computing gives the company full authority over both the computing and the data. The company alone is responsible for the maintenance and upgrade costs of the server hardware, power consumption, and space. It is relatively more expensive than cloud computing.
Cloud: On the other hand, cloud users do not pay the costs of keeping and maintaining their own servers. Companies that opt for the cloud computing model pay only for the resources that they consume. As a result, costs go down drastically.
On-Premises: As the name itself suggests, in an on-premises environment resources are deployed in-house on the company's local servers. The company is solely responsible for maintaining, protecting and integrating the data on those servers.
Cloud: There are multiple forms of cloud computing, and therefore deployment also varies from type to type. However, the defining characteristic of the cloud is that data is deployed on a third-party server. This has its own advantages, such as transferring security responsibilities and gaining room to scale. The company still has access to its cloud resources 24×7.
On-Premises: Highly sensitive data tends to be kept on-premise due to security compliance requirements. Some data cannot be shared with a third party, for example in banking or government systems. In those scenarios, the on-premise model serves the purpose better. Organizations stick to on-premise either because they are worried about exposure or because they have security compliance requirements to meet.
Cloud: Although cloud data is encrypted and only the provider and the customer hold the keys to that data, people tend to be skeptical of the security measures of cloud computing. Over the years the cloud has proved its worth and obtained many security certifications, but the loss of direct authority over the data still reduces the credibility of its security claims in some eyes.
On-Premises: As made clear before, in an on-premise model the company keeps and maintains all of its data on its own servers and enjoys full control of what happens to it; this implies superior control over the data as compared to cloud computing. However, the difference may not be as large as it seems, because the cloud also gives the company full access to its data.
Cloud: In a cloud computing environment, the ownership of data is less clear-cut. As opposed to on-premise, cloud computing stores data on a third-party server. Such an environment is popular with companies whose workloads are very unpredictable or that do not have strict privacy concerns.
On-Premises: Many companies have to meet government compliance policies designed to protect citizens; this may involve data protection, limits on data sharing, authorship, and so on. For companies that are subject to such regulations, the on-premise model serves them better: locally governed data is stored and processed under the same roof. 
Cloud: Cloud solutions also follow specific compliance policies, but due to the inherent nature of cloud computing (i.e., the third party server), some companies are not allowed to choose cloud. For example, although the data is encrypted on the cloud, the government never chooses the cloud because losing authority over their information is direct annihilation of their compliance measures. Many factors differentiate cloud and on-premise computing. It’s not that one is better or worse than the other, but instead that they have a different set of customers for them. To overcome these hurdles, a new technology, namely Hybrid Cloud, has emerged which takes care of authority issue related to cloud computing through a hybrid deployment of on-premise, public and private cloud.
<urn:uuid:17a936f7-f9d3-4c02-ac86-2584aa395103>
CC-MAIN-2022-40
https://www.idexcel.com/blog/tag/on-premise-vs-cloud/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00027.warc.gz
en
0.952957
880
2.765625
3
Technology is innovating and revolutionizing the world at a rapid pace with the application of Machine Learning. Machine learning (ML) and Artificial Intelligence (AI) might appear to be the same, but the reality is that ML is an application of AI that enables a system to automatically learn from data input. The functional capabilities of ML drive operational efficiency and capacity automation in various industries. Technological Innovation for Convenience Workforce handling is tedious and less productive; this is where Artificial Intelligence has lucratively overcome the age-old system of manual labor. With the world moving at such a fast pace, monitoring has become a constraint for most organizations; for this very reason, Artificial Intelligence and Machine Learning are used more as tools of convenience rather than just pieces of technology. We have seen how accounting systems have replaced ledger books. At the same time, processes have been set up to align machines with organizational requirements effectively to balance everyone’s demands. However, with the way Artificial Intelligence is advancing, it seems this technology is quickly going to change the way processes are functioning. Not only trends on social media will be affected, but even marketing will see a complete makeover through the use of Artificial Intelligence. The Effect on Various Fields When it comes to Artificial Intelligence, everybody wants a taste of it. From marketing experts and tech innovators to education sector decision-makers, Artificial Intelligence holds the capability to pave the path for a healthy future. Artificial Intelligence has been designed to provide utmost customer satisfaction. To derive maximum results from the nuances of AI customer-centric processes will need to align their business metrics to the logic of this latest technology. As Big Data evolves, machine learning will continue to grow with it. Digital Marketers are wrapping their heads around Artificial Intelligence to produce the most efficient results by putting in minimal efforts. The entire algorithm and the build of Artificial Intelligence will be used to predict trends and analyze customers. These insights are aimed at helping marketers build patterns to drive organizational results. In the future, it seems like every basic customer need would be taken care of through fancy automation and robotic algorithms. The healthcare industry is one of the widely reckoned industries in the world today. Simply put, it has the maximum effect on today’s society. Through the use of Artificial Intelligence and Machine Learning, doctors are hoping to be able to prevent the deadliest of diseases, which even includes the likes of cancer and other life-shortening diseases. Robots Assistants, intelligent prostheses, and other technological advancements are pushing the health care sector into a new frenzy, which will be earmarked towards progressing into a constantly evolving future. In the financial sector, it’s vital to ensure that companies can secure their operations by reducing risk and increasing their profits. Through the use of extensive Artificial Intelligence, companies can build elaborate predictive models, which can successfully mitigate the potential of on-boarding risky clients and processes; this can include signing on dangerous clients, taking on risky payments, or even signing up on hazardous loans. 
No matter what might be the company’s requirement, Artificial Intelligence is a one-stop shop when it comes to preventing fraudulent activities in day to day operations – this, in turn, will lead to money savings possibilities, profit enhancement and risk reduction within every organizational vertical. We are steadily heading towards a future that will be marked complete with the rise of robotics and automation; this is not going to be restricted to the medical sector only; intelligent drones, manufacturing facilities, and other industries are also going to be benefited by the rise of robotics. Artificial Intelligence methodologies like Siri and Cortana have already seen the light of day – this is just the beginning. More and more companies are going to take these capabilities to a new level. As more and more military operations begin to seek advantages from the likes of mechanized drones, it won’t be long before e-commerce companies like Amazon start to deliver their products through the use of drones. The potential is endless, and so are the possibilities. In the end, it is all about using technology in the right manner to ensure the appropriate benefits are driven in the right direction.
<urn:uuid:76c6f72a-2c88-4b62-9049-428751269406>
CC-MAIN-2022-40
https://www.idexcel.com/blog/tag/uses-of-machine-learning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00027.warc.gz
en
0.938306
852
2.828125
3
Blockchain Is Great… The concept of blockchain-protected transactions is great in those systems, such as cryptocurrency (e.g. Bitcoin), that can handle the massive overhead associated with the way in which this technology is currently implemented. Using secure ledgers that consist of individually-protected entries between two endpoints, being able to follow transactions in a multi-point system and viewing histories of transactions are extremely powerful tools. While many people initially thought cryptocurrencies were doomed to be instantly hacked to death, blockchain has proven to be a formidable protective agent. …But Not In The IoT Many people are pushing the use of blockchain processes in the IoT. The premise is sound – tons of autonomous devices making continuous connections and exchanging data without oversight clearly need some sort of management option. That stated, the concept of large blocks of transactions being chained together and computed on each device is not realistic. Sure the concept is sound but the implementation, in its current state, just will not work. Architecture Is Required Many people envision IoT as some jumbled mess of devices all freely communicating with no organization in sight. This is simply not true. In fact, IoT devices behave in a similar manner to any other connected computer. They run on local networks, called subnets, that are connected to higher and lower subnets and together form larger networks. In the IoT, more so than traditional enterprises, devices tend to do most of their work locally with higher-level devices such as hubs connecting data between levels. This is where architecture can be used to overcome the challenges of blockchain and is, in fact, how Bear implements blockchain protection today. Instead of every device constantly providing computed entries into a large ledger, higher level devices – which we term Local Domain Controllers (LDCs) – get a report of a transmission and a reception for a given exchange and then enter that information into a ledger just for the devices in the local subnet. Higher-level LDCs receive periodic, blockchain-protected, updates of transactions for lower-level subnets and pass those along to central domain controllers. In these cases, only the communication of the transfer of data is recorded in the upper-level LDC as the data is blockchain-encrypted directly to the higher level domain controller. These central domain controllers – which might reside in a cloud – contain the entire ledger for the overall system with the individual entries distributed in an IoT-friendly manner. In this way, the core principles of blockchain are protected – individually-protected transactions between two devices; ledger entries for the transmission/reception of said data and, at the core, a complete history of transactions – without compromising the efficiency required by IoT devices.
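As a rough illustration of the idea of per-subnet ledgers rolled up to higher-level controllers, here is a toy sketch in Python. It is not Bear's implementation; the class, field names and roll-up format are invented for the example, and a real deployment would add signing, firmware-level protection and tamper checks.

```python
# Toy sketch of a hash-chained ledger kept by a Local Domain Controller (LDC).
# Conceptual illustration only: names, fields, and the roll-up format are
# invented for this example.
import hashlib
import json
import time

class LocalLedger:
    def __init__(self, subnet_id: str):
        self.subnet_id = subnet_id
        self.entries = []  # each entry is chained to the previous one

    def record_exchange(self, sender: str, receiver: str, payload_digest: str):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "subnet": self.subnet_id,
            "sender": sender,
            "receiver": receiver,
            "payload_digest": payload_digest,  # only a digest, never the data
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def rollup(self) -> dict:
        """Periodic summary passed up to a higher-level domain controller."""
        return {
            "subnet": self.subnet_id,
            "count": len(self.entries),
            "head_hash": self.entries[-1]["hash"] if self.entries else None,
        }

# Two devices on one subnet report a single exchange to their LDC.
ledger = LocalLedger("sensor-subnet-3")
digest = hashlib.sha256(b"temperature reading").hexdigest()
ledger.record_exchange("sensor-17", "hub-a", digest)
print(ledger.rollup())
```

The point of the sketch is only that each subnet can keep its own short chain and hand a compact, tamper-evident summary upward, rather than every device computing over one global ledger.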
<urn:uuid:b2472ac6-d6a3-4353-899d-60de837f282b>
CC-MAIN-2022-40
https://bearsystems.com/2017/11/03/problem-traditional-blockchain/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00027.warc.gz
en
0.945925
543
2.78125
3
Knowledge management: Business Strategy
By Juan Emilio Alvárez, Specialist in Digital transformation and BPM
From the organizational point of view, knowledge is the generating base of competitive advantages and a driver of the economy. The need to generate new ideas in a quick and dynamic way has raised the value of information and knowledge. According to Davenport (1997), "Knowledge is information combined with experience, context, interpretation and reflection. It has a high value because it is prepared to be applied to decisions and actions."
The greatest asset that organizations have is the knowledge they possess. The set of skills and capacities of the organization itself, and of the people who compose it, are the elements that allow it to achieve a certain degree of competence. But frequently this knowledge is not explicit because:
- information in general, and knowledge in particular, is scattered and not available when and where it is needed
- knowledge is available to very few people
- it is difficult to retain it when the company's processes are restructured or employees move from one area to another.
Therefore, organizations often do not have their knowledge systematized, so the opportunity to take proper advantage of it is lost. To avoid this, organizations have to establish a system that allows them to identify the knowledge linked to the definition of the organization's strategy and the development of its operational and intellectual capital.
"Knowledge Management" has as its main tasks to identify, gather, analyze and relate the information of the organization – strategy, regulatory framework (internal and external), processes, quantitative and qualitative capacities of each of the employees and collaborators – and to transform it into knowledge and make it available to the entire organization. In short, its main objective is to improve efficiency and effectiveness by reducing the need to rediscover knowledge and increasing the quality of the decisions taken by the organization, since the information available will then be safe and reliable.
Organizations that are considering embarking on Knowledge Management have to take into account at least the following aspects:
- Information and knowledge. The first thing is to distinguish in the organization what is information and what is knowledge. Sometimes it is not easy and the two get confused, to the point of considering that everything is information, but with little value. Information has to lead to knowledge as an object of added value within the entity.
- Knowledge and its transcription. Almost all organizations have electronic or physical documents, that is, they have a lot of information, but have they transformed all these documents into explicit knowledge? Companies have to be able to relate all this information and apply techniques or mechanisms that transform it into valuable knowledge, or objects of knowledge, that are as accessible as the information itself.
- Complementary knowledge and skills. Once the knowledge objects have been identified, the complementary elements that increase the value of a knowledge object when added to it should be analyzed.
- Individual and collective capacities. 
The organizations with the highest added value are those that have managed to combine all the individual and collective capacities of each of the elements of the organization, through a single collective knowledge, without losing individual knowledge. For this, the key is to relate the dependencies of knowledge. An ideal tool is the "Knowledge Maps" which should include, at least, the main "Knowledge Objects" of the Organization, reference sources and experts, their structure, applications and control and status variables, as well as the relationship and dependencies on each other. KNOWLEDGE OBJECTS: WHAT ARE THEY? Knowledge objects are the elements represented in the “Knowledge Map” and make up the organization's knowledge directory. To facilitate their location on the Map or Knowledge Management System (QMS), they can be classified by processes or knowledge areas. In this way, the objects of knowledge can be classified into: - Explicit and be either in structured supports: manuals, databases, projects, studies, or in unstructured supports: reports, notes, ... - General, in an organization there is generic knowledge that all its members must have, we refer for example to the areas of information technology or quality. - Specific, required to develop specific activities. - Implicit, located in people's minds, Examples of objects of knowledge, which can be found in the formats commonly used in the organization: standards, methodologies, references, good practices, agreements, contracts, offers, proposals, projects, market studies, seminars, debates, prototypes, pilots , templates, manuals, guides, systems, ... Once the different business objects have been identified, it is also important and necessary to establish parameters that define them, such as location, relevance, availability, applications, control and / or status, relationships between them, etc ... 5 PHASES TO MANAGE KNOWLEDGE Implementing a Knowledge Management project is always complicated and it is necessary to approach it in a structured way and with a method that helps to identify, classify and define these objects of knowledge as well as to extract the value that they contribute to the organization. A standard knowledge management project will normally consist of the following phases: - Establish the scope and vision: what is the orientation we want to give to Knowledge management and, above all, the objective we seek. - Orientation to concepts: It deals with knowledge organized in themes, objects and purposes. The simplest are focused on databases, with fields that refer to the knowledge available in the organization, the activity to which they are applied and the people in the organization who have said knowledge " - Process orientation: Provides a representation of key processes and sources of knowledge that must be maintained to support and facilitate their development. This system is conceived as a whole composed of a set of related elements: objective, processes, activities, actors, technology, organizational structure, business and regulatory rules, events, etc. - Skills Orientation: they identify the skills of the staff and the organization and the associated sources of knowledge. A matrix is usually constructed that associates the relevant knowledge and the positions of the organization, differentiating in each position the level of mastery that must be had for each knowledge. 
- Define the Knowledge Management System coordinates: identify needs and expectations, identify and select the participants, the resources to use and the dynamics to follow.
- Identify the knowledge that provides more value to the organization,
- Elements that require an intensive use of knowledge (strategy, processes, people, regulatory frameworks, stakeholders, etc.),
- Value the objects of knowledge,
- Obtain the knowledge gap,
- Identify the knowledge to develop, locate the objects of knowledge and find the people who have it.
- Structure, visualize and socialize knowledge:
- Choose the type of map and its format, with its syntax expressed through previously agreed forms and symbols,
- Establish categories and hierarchies of knowledge,
- Identify the connections between the objects of knowledge,
- Create the visual network system,
- Approve the map.
- Establish communication, training and publication/presentation mechanisms.
- Analyze and measure the identified objects of knowledge.
Once the approved knowledge map is available, criteria and measurement indicators must be established to show whether the organization:
- Loses knowledge (leaks),
- Maintains the same level (the measures taken add little value), or
- Gains knowledge (it is on the right track),
both within the organization and in the value that knowledge contributes across all axes: Strategic - Operational - Human.
In summary, Knowledge Management in organizations is a fundamental issue that should contemplate the alignment of the company in all its dimensions. In this sense, organizations should consider the scope and vision at least as a combination of two orientations:
- Process Orientation, based on the design of a Business Architecture, with which the identification and classification of the explicit, tacit and general knowledge of the organization can be achieved very effectively, fully defined, classified and interrelated,
- Orientation to competencies, where the objective focuses on identifying the knowledge needed by the people of the organization, taking into account the implicit knowledge or experience of each collaborator.
It would then only be necessary to interrelate the two scenarios to achieve a truly knowledge-based organization.
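For organizations that keep their knowledge directory in software, here is one possible, purely illustrative way to represent knowledge objects and their relations on a knowledge map; the fields and example values are invented for the sketch, and each organization will model these differently.

```python
# Minimal illustration of knowledge objects and their relationships on a
# knowledge map. The fields and example values are invented for this sketch.
from dataclasses import dataclass, field

@dataclass
class KnowledgeObject:
    name: str
    kind: str                 # e.g. "explicit", "implicit", "general", "specific"
    location: str             # where it lives: repository, system, or person
    owner: str                # reference source or expert
    applications: list = field(default_factory=list)
    related_to: list = field(default_factory=list)  # names of dependent objects

onboarding_manual = KnowledgeObject(
    name="Customer onboarding manual",
    kind="explicit",
    location="Document management system",
    owner="Operations team",
    applications=["Onboarding process", "Training"],
)

onboarding_expertise = KnowledgeObject(
    name="Onboarding field experience",
    kind="implicit",
    location="Senior account managers",
    owner="Sales department",
    related_to=["Customer onboarding manual"],
)

# A knowledge map is then simply a directory of these objects and their links.
knowledge_map = {obj.name: obj for obj in (onboarding_manual, onboarding_expertise)}
print(knowledge_map["Onboarding field experience"].related_to)
```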
<urn:uuid:60614fdd-a39d-48f2-a213-f9696561b387>
CC-MAIN-2022-40
https://albatian.com/en/blog-ingles/gestion-del-conocimiento-como-estrategia-empresarial/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00027.warc.gz
en
0.934481
1,707
2.796875
3
What is Container Image?
A container image is a static file with executable code that can create a container on a computing system. A container image is immutable, meaning it cannot be changed, and can be deployed consistently in any environment. It is a core component of a containerized architecture.
Container images include everything a container needs to run: the container engine such as Docker or CoreOS, system libraries, utilities, configuration settings, and the specific workloads that should run on the container. The image shares the operating system kernel of the host, so it does not need to include a full operating system.
A container image is composed of layers, added on to a parent image (also known as a base image). Layers make it possible to reuse components and configurations across images. Constructing layers in an optimal manner can help reduce container size and improve performance.
In this article, you will learn:
Docker Image Architecture
Docker is the world's most popular container engine, so we will focus our discussion of container image architecture on Docker. A Docker image is a collection of files, including binaries, source code and other dependencies, needed to deploy a container environment. In Docker, there are two ways to create an image:
- Dockerfile – Docker provides a simple, human-readable configuration file that specifies what a Docker image should contain.
- Create an image from an existing container – you can run a container from an existing image, modify the container environment, and save the result as a new image.
What is the Difference Between Docker Containers and Images?
A Docker container image describes a container environment. A Docker container is an instance of that environment, running on Docker Engine. You can run multiple containers from the same image, and all of them will contain the same software and configuration, as specified in the image.
Docker Images and Layers
When you define a Docker image, you can use one or more layers, each of which includes system libraries, dependencies and files needed for the container environment. Image layers can be reused for different projects. To save time, most Docker images start from a parent image. For example, the MySQL image on Docker Hub is built from a Dockerfile and can be used to create containers running the MySQL database. On top of this parent image, you can add layers that include additional software or specific configurations.
When a container runs, Docker adds a readable/writable top layer over the static image layers. This top layer is used by the container to modify files during runtime, and can also be used to customize the container. This way, multiple containers created from the same image can have different data. There are two ways to view layers added to the base image:
- the /var/lib/docker/aufs/diff directory on the Docker host
- the Docker CLI history command
Parent and Base Images
There is a subtle technical difference between parent and base images:
- A base image is an empty container image, which allows advanced users to create an image from scratch.
- A parent image is a pre-configured image that provides some basic functionality, such as a stripped-down Linux system, a database such as MySQL or PostgreSQL, or a content management system such as WordPress.
However, in the container community, the terms "base image" and "parent image" are often used interchangeably. 
There is a large number of ready-made parent images available on Docker Hub, and on many other public container repositories. You can also use your own images as a parent for new images. Each Docker image comes with a file called a manifest. This is a JSON file that describes the image and provides metadata such as tags, a digital signature to verify the origin of the image, and documentation. Docker Image Security Best Practices Container images play a crucial role in container security. Any container created from an image inherits all its characteristics—including security vulnerabilities, misconfigurations, or even malware. Here are a few best practices that can help you ensure you only use secure, verified images in your container projects: - Prefer minimal base images—many Docker images use a fully installed operating system distribution as their underlying image. If you don’t need general system libraries, avoid using base images that install an entire operating system, or other components that are not essential for your project, to limit the attack surface. - Least privileged user—Dockerfiles must always specify a USER, otherwise they will default to running the container as root on the host machine. You should also avoid running applications on the container with root privileges. Running as root can have severe security consequences, because attackers compromising the container can gain control over the entire host. - Sign and verify images—you must make sure the image you are pulling to create your container is really the image you selected from a trusted publisher, or created yourself. By using only signed images, you can mitigate tampering with the image over the wire (a man in the middle attack), or attackers pushing compromised images to a trusted repository. - Fix open source vulnerabilities—whenever you use a parent image in production, you need to be able to trust all the components it deploys. Automatically scan images as part of your build process, to ensure they do not contain vulnerabilities, security misconfigurations, or backdoors. Keep in mind new vulnerabilities may be introduced over time, even in images that were originally verified as secure. Related content: read our guide to Docker security best practices › Container Image Q&A What is Docker Hub? Docker Hub is the world’s largest container image registry. You can use it to access publicly available Docker images, or store your own images. There are many other tools you can use to manage a container image repository, including: - Amazon Elastic Container Registry (ECR) - Azure Container Registry (ACR) - Google Container Registry - JFrog Artifactory - Red Hat Quay What is a Docker Image Security Scan? Docker image security scanning lets you find security vulnerabilities in Docker image files. Image scanning works by analyzing packages and dependencies defined in a container image, and checking each of them for known security vulnerabilities. Some container image registries provide built-in image scanning tools.
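As a conceptual recap of the layering model described earlier, here is a toy sketch in Python. It is not how Docker stores layers internally; it only illustrates that containers built from the same image share the read-only layers and differ only in their own writable top layer.

```python
# Conceptual toy of the image-layer model: NOT Docker's internal storage.
# It only shows that containers built from one image share the read-only
# layers and keep their own writable top layer.
import hashlib

def layer_id(content: str) -> str:
    """Content-address a layer, as a stand-in for a real layer digest."""
    return hashlib.sha256(content.encode()).hexdigest()[:12]

# An image is an ordered, read-only stack of layers (parent image + additions).
image_layers = [
    layer_id("debian base filesystem"),
    layer_id("apt-get install mysql-server"),
    layer_id("copy custom my.cnf"),
]

class Container:
    def __init__(self, name: str, image: list):
        self.name = name
        self.image = image      # shared, read-only layers
        self.top_layer = {}     # per-container writable layer

    def write_file(self, path: str, data: str):
        self.top_layer[path] = data  # changes never touch the image layers

a = Container("db-a", image_layers)
b = Container("db-b", image_layers)
a.write_file("/var/lib/mysql/ibdata1", "data for container a")

print(a.image is b.image)   # True: both containers share the image layers
print(a.top_layer)          # only container a has this file
print(b.top_layer)          # {}
```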
<urn:uuid:d4c64ba2-8df4-479e-a2b4-bdbe2b398b39>
CC-MAIN-2022-40
https://www.aquasec.com/cloud-native-academy/container-security/container-images/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00027.warc.gz
en
0.890575
1,304
2.890625
3
This week, researchers demonstrated two interesting ways to extract data from machines that are physically isolated from any networks and from potentially dangerous peripherals. The GAIROSCOPE method uses the gyroscope of a nearby smartphone for this purpose, and the ETHERLED method uses the LEDs on network cards. Like other attacks on isolated machines, GAIROSCOPE and ETHERLED rely on the attacker's ability to compromise the target environment in advance and install malware on the machine, for example using infected USB drives, a watering-hole attack, or a supply-chain compromise. Although this is not an easy task, the researchers set that problem aside and focus entirely on the process of extracting information. Let me remind you that we also talked about the fact that specialists from Ben-Gurion University presented a new type of cyber-biological attack that can bring biological warfare to a new level. Dr. Mordechai Guri, an Israeli information security specialist and the head of the R&D department at the Cybersecurity Research Center at Ben-Gurion University, developed and demonstrated the ETHERLED data exfiltration method. ETHERLED encodes data as LED blinks resembling Morse code, which can then be decoded by an attacker. Of course, to intercept such signals you need a camera with a direct line of sight to the LED indicators on the network card of the target computer. It may seem surprising that a machine isolated from any networks even has a network card, but the expert notes that ETHERLED can also use other peripheral devices that have LED indicators. For example, routers, NAS devices, printers and scanners are suitable for the attack. The attack begins with the installation of special malware on the target computer, which contains a modified version of the firmware for the network card. The changes allow the attacker to control the frequency, duration and colors of the blinking LEDs. In addition, Guri's malware is able to change the connection state or modulate the LEDs needed for signaling. The fact is that a malicious driver can use documented and undocumented hardware functions to control the speed of a network connection, as well as to enable or disable the Ethernet interface, which causes the status indicators to blink and change color. Each data frame begins with the sequence "1010", indicating the start of a packet, followed by a 64-bit payload. To extract data through individual LEDs, the expert generated Morse code dots and dashes (from 100 to 300 ms) separated by pauses with the indicator off (from 100 to 700 ms). It is noted that, if desired, the speed can be increased up to ten times. To capture such signals remotely, a potential attacker can use anything from simple smartphone cameras (up to 30 meters), drones (up to 50 meters), hacked webcams (10 meters) and hacked surveillance cameras (30 meters), to telescopes or cameras with powerful telephoto lenses (more than 100 meters). The time required to transfer data via ETHERLED varies from 1 second to 1.5 minutes for passwords; from 2.5 seconds to 4.2 minutes for private Bitcoin keys; and from 42 seconds to an hour for 4096-bit RSA keys. The same specialists from Ben-Gurion University, again led by Dr. Mordechai Guri, demonstrated a second concept for stealing data from isolated machines. 
The GAIROSCOPE attack is based on generating resonant frequencies that can be picked up by the gyroscope of a smartphone located close to the target machine (up to 6 meters away). The researchers say that this time the attack may require infecting the smartphones of employees working in the victim organization with a special application. At the next stage of the attack, the attacker collects confidential data (for example, encryption keys, credentials, and so on) on the isolated machine using the pre-installed malware and converts this information into sound waves that are inaudible to the human ear, which are then transmitted through the speakers or built-in speaker of the infected machine. This transmission is intercepted by an infected smartphone located in close proximity to the isolated machine, or carried in by an insider. The malicious application on the smartphone "listens" to the gyroscope built into the device, after which the data is demodulated, decoded and transmitted to the attacker via the Internet. Interestingly, the malicious application on the receiving smartphone (in this case, a OnePlus 7, a Samsung Galaxy S9 and a Samsung Galaxy S10 were tested) needs access only to the gyroscope, not the microphone, and so does not arouse unnecessary suspicion among users. Experiments have shown that this method can be used to transmit data at a rate of 1-8 bps at distances of 0-600 cm from an isolated machine (up to 800 cm in narrow and enclosed spaces).
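To make the ETHERLED framing described above more concrete, here is a simplified sketch of the kind of on/off keying involved. It does not drive any real hardware and is not the researchers' code; the chosen durations simply fall inside the timing ranges quoted above.

```python
# Simplified sketch of ETHERLED-style signalling: frame the data with a
# "1010" preamble and a 64-bit payload, then turn each bit into LED on/off
# timings. Illustrative only; the durations are example values within the
# ranges cited in the article.
ON_SHORT = 0.1   # seconds the LED stays lit for a 0 bit ("dot")
ON_LONG = 0.3    # seconds the LED stays lit for a 1 bit ("dash")
OFF_GAP = 0.1    # pause with the LED off between bits

def frame_bits(payload: bytes) -> str:
    """Build one frame: '1010' start sequence followed by a 64-bit payload."""
    assert len(payload) == 8, "one frame carries a 64-bit payload"
    return "1010" + "".join(f"{byte:08b}" for byte in payload)

def to_blink_schedule(bits: str):
    """Translate bits into (state, duration) pairs for the LED."""
    schedule = []
    for bit in bits:
        schedule.append(("on", ON_LONG if bit == "1" else ON_SHORT))
        schedule.append(("off", OFF_GAP))
    return schedule

frame = frame_bits(b"secret!!")        # 8 bytes = one 64-bit payload
schedule = to_blink_schedule(frame)
total = sum(duration for _, duration in schedule)
print(frame[:12], "...", f"{total:.1f} s to transmit one frame")
```

A receiver with line of sight would simply time the on/off intervals from video frames and reverse the mapping to recover the bits.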
<urn:uuid:8c0224b9-674b-4591-b5dd-46404038dbb4>
CC-MAIN-2022-40
https://gridinsoft.com/blogs/demonstrate-data-extraction/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00027.warc.gz
en
0.916983
1,089
2.96875
3
What is Cloud Computing? Talking about the cloud has become more common in today’s world. Uploading photos or information to the cloud is now a well known concept. As common as it has become, for much of the general population the power and innovation of the cloud goes unnoticed and untapped. A major innovation and development of the cloud is cloud computing. Microsoft defines cloud computing simply as “the delivery of computing services…over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale.” Cloud computing allows people and businesses to store, transfer and process information through cloud-based services instead of on a single computer or server. It is extremely beneficial as it often reduces costs, increases productivity and speed, can be globally accessed, and is extremely reliable. There are 3 cloud computing deployment options: - Public Cloud - Private Cloud - Hybrid Cloud Depending on the IT infrastructure and business needs, one cloud deployment type may be more practical to be used over another. Nightfall, a company built to provide cloud security helps explain the differences and benefits of each: - Public Cloud There can be a lot of advantages and benefits to using public cloud computing. A public cloud is hosted through a third party like Google or Microsoft Azure, which means that time and resources can be spent on maintaining security and efficiency instead of infrastructure. Public cloud computing options are often less expensive than private cloud computing. - Private Cloud A private cloud is different from a public cloud in the fact that only a single customer or company can access the information stored in the cloud. For businesses using Cloud Computing that deals with sensitive and confidential information, a private cloud suits their needs for maintaining confidentiality. - Hybrid Cloud While setting up a hybrid cloud can be complicated, once set up correctly a hybrid cloud combines the benefits of both public and private cloud computing. “It combines the on-premise datacenter of the private cloud with a public cloud, allowing information to flow between both.” Cloud Computing Services After determining the cloud deployment, a cloud service must also be chosen. There are 3 main cloud computing services: - Infrastructure as a service (IaaS) - Platform as a service (PaaS) - Software as a service (SaaS) Cloud computing services are what makes the cloud so useful for businesses. They provide the infrastructure within the cloud to perform tasks, store information, or run an application. In these courses, you will understand a basic overview of cloud systems, technologies, security, management, and troubleshooting. Ascend’s virtualization course is the only course of its kind that covers all major virtualization technologies. A Microsoft Azure course is being developed and will soon be offered along with our other courses.
<urn:uuid:dd048fbb-bfd1-40ab-bf39-12d6faea2e55>
CC-MAIN-2022-40
https://ascendeducation.com/the-world-of-cloud-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00228.warc.gz
en
0.930115
579
3.171875
3
New Training: Python Operators
In this 6-video skill, CBT Nuggets trainer Ben Finkel covers the four types of operators available in the Python language: Arithmetic, Logical, Relational, and Bitwise. Gain an understanding of what programming languages are and why they work the way they do. Watch this new Programming and Development training.
Learn Programming and Development with one of these courses:
This training includes:
50 minutes of training
You'll learn these topics in this skill:
Combination, String, and Bitwise Operators
What is a Logical Operator?
All programming languages have logical operators. Logical operators are an important way for a program to combine and evaluate multiple conditions. So, what is a logical operator? A logical operator is your 'and' or 'or' keyword. Logical operators are commonly used with comparison operators, and more often than not the combination is used to control the flow of a program.
For instance, let's say that you have two variables, X and Y, and both have a value of 0. You want to check whether both variables have the same value (in this case 0) and, if they do, do something. Let's look at this in pseudo code:
If (X == 0 AND Y == 0) Then doSomeThings…
In this case, doSomeThings will only run if both X and Y equal 0. Likewise, if you only need to do something if either X or Y has a value of 0, but you don't care which one, then you can use the 'or' logical operator:
If (X == 0 OR Y == 0) Then doSomeThings…
In that example, doSomeThings will run if either X or Y equals 0, without requiring both to have that value.
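For readers who want to run the examples, here is the same logic written as actual Python; do_some_things() is just a placeholder name standing in for whatever work the program needs to do.

```python
# Runnable Python version of the two pseudocode examples above.
def do_some_things():
    print("doing some things...")

x, y = 0, 0

# 'and': both conditions must be true.
if x == 0 and y == 0:
    do_some_things()

# 'or': either condition being true is enough.
y = 5
if x == 0 or y == 0:
    do_some_things()

# Logical operators also short-circuit: because x == 0 is already true,
# the right-hand side of the 'or' below is never evaluated.
if not (x == 1) and (x == 0 or 1 / 0):
    print("short-circuit evaluation kept us from dividing by zero")
```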
<urn:uuid:0757fe49-ec10-43a1-9695-a6e16abdc5aa>
CC-MAIN-2022-40
https://www.cbtnuggets.com/blog/new-skills/new-training-python-operators
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00228.warc.gz
en
0.895464
409
4.3125
4
Surge Protection Grounding and Installation Best Practices
For proper operation, install all Surge Protective Devices (SPDs) per the guidelines set forth by the manufacturer. Improperly installed devices will not perform as intended and, consequently, will not protect the equipment. The conductor length between the SPD and the protected equipment should be a minimum of three feet to allow enough time for the SPD to react. The conductors can be greater than 3 feet as long as they are isolated and are not subjected or directly exposed to internally or externally generated transient voltage spikes and/or surges. Always make sure that the field wiring (unprotected wires) and the protected wiring occupy separate conduit feeds. When unprotected and protected wires occupy the same conduit, surge energy can be induced onto the protected wiring and completely bypass the Surge Protective Device (SPD). The use of a grounding bus bar is strongly recommended as a means of terminating SPD ground wires to existing electrical grounding leads. This will ensure a solid mechanical connection of all grounding wires. Twist-on wire connectors, also known as wire nuts, are not recommended for terminating SPD ground wires. They can add resistance by becoming loose and/or corroded over time. In addition, twist-on wire connectors can unnecessarily extend the length of the grounding conductor. This would degrade the performance of the SPD due to the lack of a short, low-impedance ground path. When installing multiple SPDs and terminating to a common ground, a dedicated ground wire run from each individual SPD to a common grounding bus bar is strongly recommended. "Daisy-chaining" multiple SPD ground wires together via the SPD grounding terminals or by using twist-on wire connectors is not recommended, as this increases the resistance and extends the length of the ground path. If the desired grounding/bonding point is farther away than the lead wires from the surge protector to the protected equipment can reach, use a larger-gauge grounding conductor (#6 AWG) from the grounding bus to the desired grounding/bonding point. Make sure that the grounding conductors are short and straight where possible. Look for a low-impedance ground source. While the NEC states 25 ohms or less, IEEE and DITEK require less than 5 ohms. This will give the surge energy a low-impedance dissipation point to shunt the overvoltage energy to, and away from your protected circuit. For referencing ground, you can look to an Electrical Service Ground, Grounded Building Steel, Local Electrical Ground or a Dedicated Ground Rod. Do not reference metallic water pipes. Typically, PVC is used when repairing or replacing pipes; since PVC is non-conductive, the ground reference will be lost. To learn more about grounding and bonding, read our white paper Grounding 101.
<urn:uuid:596914d9-cbce-4cfc-98a2-a95bc9c5a3f4>
CC-MAIN-2022-40
https://www.diteksurgeprotection.com/blog/surge-protection-grounding-and-installation-best-practice
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00228.warc.gz
en
0.904474
563
2.609375
3
The PDPL, Ensuring Data Privacy and Protection Montenegro’s Personal Data Protection Law 79/08 and 70/09, also known as the PDPL for short, is a data protection law that was passed in 2012. As Montenegro is one of a handful of nations in Europe that is not a part of the European Union and as such, does not fall under the jurisdiction of the General Data Protection Regulation or GDPR, the country needed a data protection law that would be comparative to other European data privacy laws. To this extent, the PDPL was largely modeled after the EU’s Data Protection Directive or Directive 95/46/EC, subsequently placing the law in general compliance with the EU’s current GDPR Law. As such, the PDPL puts forth the legal framework that data controllers, processors, and organizations must adhere to at all times when engaging in data processing activities. How are data controllers and processors defined under the PDPL? Under the PDPL, the term data controller is defined to mean “An individual or legal entity who processes personal data on the territory of Montenegro or on the territory outside of Montenegro where, under international law, Montenegrin regulations apply; or is incorporated outside Montenegro or does not have a residence in Montenegro but uses equipment for data processing situated in Montenegro, except if the equipment is used only for transfer of personal data over the territory of Montenegro”. As the PDPL contains no provisions that explicitly state the territorial scope of the law, the term data controller accounts for individuals and legal entities both inside and outside of Montenegro. Alternatively, the term data processor is defined to mean “A public authority, public administration body, self-government, or local administration authority, commercial enterprise, or other legal person, entrepreneur or a natural person, who performs tasks concerning the processing of personal data on behalf of the controller”. In terms of the types of personal data that are covered by the PDPL, the law “applies to automated or non-automated processing of personal data contained or intended to be contained in a filing system”. Moreover, the processing of personal data includes all functions and operations undertaken in regard to personal data, including collection, processing, transmitting, classifying, and deleting. What are the obligations of data controllers and processors under the PDPL? Under the PDPL, data controllers and processors who process the personal data of Montenegrin citizens are required to fulfill the following obligations and responsibilities: - Data controllers and processors are prohibited from processing personal data more than is necessary to achieve the intended purpose for which it was collected. - Data controllers and processors must ensure that all data in their possession is complete, accurate, and regularly updated when needed. - Data controllers and processors are responsible for retaining personal data that would allow for the identification of data subjects for no period longer than is necessary to fulfill the purposes for which it was collected, unless otherwise stated by law. - All data processing activities must be based on the expressed consent of applicable data subjects, or on one of five other alternative grounds set forth by the law. - Data controllers and processors are responsible for erasing personal data that has not been processed in accordance with the law, or at the request of a data subject. 
- In instances where data controllers or processors establish an automated data filing system, said parties are responsible for appointing an individual who is responsible for the protection of personal data, otherwise known as a data protection officer or DPO, provided that said parties employ more than ten staff members who process personal data. - After data processing activities have been completed, data controllers and processors are responsible for either destroying the personal data in their possession, or returning it to the data subjects from whom they collected it. - Data controllers and processors are responsible for implementing all necessary organizational, personnel, and technical measures to ensure the protection of personal data that is in their possession. - Data controllers and processors must create a written agreement between one another, for the purpose of regulating the personal data that data processors process on behalf of data controllers. This written agreement must also outline the obligation of data processors to act in accordance with the direction and instruction of data controllers. What are the rights of Montenegrin citizens under the PDPL? The PDPL provides Montenegrin citizens with the following rights as they relate to data protection and privacy: - The right to be informed. - The right to access. - The right to rectification. - The right to erasure. - The right to restrict the processing of personal data. - The right to object to the processing of personal data. - The right not to be subject to data processing decisions made solely on the basis of automated decision-making. In terms of punishment for violations of the law, the PDPL is enforced by the Agency for Personal Data Protection and Free Access to Information, or the AZLP for short. To this end, the AZLP has the power to impose the following sanctions for non-compliance: - A temporary ban against data controllers or processors who are found to have collected or processed personal data unlawfully. - An order that a data controller or processor delete personal data that has been collected on unlawful grounds. - Monetary fines ranging from €500 to €20,000 ($584 to $23,361) for legal entities, €150 to €6,000 ($175 to $7,007) for entrepreneurs, and €150 to €2,000 ($175 to $2,335) for individuals. Because Montenegro is not part of the EU and therefore does not fall under the jurisdiction of the General Data Protection Regulation, the PDPL serves to protect the privacy and personal data of Montenegrin citizens. However, despite the fact that Montenegro is not a part of the European Union, the EU's data legislation policies throughout the years have undoubtedly played a large role in influencing many of the provisions of the PDPL. In this way, Montenegrin citizens are afforded a level of data privacy similar to that of their European counterparts.
<urn:uuid:089caacb-54da-4751-a2f1-584b7f5a3b9c>
CC-MAIN-2022-40
https://caseguard.com/articles/the-pdpl-ensuring-data-privacy-and-protection/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00228.warc.gz
en
0.925826
1,282
2.546875
3
The National Institute of Standards and Technology (NIST) is planning to use IBM's Watson to evaluate how critical publicly reported computer vulnerabilities are and assign an appropriate severity score. Publicly known information-security vulnerabilities are usually assigned a CVE number to serve as an ID and make it easier for everybody to track them, and a Common Vulnerability Scoring System (CVSS) score to make it easier for companies to prioritize responses and resources according to the threat. CVSS scores range from 0.0 to 10.0 and are calculated by taking into consideration things like:
- The complexity of an attack that can result in the exploitation of a vulnerability
- Whether the attack requires user interaction
- Whether the effect of the attack on the confidentiality, integrity and availability of the target system and the data it manipulates is none, low, or high, and so on.
CVSS scores are still assigned by NIST's human analysts, and the process is time-consuming. According to Matthew Scholl, chief of the National Institute of Standards and Technology's computer security division, it takes analysts 5 to 10 minutes to calculate a score for simple vulnerabilities and far longer for new, unusual and complex ones.
IBM Watson calculates CVSS scores
To free up their analysts' time and allow them to concentrate on more important matters, NIST is testing whether an artificial intelligence system such as Watson can take over for them. So far, the results are encouraging. Scholl told NextGov that, for a while now, Watson has been tasked with poring through historical reports, data and CVSS scores from the institute's human analysts and assigning its own scores. The test revealed that Watson does extremely well when it comes to common vulnerabilities but has trouble assigning an appropriate score to novel and/or complex ones. Luckily, it also reports a confidence percentage for each CVSS score, and if that percentage is lower than a predefined threshold (in the high 90s), a human analyst is tasked to take a look at it and come up with a suitable score. NIST plans to use Watson to assign risk scores to most publicly reported vulnerabilities by October 2019, if the program is securely integrated with other NIST systems and is able to handle the workload. They are also looking into using the technology in other NIST areas.
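The triage workflow described above, publish automatically when the model is confident and route to a human otherwise, can be sketched in a few lines of Python. This is only an illustration, not NIST's actual system; the 0.97 cutoff and the record fields are assumptions, since the article only says the threshold sits in the "high 90s".

# Hedged sketch of a confidence-gated scoring workflow.
from dataclasses import dataclass

@dataclass
class ModelOutput:
    cve_id: str
    cvss_score: float   # 0.0 - 10.0
    confidence: float   # 0.0 - 1.0, reported by the model

CONFIDENCE_THRESHOLD = 0.97  # assumption: "high 90s" per the article

def triage(result: ModelOutput) -> str:
    """Decide whether a machine-assigned CVSS score can be published as-is."""
    if not 0.0 <= result.cvss_score <= 10.0:
        return "reject: score out of range"
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-publish {result.cve_id} with CVSS {result.cvss_score:.1f}"
    return f"route {result.cve_id} to a human analyst for manual scoring"

print(triage(ModelOutput("CVE-2018-0001", 7.5, 0.99)))   # common pattern, high confidence
print(triage(ModelOutput("CVE-2018-0002", 9.8, 0.71)))   # novel or complex, low confidence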
<urn:uuid:77155e1f-a246-451d-bb9d-0fbe1d9dbf4a>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2018/11/05/ai-assigns-cvss-scores/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00228.warc.gz
en
0.940153
484
2.578125
3
During 2020's watershed moment for decentralized finance, the non-fungible token (NFT) industry erupted. NFTs are digital representations of physical assets, such as art or real estate. These tokens are unique due to the inability to replicate them. The NFT sector has seen expansive growth due to the relative efficiency of buying and selling tokens, as well as the reduced probability of fraud. According to research from analytics platform DappRadar, NFT trading volume increased to $10.67 billion in the third quarter of 2021. This is a 704% rise over the previous quarter. NFTs are digital assets that represent real-world items. NFTs are non-fungible, meaning that they cannot be replicated. For example, if someone lends you $20, you don't need to repay the exact $20 bill to complete the loan transaction. This is because that money is replaceable by any other $20 bill that holds the same value. This also applies to the crypto sphere. For instance, Bitcoin has a capped supply of 21 million identical tokens, any of which can be traded equally for another. Conversely, an NFT for a specific work of art can only represent that particular item and is not equally tradable. For instance, if you buy a one-of-a-kind artwork, that item is irreplaceable and unique – it cannot be equally traded for another work of art. The NFT domain is still relatively young. In principle, NFTs may be used for anything that is unique and requires verifiable ownership. Here are some current examples of NFTs: NFTs first caught the attention of the mainstream crypto community with the launch of CryptoPunks. A CryptoPunk is a 24×24 pixel, 8-bit style avatar that is offered as an NFT. A pair of developers utilized software to create 10,000 unique avatars on the Ethereum platform. Ethereum is an open software platform that acts as a public ledger for validating and recording crypto transactions on a blockchain network. Users of the network may build, post, sell, and utilize apps on the platform using Ether, the network's cryptocurrency. Experts refer to the network's decentralized apps as "dApps." The popularity of CryptoPunks was in large part due to their novelty in the NFT realm. No two avatars are alike, and each avatar can officially be owned by one person. The limited quantity and single ownership of CryptoPunks increased the collection's popularity and desirability. Additionally, some avatars are rarer than others, increasing their value. The cheapest of these avatars starts at 100 ether, or roughly $400,000. However, several CryptoPunks have sold for upwards of millions of dollars. Since then, numerous NFT platforms and innovative application cases have emerged in various industries. As a culture, we tend to value and gather things. So, the fact that NFTs have been used in a variety of contexts isn't unexpected. In 2017, the collectible game CryptoKitties launched on Ethereum. Unlike traditional games, CryptoKitties allows in-game assets to be tokenized, providing true ownership. As a result, players of blockchain-based games can collect and exchange weapons, vehicles, and characters as NFTs. Aside from art, real-world collectibles can also be tokenized, stored, and transferred while retaining proof of authenticity and ownership. Designers and filmmakers, for example, can protect their work from copyright infringement by registering it, and can even receive ongoing royalties. When purchased, NFTs are tracked through the blockchain, allowing royalties to flow back to the creator each time the digital asset is sold.
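To make the ownership-and-royalty mechanics concrete, here is a deliberately simplified Python model of a token registry. Real NFTs live in smart contracts on a blockchain (for example, ERC-721 contracts written in Solidity); this sketch only illustrates unique token IDs, single ownership, and a creator royalty on each sale, and all names and percentages are invented.

# Toy model of non-fungibility and creator royalties; not a real smart contract.
class TokenRegistry:
    def __init__(self, royalty_rate: float = 0.05):
        self.owners = {}        # token_id -> current owner (single owner per token)
        self.creators = {}      # token_id -> original creator
        self.royalty_rate = royalty_rate
        self._next_id = 1

    def mint(self, creator: str) -> int:
        """Create a new, unique token; no two token IDs are ever the same."""
        token_id = self._next_id
        self._next_id += 1
        self.owners[token_id] = creator
        self.creators[token_id] = creator
        return token_id

    def sell(self, token_id: int, seller: str, buyer: str, price: float) -> float:
        """Transfer ownership and compute the creator royalty on this sale."""
        if self.owners.get(token_id) != seller:
            raise ValueError("only the current owner can sell this token")
        royalty = price * self.royalty_rate
        self.owners[token_id] = buyer
        return royalty  # amount that would flow back to self.creators[token_id]

registry = TokenRegistry()
punk = registry.mint(creator="artist.eth")
royalty = registry.sell(punk, seller="artist.eth", buyer="collector.eth", price=100.0)
print(f"token {punk} now owned by collector.eth; creator royalty on this sale: {royalty} ETH")

In a real marketplace this logic runs on-chain, which is what lets the royalty keep flowing to the creator on every resale.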
As a result, growth in the NFT space has the potential to open a slew of new revenue streams. In addition to these examples from the gaming, art, and collectibles industries, proponents of NFT technology have cited a variety of other applications, ranging from introducing people to finance to combating fake news. For example, the Italian blockchain firm LKS has created an NFT that can aid in limiting fake news and protecting digital content copyrights. The NFT is designed to include the identity of the publisher, the publishing time, and the original link. As with other blockchain records, the information contained within cannot be edited or changed. As a result, it is easy to detect data that has been forged or altered from the original. For those who are unfamiliar with blockchain or cryptocurrency, the world of NFTs can indeed seem very intricate. Both the public and the mainstream media have many misapprehensions about NFTs. Building decentralized apps for NFTs can be difficult, and it can be hard to emulate in the digital realm the same "feeling" as standing before a physical commodity. However, some of the earliest enterprise uses of NFTs involved organizations tokenizing and monitoring supply chain products to increase security and reduce fraud. NFTs can significantly improve inventory management, software licensing, and even identity management in the enterprise, such as corporate entities using NFTs to record and confirm credentials. It will be intriguing to see how NFTs develop and, as their user base expands, how different industries will incorporate and create new uses for them. NFTs have already had a massive effect, unleashing new digital economies, increasing the liquidity of traditionally illiquid items, and altering our perception of asset ownership.
<urn:uuid:c18326cf-0fe4-4a5b-9cff-1dadce31042c>
CC-MAIN-2022-40
https://plat.ai/blog/how-nonfungible-tokens-are-introducing-a-new-digital-economy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00228.warc.gz
en
0.946126
1,125
2.515625
3
What is a Data Breach? Data Breach Definition A data breach definition is an event that results in confidential, private, protected, or sensitive information being exposed to a person not authorized to access it. It can be the consequence of an accidental event or intentional action to steal information from an individual or organization. For example, an employee could accidentally expose sensitive information or they could purposely steal company data and share it with—or sell it to—a third party. Alternatively, a hacker might steal information from a corporate database that contains sensitive information. Whatever the root cause of a data breach, the stolen information can help cyber criminals make a profit by selling the data or using it as part of a wider attack. A data breach typically includes the loss or theft of information such as bank account details, credit card numbers, personal health data, and login credentials for email accounts and social networking sites. An information breach can have highly damaging effects on businesses, not only through financial losses but also the reputation damage it causes with customers, clients, and employees. On top of that, organizations may also be subjected to fines and legal implications from increasingly stringent data and privacy regulations like the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). How Does a Data Breach Happen? A data breach can be caused by an outside attacker, who targets an organization or several organizations for specific types of data, or by people within an organization. Hackers select specific individuals with targeted cyberattacks. Data breaches can be the result of a deliberate attack, an unintentional error or oversight by an employee, or flaws and vulnerabilities in an organization’s infrastructure. Loss or Theft A common form of security incident is the loss of devices or unauthorized access to credentials, resulting in cyber criminals obtaining confidential information. For example, a lost laptop, mobile phone, or external hard drive that is unlocked or unencrypted can easily lead to information being stolen if it ends up in the wrong hands. Even a locked device could be hacked into by a sophisticated attacker. An insider attack is a data breach caused by an employee leaking information to a third party. Also known as a malicious insider, this individual will access or steal data with the intent of causing harm to the organization or another individual within the company. For example, the malicious insider could have access to the company’s financial details or a client list, which they could pass on or sell to a competitor. Alternatively, the malicious insider could access information about high-risk individuals within the organization—or even password details—and sell them to a hacker for a profit. Targeted data breach attacks see a cyber criminal or a group of attackers target specific individuals or organizations to obtain confidential information. Attackers use various methods to gain unauthorized access to corporate networks and systems or to steal user login credentials. Common types of targeted cyberattacks that can result in a data breach include: - Phishing attack: A phishing attack involves cyber criminals using social engineering to steal information like credit card details, login credentials, and user data. 
Attacks typically masquerade as an email or Short Message Service (SMS) from a trusted individual to dupe the victim into opening a malicious link or visiting a spoofed website. - Malware attack: A malware attack occurs when an attacker tricks a target into opening a malicious attachment, link, or website. The attacker will then inject malware onto the user’s device to steal their credentials. - Vulnerability exploits: Cyber criminals routinely search for potential vulnerabilities in organizations’ hardware or software before the vulnerability becomes known to the company. This form of attack, known as a zero-day attack, occurs when a hacker creates an exploit then launches it before the organization is able to patch the vulnerability. - Denial-of-service (DoS) attack: A DoS attack is an intentional attack that aims to overload an organization’s network or website with fake requests. This will prevent legitimate users from gaining access, crashing the system, or damaging it. Attackers can also use multiple infected machines, known as a botnet, to launch distributed denial-of-service (DDoS) attacks. What Can Attackers Do with Stolen Data? Attackers tend to target high-value data such as corporate data or personally identifiable information (PII), which they can sell for financial gain or cause harm to the individual or organization. As attackers become increasingly sophisticated, their methods become meticulously planned to unearth vulnerabilities and identify individuals who are susceptible to an attack. Once they gain access to data, the effects can be hugely damaging. A data breach can lead to organizations not only losing their data, which could be sensitive financial information or corporate secrets, but they can also suffer fines, financial loss, and reputational damage, which are often irreparable. An attack on a government agency could leave confidential and highly sensitive information, such as military operations, national infrastructure details, and political dealings, exposed to foreign agencies, which could threaten the government and its citizens. Individuals who suffer a breach could lose their personal data, such as banking details, health information, or Social Security number. Armed with this information, a cyber criminal could steal the individual’s identity, gain access to their social accounts, ruin their credit rating, spend money on their cards, and even create new identities for future attacks. Some of the biggest data compromise events in history had long-lasting effects on the organizations that suffered them. These data breach examples include: In 2016, internet giant Yahoo revealed that it had suffered two data breaches in 2013 and 2014. The attacks, which affected up to 1.5 billion Yahoo accounts, were allegedly caused by state-sponsored hackers who stole personal information, such as email addresses, names, and unencrypted security questions and answers. A data breach against financial firm Equifax between May and June 2017 affected more than 153 million people in Canada, the U.K., and the U.S. It exposed customers’ personal data, including birth dates, driver’s license numbers, names, and Social Security numbers, as well as around 200,000 credit card numbers. The breach was caused by a third-party software vulnerability that was patched but not updated on Equifax’s servers. In 2018, Twitter urged its 330 million users to change and update their passwords after a bug exposed them. 
This was the result of a problem with the hashing process, which Twitter uses to encrypt its users’ passwords. The social networking site claimed it found and fixed the bug, but this is a good example of potential vulnerability exploits. Twitter also suffered a potential breach in May 2020, which could have affected businesses using its advertising and analytics platforms. An issue with its cache saw Twitter admit it was “possible” that some users’ email addresses, phone numbers, and the final four digits of their credit card numbers could have been accessed. First American Financial Corporation In May 2019, insurance firm First American Financial suffered an attack that saw more than 885 million sensitive documents exposed. The attack resulted in files containing bank account numbers and statements, mortgage records, photos of driver’s licenses, Social Security numbers, tax documents, and wire transfer receipts dating back to 2003 digitized and made available online. The attack is believed to have been caused by an insecure direct object reference (IDOR), a website design error, which makes a link available to a specific individual. Unfortunately, that link became publicly available, meaning anyone could view the documents. In September 2019, a server containing phone numbers linked to more than 419 million Facebook users’ account IDs was exposed. The server was not password-protected, which meant that anyone could find, access, and search the database. Three months later, a database containing roughly 300 million Facebook users’ names, phone numbers, and user IDs was exposed by hackers and left unprotected on the dark web for around two weeks. How to Prevent a Data Breach? Data breach prevention is reliant on an organization having the right, up-to-date security tools and technologies in place. But it is also imperative for all employees within the organization to take a comprehensive approach to cybersecurity and know how to handle a data breach. This means understanding the security threats they face and how to spot the telltale signs of a potential cyberattack. It is important to remember that any organization’s cybersecurity strategy is only as strong as its weakest link. It is therefore vital for all employees to follow cybersecurity best practices and not take any actions that put them or their organization at risk of a data breach. Organizations and employees must implement and follow best practices that support a data breach prevention strategy. These include: - Use strong passwords: The most common cause of data breaches continues to be weak passwords, which enable attackers to steal user credentials and give them access to corporate networks. Furthermore, people often reuse or recycle passwords across multiple accounts, which means attackers can launch brute-force attacks to hack into additional accounts. As such, use strong passwords that make it harder for cyber criminals to steal credentials. Also, consider using a password manager. - Use multi-factor authentication (MFA): Due to the inherent weakness of passwords, users and organizations should never rely on passwords alone. MFA forces users to prove their identity in addition to entering their username and password. This increases the likelihood that they are who they say they are, which can prevent a hacker from gaining unauthorized access to accounts and corporate systems even if they manage to steal the user’s password. 
- Keep software up to date: Always use the latest version of a software system to prevent potential vulnerability exploits. Ensure that automatic software updates are switched on whenever possible, and always update and patch software when prompted to do so. - Use secure URLs: Users should only open Uniform Resource Locators (URLs) or web addresses that are secure. These will typically be URLs that begin with Hypertext Transfer Protocol Secure (HTTPS). It is also important to only visit trusted URLs. A good rule of thumb is to never click a link in an email message, but to type the address of the trusted site directly into the browser instead. - Educate and train employees: Organizations must educate employees on the risks they face online and advise them on the common types of cyberattacks and how to spot a potential threat. They also should provide regular training courses and top-up sessions to ensure employees always have cybersecurity at the top of their minds and that they are aware of the latest threats. - Create a response plan: With cyber criminals increasing in sophistication and cyberattacks becoming more prevalent, businesses must have a response plan in case the worst happens. They must know who is responsible for reporting the attack to the appropriate authorities and then have a clear plan in place for the steps that need to follow. This must include discovering what data was stolen and what kind of data it was, changing and strengthening passwords, and monitoring systems and networks for malicious activity. How Fortinet Can Help? The Fortinet FortiNAC solution enables organizations to gain total control and visibility of everything connected to their network. The network access control (NAC) tool provides device and user control and strengthens network infrastructure security. FortiNAC protects against Internet-of-Things (IoT) threats and orchestrates automatic responses across the network. With it in place, organizations can gain visibility of every device and user that joins the network, control where devices can go, and react in a matter of seconds to events that would normally take days to address. This technology is crucial to establishing a zero-trust approach, under which trust is no longer implicit for applications, devices, or users attempting to access a network. This helps IT teams easily understand which users and devices are accessing the organization's network.
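The "strong passwords" guidance above and the Twitter hashing incident earlier in this piece both come down to how credentials are stored and checked. The sketch below shows one common approach, salted and iterated hashing with Python's standard library, so that a stolen database does not directly expose passwords. The iteration count and field layout are illustrative choices, not a description of how any particular vendor does it.

# Minimal sketch of salted password hashing and verification (standard library only).
import hashlib
import hmac
import secrets

ITERATIONS = 200_000  # illustrative; real deployments tune this to their hardware

def hash_password(password):
    """Return (salt, derived_key); only these values are stored, never the password."""
    salt = secrets.token_bytes(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password, salt, stored_key):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(candidate, stored_key)

salt, key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
print(verify_password("password123", salt, key))                   # False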
<urn:uuid:7c2bcd19-928f-498c-8350-20792c6a62ca>
CC-MAIN-2022-40
https://www.fortinet.com/cn/resources/cyberglossary/data-breach
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00228.warc.gz
en
0.943875
2,422
3.53125
4
Stanford Students Work to Demystify Quantum Computing for High Schoolers (StanfordDaily) The Stanford Quantum Computing Association (SQCA) aims to foster community in the quantum computing field and produce new solutions to problems through paper-reading groups, hackathons and continuous projects. Martinez-Piedra interned at IBM in the summer of 2019. There, he met fellow interns then-Ph.D. candidate Kanav Setia of Dartmouth College and then-undergraduate Jason Necaise of the Massachusetts Institute of Technology. That summer, the first of their many ideas surfaced: a company that writes software for people working on quantum chemistry. After examining the landscape of how many people were using such software, they realized that instead of advancing quantum chemistry alone, they could have an effect on the larger industry. "One of the key things in order for the field to progress was to bring more people in," said Setia. "In order to do that you had to lower the steep learning curve associated with quantum computing." Necaise and Setia went on to found and pilot qBraid, a quantum computing platform for learning, programming and running quantum algorithms, with courses for experience levels ranging from high school to graduate school. This year, the SQCA is also organizing boot camps, campus events and outreach efforts to increase accessibility to the field of quantum computing. There are three courses currently listed on the qBraid website: QuBes, aimed at juniors and seniors in high school; QuInts, for those in undergraduate programs or professionals with some mathematical background; and the upcoming QuPro, a hands-on collection of tutorials for graduate students and researchers in the field.
<urn:uuid:16723965-3da7-4e81-bcd6-7834d36c1806>
CC-MAIN-2022-40
https://www.insidequantumtechnology.com/news-archive/stanford-students-work-to-demystify-quantum-computing-for-high-schoolers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00228.warc.gz
en
0.940275
348
2.546875
3
In today's "Acronym Soup" blog post, we are back on the BIOS side of things with this question: what is BDS? First, for the answer – which many might have already guessed since it has to do with BIOS: BDS is short for Boot Device Selection. Of course, once you know the meaning of this acronym, it is probably rather easy to imagine the function of BDS as well. In this case, BDS is exactly what it sounds like – a BIOS function that allows a user to select and prioritize the order in which the system should attempt to boot from different media devices. These could be an internal Hard Disk Drive (HDD), external HDD, CD/DVD drive, external USB drive, SD card or a similar media device, chosen in accordance with how the user wants the system to operate. This process happens during platform initialization, soon after the power button is pushed. During BDS, the BIOS will move through a prioritized list of devices identified in the BIOS configuration for instructions regarding which device the system should boot from. Some manufacturers also give end users the opportunity to make boot device selections on the fly, by using function or hot keys to toggle through a list of different potential devices as the system initializes. When OEM and ODM manufacturers license our Aptio® V UEFI firmware, they often make modifications to BDS options in order to add value and differentiate their systems from those of other firms. They may also add or customize BDS capabilities based on the intended audience or use of specific systems. As the leading independent BIOS vendor (IBV) in the industry for many years running, we know quite well that different OEM and ODM system manufacturers often implement BDS in different ways in the BIOS configuration menu. Therefore, we must always stress to end users the importance of reviewing the documentation that came with the system, or the support section of the manufacturer's website, for instructions and assistance with BIOS configuration changes. Always remember that making changes to BIOS settings can cause a system malfunction or even disable a system if done incorrectly, so always proceed with caution. We recommend keeping the original manufacturer's BIOS settings as the default if you are unsure about changing BDS or other BIOS settings, and making backups of important system files and data before you make any changes. Thanks for reading today's Tech Blog! We hope you found this bit of Acronym Soup refreshing and enjoyable. Feel free to drop us a line via social media or our Contact Us form and let us know what you thought of today's post, and also what you might like to see in future posts!
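Conceptually, the BDS phase is just a prioritized loop over candidate boot devices. Real BDS is implemented inside UEFI firmware (in C, per the UEFI/PI specifications); the Python below is only a toy model of that walk-through-the-list behavior, and the device names and probe results are made up.

# Toy model of Boot Device Selection: try each configured device in priority order.
boot_order = ["USB drive", "CD/DVD drive", "internal HDD", "network (PXE)"]

# Pretend probe results: which devices currently hold bootable media.
bootable = {"USB drive": False, "CD/DVD drive": False, "internal HDD": True, "network (PXE)": True}

def select_boot_device(order, probe):
    for device in order:
        print(f"checking {device}...")
        if probe.get(device, False):
            return device
    return None  # nothing bootable; real firmware would show an error or a setup prompt

chosen = select_boot_device(boot_order, bootable)
print(f"booting from: {chosen}" if chosen else "no bootable device found")

The "on the fly" hot-key selection some manufacturers offer simply amounts to letting the user reorder this list for a single boot.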
<urn:uuid:0b13d288-da77-4f41-8df0-ef0fdb437e38>
CC-MAIN-2022-40
https://www.ami.com/blog/2019/03/27/acronym-soup-what-is-bds/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00228.warc.gz
en
0.935645
546
2.546875
3
In this Help Net Security video, André Ferraz, CEO at Incognia, talks about the impact of location-based fraud, which is more prevalent than one would imagine and affects different industries in many different ways. Fraudsters simply don't want to reveal their physical location, since this would make them susceptible to identification, so they spoof location signals. There are many types of location-based fraud, one example being fraud farms. A fraud farm is a physical location that hosts a significant amount of fraudulent activity, and it can target different industries for different use cases. The second example is a bot farm. A bot farm refers to multiple devices run at the same time from the same location; in many cases, bot farms are used to spread disinformation campaigns on social media. The third example is related to food delivery and ride-hailing businesses, where there are examples of fake rides. Another example is from the gaming industry, where location spoofing can be used to hide gamers' locations.
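One common way defenders catch spoofed locations is an "impossible travel" check: if two reported positions from the same account imply a speed no real journey could achieve, the location data is suspect. The sketch below is a generic illustration of that idea, not Incognia's method; the 900 km/h threshold is an arbitrary assumption, roughly airliner speed.

# Hedged sketch of an "impossible travel" check on reported device locations.
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

MAX_PLAUSIBLE_KMH = 900  # assumption: faster than a commercial flight is suspect

def looks_spoofed(loc_a, loc_b):
    """Each location is (timestamp, latitude, longitude) reported by the same account."""
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([loc_a, loc_b])
    hours = (t2 - t1).total_seconds() / 3600
    if hours == 0:
        return True
    speed = haversine_km(lat1, lon1, lat2, lon2) / hours
    return speed > MAX_PLAUSIBLE_KMH

a = (datetime(2022, 9, 1, 12, 0), 40.7128, -74.0060)   # New York
b = (datetime(2022, 9, 1, 13, 0), 51.5074, -0.1278)    # London, one hour later
print(looks_spoofed(a, b))  # True: several thousand kilometers in an hour is not a real journey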
<urn:uuid:350e32f1-33d3-4324-8e2c-1482301ba7af>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2022/09/19/location-based-fraud-video/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00228.warc.gz
en
0.939686
202
2.59375
3
OT cybersecurity refers to the set of procedures and best practices designed to mitigate and prevent the exploitation of cyber-physical systems and industrial control systems (ICS). Industrial control systems are digital networks employed across a wide variety of sectors and services to automate production processes. From energy grids to manufacturing plants, industrial control systems are used across a wide variety of business and critical infrastructure sectors. The importance of ICS security is a function of the unique risk associated with the operation of operational technology. Plant and factory level employees are often exposed to safety risks. Global supply networks rely on the consistent availability of industrial control systems within ports and other shipping nodes. The public requires critical infrastructure, such as water and energy systems, to operate around the clock. Any disruption across this wide network has far reaching consequences, making the availability and resilience of operational technology key for public wellbeing. Information and Operational Technology serve different purposes. IT cybersecurity concerns enterprise-level equipment used to manage data. OT cybersecurity concerns production level equipment used to manage physical products. These differences lead to unique security environments. IT security operators must keep up with quickly evolving equipment, platforms, and applications. The high rate of change means modular networks and routine updates. Furthermore, the value of IT is often linked with data and intellectual property stored within the network. For this reason, IT security’s primary concern is the confidentiality of the data. On the other hand, OT cybersecurity operators maintain systems with legacy equipment because of the high cost of equipment replacement and the slow change of system requirements. This means industrial control systems often contain known vulnerabilities. The value of operational technology, however, is most directly related to the continual and consistent operation of the equipment. Therefore, there are fewer opportunities for system downtime, updates, and equipment replacement. As a result, OT cybersecurity’s primary responsibility is the availability of the industrial control system. The technological architecture of organizations using operational technology can be organized into five distinct layers. Within industry, this is known as the Purdue Model and provides a method for understanding the distinct functions of technologies at various levels of an organization. Enterprise Levels (4 - 5) The Enterprise Network, and Business Planning elements of the network compose levels 5 and 4 respectively. Collectively these are the levels of the corporate office. Routers, servers, personal computers, and printers are all likely to be devices within these layers. Production Levels (1 - 3) Levels 3 through 1 constitute the production environment of a given network. Level 3, or Site Control, commonly houses data storage devices (historians) and the central management system (likely an engineering workstation). Here, plant-wide information is simultaneously accessed and warehoused. Level 2, or Area Control, contains more specific control information. This might be the Supervisory Control and Data Acquisition (SCADA) interface for a subcomponent of devices or even the specific Human Machine Interface (HMI) of a single device. 
Level 1, or Basic Control, refers to the actual distributed control system (DCS) or programmable logic controller (PLC) that actuates an operational process. The Purdue Model is helpful in understanding the complexity of modern technological architectures, but is also complicated by the continued rise of the Industrial Internet of Things (IIoT). As information increasingly informs production processes, the layers are becoming more intertwined. Therefore, even while the Purdue Model may not reflect the logical topology of network architectures, it does provide a functional map of such systems. High-profile OT cyberattacks demonstrate that network vulnerabilities exist at each level of network architecture. The Stuxnet worm, uncovered in 2010, was a SCADA exploitation designed to destroy uranium enrichment centrifuges within Iran. The worm was injected into a closed network via a removable media device – likely a USB drive. Once within the enterprise layer, the code was able to exploit installation permissions to automatically execute malware. In this case the malware degraded integrity down the network architecture, causing specific PLCs to report incorrect information back to the area control workstation. This risk could have been mitigated by tighter control of removable media and of the installation permissions that allowed the code to execute automatically. In 2021, a hacker attempted to pollute a Florida public water supply by exploiting the outdated operating system of a particular water plant. The attack began when a personal computer within the network made a visit to an unsecured web address. As a result, the hacker was able to gain network access through the combination of a remote management application and weak password security. After this, the command to pollute the water supply was easily made – though thankfully it was observed and reversed by plant personnel. The attack could have been diverted more easily, however, had the security operators kept the operating system patched, locked down the remote management application, and enforced stronger passwords. Colonial Pipeline was the victim of a targeted ransomware attack in 2021 that was facilitated by a compromised virtual private network. The hacker group utilized un-retired credentials to access a legacy virtual private network (VPN). The pipeline had been unaware of the continued existence of the VPN and as a result had not factored it into their security considerations. From this vulnerability, however, the hacker group was able to encrypt corporate systems, which directly resulted in system downtime. This loss of accessibility could have been avoided if the legacy VPN had been decommissioned and its credentials retired.
Understanding the evolving security landscape is now a requirement of sound business practice. Without robust industrial security measures, companies take on significant risk to their safety, profitability, and reputation. Profitability can be decreased by unexpected production downtime, legal costs, and increased insurance costs, among other concerns. Our OT Risk Calculator can provide a customized estimate of each factor to show what a cyberattack could really cost your company. Furthermore, a cyber incident can quickly shake confidence, resulting in brand and reputational damage that translates into a reduced customer base. As a result, it is increasingly important that security professionals learn how to effectively ask for an appropriate OT cybersecurity budget. Explaining these risks to management and requesting an expanded OT security program will ultimately result in gains across the entire business. When determining where to further invest in security, there are many standards that can help. One of these is the NIST Cybersecurity Framework (CSF), which can provide a simple method for identifying what opportunities exist to optimize your security processes. The NIST CSF is a voluntary set of guidelines created to aid the development of business security strategies. The framework is organized around a security cycle: identify, protect, detect, respond, recover. Each stage requires an understanding of the distinct elements of the OT network and the proper people, processes and technologies for effective implementation. Another popular standard used to design an OT/ICS security program is ISA/IEC 62443. The ISA/IEC 62443 series of standards offers a flexible framework of security controls that define ICS security techniques, processes, and procedures to aid organizations in mitigating and reducing the risk of security vulnerabilities in ICS. Organizations can adopt and enforce security controls that work reliably across devices, networks, and infrastructure based on this single congruous framework. Within OT environments, specific best practices can be employed to maximize security effectiveness. The first requirement is robust asset management. With a full understanding of the network, it is possible to establish centralized management with effective monitoring techniques. Centralized monitoring will subsequently enable security operators to implement automated vulnerability and anomaly detection capabilities. Each of these steps and best practices fundamentally serves to assure that the right data is in the hands of your OT defenders. Understanding your system and knowing how to ask OT security vendors the right questions is the first step in reaching this goal. Understanding OT cybersecurity can be complicated. The increasing frequency and sophistication of ICS cyberattacks, the rise of the IIoT, and many other factors add to the complexity. Yet the importance of critical infrastructure is too great to ignore its security. Furthermore, it is now impossible to safely operate an industrial control system as a business without a rigorous OT cybersecurity approach. Industrial Defender offers resources to aid in the search for solutions to each of these issues. Our OT Security 101 Webinar provides guidance to better understand the security principles of OT cybersecurity. Industrial Defender's OT Cybersecurity Solutions Buyer's Guide provides information that can help narrow the search for an ICS security solution.
The Defender Sphere is designed to help clarify the various vendors, services, and equipment involved in the operational technology landscape. By taking advantage of these and additional resources, security professionals can understand industrial control systems and achieve robust OT cybersecurity to support key business interests and safety requirements.
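The best practices described above (robust asset management, centralized monitoring, and automated anomaly detection) can be illustrated with a very small sketch: keep a baseline of what each asset should look like and flag any drift. This is a generic illustration, not any vendor's product logic, and the asset records are invented.

# Generic sketch: detect configuration drift against a known asset baseline.
baseline = {
    "plc-01": {"firmware": "2.1.4", "open_ports": {502}},          # Modbus only
    "hmi-03": {"firmware": "7.0.2", "open_ports": {443}},
}

current_scan = {
    "plc-01": {"firmware": "2.1.4", "open_ports": {502, 23}},      # telnet appeared
    "hmi-03": {"firmware": "6.9.0", "open_ports": {443}},          # firmware rolled back
    "eng-ws-9": {"firmware": "n/a", "open_ports": {3389}},         # unknown new asset
}

def detect_anomalies(baseline, scan):
    findings = []
    for asset, state in scan.items():
        expected = baseline.get(asset)
        if expected is None:
            findings.append(f"{asset}: not in baseline (possible rogue device)")
            continue
        if state["firmware"] != expected["firmware"]:
            findings.append(f"{asset}: firmware changed {expected['firmware']} -> {state['firmware']}")
        for port in state["open_ports"] - expected["open_ports"]:
            findings.append(f"{asset}: unexpected open port {port}")
    return findings

for finding in detect_anomalies(baseline, current_scan):
    print(finding)

In practice the baseline itself is the asset inventory, which is why inventory quality is the first requirement named above.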
<urn:uuid:c59fecc8-8bb7-4237-8e48-95e491c89996>
CC-MAIN-2022-40
https://www.industrialdefender.com/blog/ot-cybersecurity-the-ultimate-guide
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00228.warc.gz
en
0.93363
1,897
3.234375
3
Will the endangerment of Monarch butterflies affect RECs? This is an interesting article on the endangerment of Monarch butterflies. It could have major implications on RECs affecting future right-of-way management and site management of facilities. Source: RE Magazine, Cooperative.com Somewhere in the southeastern United States right now, a boldly striped caterpillar is hatching from a pinhead-sized egg and beginning to fatten up on leaves. Within a few weeks, the full-sized six-legged larvae will shed its skin and spin itself into a pale-green chrysalis that will dangle from a silk string for nearly two weeks. Finally, an adult monarch butterfly will burst through, dry its familiar orange-and-black-veined wings, and take flight, heading north. By mid-June, this first of four annual monarch generations will have reached the central U.S., where they will mate and lay their eggs on milkweed, the monarch’s only host plant and a critical link in its reproduction. Around that same time, the U.S. Fish and Wildlife Service (FWS) will be completing a review of the monarch that could carry major implications for electric cooperatives in the path of this famously itinerant insect. The agency will announce whether the butterfly should be protected under the Endangered Species Act; a final decision would be due in June 2020. “It will have a significant impact on co-ops if there is a fully endangered listing,” says Josh Young of East Kentucky Power Cooperative (EKPC), a G&T in central Kentucky whose service territory lies in the path of the monarch’s migration. “It will affect right-of-way management and site management of different facilities.” Monarch butterflies sip flower nectar prolifically and are important pollinators for dozens of key plant species. They’re under threat from the eradication of milkweed, the perennial flowering plant where monarchs lay eggs, feed as caterpillars, and transform into adult butterflies. Milkweed is toxic and often considered a nuisance plant. Monarchs are also under pressure from habitat loss because of deforestation of their winter homes in Mexico and California. Whatever the listing, threatened or endangered, NRECA will push for allowing electric co-ops to continue their outside activities, from vegetation management to new infrastructure construction. “We want members to be proactive until then,” says Janelle Lemen, NRECA’s environmental regulatory issues director. One way is to plant and maintain acres of milkweed and nectar plants. Another is to sign a Candidate Conservation Agreement with Assurances (CCAA), a voluntary pact that a coalition of companies and industries enter to blunt the impact of an FWS listing. To be effective, the agreement must be signed before a listing becomes final. Lemen says the draft CCAA is due out soon. “The CCAA offers a win-win opportunity by giving regulatory certainty to co-ops proactively providing habitat on their systems regardless of whether FWS lists the monarch,” Lemen says. As part of the CCAA process, NRECA can point to activities co-ops have already undertaken to support the species, including building pollinator gardens and any modifications made to herbicide applications and mowing in easements and rights-of-way, says Stephanie Crawford, an NRECA regulatory advisor who has been working to shape the CCAA. Experts anticipate that the monarch will likely be listed. The severity of the designation will depend on the numbers. The latest count of monarchs in Mexico, where millions overwinter each year, is positive. 
January reports found that populations this year have rebounded by as much as 144 percent above the 2018 count. The number that overwinter in California, however, is at a historic low. Other pollinators are also facing drastic population declines. “Whether it’s the monarch or another species, the pollinator issue is not going away,” Lemen says. ‘A part of doing business’ The monarch butterfly is the only insect that completes a round-trip migration. Some travel thousands of miles over their lifetime. Those that fly to Mexico will spend the winter there, then fly back north into the southeastern United States to reproduce in early spring, and then head farther north for the summer, sometimes into Canada. To provide a way station on their migratory journey, Dairyland Power Cooperative, the LaCrosse, Wisconsin- based G&T, has built nearly 300 acres of pollinator habitat at solar farms, substations, and a capped coal ash landfill. Brad Foss, Dairyland’s senior environmental biologist, says many members and the public have been vocal that the co-op should not only comply with environmental standards but go above and beyond. “That’s part of why we are doing these pollinator projects,” he says. “Environmental stewardship is very important at Dairyland and to our members. It is an expected part of doing business.” To prepare and seed a pollinator habitat can cost $2,500 to $5,500 per acre or more, depending on the plant varieties and the labor involved, Foss says. Annual maintenance is additional. Steps involve treating the area with herbicide to remove invasive plant species, seeding the plot with native forbs and grasses, strategic mowing, spot spraying, weed whipping, and using controlled burns to keep non-native foliage at bay. Foss notes that state agriculture and natural resources departments and companies that specialize in pollinator habitats and prairie restoration can be helpful when planning and growing plots to sustain monarchs. “The key is to be proactive and collaborate with others with expertise and interest in the project,” he says. ‘THE NEW NORMAL’ By mid-summer, generation two has emerged and found its way to the lower regions of the Upper Midwest. Like generation one, this brood is short-lived (less than two months), but it fulfills its critical role of mating and laying eggs. Connexus Energy has long seen the value of pollinator plants in its rights-of-way and surrounding its solar energy facilities. Based in Ramsey, Minnesota, the state’s largest distribution co-op maintains about 55 acres of pollinator habitat, with the first planted in 2014 and the most recent in 2018. Including such plots is now standard operating procedure when building solar energy facilities in Minnesota. Pollinators “are essential to Minnesota’s economy” and its $90 billion agricultural sector, the governor said in the state’s annual report. “We are not as concerned about the impact of a listing because we have already adopted native plantings for our solar gardens,” says Brian Burandt, vice president of power supply and business development at Connexus. “Pollinator habitat is the new normal.” Not only is it good for species conservation, but the practice helps reduce vegetation management costs, he says. “If you don’t do anything, you will have noxious weeds that will grow 3 feet high and shade solar projects,” he says. Each year, hundreds of Connexus members tour the pollinator-friendly gardens, which are also popular for their honey-making beehives. 
Wolverine Power Cooperative cultivates pollinator habitat along the rights-of-way of its 1,600-mile transmission network through minimal mowing, removing undesirable trees and shrubs, and spot applications of herbicide. The G&T headquartered in Cadillac, Michigan, sows pollinator-friendly seed mixes during construction restoration activities and plans to incorporate milkweeds for monarchs. “Electric cooperatives often go above and beyond federal and local environmental regulations in our commitment to being good citizens, neighbors, and stewards of our natural resources and wildlife,” says Wolverine Vice President Joseph Baumann. An endangered or threatened designation of the monarch “can add a level of complexity to operations,” Baumann says, “but our commitment to environmental stewardship often positions us ahead of new compliance requirements.” Wolverine Power works with FWS and other agencies to maintain habitats, establish additional habitats, and help re- establish threatened and endangered species’ habitats, he says. “Should the monarch butterfly be assigned an endangered status, we expect our current maintenance program and pollinator habitat growth practices to place us largely in compliance with its designation,” Baumann says. “Some further steps to protect the species we expect may include added restrictions on herbicides and access to our rights-of-way.” Monarchs stop and fuel up in Kentucky before completing their nearly 3,000-mile journey to warmer climates south of the border. Some of their grandparents did the same on their spring migration north. That’s why EKPC plants a mix of pollinator-friendly nectar plants that blossom throughout the seasons of the monarch’s lifespan. This year, the G&T will expand its monarch way stations at its headquarters in Winchester, Kentucky, and its Blue Grass and Spurlock generating stations. It will transition 10 acres of fescue to monarch habitat rich with milkweed and wildflowers like coreopsis, black-eyed Susan, asters, goldenrod, and ironweed. “Early May through October, we will have flowers out there that will be food sources for monarch butterflies,” says Young, EKPC’s supervisor of natural resources and environmental communications. Even with its proactive approach, an FWS listing would inject major operational challenges for EKPC. Without permits for summer herbicide application, co-ops may be left with mowing or even hand-clearing vegetation under their power lines—significantly more expensive methods than herbicide, he says. “We typically maintain rights-of-way using herbicide during the summer when the plants are growing with leaves. It is a low-volume ‘backpack’ application,” Young says. “But if it is a full endangered listing, maintenance of that area in summer will be very hard to get approved due to the potential impacts.” Whether the G&T pursues further voluntary mitigation by signing the CCAA has yet to be determined. “Based on what happens in June, we will proceed with a better understanding of the mitigation and what the agreement fully includes,” Young says. A deciding factor hinges on whether FWS decides that an impact on milkweed harms an endangered species. “There are millions of milkweed plants that grow in our rights-of-way every year,” Young says. 
“If impacting one of those is considered having an adverse impact to an endangered species, if that is the case, then it is very, very concerning.” Crawford says even if FWS stops short of recommending a protected status for monarchs, co-ops aren’t likely to diminish their stewardship work. “They’re already providing valuable habitat for monarchs and other pollinators on their lands,” she says. “We’ll see this trend continuing.”
<urn:uuid:d03c013f-73c9-425a-8d90-59a3332fe6b5>
CC-MAIN-2022-40
https://finleyusa.com/6721-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00428.warc.gz
en
0.937016
2,343
2.90625
3
The demand for cyber security experts increases every year, especially since almost half of UK businesses fell victim to attacks of this nature in 2017. Despite the financial cost, however, there is a growing shortage in cyber security professionals worldwide. In the UK, two-thirds of companies do not have a cyber security professional on board, which led to 7,000 available job positions per quarter in the last year alone. And by 2019, online crime will cost businesses worldwide approximately £1.4 trillion a year. While plenty of those with IT or related backgrounds enter this field, many do not realise that getting a university degree won’t guarantee you a career in cyber security. Getting a foot on the career ladder requires experience and specific knowledge and apprenticeships can be one of the best ways of doing this, fast-tracking your career success without costing you hefty student fees. Apprenticeships are real job positions that combine on-the-job training and classroom learnings. If you choose this route, among the many advantages you can enjoy include: While these benefits can help you advance your career, keep in mind that you need to be fully committed when going down this road, as apprenticeship often takes years to complete. You would also need to balance having a job and studying to gain cyber security qualifications. You will, however, be paid and gain the kind of experience that university students usually won’t get. If you do not see yourself finishing a university degree, then apprenticeship is a good fit for you. An apprenticeship programme is on equal footing with university degrees. If you are over 16 years of age, you can apply for one. There is also no upper age limit to be an apprentice. There are a number of things you need to take note of to be someone’s apprentice: There are multiple opportunities to start out as a cyber security apprentice, depending on where you are currently in your education: Note that a Computer Science GCSE and/or A level can give you an advantage, but it is not mandatory. Plenty of employers do not require these, but will look for at least one STEM subject. Always check a company’s requirements before applying. Recognising the need for more and better cyber security professionals in the country, the government launched a couple of apprentice initiatives that you can choose from: Back in 2003, Tech Partnership (formerly known as e-skills UK) was formed and licensed as the country’s sector skills council. It aimed to help make the UK’s digital economy globally competitive. With the help of various employers, this non-profit organisation created thousands of new apprentice programmes (among others), effectively employing nearly 3,000 young people per year—including those in the cyber security industry. The company ceased operations in September 2018, due to government changes in policy for skills. However, its legacy lives on. On their website, you will find the organisations they transferred their responsibilities to, including various employers and universities, as well as The Department of Education. If you’re looking for apprentice programmes in cyber security, they have listed plenty of options you can look at. Apart from word of mouth, Google is also your friend when looking for apprentice jobs. Plenty of search results would come up. Alternatively, you can also look at our own job portal. Apprenticeships revolve around developing your skills in protecting organisations against cyber threats. 
You will also learn to monitor networks for vulnerabilities and respond effectively and efficiently to any form of hacking. You can focus on the technical aspects of it (e.g. security design, testing, investigating, response, etc.) or on the risk analysis aspect (e.g. governance, compliance, operations, etc.). Some of the job roles available in your career are: According to government requirements, as an apprentice, you are entitled to: If you’ve got your eye on a number of apprenticeship roles, take a look at these tips to increase your chances of getting accepted: Research what the company is looking for in an apprentice and make sure you have those qualifications. You should also look for reviews from previous employees, especially their company culture and the career path they offer. This way, you’ll know if this position is the right fit for you. Your CV would help you stand out among a myriad of hopefuls for that same position. Instead of just listing down your previous experiences and skills, you can change its order to reflect the role you are seeking or the company’s mission/vision. You can also write a cover letter, so you can better explain why you are a good fit for the business. Interviews can be stressful, which could then affect your focus (and your first impression) come D-day. To help you prepare, do the following: In the event that you didn’t land the job, you can politely ask the interviewer for feedback. This way, you’ll know which areas to improve for your next interviews. Just make sure you keep an open mind and take everything constructively. The internet is rife with stories of employees losing their shot, because of their social media posts. Start with your email, making sure it’s professional. Any embarrassing social media content should be set to private. You would want your online presence as presentable as you would be in an actual in-person interview. Once you are hired, expect the following during your apprenticeship: Apprenticeships open a lot of doors. The company that you currently work for may even offer you a full-time position or get you to the next apprenticeship level. Even if you don’t get absorbed by the company, the practical experience that you’ll gain will be enough to prove that you are valuable to plenty of other businesses that need cyber security professionals. If you are looking for a cyber security apprenticeship position, you can register on our site today and receive custom alerts when an available role is posted. Feel free to drop us a line via our contact form for any questions.
<urn:uuid:ceb71a89-7cab-4bdf-aa64-355ab79e5394>
CC-MAIN-2022-40
https://www.cybersecurityjobs.com/a-guide-to-uk-cyber-security-apprenticeships/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00428.warc.gz
en
0.958135
1,249
2.5625
3
IP uses four key mechanisms in providing its service: Type of Service, Time to Live, Options, and Header Checksum. The Options, commonly referred to as IP Options, provide for control functions that are required in some situations but unnecessary for the most common communications. IP Options include provisions for time stamps, security, and special routing. IP Options may or may not appear in datagrams. They must be implemented by all IP modules (host and gateways). What is optional is their transmission in any particular datagram, not their implementation. In some environments the security option may be required in all datagrams. The option field is variable in length. There may be zero or more options. IP Options can have one of two formats: Format 1: A single octet of option-type. Format 2: An option-type octet, an option-length octet, and the actual option-data octets. The option-length octet counts the option-type octet, the option-length octet, and the option-data octets. The option-type octet is viewed as having three fields: a 1-bit copied flag, a 2-bit option class, and a 5-bit option number. These fields form an 8-bit value for the option type field. IP Options are commonly referred to by their 8-bit value. For a complete list and description of IP Options, refer to RFC 791, Internet Protocol at the following URL: http://www.faqs.org/rfcs/rfc791.html
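To make the layout of the option-type octet concrete, here is a small illustrative Python sketch (not part of RFC 791 itself) that splits a raw option-type byte into its three sub-fields; the example value 0x83 corresponds to the Loose Source and Record Route option (copied flag 1, class 0, number 3):

def parse_option_type(octet: int) -> dict:
    """Split an IP option-type octet into its RFC 791 sub-fields."""
    return {
        "copied": (octet >> 7) & 0x1,        # 1-bit copied flag (most significant bit)
        "option_class": (octet >> 5) & 0x3,  # 2-bit option class
        "option_number": octet & 0x1F,       # 5-bit option number
    }

# 0x83 (decimal 131) is Loose Source and Record Route:
print(parse_option_type(0x83))  # {'copied': 1, 'option_class': 0, 'option_number': 3}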
<urn:uuid:28859643-d9d4-45f6-a707-a7baae7c2289>
CC-MAIN-2022-40
https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst2960xr/software/15_2_6_e/configuration_guide/b_1526e_consolidated_2960xr_cg/m_sec_acl_filtering_ip_options_cauvery.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00428.warc.gz
en
0.85045
336
3.671875
4
By Charles Parker, II; Cybersecurity Lab Engineer There are vast numbers of municipalities of various sizes adjacent to each other throughout each state in the nation. Each of these obviously has a computer network, of varying sizes, in place for the day to day operations. One of these counties, in Michigan, also recently had an interesting issue. Genesee County has had much written about it, as the city of Flint is at the center of the media storm. In this county, there was recently a successful ransomware attack, unfortunately. Ransomware has been exceptionally successful as an attack over the last few years. The trend continues, as has been reported repeatedly across many industries. One of these was the municipal offices of Genesee County, located in Michigan. The successful attack used one of the ransomware tools. The Genesee County Clerk stated the county servers were shut down due to this. The ransomware followed its standard protocol and encrypted the files. There naturally was a demand for money with this. Once it was received, the attackers would provide the decryption key. The initial forensic work indicated no files were exfiltrated, which was a good thing. What to do? This was a rather significant issue for the county. There were a few options for the county to follow, given the parameters of the attack. They could pay the fee and hope the attackers would provide the decryption key. The county would also have to hope the attackers did not leave any malware or back doors in the network. As an alternative, they could decline to pay the fee and use back-ups, which would require time and accurate and viable back-ups being in place prior to the attack. As the third option, do nothing and hope for the best. The county ended up not paying the ransom. This was the safest bet as long as the county had up-to-date, tested back-ups in place. Fortunately for the county and their general fund, and their insurance company, there were adequate back-ups in place. The back-ups had been done the evening before at midnight. This indicated the data loss would be minimal. There would still be a massive amount of time involved, as the back-ups needed to be used to replace the encrypted data and files. The attacks can vary in depth and width across the network, depending on the network itself and the form of ransomware. This could affect one system or the complete set of servers. In this case, nearly all of the networks in the system were affected. The county had signs in the office windows stating that the computer system was down and that staff were using manual systems; the computer systems had been down for several days. The one relatively pertinent system for payroll was not, however, affected. This was a rather large project. The county contacted and had been working with the Michigan State Police and the FBI for their expertise. There may have been other third-party contractors involved. Ransomware is a curious tool. While very devastating, it may also be viewed as being modular, in that the malicious tool may be adjusted according to the end result needed. All it takes is one employee in the wrong department to click on the wrong link. This issue did, however, show the importance of back-ups and of testing them to ensure they really are working. This also shows there still is a distinct need for the employees to be trained. On a brighter note, the county was able to hire a CISO, focused on the county and its work. Acosta, R. (2019, April 4). Ransomware computer virus hits the county network.
The Flint Journal, A1. Ciak, M. (2019, April 4). Genesee county hacking incident ‘more extensive than initially thought’. Retrieved from Genesee County hacking incident ‘more extensive than initially thought’ Dissent. (2019, April 3). Genesee county’s email system not functional after the ransomware attack. Retrieved from https://www.databreaches.net/genesee-countys-email-system-not-funcitonal-after-ransomware-hack/ Olenick, D. (2019, April 5). Genesee county ransomware attack more severe than originally thought. Retrieved from Genesee County ransomware attack more severe than originally thought | SC Media Pierret, A. (2019, April 3). Genesee county’s email system not functional after a ransomware attack. Retrieved from Genesee County’s email system not functional after ransomware hack Winant, D. (2019, April 4). Servers in genesee county were hacked. Retrieved from https://www.wnem.com/news/breaking-servers-hacked-in-gen-co/ About The Author Charles Parker, II has been in the computer science/InfoSec industry for over a decade in working with medical, sales, labor, OEM and Tier 1 manufacturers, and other industries. Presently, he is a Cybersecurity Lab Engineer at a Tier 1 manufacturer and professor. To further the knowledge base for others in various roles in other industries, he published in blogs and peer-reviewed journals. He has completed several graduate degrees (MBA, MSA, JD, LLM, and PhD), completed certificate programs in AI from MIT and other institutions, and researches AI’s application to InfoSec, FinTech, and other areas.
<urn:uuid:635957b4-a239-43f6-95c5-c2401fe94814>
CC-MAIN-2022-40
https://www.cyberdefensemagazine.com/genesee-county-systems-pwned/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00428.warc.gz
en
0.975948
1,138
2.65625
3
As a result of the NSA surveillance revelations, organizations are increasingly moving towards the use of encryption technologies. It turns out, however, that encryption is not easy to get right: Heartbleed is only about a year old, and other vulnerabilities like FREAK and POODLE have also surfaced in that time frame. This time, a new attack named LogJam is targeting the cryptographic component named Diffie-Hellman, a means of securely exchanging cryptographic keys over a public channel. First, an explanation of Diffie-Hellman (DH for short). It lets two parties with no prior knowledge of one another agree on a shared key that can encrypt their further communication. If DH is weak, the key used to encrypt the connection becomes weak too, and thus the entire communication on that channel could be decrypted. DH is widely used in cryptographic protocols; it is an important part of VPN protocols like IPSec/IKE and of SSH. Its use in SSL is optional: enabling Perfect Forward Secrecy (or PFS for short) activates Diffie-Hellman. The use of PFS is on the rise, as the security community believes it makes global surveillance and eavesdropping in general more difficult. LogJam – this week’s new problem – is actually a combination of two issues:
- It describes a downgrade attack in SSL. The downgrade requires an active attacker positioned on the network path between the client and the server. The attacker can trick servers into using export-grade (i.e. crackable) 512-bit Diffie-Hellman groups. This is only applicable to SSL with PFS enabled. Mitigation: servers should disable support for export-grade ciphers, DHE_EXPORT specifically. Clients should validate the length of the Diffie-Hellman group returned by the server. Disabling PFS would also mitigate this specific attack, but PFS is believed to be more secure otherwise.
- It pre-computes Diffie-Hellman key exchanges. It describes a way to pre-compute large parts of the computation required to crack a Diffie-Hellman key exchange. This is applicable to all protocols that use Diffie-Hellman, such as VPNs, SSH and SSL with PFS enabled. Mitigation: instead of using default Diffie-Hellman parameters supplied by applications such as Apache and OpenSSH, we should generate those separately for every installation, making the pre-computation less useful.
Both export-grade ciphers and DH parameters should be configurable in almost all software today without patching, which makes the change easier to implement. The severity of this attack is definitely lower than that of Heartbleed and is comparable to FREAK: it requires an active attacker performing a man-in-the-middle, and the result is also similar: the ability to decrypt and/or rewrite all communications between the two endpoints. By Balázs Scheidler, CTO at Balabit. Bio: Balázs Scheidler, Balabit Co-Founder and CTO, is known across the open source community as a subject matter expert and a pioneer in user behavior big data analytics. He is considered the “father” of syslog-ng, the trusted log management solution used in more than 1 million installations worldwide, such as Amazon’s Kindle Fire, Facebook and the Computing Centre of the National Institute of Nuclear Physics and Particle Physics (CC-IN2P3). He is the recipient of numerous awards, and is a married father of two.
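As a concrete illustration of the second mitigation described above (generating Diffie-Hellman parameters separately for every installation instead of relying on application defaults), the following sketch uses the Python cryptography library; it shows the general idea only and is not the exact procedure for configuring Apache or OpenSSH:

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import dh

# Generate a fresh 2048-bit DH group unique to this installation, so that
# precomputation done against widely shared default groups does not apply.
# (Parameter generation can take a while; it is a one-off operation.)
parameters = dh.generate_parameters(generator=2, key_size=2048)

pem = parameters.parameter_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.ParameterFormat.PKCS3,
)
with open("dhparams.pem", "wb") as f:  # file name is just an example
    f.write(pem)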
<urn:uuid:12f6f128-6e67-4367-9bf3-d0aa6c3bc4ef>
CC-MAIN-2022-40
https://informationsecuritybuzz.com/articles/an-under-the-hood-look-at-logjam/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00428.warc.gz
en
0.918852
744
2.609375
3
The approach to implementing cloud computing security controls in an official network demands a systematic balance between the connection points, making it simpler for employees to use the associated services. What Is Covered in Network Security? Network security covers all data protection solutions that provide fundamental security controls for the physical environment and for logical security. These controls are either inherent in the service or available to be consumed as SaaS / PaaS / IaaS services. The following two major components are covered at this stage: - Physical Environment Security – Ensures that access to the cloud service is fairly distributed, properly monitored, and safeguarded by the fundamental physical elements with which the service is constructed. - Logical Security Controls – This control comprises protocol-, link-, and application-layer services. Both data protection and machine protection are of the utmost importance for cloud service providers as well as consumers. For both these entities, the aim is to give assurance of ongoing availability, data integrity, and confidentiality of all PCs and other resources. Failure to achieve this has a negative impact from a client, brand-awareness, confidence, and overall security posture standpoint. Because cloud computing demands a huge amount of constant connection to and from the network elements, 24*7 availability of devices is essential. In online platforms, the basic definition of the network perimeter carries different meanings under a variety of guises and deployment architectures. Now we are going to have a look at some extra add-on elements, which enhance and strengthen the overall security posture in network security. You are going to learn the proper use of important cloud computing security controls and how they perform their fundamental operations in technology deployments. Top 2 Cloud Computing Security Controls In The Network - Cryptography and Encryption – The point of using cryptography and encryption is to provision and protect confidential activities across the official network. Supporting the mission and vision of a company, the CCSP should ensure that he or she knows how to deploy and use cryptography services online. In addition to this, it is essential to consolidate strong key management and a secure key lifecycle into the cryptography solution. The demand for data confidentiality pairs naturally with the demand to enforce extra cloud computing security controls and strategies for protecting data and communications. Whether it’s an encryption solution for military service or simply the utilization of self-signed certificates, every individual has his or her own list of requirements and own definition of what securing communications with a cryptography-based architecture looks like. In different fields of security, encryption can be subjective when users drill down into strengths, algorithms, implementation methods, ciphers, and more. As a rule of thumb, encryption standards should be chosen based on the information they secure. The core success factor for encryption solutions is to enable secure and legitimate access to information while applying controls against unauthorized access.
- Key Management – Earlier in banking workplace culture, two people, each holding a key, were needed to open the safe – this resulted in fewer crimes, thefts, and bank robberies. The encryption approach, as with those bank operations, should never be handled or controlled by a single person. Responsibility for encryption should always be segregated rather than resting in one pair of hands. Key management should be kept separate from the vendor hosting the records, and the data owner should be positioned to make the important decisions. Ultimately, the owner should be in a position to enforce cloud computing security controls, increase the encryption level, manage key management processes, choose where encryption keys are stored, and hold responsibility and ownership for key management. Users have to eliminate the myth that ‘the CSP alone is responsible for encryption solutions and cloud computing security controls.’ Cloud consumers are not automatically protected from data spillage or shared keys within online platforms, which is why a separate, unique encryption method is needed to enforce an extra level of security and confidentiality at the transport and data levels. Additional Information – There are two popular approaches to cloud computing key management services – remote key management services and client-side key management. The major difference between the two lies in the processes and controls offered on the customer’s end. Adopt CloudCodes As One-stop Solution To Your Problem The two important cloud computing security controls (mentioned in this post) can either be achieved separately or in one comprehensive solution. The choice rests with the enterprise. If an enterprise chooses the CloudCodes CASB solution, it will benefit from more than two cloud computing security controls. DLP solutions, access controls, mobile device management solutions, etc., will be offered by this approach. Users can automate their methods to secure confidential information and gain strong cloud network security.
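As a rough sketch of the client-side key management approach mentioned above, where the data owner generates and holds the key and encrypts data before it ever reaches the provider, the following Python example uses the cryptography library's Fernet recipe; it is a generic illustration under those assumptions and not a depiction of how any particular CASB product works:

from cryptography.fernet import Fernet

# The customer generates and stores this key; the cloud provider never sees it.
key = Fernet.generate_key()
f = Fernet(key)

plaintext = b"sensitive record to be stored in the cloud"
ciphertext = f.encrypt(plaintext)   # encrypt locally before upload
# ... upload `ciphertext` to the cloud service ...
restored = f.decrypt(ciphertext)    # only the key holder can read it back
assert restored == plaintext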
<urn:uuid:54f1a2da-f76d-4c18-a85f-c3123fdb11e1>
CC-MAIN-2022-40
https://www.cloudcodes.com/blog/core-cloud-computing-security-controls.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00428.warc.gz
en
0.929166
976
2.546875
3
Well, we often hear the famous phrase 'Once in a Blue Moon', which is used to describe the rarity of an event. The appearance of an additional full moon during a given period can be termed a blue moon. It is a rare but fascinating phenomenon. The next Blue Moon will appear on August 22, 2021. The next time two Blue Moons fall in the same year will be in 2037. The last Blue Moon took place on October 31, 2020. Generally, blue moons are witnessed only about every two or three years; however, in a rare occurrence in 2018, we saw two blue moons in one year, only two months apart – one of them coinciding with a lunar eclipse. Different cultures have named these moons according to their occurrence in each month. For example, the full moon appearing in January is the "wolf moon". In recent times, the term Blue Moon has been applied to the second full moon within a single calendar month. Normally, there are approximately 29.5 days between full moons, hence it is quite unusual for two full moons to fit into a 30- or 31-day-long month. The name of every moon was set by a rule followed by the almanac. For example, the Lenten Moon would fall during Lent, in the last days of winter, hence it was called the Lenten Moon. The first full moon of spring was called the Egg Moon — or Easter Moon, or Paschal Moon. This moon would fall before the first week of Easter. The moons around Christmas were called the Moon Before Yule and the Moon After Yule. Hence, whenever four full moons appeared in one season, the third one was termed the Blue Moon, allowing the other full moons to occur at their proper times in sync with the solstices and equinoxes. What is a seasonal blue moon? There are 12 months in a year, and the length of each month roughly matches a single orbit of the moon around the Earth. The months are further divided into seasons – winter, spring, summer, fall – each lasting three months and hence normally containing three full moons. If any season witnesses four full moons, then the third full moon may be called a Blue Moon. A seasonal Blue Moon appeared on November 21, 2010, another was seen on August 20-21, 2013, followed by one on May 21, 2016, and another on May 18, 2019. In the year 2067, a monthly blue moon will appear on March 30, and a seasonal Blue Moon on November 20. In this instance, there are 13 full moons between successive December solstices – but only 12 full moons in one calendar year and no February 2067 full moon. The seasonal Blue Moon is basically the original astronomical definition of a Blue Moon. Between each astronomical season, that is the time between each solstice and equinox, there are generally three Full Moons. But in some years there are four Full Moons in a season, and in that case the third one is regarded as the Blue Moon. In 2019, the full moon in May was a seasonal Blue Moon. The equinox on March 20 marked the beginning of the season, and less than four hours later the first Full Moon was witnessed, on March 21. The second Full Moon was on April 19, and the third, the Blue Moon, on May 18. The fourth and last Full Moon before the June solstice was on June 17. Hence, we use the term 'once in a blue moon', since a Blue Moon appears only once every two to three years. Someday a blue moon might just appear out of the blue, ironically. The moon that appeared on October 31, 2020, did not look typically blue like the blue-colored moons you might see in an image online or draw in pictures, but it was a Blue Moon all the same. Photoshop may give you a blue-tinted full moon, but an original one that appears as a miracle of the universe is a sight to see.
Blue moons are another wondrous phenomenon of nature that makes us appreciate the beauty of the universe we live in. It is an enigma that scholars and experts have spent countless years studying and trying to understand, and yet we are only halfway there.
<urn:uuid:4d64356d-5f17-498c-a446-9a42a6cfe7cf>
CC-MAIN-2022-40
https://beyondexclamation.com/the-next-blue-moon-in-2020/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00628.warc.gz
en
0.976296
848
3.21875
3
Technologies in the field of data science are progressing at an exponential rate. The introduction of Machine Learning has revolutionized the world of data science by enabling computers to classify and comprehend large data sets. Another important innovation which has changed the paradigm of the tech world is Artificial Intelligence (AI). The two technological concepts, Machine Learning and Artificial Intelligence, are often used as interchangeable terms. However, it is important to understand that both technologies supplement each other and are essentially different in terms of their core functions. It is often predicted by technology enthusiasts and social scientists that human beings in the workforce will soon be replaced by self-learning robots. It is yet to be seen whether there lies any truth in these predictions or not, but for 2017, the following trends have been prominent in the development of Machine Learning. Giant companies will develop Machine Learning based Artificial Intelligence systems In 2016, we saw many prominent developments in the domain of Machine Learning, and numerous artificial intelligence applications found a way to our phone screens and caught our attention. In the previous year, companies just touched the tip of the iceberg, and in 2017 we will continue to see more developments in the field of machine learning. Big names such as Amazon, Google, Facebook and IBM are already fighting a development war. Google and Amazon launched successful products, including Amazon Echo and Google Home, at the beginning of the year, and we have yet to see what these tech giants have in store for their customers. Algorithm Economy will be on the Rise Businesses greatly value data to take the appropriate actions, whether it is to understand consumer demand or comprehend a company’s financial standing. However, it is not the data alone they should value, because without an appropriate algorithm that data is worth nothing. Peter Sondergaard, Senior Vice President of Gartner Research, says that "Data is inherently dumb and the real value lies in the algorithms which deduce meaningful results from a cluster of meaningless data". The Algorithm Economy has taken center stage for the past couple of years, and the trend is expected to continue as we see further developments in machine learning tools. The use of the algorithm economy will distinguish small players from the market dominators in 2017. Small businesses that have just entered the transitional phase of embedding machine learning processes in their business models will be using canned algorithms in tools such as BI, CRM and predictive analysis. On the contrary, large enterprises will use proprietary ML algorithms. Expect more Interaction between Machines and Humans Google Home and Amazon Echo received an exceedingly positive response from the audience, which made it evident that consumers perceive human-machine interaction positively. Innovative technologies embedded with machine learning processes prove to be helpful under various circumstances; for example, helping people with eyesight issues to navigate. But will they completely replace human-human interaction? Maybe 25 years down the road, but we do not see that happening anytime soon. Machine learning has made it increasingly possible for machines to learn new skills, such as to sort, analyze and comprehend. But nevertheless, there are certain limitations to it.
Automated cars have frequently been tested, and even with modified algorithms and advanced technologies, the chance of an error is still present. This example alone is enough to convince that machines will not completely replace humans, at least not anytime soon. Machine learning and Artificial Intelligence is a promising field with much potential for growth. We have seen some recent developments in the sector which, not long ago, people believed were not possible. Therefore, we cannot give a definite verdict regarding the industry’s potential for growth. However for now, intelligent machines are only capable of handling the repetitive tasks and can follow a predetermined pattern. It lacks the skill to figure out things which are out of the ordinary, and we still require human intervention for keeping the chaos at bay in such situations. Like this article? Subscribe to our weekly newsletter to never miss out! This article was originally posted on SimpliLearn.
<urn:uuid:56d09691-4868-4102-ba11-2f07eec799e7>
CC-MAIN-2022-40
https://dataconomy.com/2017/08/trends-machine-learning-2017/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00628.warc.gz
en
0.950897
812
2.734375
3
There’s a lot of information about data science; what it does and how it has revolutionized some of the world’s leading economies. Many companies are now using data analytics to make predictions, and the results are pretty impressive. From the chatbots you find on web pages to the personalized marketing pop ups from your favorite Nike store - all these explain the power of data science. We’ve highlighted four amazing applications of data science below. Used in sentiment analysis Data scientists use sentiment analysis to measure the feelings and attitudes associated with a given string of words. It could be a speech from a renowned political figure, a spoken dialogue, or an excerpt from your favorite music. Over the years, data science has been evolving and opening opportunities for a broader range of applications. Data Science with sentiment analysis is currently being used to explore far and beyond human behaviors. Human reactions and impacts from events occurring within the local and online environments can now be modeled with accurate data. Politicians often use sentiment analysis to market themselves by predicting how various segments of a population will perceive their message. This is the current trend for multiple brands looking to sell their products and outshine their competitors. Banking and finance Banks and lending firms use data science to manage their inventories, make smarter lending decisions, and even keep away fraudsters. From risk modeling, customer segmentation to real-time predictive analytics, data science is the unsung hero in the banking sector. By using various data science tools, banks can tailor personalized marketing to their clients and use AI algorithms to improve their analytics strategy. On the other hand, the finance sector uses data science to model sustainability through risk management and cost-efficiency. Machine learning algorithms are used for predictive analytics that allows firms to foresee customer lifetime value and to improve customer and client relationships. There are various ways in which healthcare systems are making use of data science. Data analytics and Artificial Intelligence have complemented each other, and this is true in predictive modeling for diagnosis. Clinical and operational data are now used to create models that can learn historical patterns, which are later used for insightful predictions. Data science and analytics play a significant role in medical image analysis and drug discovery. The latter helps to speed up the drug manufacturing process and boosts revenue. Transport and logistics With the help of data science, the transport and logistics industry can now track fuel consumption patterns, freight conditions, and driver behavior from the comfort of their smart devices. With active vehicle monitoring, several variables such as consumer profile, economic indicators, location, etc. can predict when and how certain products should be delivered to customers. These insightful reports help with decision making and execution strategies. Several industries have leveraged data science to drive growth, minimize risks, and make informed decisions. The future of data science and analytics is already with us. To beat the odds of competition and to serve their clients better, every business should tap the irresistible power of data science.
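To give a flavour of the sentiment analysis mentioned above, here is a deliberately simple lexicon-based scorer in Python; production systems use trained models and much larger vocabularies, so treat this only as an illustration of the idea:

# Tiny stand-in word lists; real sentiment lexicons contain thousands of entries.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "poor", "hate", "terrible", "angry"}

def sentiment_score(text: str) -> int:
    """Positive result suggests a positive tone, negative a negative tone, 0 neutral."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("the service is excellent and I love this brand"))  # 2
print(sentiment_score("terrible delivery and poor support"))              # -2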
<urn:uuid:4e1dcb5e-ed64-4661-83a8-49a0b94d9d21>
CC-MAIN-2022-40
https://www.mytechlogy.com/IT-blogs/27540/4-amazing-applications-of-data-science/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00628.warc.gz
en
0.9287
608
3.140625
3
The annual World Password Day raises awareness of the importance of having robust and resilient password hygiene practices in place. We hear from various industry experts who discuss best practices for password security. Cian Heasley, Security Consultant at Adarma: “It’s common for people to resort to simple, easy to remember passwords reused across most, if not all accounts. This is a recipe for disaster and could result in identity theft or account takeover. Length and complexity are essential for a strong password, as passwords with these characteristics require more effort and time for an adversary to crack. Passwords should contain at least 10 characters and include a combination of special characters, as well as upper-case and lower-case letters, and numbers. Having said this, the rule of thumb when it comes to passwords is that you should never reuse them. Reusing passwords is a massive red flag and can leave users’ accounts susceptible to being compromised. “To maintain healthy password habits, it’s important that people make their passwords manageable. This can be done by striking a balance between memorable and complex passwords. People are more likely to forget an overly complex password, making it of no use. Users should try to make use of passphrases where they can arrange unrelated words in an odd order to create a powerful password. “Password managers are a useful tool for overcoming the challenges of traditional password security methods as they help to maintain good password practices. Password managers generate complex, random and unique passwords for all the individual sites a user visits and stores them all securely so users don’t have to worry about remembering them. They also alert users if they are reusing the same password across different accounts and notifies them if a password appears within a known data breach so that they know to change it. “The capabilities of each password manager vary between providers. For example, LastPass produces a master password by appending an email and a password and then hashing it. This then produces a vault key, which is hashed again with the password before it’s stored on the cloud. Furthermore, these tools have become even more useful and efficient in recent years due to technological improvements. Some password managers have incorporated biometrics into their authentication, allowing users to login to their account through touchID instead of via their master password. This eliminates the need to type out the same password with each attempt, considerably improving the user experience.” Thomas Richards, Principal Security Consultant at the Synopsys Software Integrity Group: “The username/password combination remains at the core of all digital authentication; the use of which will not end in the foreseeable future. While MFA adds an additional layer of security to better protect systems and end-users from compromise, passwords are still a core component of such MFA authentication. “Password compromises can often be attributed to other security issues such as vulnerable software or poor development practices. When caused by poor password hygiene, there is likely a technical control which isn’t fully implemented, such as the requirement for strong/effective passwords. Humans tend to choose the easiest approach and without policies to require strong/long passwords, users prefer to default to weak/short passwords. 
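As an illustration of the kind of key derivation described above, the following Python sketch stretches an email address and master password into a vault key with PBKDF2; this is a generic example of the technique, not LastPass's actual scheme, and the iteration count shown is simply an assumed value:

import hashlib

def derive_vault_key(email: str, master_password: str, iterations: int = 600_000) -> bytes:
    """Stretch email + master password into a 32-byte vault key (illustrative only)."""
    return hashlib.pbkdf2_hmac(
        "sha256",
        master_password.encode(),
        email.lower().encode(),   # the email acts as a per-user salt
        iterations,
    )

vault_key = derive_vault_key("user@example.com", "correct horse battery staple")
print(vault_key.hex())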
“I wouldn’t necessarily support the notion that more education alone is the way forward; however, companies should continue their cybersecurity training – including training around password security best practices. In this training, the curriculum should incorporate what constitutes a strong password. Companies should also stay up to date with industry standard best practices for password security. “Password managers provide many benefits that assist people with managing the many different passwords needed in today’s world. They provide secure storage, feedback if a password is considered weak, and can generate complex passwords as needed. All of these things help the user maintain their passwords according to best practices to reduce the risk of a compromise. Companies that have created password managers have put great thought into protecting passwords. Strong encryption is used for all storage and transmission of the password so that even the hosting company is compromised, the data is always encrypted with only a key or password the user knows. “Password managers are also easy add-ons to web browsers, mobile phones, or are even part of the operating system or browser. This integration makes using them very easy for the user. Apple Keychain is an excellent password manager that is deeply integrated within the iOS and mac ecosystem; however, it is limited to only Apple devices. The Google Chrome web browser has built-in password manager capabilities much like the Apple Keychain. With Chrome being cross-platform, the user is able to take their passwords with them when not on an Android device. “Strong passwords are the foundation of Internet security best practices. Passwords should be as long as possible and contain a mixture of upper- and lower-case letters, numbers and symbols. I also recommend to people that instead of using a single word with variations, create a three- or four-word sentence. The length and complexity of a sentence greatly reduces the chance of a password being brute-forced in a password cracking attempt. For added security, enable Multi-Factor Authentication where possible on any web application that allows it. Multi-Factor Authentication, coupled with a strong password, provides a robust defence for your Internet accounts against attackers.” Sadiq Khan, CISO at BlueVoyant: “World Password Day is of extra importance this year because of a rapid increase in attacks designed to get around measures that make account logins more secure. First and foremost, it’s still important to use strong passwords. BlueVoyant continues to observe large volumes of compromised credentials being sold on Dark Web forums, which are in turn used to breach victim organisations. Organisations should ensure they have monitoring in place to detect when their credentials are compromised and potentially being sold by cybercriminals. “In addition to password hygiene, Multi-Factor Authentication (MFA) should be enabled by default across all organisations. MFA is a more secure way of authenticating compared to merely using a password, requiring users to provide at least two verification factors in order to access a device or account. BlueVoyant has seen threat actors move on from potential victim organisations once they determine MFA is in place, and move on to the next target looking for an organisation that doesn’t have it. “However, given the uptick in organisations using MFA in their cyber defence, there has been a recent increase in MFA-bypass attacks. 
These attacks rely on social engineering techniques to lure and trick users into accepting fake MFA requests. Some specific methods of attacks include sending a large amount of MFA requests and hoping the target finally accepts one to make the noise stop, or sending one or two prompts per day, which attracts less attention, but still has a good chance the target will accept the request. “Attackers will also use more aggressive social engineering, such as Vishing (voice phishing) that requires calling the target, pretending to be part of the company and telling the target they need to send an MFA request as part of a company process. Sometimes attackers even use bots to call, instead of an actual person. “In the past few months, some well-known hacking groups have gotten around MFA controls to breach very large companies that are household names. “The best defence is training employees. They should know if they are ever unsure about an MFA request, to reject it. Instead, they should only accept MFA requests they know they have initiated.” Hadi Jaafarawi, Managing Director – Middle East, Qualys: “Passwords have been around for years and they will continue being used. Why? They are an extremely simple approach to enforce some degree of security that works when everything around it is done correctly. “The challenge with passwords is that they have become increasingly complex to manage sufficiently, due in part to the sheer number of accounts that users hold. The rules around passwords can make them harder for people to remember, so they either reuse one password for multiple accounts or write them down. Equally, best practices for secure passwords can be missed. Take something like enforcing a limit to the number of times users can attempt to enter a password so that attackers can’t use dictionary attacks or password libraries to brute force their way in. This might be obvious for applications that are customer-facing, but those rules should also apply to internal applications or cloud services too. “In today’s world, passwords alone are not enough to keep IT access secure. As such, tools like Multi-Factor Authentication (MFA) – which requires users to provide two or more verification factors to gain access to a resource – have become available to further improve security hygiene. Companies, no matter the industry or size, must recognise the value of strong security and doing the small things, like implementing MFA, right. “What can companies be doing to improve password hygiene? For starters, ensure that users cannot use a simple dictionary word as their password, and enforce different controls so they cannot reuse the same password multiple times. It is important to apply rules on length of passwords and the variety of characters used, in addition to looking out for poor security practices such as missing MFA or lack of role-based access control. “A great approach to identity management is required across the Middle East. For example, the Kingdom of Saudi Arabia’s Essential Cybersecurity Controls requires Multi-Factor Authentication (MFA) for remote access, while the UAE Information Assurance (IA) Regulation requires strong authentication for access to physical and digital systems. 
Bahrain’s National Cyber Security Centre highly recommends the use of MFA for accounts in its Cyber Essentials guide too.” Toni El Inati – RVP Sales, META & CEE, Barracuda Networks: “Simply put, protecting user credentials is one of the most important things organisations can do to defend against ransomware and other cyberattacks. In fact, the 2021 Verizon Data Breach Investigations Report (DBIR) reveals that threat actors value credentials more than any other data type, including personal data. Once compromised, stolen credentials can be used in a myriad of malicious ways including unauthorised access, credential stuffing, password spraying and brute force attacks. “Despite significant awareness, employees still utilise weak passwords, so user training needs to go hand in hand with tools and policies. Password management is a critical first step, but it’s not enough. Companies need to deploy anti-phishing protection as well as the right application and edge security solutions. Passwords aren’t going away anytime soon, and with 80% of all basic web application attacks still relying on stolen credentials, neither are attacks.”
<urn:uuid:4564a789-8b83-4ace-a1b3-6cc80e152995>
CC-MAIN-2022-40
https://www.intelligentcio.com/me/2022/05/05/expert-advice-for-layering-up-your-defences-this-world-password-day/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00628.warc.gz
en
0.937901
2,216
2.828125
3
When you write code in which the results set returned by a SELECT statement includes more than one row of data, you must declare and use a cursor. A cursor indicates the current position in a results set, in the same way that the cursor on a screen indicates the current position. A cursor enables you to step through a results set and process its rows one at a time. The example below demonstrates the following sequence of events: declare the cursor, open it, fetch rows in a loop until no more rows are returned, and then close it.

*> Declare a cursor over the query results
EXEC SQL
    DECLARE Cursor1 CURSOR FOR
        SELECT au_fname, au_lname FROM authors
END-EXEC
. . .
*> Open the cursor before fetching from it
EXEC SQL OPEN Cursor1 END-EXEC
. . .
*> Fetch one row per iteration; SQLCODE becomes non-zero when no rows remain
perform until sqlcode not = zero
    EXEC SQL
        FETCH Cursor1 INTO :first_name, :last_name
    END-EXEC
    if sqlcode = zero
        display first_name, last_name
    end-if
end-perform
. . .
*> Release the cursor when done
EXEC SQL CLOSE Cursor1 END-EXEC
<urn:uuid:c8cf0ca7-ae4b-46a7-a56b-9747c7f3be0d>
CC-MAIN-2022-40
https://www.microfocus.com/documentation/visual-cobol/vc50/EclUNIX/BKDBDBCURS.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00628.warc.gz
en
0.689667
179
2.84375
3
This course contains many z/OS-related tips, tricks, techniques, and best-practice items that you may find useful in your day-to-day activities. It covers several new areas of functionality associated with z/OS 2.3 and z/OS 2.4.
Audience: Operators and system programmers who need to monitor and interact with z/OS systems.
Prerequisites: It is assumed that the student has successfully completed all previous z/OS courses in this curriculum, or has equivalent knowledge.
Objectives: After completing this course, the student will be able to identify z/OS system commands and activities to improve system performance, and more quickly resolve system problems.
Tips and Tricks – z/OS Operations
- Identifying System Attributes – D IPLINFO
- Displaying CPU Details
- Displaying Other System Software Details
- Displaying all Systems in a SYSPLEX
- Executing MVS or JES Commands in Batch
- JES2 Command Statement in JCL
- Executing MVS or JES Commands Using Batch SDSF
- Using Automatic Command Processing to Invoke MVS and JES Commands
- Useful Commands When JES Issues Warning Messages
- Intermediate Action to Relieve JES2 Resource Shortage
- Refreshing Software Attributes – LLA
- Refreshing Software Attributes – Catalog Address Space
- PDS or PDSE?
Tips and Tricks – z/OS Systems Programmer
- Refreshing Software Attributes – SMF
- Displaying Allocations Using ISRDDN
- IBM Doc Buddy
- SDSF Enqueue Panel
- Tuning Tips – Component Trace
- Tuning Tips – SMF Records
- JES2 Hold and Dormancy
- JES2 Checkpoint Statistics
- Using PERFDATA to Display Statistics
- Resetting PERFDATA Statistics
- Using $JDHISTORY to Analyze Resource Shortages
<urn:uuid:09ad1d48-7e08-4c94-abac-6900b18ca79d>
CC-MAIN-2022-40
https://interskill.com/?catalogue_item=z-os-advanced-tips-tricks&noredirect=en-US
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00628.warc.gz
en
0.775844
399
2.6875
3
Even though it can be messy, burglars have no problem when it comes to breaking the glass to conduct a break-in. A fragile material, it makes for an easy way to access a property. While there are some exceptions, such as polycarbonate and bulletproof glass, many homes are installed with standard glass windows and doors that offer less protection against burglars. This is why many families install glass break sensors in their homes as they deter thieves and provide protection for their loved ones and prized possessions. If you’re thinking about investing in one of these systems, it’s crucial to learn how do glass break sensors work? and why they are a must in the event of a home invasion. What are Glass Break Sensors? Glass break sensors are devices that detect when a pane of glass is broken or shattered. They are most commonly positioned near glass doors or windows and are used as a burglar deterrent as well as an immediate alert if your property experiences a security breach. How Do They Work? If you are wondering how do glass break sensors work, they can either work alongside window and door sensors or independently monitor sound and vibration. They work by detecting shock waves or frequencies that correlate with shattering glass and subsequently sound an alarm, alerting the homeowner of a security breach. Glass break sensors are excellent home security additions because you can set them to being armed at all times. This differs from standard motion detectors, as those systems have to be turned off when anyone is present in the home. When it comes to the science behind these systems, there are two main types of glass break alarms. The first alarm has built-in shock sensors that work by monitoring the vibrations of breaking glass. The other alarm has acoustic monitoring, with a small microphone installed, which listens out for the specific frequencies made by shattering glass. For example, if an intruder were to break in through a window, the acoustic alarm would pick up the high-pitched shattering sound, generating the alarm. Why Are They a Must in the Event of a Home Invasion? There are many reasons why glass break sensors are an extremely important addition to any home security system. They provide extra protection in the event of a home invasion and give homeowners peace of mind knowing that these systems can deter potential intruders. Essentially acting as an extra layer of security to keep both your property and family safe, here are four reasons why you should invest in these great systems: Glass break sensors can be installed in different locations around the house, making them versatile home security systems. You can also purchase as many as you need and have them placed around every glass entry point. As well as being versatile enough to be located in different positions, they also have a great range. However, the more expensive the system, the higher the quality it will be. With this in mind, if you live in a large property with vast room sizes, it’s advised you opt for the devices with the best ranges. They are Easy to Install As well as their versatility and great range, glass break detectors are extremely easy to install. With the manufacturer’s instructions providing straightforward assembly steps, you most likely won’t need to call a professional technician to install them. They Support Motion Sensors Many family homes have motion sensors installed for security purposes. 
However, as previously mentioned, these systems have to be disabled when people are in the house to avoid them constantly going off. With this constant deactivation, many people often forget to turn them back on when they vacate the house, leaving properties vulnerable to being burgled as intruders can break in undetected. Glass break sensors are active 24/7 and instantly sound an alarm if an intruder tries to enter your home by breaking a window or door. Not only this, but if a burglar tries to break in through an upstairs window, you may not be able to hear them if you are downstairs or vice versa. However, with the installation of a glass break sensor, you will immediately be informed of a break-in if an intruder tries to break in through a window or door. They are Visible For extra security, it’s always a good idea to install glass break sensors in locations where they are visible. With them being in plain sight, anyone who walks past or is standing by your window or door will be able to see the system, therefore hopefully deterring them from attempting a break-in. For example, if an intruder has cased your home and is planning on targeting it for a break-in if they notice you now have glass break sensors installed, it will dramatically reduce their burglary attempt, as there is more chance of them getting caught. They Offer Wider Coverage It’s not always feasible to place cameras or motion detectors at every location around your home. There may be an obstruction impeding their operation, or there may be legal difficulties regarding surveillance of areas common to both you and your neighbors. Glass break sensors can be placed on any glass surface, from front doors to basement windows, ensuring all entry points to your home are covered in case of a home invasion. They Work Before the Burglar Gets into Your Home Without a glass break sensor, a criminal simply needs to break a window to turn a lock and access your home. Motion detectors only work after the intruder has entered your home, making you more vulnerable to theft. On the other hand, glass break sensors initiate the alarm system the second the glass smashing is detected, often causing the would-be intruder to flee, ensuring that they never enter your premises. Find out more information like this on our home security blog Secure Your Home and Stay Safe If you’re serious about protecting your family and your property from a home invasion, then it’s imperative you invest in glass break sensors. Monitoring your home 24/7 and alerting you if there is ever a breach, they’ll provide you with an extra sense of security and deter potential thieves from breaking in. Last Updated on
<urn:uuid:7d0003d2-bc83-4206-892d-a5a10e89e32d>
CC-MAIN-2022-40
https://www.homesecurityheroes.com/how-do-glass-break-sensors-work/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00628.warc.gz
en
0.954068
1,242
2.8125
3
What is the constant function?

The constant function provides a way to define a very simple price: it always returns the same fixed price, regardless of the quantity selected.

To create a pricing plan with "Constant", select the control (1):
a. Add the desired name for the control.
b. Specify the minimum and maximum value. Note: once the maximum value is set, you cannot choose a quantity greater than that maximum.
By selecting "Price" you can switch to the price configuration.
c. Price is the vendor price; Retail Price is the suggested retail price.

You can switch to preview mode by selecting "Preview". The section will open in a new window. In the preview, the Price and Retail Price remain constant regardless of the selected amount (up to the maximum value).
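As a rough illustration of the behaviour described above, the constant function can be thought of as a pricing rule that ignores quantity entirely, apart from validating it against the configured minimum and maximum. The sketch below is a hypothetical example, not the platform's actual implementation; the class, field, and method names are assumptions.

```python
# Hypothetical sketch of a "constant" price rule: the price never changes
# with quantity; only the min/max bounds are enforced. Names are illustrative.

from dataclasses import dataclass

@dataclass
class ConstantPricePlan:
    name: str
    min_quantity: int
    max_quantity: int
    price: float         # vendor price
    retail_price: float  # suggested retail price

    def quote(self, quantity: int) -> tuple[float, float]:
        """Return (price, retail_price); raise if quantity is out of bounds."""
        if not (self.min_quantity <= quantity <= self.max_quantity):
            raise ValueError(
                f"quantity must be between {self.min_quantity} and {self.max_quantity}"
            )
        # Constant: the returned prices do not depend on the quantity.
        return self.price, self.retail_price

# Example: whatever quantity is chosen (up to the maximum), the prices stay the same.
plan = ConstantPricePlan("Basic licence", 1, 100, price=9.0, retail_price=12.0)
print(plan.quote(1))   # (9.0, 12.0)
print(plan.quote(50))  # (9.0, 12.0)
```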
<urn:uuid:c07950c2-1b1a-446f-b914-f9a90ba63d8b>
CC-MAIN-2022-40
https://support.appxite.com/hc/en-us/articles/360012023560-What-is-the-constant-function-
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00628.warc.gz
en
0.834677
260
2.515625
3
Blog: Monday Project Management Blog

You’ve undoubtedly seen many bar graphs in your lifetime. They’re common in magazines, newspapers, academic material, websites, and just about any other medium where someone wants to visualize the relationship between two data sets. Chances are you understood what each graph’s data represented at a glance, which is exactly what makes the bar graph such a useful visualization tool. In this guide, we’ll take a deep dive into bar graphs, including the various types and how they work. We’ll also cover some best practices for creating your own. But first, let’s start with a basic definition.

What is a bar graph?

Sometimes referred to as a bar chart or column chart, a bar graph is a visual tool used to compare frequencies, counts, totals, or averages across different categories. The data in bar graphs can be anything from the number of students enrolled in a class to business earnings across quarters to yearly rainfall. In most bar graphs, one data category runs along the left side of the graph, from bottom to top, while the other runs along the bottom, from left to right. Each bar sits on one axis at the category it represents and stretches until it reaches the matching value on the other axis. Bar graphs are extremely useful for understanding the differences between two data sets at a glance.

The two axes of a bar graph are the y-axis, where data is plotted vertically, and the x-axis, where data is plotted horizontally. This is important to keep in mind, as the names of the different bar graph types can be a little confusing with respect to the axes.

There’s often some confusion between bar graphs and histograms. While they look very similar, they’re actually quite different.

The differences between a bar graph and a histogram

A histogram is another visual chart that uses bars to represent information on two axes. The key difference is that a histogram is a statistical graph: it displays continuous, non-discrete data grouped into ranges and visualizes how frequently values fall into each range, whereas a bar graph compares discrete categories. The easiest way to distinguish a bar graph from a histogram is to check whether you could move the bars around without changing the meaning of the data. If you can, you’re looking at a bar graph.

With an understanding of the fundamentals of bar graphs, let’s look at the different types of bar graphs and what you use them for.

The different types of bar graphs

Basic bar graphs are straightforward. But there are a few different types to choose from, and some are better than others depending on the data you’re visualizing.

Vertical and horizontal bar graphs

The most common bar graphs are vertical bar graphs. They’re great for comparing data categories such as age groups or profits. On a vertical bar graph, each bar is placed on the x-axis at the parameter it represents and grows upward to the corresponding value on the y-axis. If you want to visualize data spread across a timeline, a vertical bar graph is one of the best ways to do so. Where vertical bar graphs get troublesome is when you’re working with text-heavy data parameters: positioning the bars along the x-axis doesn’t leave much room for labeling, which is where a horizontal bar graph comes in handy. Horizontal bar graphs are simply vertical bar graphs rotated on their side.
By flipping the axes and having the bars grow from left to right, you leave more room for labels on the left side of the chart. As we look at the other kinds of bar graphs, keep in mind that you can present each one in either a vertical or a horizontal orientation. It really comes down to which is more visually appealing and easier for the viewer to understand.

Grouped bar graphs

Bar graphs compare parameters from two data sets. In some cases, though, you may want to add further parameters from one of those data sets, which is where grouped bar graphs come in handy. For example, if you have a vertical bar graph showing your company’s profits over the last four quarters, each bar represents a single quarter matched to its value on the left. If you wanted to visualize how two different products contribute to the overall profit, you could group two bars for each quarter to show how profits are doing across each quarter and for each product. In theory, you can break data sets down into multiple categories and add several grouped bars, but doing so makes the graph harder to understand.

Stacked bar graphs

Following a similar formula, stacked bar graphs let you split a single column into multiple parameters. In a stacked bar graph, each column is divided into segments with different colors and corresponding labels, pulling multiple parameters from a single data set. They’re good for big-picture views of data, since a large stacked bar chart can include quite a bit of information. Following the previous example of quarterly profits by product, if the company in question has several products, a stacked bar graph would be the perfect tool to reach for to see how each product contributes to the overall total.

With a solid understanding of bar graphs, the next step is knowing when to use them.

When should you use a bar graph?

With the right tools and data, bar graphs are easy to create and are powerful visualization tools for conveying meaningful, measurable information, whether to a teacher, a colleague, or a boss. If you want a powerful way to visualize the differences between a handful of parameters from two data sets, a bar graph is one of the best ways to do so. Here are some situations where a bar graph would serve you well:

- Correlation: If you want to visualize how two sets of seemingly disparate data correlate, you can check and present your findings with a bar graph.
- Understanding large data sets: If you’re working with a large amount of data in tabular form, visualizing the data can help clarify the bigger picture.
- Sortable comparisons: If you’re working with two sets of data where you need to rank parameters, the ability to rearrange and sort the columns and data points of a bar graph is visually powerful.

Best practices for creating great bar graphs

Creating bar graphs is straightforward, but remember that the goal is to communicate a complex idea in a way that’s simple to understand. Bar graphs should allow viewers to infer insights without too much mental overhead. These best practices will help get the job done:

- Column widths: Make sure your columns are wider than the spacing between them.
- Styling: Use colors to make distinctions and reinforce the idea of the graph rather than for purely aesthetic purposes.
- Keep bars simple: Many tools let you add quirky design elements or images to your bar graph’s columns. It’s best to keep things simple so they convey the overall message rather than distract from it.
- Choose the right bar graph: If you find that your information is crowded or hard to visualize, a different orientation or graph type is probably necessary.
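To make the quarterly-profits example above concrete, here is a short, hypothetical sketch of a grouped and a stacked bar graph using Python's matplotlib library. The choice of library, the profit figures, and the product names are assumptions added for illustration; they are not part of the original article.

```python
# Illustrative grouped and stacked bar graphs for quarterly profits by product.
# The numbers and product names are made up for demonstration purposes.

import numpy as np
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
product_a = [12, 15, 14, 18]   # hypothetical profit figures
product_b = [8, 9, 13, 11]

x = np.arange(len(quarters))   # positions of the quarters on the x-axis
width = 0.35                   # bar width (wider than the gap between bars)

fig, (grouped, stacked) = plt.subplots(1, 2, figsize=(10, 4))

# Grouped: two bars side by side for each quarter.
grouped.bar(x - width / 2, product_a, width, label="Product A")
grouped.bar(x + width / 2, product_b, width, label="Product B")
grouped.set_title("Grouped bar graph")

# Stacked: one bar per quarter, split into coloured segments.
stacked.bar(x, product_a, width, label="Product A")
stacked.bar(x, product_b, width, bottom=product_a, label="Product B")
stacked.set_title("Stacked bar graph")

for ax in (grouped, stacked):
    ax.set_xticks(x)
    ax.set_xticklabels(quarters)
    ax.set_ylabel("Profit")
    ax.legend()

plt.tight_layout()
plt.show()
```

Note how the grouped version makes it easy to compare the two products within a single quarter, while the stacked version emphasises the combined total for each quarter.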
<urn:uuid:b455fbcb-d857-4bce-b028-da7490b351e0>
CC-MAIN-2022-40
https://www.businessprocessincubator.com/content/bar-graph/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00628.warc.gz
en
0.909526
1,491
3.609375
4