Dataset fields: text (string, 234 to 589k chars), id (string, 47 chars), dump (62 classes), url (string, 16 to 734 chars), date (string, 20 chars), file_path (string, 109 to 155 chars), language (1 class), language_score (float64, 0.65 to 1), token_count (int64, 57 to 124k), score (float64, 2.52 to 4.91), int_score (int64, 3 to 5)
The figure reinforces the gaping disconnect between government organizations and schools — and the potential fallout from ransomware attacks. Ill-prepared organizations often pay millions of dollars to restore data and rebuild networks after a ransomware attack.
State and Local Government: Ransomware Research Perspectives
The irony: Nearly 80 percent of state and local IT leaders said they believe ransomware is an ongoing threat to their organizations and will not disappear any time soon, the survey indicated. Moreover:
- More than 90 percent of respondents with a plan are confident their organization could survive a ransomware attack, while only 56 percent of those without a plan share that confidence.
- 79 percent disagreed with the statement that ransomware will subside significantly over the next 12 to 18 months.
- Over 75 percent expressed confidence in their organization’s ability to prevent compromise via common attack vectors.
- At least 67 percent said they need to make additional investments to respond effectively to ransomware attacks.
- 31 percent have a completed incident response plan for ransomware, and 22 percent did not know if they had made such preparations.
Ransomware-as-a-service and various sophisticated technologies are making it easier than ever to execute ransomware attacks, Palo Alto Networks indicated. As such, ransomware attacks look poised to increase in severity and volume in the foreseeable future, and state, local and educational organizations must plan accordingly.
How Can State, Local and Educational Organizations Protect Against Ransomware Attacks?
State, local and educational IT officials cited providing employees with security for their home networks (41 percent) and hiring more IT or security staff (37 percent) as the top things they can do to protect against ransomware attacks, the survey showed. In addition, partnering with an MSSP provides a viable option for state, local and educational organizations to guard against ransomware attacks, the survey revealed.
<urn:uuid:f0ed9d0c-bb05-4447-9013-d9f4a4c54bb2>
CC-MAIN-2022-40
https://www.msspalert.com/cybersecurity-research/most-state-and-local-governments-still-lack-ransomware-cyberattack-response-plans/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00031.warc.gz
en
0.944611
368
2.578125
3
The spread of big data tools and technologies is not only altering the way government data is being analyzed, it is reshaping the data center itself. The big data trend is exerting its influence almost everywhere data is being mass produced, from the high-performance computing arena to the operations and planning departments of the nation’s big cities. Yet another area where big data is having a substantial impact has received less attention: in the data center itself, where technology managers say big data is gradually reshaping traditional data systems and practices. Big data refers to the problem of processing very large data sets that defy conventional data management and analysis technologies. The growth of these data sets, and the need to extract value from them, has compelled agencies to start using tools such as the Apache Foundation’s Hadoop distributed computing framework, columnar databases and other big data management solutions. The adoption of these technologies, in turn, is leading to a gradual restructuring of the data center, including the move to more converged IT infrastructures, alterations in data center traffic patterns and changes in the basic economics of storage. Agency interest in big data management is hardly surprising given the spectacular growth of data. Van Young, big data and cloud solutions strategist at HP, citing IDC figures, noted that the global data population is projected to reach 40 zettabytes (a zettabyte is 1 billion terabytes) by 2020, compared with 1.8 zettabytes in 2012. At the agency level, the struggle to manage data on that scale has already begun to tax conventional IT systems, and it is encouraging data center managers to adopt solutions that were considered exotic only a couple of years ago. “You need to deploy some newer technology,” said Young, who spoke recently at a Public Sector Partners Inc. big data conference.
Hadoop as change agent
Hadoop is among the more notable of the recent arrivals. The open source software framework includes the Hadoop Distributed File System (HDFS), which distributes large data sets across the servers in a Hadoop cluster. Another key Hadoop application, MapReduce, provides distributed data processing across the cluster. This structure keeps data and processing resources in close proximity within the cluster. A Hadoop cluster’s servers are typically interconnected via gigabit Ethernet or in some cases 10 gigabit Ethernet. All of these new technologies within the Hadoop architecture are leading the way to a more converged data center infrastructure. “Hadoop, when you think about it, is the combination of storage, servers and networking together — and the cluster communication in between,” said Young, who contrasted Hadoop’s approach to the previous practice of managing servers and storage in individual silos. “Hadoop changed the landscape of data center infrastructure,” Young said. “Converged infrastructure is really the key.” The impact of Hadoop, however, varies from agency to agency. Peter Guerra, principal in Booz Allen Hamilton’s Strategic Innovation Group, said many federal agencies are testing the Hadoop waters. Some agencies are operating Hadoop pilots alongside what their data centers normally run on a day-to-day basis.
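To make the MapReduce pattern concrete, here is a minimal word-count job written for Hadoop Streaming, which lets a cluster run ordinary scripts as the map and reduce steps. This is a generic illustration rather than anything from the agencies described above.

```python
#!/usr/bin/env python3
# mapper.py -- Hadoop Streaming runs this on the nodes that hold the input
# blocks (compute moves to the data); it emits "word<TAB>1" for each word.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- Hadoop sorts mapper output by key before the reduce step,
# so identical words arrive on consecutive lines and can be summed in one pass.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)
if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

The job would typically be launched with the hadoop-streaming jar, passing the two scripts as the mapper and reducer along with HDFS input and output paths; the exact jar location and flags vary by Hadoop version, so treat any invocation shown in vendor documentation as the authoritative one.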
Changes in data center ecosystem
But Hadoop has proven more transformative in other agencies, particularly those in the Department of Defense, where Guerra said some federal organizations are opting for Hadoop over traditional network-attached storage (NAS) and storage-area network (SAN) technologies. “We have seen clients go away from a NAS and SAN type architecture, favoring large Hadoop clusters instead for long-term data storage needs,” Guerra said, calling it a pattern that occurs among agencies that deploy Hadoop as enterprise technology in mission-critical areas. Storage isn’t the only aspect of the data center affected by the big data shift, however. Robert Wisnowski, a big data storage and cloud solutions strategist at HP, said big data is also changing traffic patterns in the data center. Hadoop nodes in a cluster communicate with one another in an east-west pattern instead of north-south, the latter a pattern that is more typical in traditional data centers. “So this is something to keep in mind and consider as you are looking at the impacts to your data center,” Wisnowski said. “I want to think about flatter, simpler types of data center fabrics to improve performance and reduce that latency and reduce the cost.” In other cases, however, big data systems can exploit data center upgrades that evolved independently of the big data trend. Rich Campbell, chief technologist at EMC Federal, said the majority of the company’s federal customers are transitioning from GigE to 10GigE. Those upgrades were not necessarily planned with big data in mind, he said, but the technology will be able to leverage the new networks just the same. Similarly, agencies that have virtualized servers offer a resource that big data can tap. “They don’t need more servers, they just use them in a different way,” he said. Yet another impact of big data technology on the data center: the potential to reduce cost. A traditional enterprise data warehouse, for instance, uses extract, transform and load (ETL) tools to integrate data from various sources and shuttle the data to a warehouse. Guerra said this process can prove complicated, time consuming and expensive. Hadoop, in contrast, allows for the integration of data prior to analytics. In some circumstances, this can reduce an agency’s use of data integration and ETL tools, which Guerra said saves on software licensing costs. The converged big data infrastructure also saves money. Wisnowski said the act of bringing together storage and servers lets IT organizations reduce costs. And converged systems, he said, do more in a smaller footprint, trimming power and cooling expenses.
<urn:uuid:4179bf34-d9f3-4ede-bf5c-905ad322b97a>
CC-MAIN-2022-40
https://gcn.com/cloud-infrastructure/2014/02/how-big-data-is-remaking-the-government-data-center/290174/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00031.warc.gz
en
0.931067
1,237
2.625
3
One of the big selling points of SD-WAN tools is their ability to use the Internet to deliver private-WAN levels of performance and reliability. Give each site connections to two or three Internet providers and you can run even demanding, performance-sensitive applications with confidence. Hence the amount of MPLS going away in the wake of SD-WAN deployments. (See Figure 1.) The typical use case here, though, is the one where the Internet can also do best: networks contained within a specific country. In such a network, inter-carrier connectivity will be optimal, paths will be many, and overall reliability quite high. Packet loss will be low, latency low, and though still variable, the variability will tend to be across a relatively narrow range.
Global Distance = Latency, Loss, and Variability
Narrow relative to what? In this case, narrow when compared to the range of variation on latency across global networks. Base latency increases with distance, inevitably of course, but the speed of light does not tell the whole story. The longer the distances involved, the greater the number of optical/electronic conversions, bumping up latency even further as well as steadily increasing cumulative error rates. And, the more numerous the carrier interconnects crossed, the worse: even more packets lost, more errors, and another place where variability in latency creeps in. A truly global Internet-based WAN will face innumerable challenges to delivering consistent high-speed performance thanks to all this complexity. In such a use case, the unpredictability of variation in latency as well as the greater range for the variation is likely to make the user experiences unpleasantly unreliable, especially for demanding and performance-sensitive applications.
Global Fix: Optimized Middle Miles
To fix the problem without simply reverting to a private WAN, one can seek to limit the use of public networks to the role they best fill: the ingress and egress, connecting a site to the world. But instead of having the Internet be the only path available to packets, you can also have a controlled, optimized, and consistent middle-mile network. Sites connect over the Internet to a point of presence (POP) that is “close” for Internet values of the term—just a few milliseconds away, basically, without too many hops. The POPs are interconnected with private links that bypass the complexity and unpredictability of the global Internet to deliver consistent and reliable performance across the bulk of the distance. Of course, they still also have the Internet available as backup connectivity! Given such a middle-mile optimization, even a globe-spanning SD-WAN can expect to deliver solid performance comparable to—but still at a lower cost than—a traditional private MPLS WAN.
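To put rough numbers on the distance effect, the sketch below estimates round-trip time from propagation delay in fiber plus a fixed penalty per carrier hop or optical/electronic conversion. The route lengths, hop counts, and per-hop penalty are illustrative assumptions, not measurements from any particular network.

```python
# Back-of-the-envelope latency estimate for a long-haul path.
# Assumptions (not from the article): light travels through fiber at roughly
# 200 km per millisecond, and each carrier hop or O-E-O conversion adds a
# small fixed penalty. The distances and hop counts below are made up.

SPEED_IN_FIBER_KM_PER_MS = 200.0

def estimate_rtt_ms(route_km: float, hops: int, per_hop_ms: float = 0.5) -> float:
    """Propagation delay both ways plus a fixed penalty per hop."""
    one_way = route_km / SPEED_IN_FIBER_KM_PER_MS + hops * per_hop_ms
    return 2 * one_way

# Fiber routes run longer than the great-circle distance, so 20,000 km is a
# plausible round figure for an intercontinental path.
print(f"Domestic path  (2,000 km, 5 hops):  {estimate_rtt_ms(2_000, 5):.1f} ms RTT")
print(f"Global path   (20,000 km, 20 hops): {estimate_rtt_ms(20_000, 20):.1f} ms RTT")
```

Even before loss and jitter are considered, the global path starts from a far higher latency floor, which is the gap the middle-mile approach tries to control.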
<urn:uuid:d88819db-4cad-435c-abeb-97aec2a3dd1e>
CC-MAIN-2022-40
https://www.catonetworks.com/blog/unstuck-in-the-middle-wan-latency-packet-loss-and-the-wide-wide-world-of-internet-wan/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00031.warc.gz
en
0.927678
565
2.546875
3
The importance of machine learning data
This article was originally published at Algorithmia’s website. The company was acquired by DataRobot in 2021. This article may not be entirely up-to-date or refer to products and offerings no longer in existence. Find out more about DataRobot MLOps here. Machine learning data analysis uses algorithms to continuously improve itself over time, but quality data is necessary for these models to operate efficiently.
Why is machine learning important?
Machine learning is a form of artificial intelligence (AI) that teaches computers to think in a similar way to humans: learning and improving upon past experiences. Almost any task that can be completed with a data-defined pattern or set of rules can be automated with machine learning. So, why is machine learning important? It allows companies to transform processes that were previously only possible for humans to perform—think responding to customer service calls, bookkeeping, and reviewing resumes for everyday businesses. Machine learning can also scale to handle larger problems and technical questions—think image detection for self-driving cars, predicting natural disaster locations and timelines, and understanding the potential interaction of drugs with medical conditions before clinical trials. That’s why machine learning is important.
Why is data important for machine learning?
We’ve covered the question ‘why is machine learning important,’ now we need to understand the role data plays. Machine learning data analysis uses algorithms to continuously improve itself over time, but quality data is necessary for these models to operate efficiently. To truly understand how machine learning works, you must also understand the data by which it operates. Today, we will be discussing what machine learning datasets are, the types of data needed for machine learning to be effective, and where engineers can find datasets to use in their own machine learning models.
What is a dataset in machine learning?
To understand what a dataset is, we must first discuss the components of a dataset. A single row of data is called an instance. Datasets are a collection of instances that all share a common attribute. Machine learning models will generally contain a few different datasets, each used to fulfill various roles in the system. For machine learning models to understand how to perform various actions, training datasets must first be fed into the machine learning algorithm, followed by validation datasets (or testing datasets) to ensure that the model is interpreting this data accurately. Once you feed these training and validation sets into the system, subsequent datasets can then be used to sculpt your machine learning model going forward. The more data you provide to the ML system, the faster that model can learn and improve.
What type of data does machine learning need?
Data can come in many forms, but machine learning models rely on four primary data types. These include numerical data, categorical data, time series data, and text data. Numerical data, or quantitative data, is any form of measurable data such as your height, weight, or the cost of your phone bill. You can determine if a set of data is numerical by attempting to average out the numbers or sort them in ascending or descending order. Exact or whole numbers (i.e., 26 students in a class) are considered discrete numbers, while those which fall into a given range (i.e., 3.6 percent interest rate) are considered continuous numbers.
While learning this type of data, keep in mind that numerical data is not tied to any specific point in time; it is simply raw numbers.
Categorical data is sorted by defining characteristics. This can include gender, social class, ethnicity, hometown, the industry you work in, or a variety of other labels. While learning this data type, keep in mind that it is non-numerical, meaning you are unable to add the values together, average them out, or sort them in any chronological order. Categorical data is great for grouping individuals or ideas that share similar attributes, helping your machine learning model streamline its data analysis.
Time series data
Time series data consists of data points that are indexed at specific points in time. More often than not, this data is collected at consistent intervals. Learning and utilizing time series data makes it easy to compare data from week to week, month to month, year to year, or according to any other time-based metric you desire. The distinct difference between time series data and numerical data is that time series data has established starting and ending points, while numerical data is simply a collection of numbers that aren’t rooted in particular time periods.
Text data is simply words, sentences, or paragraphs that can provide some level of insight to your machine learning models. Since these words can be difficult for models to interpret on their own, they are most often grouped together or analyzed using various methods such as word frequency, text classification, or sentiment analysis. (The short sketch below shows these four data types side by side in a data frame, along with a train/validation split.)
Where do engineers get datasets for machine learning?
There is an abundance of places you can find machine learning data, but we have compiled five of the most popular ML dataset resources to help get you started:
Google’s Dataset Search
Google released their Google Dataset Search Engine in September 2018. Use this tool to view datasets across a wide array of topics such as global temperatures, housing market information, or anything else that piques your interest. Once you enter your search, several applicable datasets will appear on the left side of your screen. Information will be included about each dataset’s date of publication, a description of the data, and a link to the data source. This is a popular ML dataset resource that can help you find unique machine learning data.
Microsoft Research Open Data
Microsoft is another technological leader who has created a database of free, curated datasets in the form of Microsoft Research Open Data. These datasets are available to the public and are used to “advance state-of-the-art research in areas such as natural language processing, computer vision, and domain specific sciences.” Download datasets from published research studies or copy them directly to a cloud-based Data Science Virtual Machine to enjoy reputable machine learning data.
Amazon Web Services (AWS) has grown to be one of the largest on-demand cloud computing platforms in the world. With so much data being stored on Amazon’s servers, a plethora of datasets have been made available to the public through AWS resources. These datasets are compiled into Amazon’s Registry of Open Data on AWS. Looking up datasets is straightforward, with a search function, dataset descriptions, and usage examples provided. This is one of the most popular ways to extract machine learning data.
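The sketch below is a minimal illustration of the four data types and of holding out a validation set. Every column name and value in it is made up, and it uses pandas and scikit-learn rather than any specific tool mentioned in this article.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy dataset illustrating the four data types discussed above;
# the column names and values are hypothetical.
df = pd.DataFrame({
    "monthly_bill":  [42.50, 63.10, 29.99, 55.00],           # numerical (continuous)
    "support_calls": [0, 3, 1, 2],                            # numerical (discrete)
    "industry":      ["retail", "health", "retail", "edu"],   # categorical
    "signup_date":   pd.to_datetime(
        ["2021-01-05", "2021-02-11", "2021-03-20", "2021-04-02"]),  # time series
    "last_ticket":   ["slow login", "billing error",
                      "password reset", "great service"],     # text
})

# Each row is an instance; the frame as a whole is a dataset.
# Hold part of it out as a validation set before training a model.
train_df, valid_df = train_test_split(df, test_size=0.25, random_state=0)
print(len(train_df), "training instances,", len(valid_df), "validation instances")
```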
UCI Machine Learning Repository
The University of California, School of Information and Computer Science, provides a large amount of information to the public through its UCI Machine Learning Repository database. This database is prime for machine learning data as it includes nearly 500 datasets, domain theories, and data generators which are used for “the empirical analysis of machine learning algorithms.” Not only does this make searching easy, but UCI also classifies each dataset by the type of machine learning problem, simplifying the process even further.
The United States Government has released several datasets for public use. As another great avenue for machine learning data, these datasets can be used for conducting research, creating data visualizations, developing web/mobile applications, and more. The US Government database can be found at Data.gov and contains information pertaining to industries such as education, ecosystems, agriculture, and public safety, among others. Many countries offer similar databases and most are fairly easy to find.
Why is machine learning popular?
Machine learning is a booming technology because it benefits every type of business across every industry. The applications are limitless. From healthcare to financial services, transportation to cyber security, and marketing to government, machine learning can help every type of business adapt and move forward in an agile manner. You might be good at sifting through a massive organized spreadsheet and identifying a pattern, but thanks to machine learning and artificial intelligence, algorithms can examine much larger datasets and understand connective patterns even faster than any human, or any human-created spreadsheet function, ever could. Machine learning allows businesses to collect insights quickly and efficiently, speeding the time to business value. That’s why machine learning is important for every organization. Machine learning also takes the guesswork out of decisions. While you may be able to make assumptions based on data averages from spreadsheets or databases, machine learning algorithms can analyze massive volumes of data to provide exhaustive insights from a comprehensive picture. Put simply: machine learning allows for higher accuracy outputs across an ever-growing number of inputs.
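As a quick starting point, several classic UCI-origin datasets ship with scikit-learn, so you can load one without any download step; the example below uses the well-known Iris dataset. Loading data from Google Dataset Search, AWS, or Data.gov would instead involve downloading a file and reading it with pandas.

```python
from sklearn.datasets import load_iris

# The Iris dataset originally comes from the UCI Machine Learning Repository
# and is bundled with scikit-learn, so no network access is required.
iris = load_iris(as_frame=True)
frame = iris.frame

print(frame.shape)          # (150, 5): 150 instances, 4 features plus a target column
print(frame.head(3))        # a quick look at the first few instances
print(iris.target_names)    # the three species a model would learn to predict
```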
<urn:uuid:7a6c0003-84ee-4619-aa65-264f68bbac12>
CC-MAIN-2022-40
https://www.datarobot.com/blog/the-importance-of-machine-learning-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00031.warc.gz
en
0.930008
1,774
3.25
3
Understanding Network Latency in Ethernet Switches
In today’s network age, network latency is not a new term for most people. We often hear about network latency, but what is it exactly? What causes it? How can it be minimized? This post will explore the factors that cause network latency and explain how to reduce network latency in Ethernet switches.
Figure 1: Network Latency
What Is Network Latency in Ethernet Switches?
Generally, latency is a measure of delay. Network latency is the delay incurred as data or a request goes from the source to the destination over a network. Latency is one of the elements that contribute to network speed. Ideally, network latency would be as close to zero as possible, which can hardly be realized. Network switches are critical elements of network infrastructure, and their latency is one component of the overall network latency. Sometimes when a data packet passes through a device, there is a delay while your switches or routers decide where to send it next. Though each individual pause is brief, they can add up. Therefore, high-bandwidth, low-latency switches have become the trend in network deployments for higher performance.
What Causes Network Latency?
There may be many reasons for network latency. Possible contributors include the following factors:
- The time it takes for a packet to physically travel from its source to a destination.
- Processing delay at routers and switches, since each gateway needs to spend time checking and changing packet headers, which adds to the time it takes an Ethernet packet to traverse an Ethernet switch.
- Anti-virus and similar security processes, which need time to finish message recombination and dismantling before sending.
- Storage delays when packets suffer from storage or disk access delays at intermediate devices like switches and bridges.
- Software bugs from the user’s side.
- The transmission medium itself, whether fiber optics or coaxial cable, which takes some time to carry a packet from a source to a destination. Delays occur even when packets travel from one node to another at the speed of light.
How to Measure Network Latency in Ethernet Switches?
As we can see from the last chapter, switch latency is one of the key components that contribute to network latency. So how can we measure it? Switch latency is measured from port to port on an Ethernet switch. It may be reported in a variety of ways, depending on the switching paradigm that the switch employs. It can be measured with different tools and methods in Ethernet switches, such as IETF RFC 2544, netperf, or Ping Pong. RFC 2544 provides an industry-accepted method of measuring latency of store-and-forward devices. Netperf can test latency with request/response tests (TCP_RR and UDP_RR). Ping Pong is a method for measuring latency in a high-performance computing cluster, which measures the round-trip time of remote procedure calls (RPCs) sent through the message passing interface (MPI).
How to Minimize Network Latency With Ethernet Switches?
To reduce network latency with Ethernet switches, there are a few different techniques, as described below.
Expand the Needed Capacity
To reduce latency and collisions, it is vital to provide the needed capacity using your Ethernet switch. Check your switch and see if it can provide you with the feature to expand network capacity. First, a fast engine is what you need.
Ethernet switches with zero packet loss help the network gain better performance. LACP is a standard feature that helps to build better network performance by trunking ports. FS S3910 series switches support LACP to increase bandwidth to improve network performance.
Use VLANs to Segment Network
As a traditional flat network can easily overload switch links, Ethernet switches with VLAN features can send traffic only to the location where it should go. Layer 2 and Layer 3 Ethernet switches provide many ways to segment traffic into VLANs, such as by port, dynamic VLAN assignment, protocol, MAC address and other criteria.
Use Cut-through Switching Technology
Cut-through switching is a method for packet-switching systems that aims to reduce network latency to a minimum. With this technique, an Ethernet switch starts forwarding a packet before the entire packet has been received, normally as soon as the destination address is processed, which reduces latency through the switch. Note, however, that it cannot operate when sending traffic between ports of different speeds, such as from a slower port to a faster one; in that case the switch must fall back to store-and-forward operation. (The short sketch below compares the per-hop delay of the two approaches.)
Above are some tips to minimize network latency with Ethernet switches. There are many low-latency Ethernet switches on the market which help to achieve better network performance. To minimize network latency, however, it is fundamentally necessary to not only focus on the switches that comprise the network but to also comprehend the latency and latency variation of the network as a system. This article has hopefully helped answer the questions of what network latency in Ethernet switches is and how to minimize it. Network latency cannot be eliminated, but it can be kept as low as possible, and the approaches above are available options for doing so with Ethernet switches.
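To illustrate the store-and-forward versus cut-through trade-off mentioned above, the sketch below computes the per-hop forwarding delay each approach adds for a full-size Ethernet frame. The frame size, header size, and link speeds are typical assumed values, and the calculation deliberately ignores the switch's internal fabric and queuing delay, which exist in both modes.

```python
# Rough comparison of per-hop forwarding delay for store-and-forward vs
# cut-through switching. Frame sizes and link speeds are typical values,
# not figures from the article.

def store_and_forward_us(frame_bytes: int, link_gbps: float) -> float:
    """The switch must receive the whole frame before it can send it on."""
    return frame_bytes * 8 / (link_gbps * 1000)   # microseconds

def cut_through_us(link_gbps: float, header_bytes: int = 14) -> float:
    """Forwarding can start once the Ethernet header (destination MAC) is in."""
    return header_bytes * 8 / (link_gbps * 1000)

for gbps in (1, 10, 100):
    print(f"{gbps:>3} Gbps link: store-and-forward adds "
          f"{store_and_forward_us(1500, gbps):6.2f} us per hop, "
          f"cut-through about {cut_through_us(gbps):4.2f} us")
```

At 1 Gbps a store-and-forward hop adds roughly 12 microseconds of serialization delay for a 1,500-byte frame, which is exactly the kind of per-hop pause the article describes adding up across a network.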
<urn:uuid:900a4345-26d9-4f70-89d7-286d9bfee69a>
CC-MAIN-2022-40
https://community.fs.com/blog/understanding-network-latency-in-ethernet-switches.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00031.warc.gz
en
0.932798
1,075
2.9375
3
Most conversations about the energy transition to a low carbon energy future include renewable technologies such as photovoltaics (PV) and wind energy, energy storage, and electric vehicles (EVs). But this major change to the power grid represents more than just a migration to cleaner energy technologies. It reveals fundamental forces changing how energy is produced, delivered, and consumed by billions of consumers and businesses in a modern global economy. This white paper from Guidehouse explores dual-purpose microgrids’ critical role in offering long-duration backup power at customer sites and balancing wholesale wind and other renewables on the larger grid to guarantee resilient electricity.
<urn:uuid:0ac909d7-4995-4104-8fb0-9f3320bd49b3>
CC-MAIN-2022-40
https://www.datacenterknowledge.com/business/guidehouse-insights-enhancing-resiliency-energy-transition
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00232.warc.gz
en
0.891179
135
3.125
3
A year into its mission, the rover is scaling a Martian mountain and telling us Earthlings all about it. NASA's Curiosity rover takes "selfies" as it probes the Martian surface. (NASA photo) NASA's $2.5 billion Curiosity rover captured the world's attention in August 2012 when it touched down on Mars using an unprecedented sky crane landing technique described by engineers as "seven minutes of terror." One year later, the rover is scaling a Martian mountain and telling us Earthlings all about it. Imbued with personality and humor – check out the rover's Twitter account – Curiosity is unlike any NASA creation before it, and with deserved pomp and circumstance celebrated its one-year anniversary atop the Martian surface on Aug. 5 by singing "Happy Birthday" to itself. Of course, it wasn't enough to play a recorded rendition – Curiosity did its own version by vibrating its sample analysis unit at varying frequencies. Yet Curiosity, which takes glamorous "selfies" with its 12 engineering cameras, is about substance over style. It has delivered scientists more than 180 gigabits of data and 70,000 Martian images and fired more than 75,000 laser shots to investigate the composition of Martian targets. Along the way, Curiosity has driven more than one mile, adeptly navigating Mars' iron-rich and rocky surface with only a few hiccups along the way, including a brief computer glitch that halted it in March. Yet Curiosity has already fulfilled its main mission objective: To determine whether Mars was once hospitable for life. Spoiler alert: It was, and Curiosity analyzed rock samples in Mars' Yellowknife Bay to prove it. "Successes of our Curiosity -- that dramatic touchdown a year ago and the science findings since then -- advance us toward further exploration, including sending humans to an asteroid and Mars," NASA Administrator Charles Bolden said in a statement. "Wheel tracks now will lead to boot prints later." The rover is measuring radiation and weather on Mars' surface – data that will be used in plotting human missions to Mars in the coming decades. Curiosity has been on the move over the past month, traveling about 750 yards toward Mount Sharp. Curiosity's next goal is investigating the lower levels of the three-mile high mountain. Mars may have once been hospitable billions of years ago, but its environment is anything but friendly to man or machine now. Yet NASA engineers believe Curiosity still has at least one more year of top-notch research to come before any trouble might set in. "We now know Mars offered favorable conditions for microbial life billions of years ago," said the mission's project scientist, John Grotzinger of the California Institute of Technology in Pasadena. "It has been gratifying to succeed, but that has also whetted our appetites to learn more. We hope those enticing layers at Mount Sharp will preserve a broad diversity of other environmental conditions that could have affected habitability."
<urn:uuid:de037dd0-b920-4829-b798-e7fb8b55d61a>
CC-MAIN-2022-40
https://fcw.com/2013/08/nasa-curiosity-cruising-one-year-after-mars-landing/212546/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00232.warc.gz
en
0.945476
610
3.234375
3
Phase 1 of DARPA’s Fast Lightweight Autonomy (FLA) program concluded recently following a series of obstacle-course flight tests in central Florida. Over four days, three teams of DARPA-supported researchers huddled under shade tents in the sweltering Florida sun, fine-tuning their sensor-laden quadcopter unmanned aerial vehicles (UAVs) during the intervals between increasingly difficult runs. DARPA’s FLA program is advancing technology to enable small unmanned quadcopters to fly autonomously through cluttered buildings and obstacle-strewn environments at fast speeds (up to 20 meters per second, or 45 mph) using onboard cameras and sensors as “eyes” and smart algorithms to self-navigate. (Learn More, courtesy of DARPA and YouTube) Potential applications for the technology include: - Safely and quickly scanning for threats inside a building before military teams enter - Searching for a downed pilot in a heavily forested area or jungle in hostile territory where overhead imagery can’t see through the tree canopy - Locating survivors following earthquakes or other disasters when entering a damaged structure could be unsafe “The goal of FLA is to develop advanced algorithms to allow unmanned air or ground vehicles to operate without the guidance of a human tele-operator, GPS, or any datalinks going to or coming from the vehicle,” said JC Ledé, the DARPA FLA program manager. “Most people don’t realize how dependent current UAVs are on either a remote pilot, GPS, or both.” “Small, low-cost unmanned aircraft rely heavily on tele-operators and GPS not only for knowing the vehicle’s position precisely, but also for correcting errors in the estimated altitude and velocity of the air vehicle, without which the vehicle wouldn’t know for very long if it’s flying straight and level or in a steep turn.” “In FLA, the aircraft has to figure all of that out on its own with sufficient accuracy to avoid obstacles and complete its mission.” The FLA program is focused on developing a new class of algorithms that enables UAVs to operate in GPS-denied or GPS-unavailable environments—like indoors, underground, or intentionally jammed—without a human tele-operator. Under the FLA program, the only human input required is the target or objective for the UAV to search for—which could be in the form of a digital photograph uploaded to the onboard computer before flight—as well as the estimated direction and distance to the target. A basic map or satellite picture of the area, if available, could also be uploaded. After the operator gives the launch command, the vehicle must navigate its way to the objective with no other knowledge of the terrain or environment, autonomously maneuvering around uncharted obstacles in its way and finding alternative pathways as needed. The recent four days of testing combined elements from three previous flight experiments that together tested the teams’ algorithms’ abilities and robustness to real-world conditions. 
Some of those conditions include: - Quickly adjusting from bright sunshine to the dark building interiors - Sensing and avoiding trees with dangling masses of Spanish moss - Navigating a simple maze, or - Traversing long distances over feature-deprived areas On the final day, the aircraft had to perform the following tasks sequentially: - Fly through a thickly wooded area and across a bright aircraft parking apron - Find the open door to a dark hangar - Maneuver around walls and obstacles erected inside the hangar - Locate a red chemical barrel as the target, and - Fly back to its starting point, completely on their own. Each team showed strengths and weaknesses as they faced the varied courses, depending on the sensors they used and the ways their respective algorithms tackled navigation in unfamiliar environments. Some teams’ UAVs were stronger in maneuvering indoors around obstacles, while others excelled at flying outdoors through trees or across open spaces. The test runs had the combined feel of part air show, part live-fire exercise, with a palpable competitive vibe between the teams. “The range is hot, the range is hot, you are cleared to launch,” crackled the voice of the test director over the walkie-talkies audible in the adjacent team tents, giving a green light to launch an attempt. Sitting under his own shaded canopy, the director followed the UAV’s flight on two video monitors in front of him, which showed views from multiple cameras placed along the course. Metal safety screens, which resembled giant easels, protected the camera operators on the course, as well as teams and course officials, from any rogue UAVs. Once a UAV was out of visual range, team members followed the progress on monitors. The first successful foray from sunlight through a doorway and into darkness brought a cheer. “It’s in the hangar!” came a gleeful cry over the walkie-talkies. And when a UAV maneuvered successfully around the interior obstacles and reached the targeted red chemical barrel, an official goal observer took to the microphone intoning: “Goal, Goal, Goal!”, indicating the UAV had reached the objective as verified by all three “goal cameras” pointed at the barrel. The final stretch involved the UAV flying back to the starting point and landing. To be sure, there were sighs of despair as well. Sometimes a quadcopter would reach a point along the course and, inexplicably, hover as if dazed or confused about what to do next. After a pause, it would fly back to the starting point, having been programmed to do so if it didn’t know what to do next. “I think it’s basically completely lost,” one researcher lamented after his team’s vehicle got close to the target in a clearing in the woods, but then took a wrong turn into another clearing and just kept going further away from the barrel. In that case, a safety pilot took over and landed the UAV so it wouldn’t be damaged, using the emergency RF link that had been installed for these experiments in the event a vehicle headed out of bounds or began flying erratically at high speed toward an object—which happened on several occasions. Undaunted by such glitches, teams would return to their tents, make some tweaks to the algorithms on laptops, upload them to the bird, and then launch again for another try. And no, not every landing was soft. A few times the quadcopter was flying so fast, the safety pilot didn’t have time to make the split-second decision to take over. 
More than once that resulted in a wince-evoking “crunch”—the hallmark acoustical signature of a UAV smacking squarely into a tree or side of the hangar. Back to the team’s shade tent for some adjustments to the algorithm before uploading to a replacement craft. Each team had several UAVs on standby in their tents, and like pit crews at a raceway would quickly replace the broken bird with a fresh one to get in as many attempts as possible during their allotted 20-minute slot for each task. During each day’s morning and afternoon obstacle-course runs, at least one team was able to fly the mission autonomously, including a return to the starting point or a location close to the start—to the applause of all researchers and the test evaluators sitting under their canopies. Success was largely a matter of superior programming. “FLA is not aimed at developing new sensor technology or to solve the autonomous navigation and obstacle avoidance challenges by adding more and more computing power,” Ledé said. “The key elements in this effort, which make it challenging, are the requirements to use inexpensive inertial measurement units and off-the-shelf quadcopters with limited weight capacity.” “This puts the program emphasis on creating novel algorithms that work at high speed in real time with relatively low-power, small single board computers similar to a smart phone.” Each team brought unique technologies and approaches for outfitting their UAVs. To hear a little about their approaches watch the video below: (Courtesy of DARPA and YouTube) “I was impressed with the capabilities the teams achieved in Phase 1,” Ledé said. “We’re looking forward to Phase 2 to further refine and build on the valuable lessons we’ve learned. We’ve still got quite a bit of work to do to enable full autonomy for the wide-ranging scenarios we tested, but I think the algorithms we’re developing could soon be used to augment existing GPS-dependent UAVs for some applications. I think that kind of synergy between GPS-reliant systems and our new FLA capabilities could be very powerful in the relatively near future.” To learn more, visit www.darpa.mil.
<urn:uuid:6e72fdb8-0c18-43e5-90f0-d5645ed8e4c3>
CC-MAIN-2022-40
https://americansecuritytoday.com/smart-quadcopters-find-way-without-humans-gps-multi-video/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00232.warc.gz
en
0.94908
1,953
2.8125
3
Cybersecurity in the education space is different from the mid-sized nonprofit IT networks that Community IT typically supports. As we’ve supported education nonprofits we’ve learned a few cybersecurity best practices for charter schools that can help you keep your technology both accessible and safe. In a nonprofit organization, we know that strong cybersecurity policies include multi-factor authentication, strong passwords, and a commitment to ongoing cybersecurity training for nonprofit staff members. We expect adult users in an office environment to participate collectively in strong cybersecurity to protect the organization overall, an organization in which they have a financial and emotional stake. And lapses in cybersecurity generally are either honest mistakes or the work of a disgruntled employee. Cyber attacks are also expected, but the types of attacks we see are usually generic, where the organization is not a specific target and the hack is one of opportunity. In a school, many of those basics are turned on their head.
- Students need easy access; make the access too difficult and participation will drop in ways fundamentally different from an employee.
- There is an incentive to simplify account access due to volume. A school may support hundreds or thousands of students; make the cybersecurity too individualized and the IT support (and expenses) will mount. But this can mean security suffers as a “standard” login or initial password is easily exploited.
- An office-based organization rarely has the same level of complexity of account access as a school; students, parents, teachers, and administrators all need various connections and levels of secure access. Privacy concerns can conflict with convenience.
- Any institution with children’s accounts is a specific security target in ways that adult organizations are not; in addition to “generic” opportunistic financial hacking, cybersecurity has to guard against non-financial threats from without and within the online environment, such as sexual predators and bullies.
- Education institutions may be quite sophisticated or very unsophisticated in their technology approaches and comfort level; online education presents additional challenges to novice users, for students, parents, and staff.
- Schools and their students often face funding challenges and financial pressures that are different from office environments; education technology policies have to budget for loss and theft of devices in different ways than other organizations, and long term budget planning can conflict with politics or suffer from frequent changes in decision makers.
Cybersecurity Best Practices for Charter Schools
Clearly, cybersecurity protections are crucial for education technology. Here is the collection of cybersecurity best practices that we’ve developed over years of supporting a range of charter schools.
Equipment and Set Up and Pricing
You can have both Office 365 and G Suite in your environment. Accounts should be originated in Azure AD and then provisioned into G Suite and set up for single-sign-on. Starting in Azure AD and going to G Suite is currently “free”. You can go from G Suite to Azure AD, but it requires a per user fee. Microsoft Education has really great discounts, offers a 15-1 licensing benefit from faculty to student accounts and is really affordable.
If your environment uses Microsoft Windows be sure to enroll the computers in AutoPilot (requires Intune license) when they are purchased and then use Intune or another device management tool to deploy your standard configurations and applications. If your environment uses Chromebooks be sure to get the Enterprise Chrome OS so that you have better management capabilities. Have a process in place for reviewing and vetting new apps. It’s likely that needs will change during the semester, so have a process in place for teachers to request and manage applications and websites that are specific to their class. This can avoid scenarios when insecure or poorly designed solutions are used for classroom instruction, opening the classroom up to outside access or exploitation.
Student Account Access
Using a Single Sign On (SSO) Dashboard like Clever is a great way to unify logins to the wide range of educational apps that are required. It helps to reduce the number of username and password combinations that your learners need to remember. Good passwords are important! Having young learners means their accounts must be secure and individualized passwords are essential to prevent intrusions. Passwords should be at least 12 characters long and be memorable. We like to use the generator at XKPasswd – Secure Memorable Passwords to generate them (a minimal sketch of this style of word-based generator appears below). Student passwords can be known by teachers, but they need to be stored securely. That means no spreadsheets with the passwords and definitely no use of the same password for all students! Many Student Information Systems can securely store a student’s password. For students we usually turn off the ability for them to reset their own password. Often having a student reset their password can create more problems than the potential security benefit that it provides. If students or administrators need a student to reset their password, it requires help desk assistance and that improves overall tech help communication, too. Have a clear process in place and communicate it clearly so resetting a password doesn’t add frustration into the mix. Staff and Teachers should protect their accounts with Multi-Factor Authentication (MFA). They have access to so many systems with sensitive data that providing this extra layer of protection is critical. Turn off services that aren’t needed. If students aren’t required to use email in their education context, then that service should be deactivated for them. Only enable and provide access to the systems that are required for education.
Online Meeting Security
Follow good online meeting practices. Many schools are using Zoom to run their online classrooms. It’s important to restrict access to those virtual classrooms to only authorized accounts associated with the school. Implementing this restriction will reduce the risk of Zoombombing attacks. For general meetings that are open to the public ensure that you have implemented a waiting room feature. You should also turn off video sharing and screensharing, and mute attendee audio by default. We have more tips at Nonprofit Cybersecurity Tips for Zoom. You could also use Facebook LIVE or Youtube Live for broadcasting content to a large audience. Bottom Line on Cybersecurity Best Practices for Charter Schools: the security issues are different, but the need for security is, if anything, more important. At Community IT, we continue to learn and share our education tech best practices with our community.
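As a rough illustration of the word-based passwords described above, here is a minimal XKCD-style passphrase generator. It is not the XKPasswd tool itself, and the tiny word list is only for demonstration; a real deployment would draw from a dictionary of several thousand words to get adequate entropy.

```python
import secrets

# A minimal passphrase generator in the spirit of the XKPasswd tool mentioned
# above (this is not that tool). The short word list is just for illustration;
# load a large dictionary file for real use.
WORDS = [
    "maple", "rocket", "harbor", "pencil", "violet", "canyon",
    "lantern", "meadow", "copper", "island", "thunder", "bishop",
]

def passphrase(n_words: int = 4, separator: str = "-") -> str:
    """Pick words with a cryptographically secure RNG and append a number."""
    chosen = [secrets.choice(WORDS).capitalize() for _ in range(n_words)]
    return separator.join(chosen) + separator + str(secrets.randbelow(100))

print(passphrase())   # e.g. "Copper-Meadow-Rocket-Violet-57"
```

Four words plus a number easily clears the 12-character guideline while staying memorable for young learners.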
The opportunities for positive technology experiences and mission delivery in the nonprofit education sector are always accompanied by very real security concerns. As remote learning has evolved so rapidly recently it is not surprising that institutions are having trouble keeping up. Having a trusted technology partner to help navigate vendors and help desk support is essential to a successful implementation of any new education technology platform. Having an assessment and strategic plan/budget in place is crucial to implementing change at speed. Being able to call on existing relationships is also vital. Community IT is relatively rare among MSPs in serving nonprofits exclusively; we understand how nonprofits work and have learned how charter schools operate through our experiences with several clients over the years. Community IT also is unique in utilizing an IT Business Manager as a central point of contact for our clients and tech teams. Our Business Managers bring deep understanding of nonprofit culture to change management and project implementation. In our recent Remote Learning Implementation Case Study we walk through the role of the IT Business Manager, who utilized his own experience as a parent of a student learning remotely, allowing him to identify the challenges the learners at this charter school client would face, and successfully manage the roll out of a four-year strategic plan in less than three months. Ready to reduce your nonprofit cybersecurity risk? At Community IT Innovators, we’ve found that many nonprofit organizations deal with more cybersecurity risks than they should have to. As a result, cyber damages are all too common. Whether at a third party vendor or a phishing or ransomware attack on your own organization, you need to be prepared for cybersecurity risks and understand cybersecurity best practices for charter schools. Our process is different. Our techs are nonprofit cybersecurity experts. We constantly research and evaluate new technology solutions to ensure that you get cutting-edge solutions that are tailored to keep your organization secure. We published our Nonprofit Cybersecurity: a Guide for 2020 to help our community understand the issues. And we ensure you get the highest value possible by bringing 25 years of expertise in exclusively serving nonprofits to bear in your environment. We regularly present webinars at Community IT about cybersecurity issues, to help our community learn practical safety measures we can all take. If you’re ready to gain peace of mind about your cybersecurity, let’s talk.
<urn:uuid:91df61b6-9214-40c2-82ad-bc226871a2b3>
CC-MAIN-2022-40
https://communityit.com/cybersecurity-best-practices-for-charter-schools/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00232.warc.gz
en
0.942513
1,751
2.65625
3
In the United States, more than 10 million people a year contract appendicitis, with in excess of 50,000 cases resulting in death. For more than 300,000, the treatment prescribed won’t be antibiotics or any other medication. Instead, the appendix will be eliminated. It has long been believed that the appendix is a ‘vestigial’ or functionless organ. Surgical removal is seen as an effective treatment, providing permanent immunity from future problems. If there weren’t so many inherent risks associated with surgery, appendix removal would likely be a common precautionary procedure. No appendix? No risk of appendicitis, and no downside with losing a seemingly useless body part. In the IT world, just as the appendix is not serving any useful purpose in the body, many default services provided with modern IT platforms are equally superfluous to requirements.
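One hedged way to act on that idea on a Linux host is to compare the services actually running against an allowlist of services you know the system needs, then review or disable the rest. The allowlist below is hypothetical, and the check assumes a systemd-based system with the systemctl command available.

```python
import subprocess

# Minimal sketch of the hardening idea above: flag running services that are
# not on an allowlist you built from your own baseline. The entries below are
# hypothetical examples, not a recommended configuration.
ALLOWED = {"sshd.service", "chronyd.service", "rsyslog.service"}

result = subprocess.run(
    ["systemctl", "list-units", "--type=service", "--state=running",
     "--no-legend", "--plain"],
    capture_output=True, text=True, check=True,
)

# The first column of each output line is the unit name.
running = {line.split()[0] for line in result.stdout.splitlines() if line.strip()}
for service in sorted(running - ALLOWED):
    print(f"review or disable: {service}")
```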
<urn:uuid:55ac5305-c144-47dc-90c7-32c10062e508>
CC-MAIN-2022-40
https://www.newnettechnologies.com/is-system-hardening-like-an-appendectomy.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00232.warc.gz
en
0.935221
179
3.078125
3
It’s World Password Day! Cybersecurity professionals initiated this day in 2013 to help us all remember best practices for password security. Passwords have a long history of being the first line of defense in protecting company resources. In honor of this day however, we can’t help but ask the question: are passwords still doing their job effectively in modern IT environments? The password is dead. Long live the password. As much as we’d like the above statement to be true, the password has not truly expired. The death of the password was first predicted almost two decades ago at the RSA Security Conference in 2004. Passwords were deemed unable to “meet the challenge” of securing critical resources and extinction seemed inevitable. This was 17 years ago. And, the death of passwords has been talked about at every security conference since. Fast forward to today and, despite the rapid advancement of technology, we are still relying on passwords for security. For example, just last year, hackers breached Colonial Pipeline Co. with a single compromised password and walked away with $4.4 million after shutting down the largest fuel pipeline in the U.S. and creating gas shortages across the East Coast. Why on earth are passwords still so frequently used as the only authentication factor? Although the password is alive (and stays that way because of convenience), its ability to protect your organization on its own is dead. We have entered the era of multi-factor authentication (MFA). The Problem with Passwords Alone Passwords are not inherently bad but, unlike other authentication factors, they are uniquely subject to human behavior. This is where the risk lies. We are all imperfect humans with unique priorities in life that are not always related to cybersecurity. Every individual is tasked with the management of dozens upon dozens of passwords across work and personal accounts throughout their lifetime. Keeping track of all of these passwords can be fatiguing. What’s the path of least resistance for users? Creating passwords that are easy to remember. Reusing the same or similar passwords for multiple accounts. Sharing passwords with friends and family for account access (e.g., Netflix). Sharing passwords with colleagues for work resource access. Resetting forgotten passwords via email. Passwords are such a normalized part of our lives, the average person doesn’t think twice about these behaviors. The following statistics about typical password use are concerning: - 42% of people prioritize having a password that’s easy to remember over one that’s very secure - 61% of people use the same password or a variation for both work and personal accounts - 91% of people understand the risk of reusing passwords yet 66% admit to doing it anyway From the IT admin side, the concept of user management with a password made sense for many years because it is a remarkably easy approach to controlling who has access to which IT resources. Unfortunately, the ease for end users and admins also equates to ease for individuals with malintent. Gaining access to restricted IT resources becomes as simple as obtaining a password, hence the massive issue with phishing and spear phishing. Today’s malicious players have various strategies they leverage to acquire credentials. The most common include: - Credential stuffing. Hackers use lists of stolen credentials in bulk automated login attempts. This strategy is largely successful due to password reuse across accounts. - Password spray. 
Hackers acquire lists of users and try the same common password. This strategy is aided by the use of easy-to-remember passwords such as qwerty123. - Phishing. Hackers attempt to gain credentials directly from the user, often by impersonating password resets or account issues via email. This strategy plays on our natural human emotions. Spear phishing emerged to be an even more targeted approach to securing credentials. Millions and millions of passwords and accounts are probed every day, and as we have seen with thousands of major data breaches, hackers have been having success. Still to this day, credentials are involved in 61% of data breaches. Let’s face it, passwords are no longer enough to secure your organization. It’s Time to Implement Multi-Factor Authentication MFA, also known as 2FA or two-factor authentication, layers another authentication factor on top of the password during the login process and greatly enhances security. With MFA enabled, hackers need more than just a password to access an account. They also need access to the second factor, such as a randomly generated security code on the user’s smartphone or a physical hardware key. Since most hacking is done remotely, MFA can make an account virtually unhackable. In fact, the additional barrier provided by MFA can prevent 99.9% of account takeover attempts. It’s no surprise then that the Center for Internet Security recommends MFA as the first choice for all authentication purposes. Password policies are a secondary security measure. No matter the size of the organization, all IT teams should consider implementing MFA on top of passwords to protect company resources effectively. When MFA is tied to a user’s system and their critical accounts such as email, the chances of an identity compromise decrease dramatically. MFA is a significant step forward in strengthening security and protecting your business from the costs of a data breach. Today’s cloud identity management platforms are taking steps to implement MFA for applications and systems. By unifying device management and single sign-on (SSO) into a directory service, JumpCloud’s cloud directory platform is leading the way with our integrated authenticator app JumpCloud Protect. IT admins are able to layer the type of MFA that works best for their organization across all of the devices, applications, networks, and infrastructure they need to secure, while maintaining fluid user workflows. To gain a better understanding of how our platform can support your MFA implementation, try JumpCloud Free today for up to 10 users and 10 devices.
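The second factor in most MFA rollouts is a time-based one-time password (TOTP), the mechanism behind common authenticator apps. The sketch below uses the third-party pyotp library to show enrollment and verification in a generic way; it is not JumpCloud Protect itself, and the account name and issuer are placeholders.

```python
import pyotp   # third-party library: pip install pyotp

# Done once at enrollment: generate a shared secret and hand it to the user's
# authenticator app (usually rendered as a QR code from this URI).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI for the authenticator app:",
      totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# Done at every login, after the password check succeeds:
code_from_user = input("Enter the 6-digit code from your app: ")
if totp.verify(code_from_user, valid_window=1):   # allow one step of clock drift
    print("Second factor accepted.")
else:
    print("Invalid or expired code.")
```

Because the code changes every 30 seconds and never travels with the password, a stolen or reused password alone is no longer enough to take over the account.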
<urn:uuid:d39af7d4-105f-4eeb-ab0e-e31c62e63f7a>
CC-MAIN-2022-40
https://jumpcloud.com/blog/password-dead-era-mfa
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00432.warc.gz
en
0.946545
1,223
2.796875
3
If you have never used municipal WiFi but you have noticed people out in public connecting to the Internet, chances are they are using a municipal WiFi connection to access the Internet. A municipal WiFi connection is basically a wireless hotspot that is owned and operated by a municipality. The WiFi hotspots are accessible by the public with both free and paid access depending upon the location and the purpose. If you are accessing municipal WiFi in a public library or other similar venue chances are you will not pay a fee to connect to the Internet. If the municipal WiFi is offered to a specific neighborhood or corporate complex then you may pay a fee to establish Internet connectivity. In terms of public transportation venues such as airport terminals, public transit hubs, and downtown shopping areas, access to municipal WiFi may vary when it comes to free or paid access. Although municipal WiFi is very convenient and allows you to connect with any device, there are risks associated with using this type of connection. However, if you are aware of some of the ‘ins and outs’ of using municipal WiFi you can safely use this connection whenever you are in a public area. An Overview of Municipal WiFi Security Hazards When you access municipal WiFi it is important to have a fundamental understanding of how hackers operate when they exploit a public network. Understanding some of the techniques they use enables you to take the necessary precautions to protect yourself and your identity while enjoying the convenience of municipal WiFi. Municipal WiFi is Not Encrypted: When you access municipal WiFi it is typically an open access network that does not provide any security for transmission of data. Encryption is the process of password protecting all of your data. When the data is transmitted over an Internet connection it is scrambled to avoid interception by hackers. When it arrives at its destination the only way to unscramble the data so it is legible is to use an encryption key that contains your password. Networks that provide encryption automatically offer this process when you communicate with others using a secure network. Hackers are aware that in general, municipal WiFi is not secure which means any data that you send such as text or email messages, credit card numbers, or other personal information is openly visible to any hacker who can deploy simple tools for eavesdropping on you. Access Point Impostors: Most PCs and mobile devices automatically detect the nearest wireless access point. This presents a playground of opportunity for hackers who are capable of creating an impostor access point. The access point looks similar to the municipal WiFi hotspot only it uses crafty methods to entice you to choose the false access point. Once you have connected to the false access point the hacker has open access to any computing tasks you conduct from there. It is also important to note that with access point impostors hackers are capable of creating web pages that look like login pages you use to access your accounts. Instead, when you login your username and password is forwarded to a remote server where it can be used to commit criminal acts. Distance Exploits: When it comes to municipal WiFi it is not necessary for the hacker to be in the general area to see who is connected to the network. Since there is no firewall protection when you access the Internet hackers can see people who are active using simple tools to view the network from a remote location. 
Once they see who is on the network they probe for vulnerabilities and when they find one, they launch an attack. These are just a few risks you should be aware of when accessing municipal WiFi. The privacy threats are many on a public network. Since a municipal WiFi network is so vast and thousands of people are using the ID associated with it, implementing security applications and encryption measures proves to be a monumental task which can also be very costly. So the only way to enjoy the benefits of a public network is to be proactive in protecting yourself while using municipal WiFi. Here are few tips: Obtain Network Identification Before you logon make sure you know the identification name and number for the municipal WiFi network. This will help to prevent you from encountering an access point impostor. It does not provide 100 percent protection but it will make you aware enough to use caution before using a municipal WiFi access point. Use a VPN or Proxy If you have access to a VPN (Virtual Private Network) this can provide you with additional protection when accessing the Internet in a public venue. If this is not possible there are proxy services you can use to hide your identity. These services are offered online and allow you to use a different server other than the municipal WiFi network to access the Internet. Encrypt Your Files Although municipal WiFi may not offer encryption you can take the necessary steps to protect yourself by encrypting your own files on your PC or mobile device. Many operating systems provide built-in encryption that you can activate or you can obtain encryption software to protect your files. Use Comprehensive Security Simply deploying an antivirus program will not provide you with the protection you need to safely navigate a municipal WiFi network. Instead, make sure your device has comprehensive security protection installed such as a firewall, anti-malware, email scanning, and other protection elements that are included with a complete security program. Municipal WiFi providers sometimes generate an extra stream of income by tracking your activity while you surf the Internet. They accomplish this by inserting tracking cookies on your PC among other methods. Your location, where you do business, and other information is gathered and sent back to advertisers in exchange for a specified amount of money. The advertisers then bombard you with ads that are relevant to your location as well as the places where you shop or do business. We are not saying that it is a bad practice to use municipal WiFi however, it is important to take proactive steps to protect your privacy for the reasons described in this article.
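As a concrete illustration of the "encrypt your files" advice, the sketch below uses the widely available Python cryptography package to encrypt a file before it ever travels over an untrusted hotspot. The file names and key location are assumptions for the example; in practice the key should live in a proper secret store or OS keychain rather than on disk next to the data.

```python
# File-encryption sketch for devices that roam onto public WiFi (illustrative only).
# Assumes the 'cryptography' package: pip install cryptography
from pathlib import Path
from cryptography.fernet import Fernet

def generate_key(key_path: Path) -> bytes:
    """Create a symmetric key once and save it somewhere safer than the data itself."""
    key = Fernet.generate_key()
    key_path.write_bytes(key)
    return key

def encrypt_file(plain_path: Path, key: bytes) -> Path:
    """Encrypt a file with authenticated encryption (AES plus an integrity check)."""
    token = Fernet(key).encrypt(plain_path.read_bytes())
    out_path = plain_path.with_name(plain_path.name + ".enc")
    out_path.write_bytes(token)
    return out_path

def decrypt_file(enc_path: Path, key: bytes) -> bytes:
    """Decrypt and verify integrity; raises InvalidToken if the file was tampered with."""
    return Fernet(key).decrypt(enc_path.read_bytes())

if __name__ == "__main__":
    key = generate_key(Path("filekey.key"))              # hypothetical key location
    protected = encrypt_file(Path("tax_return.pdf"), key)  # hypothetical file
    print(f"Safe to carry or send over municipal WiFi: {protected}")
```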
<urn:uuid:2204f889-c20f-464f-a1c3-090fdab1d256>
CC-MAIN-2022-40
https://internet-access-guide.com/tips-on-using-municipal-wifi/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00432.warc.gz
en
0.928403
1,178
2.953125
3
What Are Cookies
Cookies are small text files (typically made up of letters and numbers) placed in the memory of your browser or device when you visit a website or view a message. Cookies allow a website to recognize a particular device or browser. You can find out more information about cookies at www.allaboutcookies.org.
Cookies in Use
| Category | Cookie | Provider | Purpose | Expiry |
|---|---|---|---|---|
| Necessary | OptanonConsent | lifars.com | This cookie is set by the cookie compliance solution from OneTrust. It stores information about the categories of cookies the site uses and whether visitors have given or withdrawn consent for the use of each category. This enables site owners to prevent cookies in each category from being set in the user's browser when consent is not given. It contains no information that can identify the site visitor. | 1 year |
| Necessary | OptanonAlertBoxClosed | lifars.com | This cookie is set by the cookie compliance solution from OneTrust. It is set after visitors have seen a cookie information notice and in some cases only when they actively close the notice down. It enables the website not to show the message more than once to a user. The cookie contains no personal information. | 1 year |
| Necessary | APISID, SSID, SID, SAPISID, HSID | google.com | These cookies are used by Google reCAPTCHA. | 1 year |
| Functionality | lang | cdn.syndication.twimg.com, syndication.twimg.com | Remembers the user's selected language version of a website. | Session |
| Functionality | MCPopupSubscribed | lifars.com | This cookie is associated with the MailChimp sign-up form which appears on the website. The cookie is used to remember whether a user has subscribed to the newsletter, and to ensure that the user only sees the message once. | 3 months |
| Functionality | MCPopupClosed | lifars.com | The cookie is associated with the MailChimp sign-up form which appears on the website. The cookie is used to remember whether a user has closed the popup message, and to ensure that the user only sees the message once. | 3 months |
| Functionality | SID, HSID, demographics, VISITOR_INFO1_LIVE, PREF, APISID, GPS, SSID, LOGIN_INFO, YSC, SAPISID | youtube.com | These cookies are used to enable YouTube embeds. | 3 months |
| Performance | _hjIncludedInSample | lifars.com | Determines whether the visitor is included in the sample which is used to generate Heatmaps, Funnels, Recordings, etc. | Session |
| Targeting or Advertising | ads/ga-audiences | google.com | Pixel tracker that is used by Google AdWords to re-engage visitors that are likely to convert to customers based on the visitor's online behaviour across websites. | Session |
| Targeting or Advertising | ads/user-lists/# | google.com | Used by Google when a user visits a page on a website containing a remarketing tag. Google puts a cookie on the user's device and adds the cookie to a 'user list'. This is simply a collection of visitor cookies generated by one (or more) remarketing tags. | Session |
| Targeting or Advertising | test_cookie | doubleclick.net | Tests for cookie-setting permissions for Google DoubleClick. | 1 day |
| Targeting or Advertising | IDE | doubleclick.net | Used by Google DoubleClick to register and report the website user's actions after viewing or clicking one of the advertiser's ads, with the purpose of measuring the efficacy of an ad and to present targeted ads to the user. | 1 year |
Choices about Cookies
You can review your current LIFARS cookie settings at any time. You can configure your browser to accept all cookies, reject all cookies, or notify you when a cookie is set.
The following link provides instructions for adjusting cookie preferences for common browsers: If your browser is not listed on the site above, check your browser’s “Help” menu to learn how to change your cookie preferences.
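The OptanonConsent cookie described above is what allows the site to honor each visitor's category choices. The snippet below is a simplified, hypothetical sketch of how a server might parse such a consent cookie and decide whether optional (Performance or Targeting) cookies may be set; the real OneTrust encoding differs and is not reproduced here.

```python
# Hypothetical consent-gating sketch; the actual OptanonConsent format is more complex.
from urllib.parse import parse_qs, unquote

OPTIONAL_CATEGORIES = {"performance", "targeting"}

def parse_consent_cookie(raw_value: str) -> dict:
    """Assume a simple 'groups=necessary:1,performance:0,targeting:1' style payload."""
    fields = parse_qs(unquote(raw_value))
    groups = fields.get("groups", [""])[0]
    consent = {}
    for entry in groups.split(","):
        if ":" in entry:
            name, flag = entry.split(":", 1)
            consent[name.strip().lower()] = flag.strip() == "1"
    return consent

def may_set_cookie(category: str, consent: dict) -> bool:
    """Strictly necessary cookies are always allowed; others need an explicit opt-in."""
    if category.lower() not in OPTIONAL_CATEGORIES:
        return True
    return consent.get(category.lower(), False)

if __name__ == "__main__":
    cookie = "groups=necessary:1,performance:0,targeting:1"   # illustrative value
    consent = parse_consent_cookie(cookie)
    for cat in ("necessary", "performance", "targeting"):
        print(cat, "->", "set" if may_set_cookie(cat, consent) else "skip")
```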
<urn:uuid:00afa031-0d49-487c-ac87-2b2ece13b400>
CC-MAIN-2022-40
https://www.lifars.com/cookie-policy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00432.warc.gz
en
0.822358
927
2.71875
3
A LOT of code has been written – enough for LOT to deserve caps. By DARPA's estimate, it is in the order of hundreds of billions of lines of open-source code, and I am probably safe in conjecturing that there is a LOT more proprietary code. And just like how history repeats itself, software repeats itself, in some form or the other. In a spirit similar to history, mistakes in software are also often repeated. For example, people have tried to parse HTML using regular expressions again and again, prompting one of the funniest stackoverflow answers I have seen. The business of a static analysis tool is premised on developers making the same kind of mistakes and vulnerabilities, and the fact that programming without errors remains a hard problem. You don't have to take my word that software is repeated. Gabel and Su studied how unique really is software. They looked at 420 million lines of source code, and asked this interesting question: if you were writing a new program, and you could only copy paste from existing programs, how far could you get? They found software to be heavily redundant at smaller block sizes (size of the copy pastes) of 1-7 lines, and less redundant with increasing block sizes. Developers copy-pasting code is quite common. Stacksort is a parody mimicking the cynical view of the software development process these days: it looks up stackoverflow for sorting routines and tests them until something seems to work. While it was meant to be a comic-based joke, together with the availability of large amounts of code, it inspires the following question: can the vast amount of human knowledge amassed in the form of code be exploited to improve the quality, correctness, security and experience of programming? The MUSE Program DARPA's initiative called MUSE does exactly that. MUSE has received a lot of sensational coverage (Engadget, Computer World) in the press, and various views have been expressed on what this program seeks to do. Newsweek questioned if computer programming is a dying art, The New Yorker questioned the need to learn coding, Popular Science called it autocomplete for programmers, Wired and Engineering asked if computers would start writing their own code. For the MUSE program, GrammaTech has joined forces with researchers at Rice Univerity, UT Austin, and University of Wisconsin-Madison to develop PLINY. The vision of PLINY is to be the next-generation software engineering tool that learns from the plethora of code that I mentioned above. PLINY can be thought of as a guide to the developer, letting developers take more of a supervisory role in programming. PLINY is aimed to help in constructing, debugging, verifying, maintaining, and repairing code. To enable this vision, everything from superficial syntax to deep semantic facts about vast amounts of code must be learned, potentially having a major impact on how future code will be written. The MUSE program has been compared to auto-complete (for example, when Google search automatically prompts you what you want to search for based on what you have typed so far) and auto-correct (for example, when your text messenger automatically decides to correct your spelling). Such systems for for natural languages have had limited success, providing enough fodder for Internet ridicule. But, in a large-scale study, Devanbu found that programs are approximately 8-32 times more predictable than English text! After getting over an initial surprise, I realized that the numbers make sense. 
Let's say you randomly took paragraphs from different books in a large library, and randomly took pieces of code from different projects. It is likely that the code pieces would have more commonality than the English text because code has a lot more structure, and developers use common idioms more often than not. This is an argument towards MUSE — irrespective of the degree of auto-correction or auto-completion that can be achieved for coding, there is vastly untapped potential in existing code corpus, that can at a minimum improve the state of current programming practices. I once had a professor who was terrible at spelling, but wrote excellent, persuasive papers. He said that being able to not worry about exact spellings, and relying on spell checkers/correctors, freed him to write at the speed of his thinking, allowing him to work quicker, and iterate more. Perhaps we can do the same for programming, enabling developers to increase productivity by letting them think at a higher level: searching, navigating, selecting from recommendations, and providing hints to the development system. The developers would still be in charge, using their domain knowledge and expertise, and creative understanding of the problem scenario. For aMUSEment, I pondered about why we keep re-implementing/repeating code. I came up with the following: - We do not know that something was already implemented. - The requirements were slightly different: the existing implementation is not generic enough. - Efficiency reasons: the existing implementation is not specialized enough. - The existing version is not maintainable, we don't understand how it works. - The preconditions to use a piece of code is unclear. - We tried using existing code and it did not work. - Not invented here principle. Maybe with the exception of the final point, I think the rest can all benefit from novel solutions that look at existing corpus of code, learn and reason using them, and provide inputs to the developer to do their job more productively. In addition to being a challenging research problem, MUSE also poses other practical problems, like developer adoption. One of the biggest impediments to adoption of such tools is when the developer doesn't understand a particular suggestion or a correction. This comes up a lot when using static analysis tools too — analyses use deep local and global reasoning, using approximations to find bugs. In order to understand why an analysis thinks there is a particular bug, a developer needs to reconstruct the analysis's reasoning, at least partially, which is non-trivial. The more time it takes the developer to understand it, the more likely he will discard the analysis results. GrammaTech has a lot experience in disseminating such analysis reasoning information (for example, by highlighting paths when followed could cause the bug), and is doing more advanced research in this area. In the case of MUSE, this issue is aggravated — the reasoning for a suggestion could be based on a large code corpus, together with intricate learning and probabilistic reasoning. All in all, we have our hands full with hard problems to solve. - There is a lot of code available, that we can look at, learn from, and build tools that drastically improve the code development process. - If you are a software engineer, NO, you are not going to be obsolete, at least not in the near future. - Gabel, M., & Su, Z. A study of the uniqueness of source code. In FSE 2010. - Devanbu, P. New Initiative: The Naturalness of Software. 
In ICSE NIER Track 2015.
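The Gabel and Su result cited above can be approximated on any local corpus with a few lines of scripting: hash every sliding window of k source lines and count how many windows also occur elsewhere. The sketch below is a rough, illustrative measurement, not the paper's methodology or anything from the PLINY project; the corpus path, the *.py filter, and the normalization choices are assumptions.

```python
# Rough estimate of line-block redundancy in a code corpus (illustrative only).
import hashlib
from collections import Counter
from pathlib import Path

def normalized_lines(path: Path) -> list[str]:
    """Strip whitespace and skip blanks so trivial formatting differences don't count."""
    text = path.read_text(errors="ignore")
    return [ln.strip() for ln in text.splitlines() if ln.strip()]

def block_hashes(lines: list[str], k: int) -> list[str]:
    """Hash every window of k consecutive lines."""
    return [hashlib.sha1("\n".join(lines[i:i + k]).encode()).hexdigest()
            for i in range(len(lines) - k + 1)]

def redundancy(corpus_dir: str, k: int = 6) -> float:
    """Fraction of k-line blocks that appear more than once across the corpus."""
    counts = Counter()
    for path in Path(corpus_dir).rglob("*.py"):   # hypothetical corpus of Python files
        counts.update(block_hashes(normalized_lines(path), k))
    total = sum(counts.values())
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / total if total else 0.0

if __name__ == "__main__":
    for k in (3, 6, 20):
        print(f"{k}-line blocks repeated: {redundancy('./corpus', k):.1%}")
```

As the block size grows, the repeated fraction should fall sharply, mirroring the pattern the study reports for its 420 million lines of code.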
<urn:uuid:d1888310-2831-4fb1-b3f9-1962d5727b99>
CC-MAIN-2022-40
https://resources.grammatech.com/improve-software-quality-and-robustness/history-repeats-itself-so-does-your-software-3
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00432.warc.gz
en
0.948577
1,470
2.6875
3
Exploring IoT Connectivity Options All IoT deployments require hardware, software and wired or wireless connectivity and most require some level of services to implement. Wired networks have a long-standing tradition of meeting the demands of Internet of Things (IoT). Ethernet and fieldbuses have been the default technology options for exchanging factories’ time-critical data since the 1980s, said Jeroen Hoebeke, an associate professor at IDLab, an Imec research group at Ghent University/University of Antwerp. Yet, while they are robust, wired networks are costly and involve a high degree of complexity. Cables need to be run to every machine, sensor and actuator, he said. In this article, we explore some of the options for IoT connectivity. Wired vs. Wireless Connectivity for IoT Devices As such, wireless connectivity has been a driving force in the proliferation of IoT deployments, said Sandra Wendelken, senior research analyst, mobile services and software at IDC. However, each IoT wireless connectivity option brings inherent advantages and disadvantages, said Adam Benson, chief technology officer of Inpixon. The tradeoffs involve factors such as range (i.e., the distance the signals travel), power consumption (i.e., hard-wired power versus battery-powered and battery life), bit rate (i.e., how much data can be sent and how fast), resistance to interference (i.e., some technologies handle signal reflection and absorption better than others), location accuracy (how precise a fixed or moving device can be pinpointed), real-time versus near real-time (e.g., this second versus a few seconds ago) and price, Benson said. Since the majority of IoT devices use wireless connectivity, it is then a question of what range of wireless is required for the application, said Robin Duke-Woolley, CEO, Beecham Research Ltd. “If only limited coverage is required, such as in a hospital ward, then Wi-Fi and other short range wireless technologies are appropriate,” Duke-Woolley said. “If a wider coverage is needed, it is then a question of the data rate required for the application and the reliability versus the cost.” LoRa and Sigfox connectivity are used for low-data-rate applications, while 4G and 5G broadband are used for high data rates, Duke-Woolley said. Mobile networks are also better for assets that move, such as trucks. Narrowband IoT (NB-IoT) and LTE-M (aka CatM1) are cellular alternatives to LoRa in particular. In a remote area, satellite may be the only option but currently is more expensive than other connectivity types, he said. However, the choice of IoT wireless connectivity will depend on the objective, Benson said. And sometimes a combination of technologies will suit the needs best. Wendelken said that the IoT wireless connectivity category can be broadly split into two types: short-range connectivity and long-range connectivity. Short-Range IoT Connectivity Options Short-range wireless networks have limited coverage areas and generally use mesh networking technologies, including ZigBee, Z-Wave and Thread; radio frequency identification (RFID); Wi-Fi and Bluetooth. Each technology has benefits and drawbacks depending on the application, according to Wendelken. - ZigBee, Z-Wave, Thread. Mesh networks have short range, high data rates and low power to relay sensor data over multiple nodes generally in a mesh topology, Wendelken said. The ZigBee IEEE 802.15.4 standard can complement Wi-Fi and is generally used for home sensor networks. 
Z-Wave, also generally used in smart home applications, and Thread are additional mesh networking protocols. Thread is also a Wi-Fi complement and has some applications in the enterprise IoT segment for monitoring controls, machines and valves, according to Wendelken. Mesh networks are deployed in industrial applications that require high data throughput, but the lack of range can be a problem, adding to network complexity. Short-range connectivity is more typically used for smart home and personal area networks for consumer applications, although mesh networks are used in some smart city applications, Wendelken said. Thread is a short-range technology sometimes used in the enterprise IoT segment, she said. The technology is now used to track livestock with individual nodes on animals and low-power sensors forming the mesh network. Additional Thread use cases include control modules for LED lighting, solar farms, manufacturing machine heat and humidity monitoring, and thermostatic and HVAC controls. - RFID sends short data bursts from an RFID tag to an RFID reader. The technology has had strong uptake in the retail sector to track goods and shipments. Data travels just a short distance but the technology helps optimize product stock planning and supply chain management, Wendelken said. - Bluetooth is not used extensively in industrial IoT settings because of limitations in coverage and power consumption, Wendelken said. Bluetooth Low Energy (BLE) offers short range, but it can communicate moderate amounts of information reliably and with little power consumption, Benson said. BLE also typically uses 2.4 GHz, similar to some Wi-Fi channels but is more accurate than Wi-Fi at positioning but still not particularly precise, he said. “That makes it a good choice for connecting devices such as smartwatches and headphones and for certain proximity services,” he said. “BLE can also be used in a wall- or ceiling-mounted ‘beacon’ that transmits its unique identifier to nearby smartphone apps or IoT devices to help the user navigate an indoor building or to trigger location-specific content (e.g., to serve a museum patron relevant information about the painting they’re near).” - Ultra-wideband (UWB). Ultra-wideband has been available for some time, but the technology has only recently been added to smartphones, cars and tracking tags, according to Benson. The common feature is an ability to more precisely position a person or thing down to centimeter level. Centimeters and milliseconds matter in use cases such as finding an employee quickly in an emergency, executing an evacuation, mustering, identifying crowding, locating unescorted visitors or avoiding collisions (e.g., person vs forklift). UWB-enabled real-time location systems can deliver visibility, safety and productivity for corporate enterprises, manufacturing, warehousing, logistics, health care and other industries. UWB implementations may require a higher initial investment that BLE or Wi-Fi. However, costs continue to decrease and some experts anticipate UWB will develop into the premier standard for short-range communication and localization, Benson said. - Wi-Fi. Wi-Fi offers reasonably long range (hundreds of feet) and speeds capable of supporting large numbers of web surfers and even streaming content, e.g., music, movies, Benson said. It’s also well-suited to communicate to downstream, fixed IoT devices, such as thermostats. 
“Wi-Fi’s downsides are that it requires a fair amount of power, so access points need to be hard-wired to power,” Benson said. “And for location context the ability to use Wi-Fi signals can be good for larger objects but may not provide the precision required for other positioning applications.” Wi-Fi is not used heavily in industrial settings but is often used in enterprise use cases, according to Wendelken. Both technologies are prevalent in consumer applications, such as personal area networks, wearables and smart home applications. “As soon as there’s a use case that requires a very high quality of service and a very low latency, Wi-Fi immediately becomes a problem,” said Arun Santhanam, vice president, Capgemini. Long-Range IoT Connectivity Options Long-range networks are generally composed of cellular networks, including 2G through 5G technologies, and low-power wide area networks (LPWAN) in licensed forms, such as narrowband IoT (NBIoT) and LTE-M, along with unlicensed LoRa, SigFox and myThings connectivity technologies, according to Wendelken. - Cellular-based IoT is generally used for fleet management, telematics, industrial and consumer use cases as well as connected cars, Wendelken said. Early cellular networks using 2G and 3G technology often track mobile assets, such as vehicles that don’t require large data throughput, she said. Many legacy 2G and 3G IoT connections involve automotive services and consumer applications, according to Wendelken. “One challenge is that tier 1 mobile operators have sunset plans for their 2G and 3G networks, which means enterprise customers must transfer the IoT devices to 4G and 5G networks more prominently in use,” Wendelken said. “5G networks are expected to offer much higher bandwidth and data speeds and lower latency than previous cellular generations.” As 5G networks continue to roll out and become more dense, more public safety and smart city use cases, especially those involving video, will continue to emerge, she said. 5G IoT connectivity will also find use cases in augmented reality/virtual reality, robotics, video analytics, drones and massive sensor and device connections across numerous vertical markets. - NBIoT, LTE-M. NBIoT is suited to simple IoT applications, such as monitoring machines in a factory and tracking the location and transport conditions, such as the temperature of sensitive raw materials bound for the factory, according to Wendelken. NBIoT can also be used for smart metering, facility management services, property alarms, and smart city infrastructure applications, such as streetlamps, she said. LTE-M is best suited for low-density sensors, automated meter reading and asset tracking. Hybrid asset tracking applications could include a short-range connection for the real-time location system, with LTE-M as the backhaul connectivity to reduce costs. Mobile operators also offer NBIoT and LTE-M connectivity for IoT deployments. LTE-M offers data rates of 4 megabits per second while NBIoT offers 127 kilobits per second data rates in a 200 KHz band and is best for stationary assets, Wendelken said. LTE-M and NBIoT are secure, operator-managed in licensed spectrum, and designed for IoT applications that are low cost and use low data rates, she said. Many IoT deployments using 2G/3G networks will likely be transitioned to one of the two technologies, which co-exist with 4G and 5G cellular networks, and will transition well to 5G LPWANs, according to Wendelken. 
- LoRa, Sigfox and myThings: Other LPWAN networks use technologies such as LoRa, Sigfox and myThings that rely on unlicensed spectrum. LoRaWAN is a protocol based on LoRa technology developed by the LoRa Alliance. LoRaWAN has a low data rate and long-range IoT communications in the license-free industrial, scientific and medical band, Wendelken said. “The unlicensed LPWAN technologies also provide myriad use cases,” she said. “LoRaWAN and Sigfox are used for precision agriculture, building monitoring, smart city applications, remote equipment monitoring, and asset tracking. MyThings is generally used for occupancy sensing, indoor environmental quality monitoring, environmental monitoring and programmable logic controller integration.” Sigfox is software based, and the network and computing complexity is managed in the cloud rather than on the devices. Sigfox operates in 200 KHz of unlicensed spectrum to exchange radio messages 100 Hz wide and transferred at 100 or 600 bits per second data rate, depending on the region, Wendelken said. “The myThings connectivity system is built for complex industrial and commercial sensor networks . . . and the technology offers a data rate of 512 bits per second,” she said. - Chirp spread spectrum (CSS). Chirp is a proven and reliable wireless technology well suited for industrial environments because it is low power, long range and robust against interference, Benson said. Chirp’s low-power characteristics make it possible to configure tags to transmit their data bursts no more frequently than needed, particularly when leveraging inertial-measurement unit data (from an accelerometer, etc.), which extends battery life. “Its long-range capabilities translate into less infrastructure required, making it more affordable than some alternative technologies,” Benson said. “And the robustness of the signal means data transmission integrity is high even in challenging radio frequency environments with a non-line of sight requirements or an abundance of reflection and absorption.” This consistency of communication is especially imperative in safety and security use cases where location determination needs to be accurate and in real time, he said. - Low-earth orbit satellites (LEO). Companies have been using large, high-altitude, geostationary satellites to connect remote areas to the outside world for a while, according to Deloitte. While these satellites have served a purpose, they lag fiber and cable-based Internet in terms of reliability and responsiveness and can be expensive. Alternatively, clustered low-earth orbit satellites are the better choice to connect these remote locations to the rest of the world’s LEO satellites are an exciting opportunity and they’re going to be most powerful and most popular with consumer-packaged goods manufacturers and manufacturers of other types of consumer products, said Eric Goodness, research vice president at Gartner Inc. There have been a number of consumer-packaged goods manufacturers that have invested in some of the LEO deployments and they’re going to be used for some rural areas where there is not a lot of opportunity, he said. “I definitely believe that LEO satellites are going to be part of a multimodal connectivity plan for enterprises,” Goodness said. 
“The thing with emerging connectivity providers like LEO satellites or 5G, there’s just been so much hype and ramp up ahead of actual deployment that I just think there’s a little bit of marketing fatigue out in the marketplace.” When investigating IoT connectivity options, there is no one-size-fits-all solution, Hoebeke said. “It all depends on the actual IoT use-case that needs to be supported and the underlying trade-offs that have to be made in terms of network coverage, capacity, latency, reliability, energy usage, etc.,” he said.
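Because "it all depends on the actual IoT use-case," teams often start with a simple requirements matrix before piloting anything. The sketch below is a toy scoring helper that encodes rough, indicative characteristics of a few of the options discussed above; the figures are illustrative assumptions (apart from the NB-IoT and LTE-M data rates quoted earlier), not vendor specifications, and any real selection should be validated against current datasheets and coverage maps.

```python
# Toy connectivity-selection helper; all figures are rough assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    max_range_km: float      # indicative reach
    max_kbps: float          # indicative data rate
    battery_friendly: bool   # suitable for multi-year battery operation
    mobile_assets: bool      # works well for assets that move across sites

CANDIDATES = [
    Option("BLE",      0.1,      2_000, True,  False),
    Option("Wi-Fi",    0.3,    500_000, False, False),
    Option("LoRaWAN", 10.0,         50, True,  False),
    Option("NB-IoT",  10.0,        127, True,  False),
    Option("LTE-M",   10.0,      4_000, True,  True),
    Option("5G",       5.0, 10_000_000, False, True),
]

def shortlist(range_km: float, kbps: float, battery: bool, mobile: bool) -> list[str]:
    """Return options that meet or exceed every stated requirement."""
    return [o.name for o in CANDIDATES
            if o.max_range_km >= range_km
            and o.max_kbps >= kbps
            and (o.battery_friendly or not battery)
            and (o.mobile_assets or not mobile)]

if __name__ == "__main__":
    # Example: battery-powered trackers on moving trucks, low data rate, wide area.
    print(shortlist(range_km=5, kbps=10, battery=True, mobile=True))  # -> ['LTE-M']
```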
<urn:uuid:f2283b5b-a750-41e3-945c-f06ba4d653df>
CC-MAIN-2022-40
https://www.iotworldtoday.com/2021/09/13/exploring-iot-connectivity-options/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00632.warc.gz
en
0.927218
3,054
2.515625
3
LIME, the acronym for local interpretable model-agnostic explanations, is a technique that approximates any black-box machine learning model with a local, interpretable model to explain each individual prediction. The idea stems from a 2016 paper [1] in which the authors perturb the original data points, feed them into the black-box model, and then observe the corresponding outputs. The method then weights those new data points as a function of their proximity to the original point. Ultimately, it fits a surrogate model such as linear regression on the dataset with variations using those sample weights. Each original data point can then be explained with the newly trained explanation model. More precisely, the explanation for a data point x is the model g that minimizes the locality-aware loss L(f, g, Πx), which measures how unfaithfully g approximates the model to be explained, f, in its vicinity Πx, while keeping the model complexity, denoted Ω(g), low. Formally, the explanation is ξ(x) = argmin over g in G of L(f, g, Πx) + Ω(g). LIME therefore trades off model fidelity against model complexity. For humans to trust AI systems, it is essential for models to be explainable to users. AI interpretability reveals what is happening within these systems and helps identify potential issues such as information leakage, model bias, robustness, and causality. LIME offers a generic framework to uncover black boxes and provides the "why" behind AI-generated predictions or recommendations. The C3 AI Application Platform leverages LIME in two interpretability frameworks integrated in ML Studio: ELI5 (Explain Like I'm 5) and SHAP (Shapley Additive exPlanations). Both techniques can be configured on ML Pipelines, C3 AI's low-code, lightweight interface for configuring multi-step machine learning models. Data scientists use these techniques during the development stage to ensure models are fair, unbiased, and robust; C3 AI's customers use them during the production stage to spell out additional insights and facilitate user adoption.
[1] Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin, "Why Should I Trust You?: Explaining the Predictions of Any Classifier"
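The sketch below implements the procedure described above in miniature for tabular data: perturb one instance, query the black-box model, weight the samples by an exponential proximity kernel, and fit a weighted linear surrogate whose coefficients serve as the explanation. It is a stripped-down illustration of the idea, not the paper's reference implementation or the one integrated in ML Studio; the kernel width, noise scale, and sample count are arbitrary choices.

```python
# Minimal LIME-style local surrogate for tabular data (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(predict_fn, x, n_samples=2000, kernel_width=0.75, rng=None):
    """Return surrogate coefficients approximating predict_fn around the point x."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x, dtype=float)

    # 1. Perturb the instance with Gaussian noise around its feature values.
    samples = x + rng.normal(scale=1.0, size=(n_samples, x.shape[0]))

    # 2. Query the black-box model on the perturbed points.
    targets = predict_fn(samples)

    # 3. Weight samples by proximity to the original point (exponential kernel).
    distances = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))

    # 4. Fit an interpretable weighted linear surrogate.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(samples, targets, sample_weight=weights)
    return surrogate.coef_, surrogate.intercept_

if __name__ == "__main__":
    # Hypothetical black box: a nonlinear function standing in for a trained model.
    black_box = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2
    coef, intercept = explain_instance(black_box, x=[0.3, 1.2])
    print("local feature attributions:", np.round(coef, 3))
```

The surrogate's coefficients are only trustworthy near the explained point, which is exactly the fidelity-versus-complexity tradeoff noted above.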
<urn:uuid:6d2f0334-7b3f-4833-a148-2051da394ec2>
CC-MAIN-2022-40
https://c3.ai/glossary/data-science/lime-local-interpretable-model-agnostic-explanations/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00632.warc.gz
en
0.887685
441
3.015625
3
I understand the spirit of the question, but the way it’s phrased indicates that both “contextual clues” and “communication process” need to be better defined. What you’ve asked is essentially the same as, “How does the road help the car move forward?” Contextual cues (in the world of Communication Theory, the term is “cues” not “clues,” so I’ll likely slip into using “cues” instead of how the question is phrased – occupational hazard), like the proverbial road, are part of the process. They are as integral to the function of communication as the sender and receiver of messages. They can help, but they can also hinder. So let’s take a step back and look at the bigger picture. Communication has complex structures that can be broken down into some very simple, common themes. The process consists of, effectively; - A sender. This is the origination of a communication process. It could be a person, it could be an animal, it could be a device (like a broadcast station or a road sign). - A receiver. This is, obviously, the recipient in this exchange. This could be a person, it could be an animal, it could be a large group of people, it could be a machine. It could be the intended target, or an accidental target. - A message. This is the content of the communication process. It could be a spoken phrase, the text of a blog, the word “Stop” on a street. - A medium. This is the means by which the message is sent. It could be sent in a face-to-face format, the television, the internet, or it could even be in your own brain (intra-personal communication, for instance). The medium is an often-forgotten, or ignored, aspect of this. We’ll return to this part in particular in a moment. - Feedback. This is the means by which the sender of communication understands that his message has actually been received. In some cases, it’s immediate (called synchronous communication). In other cases, there may be a time delay (called asynchronous communication). Talking with someone who is standing in front of you provides you with immediate feedback. Talking with someone on the phone (another medium), also provides you with feedback, albeit with a slight delay. Texting someone takes a bit longer, and so on. Sometimes, though, you get no feedback at all (which is, in and of itself, feedback that can be interpreted). - Noise. This is generally understood to be anything that interferes with the communication process. It could be actual noise, like static, on the phone. It could be lag that ruins your game or conference call. It could be emotional hysteria that prevents someone from listening to another’s argument. It could be prejudice that makes a receiver reject a sender’s message before ever hearing it. In short, it’s an undesirable element that causes an interruption in effective communication. You will notice that I didn’t list of ‘context cues’ as a key component. That’s because this is a process, all of these add up to be greater than the sum of the parts. The communication process is an ecology, not a mechanical “fill in the blank” assembly line of moving a message from one place to the other. What do I mean by “ecology?” Think of it this way. If I were to wave my hand and magically remove all the trees in the world, we would not have a simple mathematical equation: New World = World – Trees There are dependencies on trees, and co-dependencies that trees create simply by existing in the world. 
You’ve heard about our importance of bees, for instance, and how the extinction of bees would have profound repercussions on life on Earth? Same principle. The ecology of communication is just like that. You can’t remove, say, the message and still have a communication process. You can’t take out the medium and still have a communication process. You can’t remove feedback… well, you get the idea. So here comes the important point: the ecology of that communication process is the context. Context, and the cues, exist at every stage along the way. Communication Theorists like Walter Ong, Harold Adams Innis, Marshall McLuhan, Neil Postman and others were strongly convinced that the means by which we communicated was as important – if not more important – than the actual messages we communicated. These context cues, the communication world that surrounds the ideas we actually communicate, are (and yes, this is the technical term) called “metacommunication.”). These cues are the world that we live in when we communicate. As communicators, we tend to understand these intuitively (for the most part – even the best of us can miss them). For example, the same two people talking about the same topic in the same location can have radically different meanings just by changing the time of the conversation. A married couple discussing improving their sex life is perfectly fine at the dinner table, but then our mental camera pulls back to reveal that their “intimate” conversation is in full earshot of all the Thanksgiving guests assembled for the feast. Different context, different cues, different meaning. A “time and place” doctrine. Often, the interpretation of these cues – and their importance – affect not only the communication process, but the nature of the relationship between participants as a whole. Take this example: There is a very definite and obvious context cue that she is ignoring, and he is not. Does this cue help or hinder the communication process? Now, think broader. This is a face-to-face conversation. What if the same conversation happened over the phone where he could not see her? How would that conversation have gone? How does the change of medium affect the cues that are present, and therefore the possible responses he could give? What if this were over text? How would the delay in response affect the process? Is the nail “noise?” It most assuredly is. Is it a cue? Absolutely. Is it affected by the medium of communication (face-to-face)? 100%. At the same time, does the context help him understand the source of her problem? Yes, it does. So the cue actually helps him by providing more information than he would have had if he had talked to her by text and by phone. What he does with that information is, you guessed it, also part of the communication process. So, to that end, there are cues that help, and cues that hinder – even in the same conversation. At the same time. The thing about processes is that they are, by definition, dynamic. Simply stating that there are cues (or clues) to augment or detract from the communication process is to attempt to “remove the trees,” so to speak. The better way to ask that question, then, is “What is the role of contextual cues in communication?” To get a good, thorough answer, you will find no shortage of libraries of books that answer that topic. I’ve given you just a few of the biggies in links above. If you truly want to find out more information, those would be great starts (though I wouldn’t start with Ong’s book, it’s a bit dense. 
Try Postman first, though McLuhan’s The Medium is the Massage is a fun trip into the hallucinogenic academia funhouse).
<urn:uuid:82960d3e-81f8-45a3-b4db-9e7fc124d79f>
CC-MAIN-2022-40
https://jmetz.com/2020/09/quora-question-how-do-contextual-clues-help-the-communication-process/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00632.warc.gz
en
0.951728
1,630
3.140625
3
|Click here for more articles about the RSA Conference.| In a world where malware authors use obfuscation to mask their malicious intent, security researchers may do well to do the same. It is this idea Trustwave researchers Ziv Mador and Ryan Barnett are planning to build on in their upcoming presentation at the RSA Conference in San Francisco next month. According to Mador, Trustwave's director of security research, the presentation is about leveraging the tactics of attackers in ways that can help an organization's defense -- starting with taking the concept of obfuscation and turning it against the attacker. The ultimate goal, the researchers explain, is to break the Web injection functionality of Trojans, such as ZeuS. "ZeuS' Web injects functionality happens by hooking into various Windows processes, including wininet.dll," says Barnett, lead security researcher at Trustwave. "This provides raw access to the HTTP data as if it is going across the network. It is at this point, before the data reaches the actual Internet Explorer Web browser process, that ZeuS attempts to modify the HTTP data. Even if the Trojan runs within the browser, the obfuscation can protect it, Mador adds, unless it is configured to run after the deobfuscation loop executes. "Most banking Trojans are not programmed that way," he says. "I would add that exploit kits often use dynamic variable names to randomize the content between different web requests. For example, random variable names are used as randomization seeds for the obfuscation. Web fraud detection code can use similar techniques for protecting itself. For example, the Web inject is configured to remove the protective code from certain location in the page. By randomizing the content of the Web page the same way, Web injects is expected to break. Improving the banking Trojans to remove them from random locations in the page adds complexity to the authors of banking Trojans." There are some examples of banking Trojans that work as plugins for Mozilla Firefox or Google Chrome that may be able to access the DOM and circumvent the technique, Barnett concedes. "Web fraud detection vendors have software that can work as extensions/plugins for the user's Web browser," he says. "This is another area where security researchers/vendors can reverse-engineer banking Trojan plugins to better detect and prevent them from working by monitoring and protecting registering these hooks." In some ways, the idea is similar to the concept of a Russian Matryoshka doll, where a doll is placed inside a larger doll, Barnett says. In this case, the HTML of the bank login page is placed inside a virtual Russian doll, which provides a layer of obfuscation ZeuS is not currently prepared to handle. "[ZeuS] is looking for that raw HTML, and if it doesn't see it, the Web inject does not work," he says. "So it breaks it." "What we're saying is look to tactics ... attacker groups are using," he adds. "Perhaps you can use that in a different way, or apply it in a different area." The presentation is scheduled for Feb. 26. Have a comment on this story? Please click "Add Your Comment" below. If you'd like to contact Dark Reading's editors directly, send us a message.
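To make the "Russian doll" idea more tangible, the sketch below shows one way a site could wrap a login page so that the raw HTML a ZeuS-style web inject pattern-matches on never appears verbatim in the HTTP response: the markup is base64-encoded under per-request random names and only reassembled by script in the browser. This is a simplified, hypothetical illustration of the concept discussed in the talk, not Trustwave's actual technique, and on its own it only raises the bar until malware authors adapt.

```python
# Simplified server-side page obfuscation sketch (concept demo, not a production defense).
import base64
import secrets

LOGIN_FORM = """
<form id="login" action="/auth" method="post">
  <input name="user" placeholder="Username">
  <input name="pass" type="password" placeholder="Password">
  <button type="submit">Sign in</button>
</form>
"""

def wrap_markup(markup: str) -> str:
    """Encode the sensitive markup and emit a loader with per-request random names."""
    payload = base64.b64encode(markup.encode()).decode()
    var = "v" + secrets.token_hex(6)      # random variable name each request
    anchor = "c" + secrets.token_hex(6)   # random container id each request
    return (
        f'<div id="{anchor}"></div>\n'
        f'<script>var {var}="{payload}";'
        f'document.getElementById("{anchor}").innerHTML=atob({var});</script>'
    )

if __name__ == "__main__":
    # Each response carries differently named, encoded markup, so a static
    # web-inject rule keyed on the raw <form> HTML fails to find its target.
    print(wrap_markup(LOGIN_FORM))
```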
<urn:uuid:781c3da4-4405-44bc-8186-8a61574628b1>
CC-MAIN-2022-40
https://www.darkreading.com/attacks-breaches/using-attackers-tactics-to-battle-banking-trojans
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00632.warc.gz
en
0.921387
775
2.515625
3
The UK missed out on becoming one of the first countries to embrace the new 4G LTE technology, allowing speeds of over 100Mbps on mobile devices. In order to not miss out on another major step, the UK government is actually investing two steps ahead in 6G, with a £15 million investment in quantum technology studies. Quantum technology is the next step after fiber optics in terms of speed, but nobody knows what the actual power of quantum computing is right now. The £15 million investment is a first look into the new wave of science. “From cameras that can see through smoke to cracking down on internet fraud, quantum technologies are taking innovation to a whole new level and offer an unparalleled opportunity to shape the next generation of high-tech products that will improve our day-to-day lives," Vince Cable, Government Business Secretary, said. "This £15 million investment will ensure we have the flexible, highly-skilled workforce needed to turn these futuristic ideas into a reality.” It is not an exact investment in 6G networks, but given 5G will already push speeds to 10Gbps, it is highly likely quantum technology will be needed to push the speeds even further ahead. Several quantum researchers are looking into advancements in computing, including things like quantum chips, batteries and networks. The issue is, quantum physics is still relatively new and needs a lot of research before any definites are announced. 5G networks are likely to become commercially available by 2020, the same time self-driving cars, drones and augmented reality is said to take off in the United States and Europe. If we are putting each generation shift in the wireless industry at five years, the 6G networks should be available by 2025. This is quite optimistic however, given the difficulties in making quantum technology cost effective and faster than 5G networks. That said, in 2010 the idea of 10Gbps on a smartphone would have seemed impossible and ridiculous, but now we are seeing Samsung and countries show off the huge speed increase at labs across the globe.
<urn:uuid:c09c7023-31c5-42cf-824a-4d6fb2ac1d4c>
CC-MAIN-2022-40
https://www.itproportal.com/2015/03/23/uk-government-already-investing-6g-networks-5g-exists/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00632.warc.gz
en
0.950453
415
2.703125
3
Fiber optic cable can be damaged if pulled improperly. Broken or cracked fiber, for example, can result from pulling on the fiber core or jacket instead of the strength member. And too much tension or stress on the jacket, as well as too tight of a bend radius, can damage the fiber core. If the cable’s core is harmed, the damage can be difficult to detect. Once the cable is pulled successfully, damage can still occur during the termination phase. Field termination can be difficult and is often done incorrectly, resulting in poor transmission. One way to eliminate field termination is to pull preterminated cable. But this can damage the cable as well because the connectors can be knocked off during the pulling process. The terminated cable may also be too bulky to fit through ducts easily. To help solve all these problems, use preterminated fiber optic cable with a pulling eye. This works best for runs up to 2000 feet (609.6 m). The pulling eye contains a connector and a flexible, multiweave mesh-fabric gripping tube. The latched connector is attached internally to the Kevlar®, which absorbs most of the pulling tension. Additionally, the pulling eye’s mesh grips the jacket over a wide surface area, distributing any remaining pulling tension and renders it harmless. The end of the gripping tube features one of three different types of pulling eyes: swivel, flexible, or breakaway. Swivel eyes enable the cable to go around bends without getting tangled. They also prevent twists in the pull from being transferred to the cable. A flexible eye follows the line of the pull around corners and bends, but it’s less rigid. A breakaway eye offers a swivel function but breaks if the tension is too great. We recommend using the swivel-type pulling eye. A pulling eye enables all the fibers to be preterminated to ensure better performance. The terminated fibers are staggered inside the gripping tube to minimize the diameter of the cable. This enables the cable to be pulled through the conduit more easily.
<urn:uuid:f9d58249-6a0b-4783-8108-29c641441ec4>
CC-MAIN-2022-40
https://www.blackbox.com/en-ca/insights/blackbox-explains/inner/detail/fiber-optic-cable/installation-of-fiber-optic-cables/pulling-eyes-and-fiber-cable
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00632.warc.gz
en
0.912226
421
2.71875
3
Cyberbullying is the use of the Internet and other technologies to harm or harass other people in a deliberate, repeated, and hostile manner. Unfortunately, the phenomenon is constantly increasing, and in many cases the repercussions are serious, especially for youngsters; for this reason, legislation and awareness campaigns have arisen to combat it. Be aware that cyberbullying is not limited to children, because similar practices are widespread among adults; in that case, many experts use the terms cyberstalking or cyberharassment to identify the harmful conduct perpetrated by adults toward other adults. Typical cyberbullying conduct is based on harassment in public forums, social media, blogs, and any other form of online media, with the intent to threaten a victim's reputation, earnings, employment, or safety. It is also very common to encourage others to harass the victim and to try to compromise the victim's online experience on social media. Fortunately, law enforcement is now aware of the cyberbullying phenomenon, even though it was erroneously considered a low threat for a long time. The principal legal frameworks today consider cyberbullying a crime, and numerous awareness campaigns are being conducted on a global scale to eradicate these concerning habits, especially through school education. After this short intro, I would like to present an eloquent and interesting infographic that was sent to me by a representative of the organization BestEducationDegrees – http://www.besteducationdegrees.com/cyberbullying/
Following are some interesting statistics on cyberbullying:
Cyberbullying is defined as the "willful and repeated harm inflicted through the use of computers, cell phones, and other electronic devices." With 80% of teens on cell phones and the same share on social media sites, it's time to understand that technology is connecting teens in ways they can't escape.
1 in 6 (16.2%) of teens are cyberbullied [22.1% girls / 10.8% boys]
18.6% of white teens [25.9% girls / 11.8% boys]
8.9% of black teens [11% girls / 6.9% boys]
13.6% of hispanic teens [18% girls / 9.5% boys]
15.5% of 9th graders [22.6% girls / 8.9% boys]
18.1% of 10th graders [24.2% girls / 12.6% boys]
16% of 11th graders [19.8% girls / 12.4% boys]
15% of 12th graders [21.5% girls / 8.8% boys]
Off-line bullying rates: 1 in 5 are bullied offline [22% girls / 18% boys]
Cyberbullying rates by state: Alabama [12.3%], Alaska [15.3%], Arkansas [16.7%], Colorado [14.4%], Connecticut [16.3%], Florida [12.4%], Georgia [13.6%], Hawaii [14.9%], Idaho [17%], Illinois [16%], Indiana [18.7%], Iowa [16.8%], Kansas [15.5%], Kentucky [17.4%], Louisiana [18%], Maine [19.7%], Maryland [14.2%], Michigan [18%], Mississippi [12.5%], Montana [19.2%], Nebraska [15.8%], New Hampshire [21.6%], New Jersey [15.6%], New Mexico [13.2%], New York [16.2%], North Carolina [15.7%], North Dakota [17.4%], Ohio [14.7%], Oklahoma [15.6%], Rhode Island [15.3%], South Carolina [15.6%], South Dakota [19.6%], Tennessee [13.9%], Texas [13%], Utah [16.6%], Vermont [15.2%], Virginia [14.8%], West Virginia [15.5%], Wisconsin [16.6%], Wyoming [18.7%]
But cyberbullying is punishable by law. 49/50 states have bullying laws (Montana is the one state that doesn't). 47/50 include "electronic harassment." 44/50 include school sanctions. 18/50 specifically include "cyberbullying" and 12/50 include criminal sanctions. Federal cyberbullying laws are pending.
What it causes
Teenagers who are cyberbullied are 3 times more likely to commit suicide. Teenagers who are traditionally bullied are 2 times more likely to commit suicide.
Suicide attempts that require treatment:
1.5% for youths not bullied
2.3% for youths physically bullied
5.4% for youths cyberbullied
6% for youths physically bullied and cyberbullied
Only 1 in 10 victims ask their parents for help, leaving 9 in 10 to deal with the abuse alone.
Tips for parents
• Give unconditional support.
• Inform the child of options for dealing with the bully.
• Work with school officials.
• Work with the parents of the bully.
• Contact IT providers to get content removed and bullies blocked.
• If necessary, contact the police.
Tips for educators
• Teach that cyberbullying is wrong.
• Listen and respond to all reports of bullying.
• Have students work on projects against cyberbullying.
• Have a system for documenting complaints.
• Host speakers on the topic of bullying.
• Ensure that school is a safe place, free from cyberbullying.
Cyberbullying is real and often more emotionally brutal than traditional bullying. Stay informed and protect your children, because sometimes words hurt more than sticks and stones. And do not forget that cyberbullying can be stopped!
<urn:uuid:54f9eb16-8ee8-47ec-8dc8-9e943d155249>
CC-MAIN-2022-40
https://securityaffairs.co/wordpress/19368/cyber-crime/cyberbulling-infograph.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00632.warc.gz
en
0.912189
1,244
3.5
4
This article describes the security levels concept as used in the Cisco ASA firewall appliance. The following information applies to both the older 5500 series and the newer 5500-X series of appliances. What is Security Level A Security Level is assigned to interfaces (either physical or logical sub-interfaces) and it is basically a number from 0 to 100 designating how trusted an interface is relative to another interface on the appliance. The higher the security level, the more trusted the interface (and hence the network connected behind it) is considered to be, relative to another interface. Since each firewall interface represents a specific network (or security zone), by using security levels we can assign ‘trust levels’ to our security zones. The primary rule for security levels is that an interface (or zone) with a higher security level can access an interface with a lower security level. On the other hand, an interface with a lower security level cannot access an interface with a higher security level, without the explicit permission of a security rule (Access Control List – ACL). Security Level Examples Let us see some examples of security levels below: - Security Level 0: This is the lowest security level and it is assigned by default to the ‘Outside’ Interface of the firewall. It is the least trusted security level and must be assigned accordingly to the network (interface) that we don’t want it to have any access to our internal networks. This security level is usually assigned to the interface connected to the Internet. This means that every device connected to the Internet can not have access to any network behind the firewall, unless explicitly permitted by an ACL rule. - Security Levels 1 to 99: These security levels can be assigned to perimeter security zones (e.g. DMZ Zone, Management Zone, Database Servers Zone etc). - Security Level 100: This is the highest security level and it is assigned by default to the ‘Inside’ Interface of the firewall. It is the most trusted security level and must be assigned accordingly to the network (interface) that we want to apply the most protection from the security appliance. This security level is usually assigned to the interface connecting the Internal Corporate network behind it. The diagram above illustrates a typical example of security levels assignment in a network with an Inside, Outside, and DMZ zones. This represents a pretty common network setup of an enterprise/corporate network whereby users are connected to the Inside (internal) network, some public servers (e.g web server, email server etc) are located in a DMZ network and the outside of the ASA is connected to the Internet. Throughout this website, I represent the Cisco Firewall with the “Electrical Diode” symbol. As you can see, the Internal Corporate Network is connected to the Interface with the highest security level (Interface G0/1 with Security Level 100) which is also named as ‘Inside’. The Interface name ‘Inside’ is given by default to the interface with the highest security level. Also, the INTERNET facing interface (G0/0) is named ‘Outside’ and is assigned a security level of 0. A Perimeter Zone (DMZ) is also created with a Security Level of 50. The Red Arrows in the diagram represent the flow of traffic. As you can see, the Inside Zone can access both DMZ and Outside Zones (Security Level 100 can access freely the Security Levels 50 and 0). The DMZ Zone can access only the Outside Zone (Security Level 50 can access Level 0), but not the Inside Zone. 
What is described in the example above is the default behavior of the Cisco ASA firewalls. We can override the default behavior and allow access from lower security levels to higher security levels by using static NAT (only if required) and Access Control Lists.

Rules for Traffic Flow between Security Levels

- Traffic from a higher security level to a lower security level: allow ALL traffic originating from the higher security level unless specifically restricted by an Access Control List (ACL). If NAT-Control is enabled on the device, there must also be a dynamic NAT translation rule (e.g., PAT) between the high- and low-security-level interfaces.
- Traffic from a lower security level to a higher security level: drop ALL traffic unless specifically allowed by an ACL. If NAT-Control is enabled on the device, there must also be a static NAT rule between the high- and low-security-level interfaces.
- Traffic between interfaces with the same security level: by default this is not allowed, unless you configure the same-security-traffic permit inter-interface command (ASA version 7.2 and later).

Using Interfaces with the Same Security Level on Cisco ASA

Most Cisco ASA firewall models allow a maximum number of VLANs greater than 100 (e.g., 150, 200, 250). Each Layer 2 VLAN on the ASA is essentially a different security zone with its own security level number. As we know, security levels can range from 0 to 100, so there are only 101 possible levels. An obvious question arises: how can we have, say, 150 VLANs on the firewall when there are only 101 possible security levels?

The answer is simple: we can assign the same security level number to different interfaces or subinterfaces (security zones). This allows us to have more than 101 communicating interfaces on the firewall. By default, interfaces with the same security level cannot communicate with each other. To allow traffic to flow freely between interfaces with the same security level, use the following command:

ASA(config)# same-security-traffic permit inter-interface

There is another option for this command:

ASA(config)# same-security-traffic permit intra-interface

This last command allows traffic to enter and exit the same interface, which by default is not allowed. It is useful in networks where the ASA acts as the hub in a hub-and-spoke VPN topology, where the spokes need to communicate with each other through the hub (this is also called "hairpinning").

Security levels are one of the core concepts of ASA firewalls. You must carefully plan the assignment of security levels on a per-interface basis and then control traffic between interfaces accordingly. Let me know in the comments section below if you have any questions or if you need any help configuring security levels.

Table of Contents
- What is Cisco ASA Firewall – All you need to Know
- Traffic Rate and Bandwidth Limiting on Cisco ASA Firewall
- Cisco ASA 5505-5510-5520-5540-5550-5580 Performance Throughput and Specs
- Password Recovery for the Cisco ASA 5500 Firewall (5505, 5510, 5520 etc)
- Cisco ASA 5505, 5510 Base Vs Security Plus License Explained
<urn:uuid:d09950c8-f1da-412d-b4b7-409bd01444a2>
CC-MAIN-2022-40
https://www.networkstraining.com/cisco-asa-firewall-security-levels/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00632.warc.gz
en
0.891122
1,448
2.875
3
Over the years, fiber optic connectors have ceased to be a deep concern for network installers. Industry standards for connector manufacturing have replaced complex field installation. A number of fiber optic connector types have evolved and withstood the test of time to become industry standards. The main types of fiber optic connectors include ST, FC, SC, and LC. These connectors come in many configurations and usages. This article touches on the very basics of fiber optic connector types, the connector market, and connector installation.

Common fiber optic connector types include ST, SC, FC, LC, MU, E2000, MTRJ, SMA, DIN, as well as MTP and MPO. Each one has its own advantages, disadvantages, and capabilities. All fiber optic connectors have four basic components: the ferrule, the connector body, the cable, and the coupling device. They are widely used in the termination of fiber optic cables, such as fiber optic pigtails and fiber optic patch cables. In this passage, we give a brief introduction to the four most common fiber optic connector types.

One of the most common fiber optic connectors is the ST connector. This simplex fiber connector evolved from previous designs and was introduced by AT&T in the mid-to-late 1980s. It has become the de facto standard in the security market and is commonly used in the AV market on products such as HD-SDI, RGB/DVI, and others. It is available in both multimode and singlemode versions. The insertion loss of the ST connector is less than 0.5 dB, with typical values of 0.3 dB being routinely achieved. It is relatively easy to terminate in the field. In addition, it has good strain relief and good, but not exceptional, attenuation characteristics.

Developed by Lucent Technologies, the LC fiber optic connector has become the ubiquitous fiber optic connector for telecom applications, although the abbreviation does not officially stand for "Lucent Connector". It is used in conjunction with small form-factor pluggable (SFP) optical transceivers. These SFP devices are now becoming very common in pro AV applications for products such as HDMI, DVI, audio, optical distribution amplifiers, optical/electrical/optical (OEO) switches, and so forth. The LC connector is smaller than the other common connectors and uses a push-pull design.

The SC fiber optic connectors are common in singlemode fiber optic telecom applications and analog CATV. Like the LC connector, the SC is a push-pull design and is commonly used in patch panels that act as the connector interface between the main field cable and the smaller patch cords connected to the fiber transmission equipment. Some manufacturers of fiber AV equipment also use SC connectors in conjunction with their optical emitter and detector devices.

The FC fiber optic connector has become the connector of choice for singlemode fiber. It is mainly used in fiber optic instruments, singlemode fiber optic components, and high-speed fiber optic communication links. This high-precision, ceramic-ferrule connector is equipped with an anti-rotation key, reducing fiber endface damage and the rotational alignment sensitivity of the fiber. The key is also used for repeatable alignment of fibers in the optimal, minimal-loss position. Multimode versions of this connector are also available. The typical insertion loss of the FC connector is around 0.3 dB.

You can check the detailed specifications of the above four fiber optic connectors in the table below.
|Connector Type||Singlemode (9/125) Insertion Loss (dB)||Multimode Insertion Loss (dB)||Return Loss (dB)|

Fiber Optic Connector Market

The global fiber optic connector business has achieved considerable success over the past few years. The global fiber optic connector market is expected to reach USD 5.9 billion by 2025, according to a report by Grand View Research, Inc., and is expected to gain traction over the forecast period. The global marketplace is driven mainly by the growing adoption of fiber optic technology. By type, the fiber optic connector market includes connectors such as the SC, LC, FC, ST, and MTP connectors, among others. Based on application, the market is segmented into military & aerospace, oil & gas, telecom, medical, BFSI, railway, and others.

Fiber Optic Connector Installation

Installing a fiber optic connector is straightforward; a fiber optic cable connection can be completed within 30 minutes. Just follow these steps:

- Step Ⅰ: Strip the plastic jacket at the end of the fiber optic cable. Optic cable ends have jackets to prevent any damage in shipping from the manufacturer. Clamp the plastic jacket using a fiber optic stripper tool, which has a designated slot to fit the size of a fiber optic jacket. Squeeze the handles of the stripper like pliers. Pull the jacket away from the fiber optic cable.
- Step Ⅱ: Open the back chamber of the epoxy glue gun by twisting off the back cap. Insert the epoxy glue tube into the chamber and squeeze lightly. You will only need a few ounces of glue for the task. Screw the cap back on the epoxy glue gun chamber.
- Step Ⅲ: Inject epoxy glue into the fiber optic connector socket. Each fiber optic connector has two sockets, one on each side, to form the connection. Insert the glue gun into the connector socket. Press and hold the trigger to insert the glue. The glue spot should not be larger than an eye pupil.
- Step Ⅳ: Insert one fiber optic cable end into the connector socket. Hold the cable in the socket and count to 10. Let go of the fiber optic cable and connector. Check that the cable stays in position once you let go of it.
- Step Ⅴ: Place the new fiber optic connection into an epoxy curing oven. Turn on the oven and set the timer knob to six minutes. Insert the fiber optic connector attached to the cable into one of the curing oven slots. Press the start button on the oven. Pull out the connector from the oven slot. Wiggle the connector end to test the stability of the connection. If it seems fragile, reinsert the connector into the oven and cure it for a few more minutes. Repeat steps three to five to seal the fiber optic connector on both sides.

This article discussed fiber optic connector types, the fiber optic connector market, and connector installation. With the wide variety of fiber optic connectors available today, companies can easily convert to fiber optic networks and start enjoying the benefits of a faster, more efficient work environment. If you need fiber optic connectors, FS.COM is a wise choice. They provide a full range of fiber optic connectors and offer customized service. The fiber optic connector market looks set to become even more buoyant; we shall see.
<urn:uuid:f1b69653-3006-4147-b9ff-2b43132eb434>
CC-MAIN-2022-40
https://www.fiber-optic-cable-sale.com/tag/fiber-optic-connector-market-type
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00632.warc.gz
en
0.917868
1,507
2.546875
3
Guest OpEd by Stewart Dewar, Senstar Product Manager

When tasked with selecting equipment to protect a site, security professionals typically choose the technologies with which they have familiarity and confidence. This makes perfect sense – no one wants to risk using unproven technology when security is on the line. That being said, new technologies shouldn't be discounted, especially if they meet performance requirements and offer distinct benefits.

A good example of this is security devices that use wireless communication. Many traditionally hard-wired devices are now available in wireless versions, including network cameras, access control devices and intrusion detection sensors. Going wireless dramatically simplifies installation and lowers costs, as technicians no longer need to physically run copper or fiber cable to each device. Some devices may also be battery- or solar-powered, leading to further savings.

However, questions remain about the reliability of these new devices and their vulnerability to hacking, especially given people's experiences with consumer-focused technologies like WiFi or Bluetooth. For security professionals, the key is understanding that not all wireless technology is the same and that there are solutions designed specifically for security applications.

Radio Frequencies and Protocols Matter

Devices can use different radio frequencies and communication protocols, and these design choices affect the security of the wireless link. Most security devices use radio spectrum in the Industrial, Scientific and Medical (ISM) bands, which include the popular 2.4 and 5 GHz bands (used by WiFi, but also by other, more specialized protocols). In North America, the 915 MHz band is also available (other jurisdictions use nearby frequencies). When making sense of product capabilities, security professionals should keep the following guidelines in mind:

Security devices should never use WiFi. While WiFi can employ strong encryption, whitelists and other protections, the risk to critical systems is simply too high. WiFi-based devices are vulnerable to network congestion, RF interference, hacking, Internet of Things (IoT) malware, and misconfiguration. With this in mind, WiFi's convenience and user benefits make it hard to avoid, and it certainly deserves its place in non-security, non-critical applications.

Devices using the 915 MHz band typically support longer transmission distances than ones using higher frequencies, mostly due to lower RF attenuation (higher-frequency signals are more susceptible to absorption and scattering caused by rain, snow, and foliage). In addition, FCC regulations allow for more powerful transmitters in 915 MHz-based devices. For applications on the perimeter or in remote building locations, the maximum communication range needs to be taken into account.

Unlike WiFi, low-power RF technologies like IEEE 802.15.4 (popularized by the Zigbee IoT protocol) are designed to work in RF-congested environments and are optimized for secure machine-to-machine (M2M) communication. For low-bandwidth applications like intrusion detection or access control, this set of technologies holds great future potential, and its reliability is already field-proven.

(A simplified intrusion detection solution for sliding and swinging gates, the FlexZone Wireless Gate Sensor detects attempts to open, cut, climb or otherwise break through sliding and swinging gates and complements the coverage of the FlexZone fence-mounted sensor. Courtesy of Senstar and YouTube. Posted on Mar 1, 2016.)
Reliability, Resiliency and Vulnerability

Like their wired counterparts, wireless devices must have their connectivity monitored by the security system. Communication loss should immediately be reported as a supervision alarm. This also means that the communication links must be reliable enough that operators view a communications loss as a potential threat and do not immediately disregard it as a false alarm. Consider the following scenarios that could negatively affect communications:

Device malfunction or loss

Wireless equipment must support frequent, periodic check-ins, on the order of tens of seconds. If the link has been compromised to the point where alarm messages cannot be sent or received, or if bidirectional communication cannot be guaranteed, the system should indicate that the device is offline.

RF jamming

There is nothing to prevent a third party from overwhelming the radio signal used by a device. However, the effectiveness of this type of attack is short-lived on a properly designed device, as an interference alarm will be raised almost immediately. If the jamming signal is strong enough to prevent all communication, the equipment should still be declared offline based on the check-in results.

One relatively new technology, mesh networks, shows great potential in security applications. In a mesh network, each device acts as a node within a dynamic, self-organizing, self-healing topology. This architecture is particularly useful for systems that use large numbers of discrete sensors located in close proximity to each other. For example, low-power intelligent fence lighting can use a mesh network for communication between fixtures. Mesh networks provide two key benefits. First, if a node malfunctions or is physically damaged, the system adapts and remains functional. Second, the network mesh extends the coverage distance, as the furthest sensor can relay its messages to the central security network via the other nodes.

(Senstar LM100 is the world's first hybrid perimeter intrusion detection and intelligent lighting system. Combining high-performance LED lighting with accelerometer-based sensors, the LM100 deters potential intruders by detecting and illuminating them at the fence line. Courtesy of Senstar Corporation and YouTube. Posted on Sep 26, 2017.)

Resistance to Advanced Attacks

Physical damage and RF jamming are the two most basic attacks against wireless devices, and both are easily addressed. The next question is how well a wireless security device can fare against a sophisticated hacking attempt.

First, let's look at encryption. AES encryption is used today in financial transactions worldwide and is considered highly secure when correctly implemented. When used on a security device like a fence sensor, breaking the encryption would require far greater resources than virtually any other conceivable type of attack. In addition, with the exception of network cameras that use open standards, the protocols used in intrusion detection and access control devices are typically proprietary, and their short over-the-air time makes demodulation via commercial radio-sniffing devices extremely difficult. As the encryption is virtually unbreakable, would-be attackers would likely try other disruptive approaches:

Replay attacks

This attack involves recording and replaying encrypted radio traffic that is not understood, in an attempt to confuse or break the system. It can be thwarted by including sequence checking in the underlying protocol.
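To make supervision and sequence checking more tangible, here is a minimal, hypothetical Python sketch of how a receiver might reject replayed messages and flag missed check-ins. It is not Senstar's protocol or any vendor's actual implementation; the device identifiers, message fields and time-out value are illustrative assumptions only.

import time

CHECK_IN_TIMEOUT = 30                              # seconds; illustrative supervision window
WHITELISTED_DEVICES = {"sensor-01", "sensor-02"}   # hypothetical factory-assigned IDs

last_sequence = {}    # device_id -> highest sequence number accepted so far
last_seen = {}        # device_id -> time of the last valid message

def accept_message(device_id, sequence, payload):
    """Accept an (already decrypted and authenticated) message only if it comes
    from a whitelisted device and its sequence number moves forward."""
    if device_id not in WHITELISTED_DEVICES:
        return False                                   # unknown or swapped device
    if sequence <= last_sequence.get(device_id, -1):
        return False                                   # replayed or stale message
    last_sequence[device_id] = sequence
    last_seen[device_id] = time.time()
    return True

def offline_devices():
    """Whitelisted devices that have missed their periodic check-in window."""
    now = time.time()
    return {d for d in WHITELISTED_DEVICES
            if now - last_seen.get(d, 0) > CHECK_IN_TIMEOUT}

In a properly designed protocol the sequence number would travel inside the encrypted, authenticated payload, so an attacker could not simply rewrite it.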
Device swapping or cloning

Device swapping consists of someone attempting to use similar equipment running on the same radio channel to trick the system into reporting the status of the shadow device instead. Properly designed equipment will limit access to whitelisted equipment via unique identifiers embedded into the physical hardware components during manufacturing. Another, albeit more difficult, variation on this attack is cloning a device to use the same identifier. In this case, two simultaneous radio broadcasts using the same identifier would result in RF interference alarms being generated.

(FlexZone is a cable-based fence-mounted system that detects and locates any attempt to cut, climb or otherwise break through the fence. FlexZone adapts to a wide variety of fence types and is ideal for sites of all sizes. Courtesy of Senstar Corporation and YouTube. Posted on May 27, 2016.)

Doing More with Less

Security professionals, by trade, should be cautious when using new technology to secure sites. At the same time, new technology is a driving force behind better security and enables organizations to "do more with less". Wireless security devices, when designed and deployed correctly, can maintain the highest levels of security while reducing installation and operating costs. To help decide whether a given wireless security device or system is suitable for a site, ask the vendors tough questions regarding its reliability, resiliency and potential vulnerabilities.

Senstar Takes Triple Honors in the 2018 'ASTORS' Homeland Security Awards Program

- Best Fencing Solution
- Best Perimeter Protection System
- Best IP Video Surveillance Solution
- Senstar Thin Client

The Annual 'ASTORS' Awards Program is specifically designed to honor distinguished government and vendor solutions that deliver enhanced value, benefit and intelligence to end users in a variety of government, homeland security and public safety vertical markets.

Over 130 distinguished guests representing National, State and Local Governments, and Industry Leading Corporate Firms, gathered from across North America, Europe and the Middle East to be honored among their peers in their respective fields, which included:

- The Department of Homeland Security
- The Federal Protective Service (FPS)
- Argonne National Laboratory
- The Department of Justice
- The Securities and Exchange Commission
- The Office of Personnel Management
- U.S. Customs and Border Protection
- Viasat, Hanwha Techwin, Lenel, Konica Minolta Business Solutions, Verint, Canon U.S.A., BriefCam, Pivot3, Milestone Systems, Allied Universal, Ameristar Perimeter Security and more

The Annual 'ASTORS' Awards is the preeminent U.S. Homeland Security Awards Program, highlighting the most cutting-edge and forward-thinking security solutions coming onto the market today, to ensure our readers have the information they need to stay ahead of the competition and keep our Nation safe – one facility, street, and city at a time.

The 2018 'ASTORS' Homeland Security Awards Program was proudly sponsored by ATI Systems, Attivo Networks, Automatic Systems, Desktop Alert, and Royal Holdings Technologies.

Nominations are now being accepted for the 2019 'ASTORS' Homeland Security Awards at https://americansecuritytoday.com/ast-awards/.
|Access Control/ Identification||Personal/Protective Equipment||Law Enforcement Counter Terrorism|
|Perimeter Barrier/ Deterrent System||Interagency Interdiction Operation||Cloud Computing/Storage Solution|
|Facial/IRIS Recognition||Body Worn Video Product||Cyber Security|
|Video Surveillance/VMS||Mobile Technology||Anti-Malware|
|Audio Analytics||Disaster Preparedness||ID Management|
|Thermal/Infrared Camera||Mass Notification System||Fire & Safety|
|Metal/Weapon Detection||Rescue Operations||Critical Infrastructure|
|License Plate Recognition||Detection Products||And Many Others!|
<urn:uuid:9b52aadf-9213-4614-92e7-fd9d1cea1346>
CC-MAIN-2022-40
https://americansecuritytoday.com/using-wireless-communications-in-security-applications-multi-video/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00632.warc.gz
en
0.899532
2,227
2.828125
3
Open source: Security through transparency

The contrast between proprietary and open source software is as old as the IT industry itself. Software in almost every category is available either from suppliers who develop and market their code by themselves or from developer communities who work with open code. Over the last decade the aversion to using open software, especially in the corporate field, has undergone a marked change. Managers realised that if even IT giants such as Facebook, Google and Amazon were relying on open source, ordinary companies should be able to do so too.

The advantages of open source are well known: lower costs, the security and higher quality that arise from a large developer community, and the absence of ties to one manufacturer are powerful arguments. In some areas open source products are already leaders in their field. Linux, Firefox and WordPress, for example, are hugely successful in the consumer sector. MySQL, Apache, FreeBSD, Zimbra and Alfresco are frequently encountered in the corporate environment.

However, the distinction is not black and white: software cannot simply be divided into open and closed, free and non-free, open source and proprietary. There are all sorts of subcategories, which give rise to huge differences in their licensing terms. For companies, however, it is largely only the categories of open source and proprietary software that are of relevance, and it is the combination of the two in the form of commercial open source software that in fact provides the best of both worlds. Below is a summary – by no means complete – of the most important categories of software on the market.

Software is divided roughly into "free" and "non-free", with a special category that combines open source and proprietary software. There are thus many different types of software, with software of different origin being used to meet different needs. Many software solutions are available in different versions, with different licence conditions and often a different range of functions.

However, a general cultural change is taking place in favour of open source. For example, the EU and the government of the USA are investing huge amounts of money to increase their use of open source. And at CERN, which has long been a pioneer of IT, scientists are being encouraged to conduct their research using the next generation of open solutions.

The trend is no longer limited to software. "Open hardware" is now becoming widespread: the Raspberry Pi, the Kano, the Arduino, the Firefox-based MatchStick, the NAO and the Hummingboard are all examples that show how open projects are gaining momentum and awakening new trends, such as the Internet of Things. And yet open source is not something really new. The ultimate open source computing platform is still the mainframe, which was also the nucleus of the present personal computer and hence has always represented a significant open source community.

Security concerns with open source? Quite the opposite!

With the increasing acceptance of open source software, pure proprietary software is losing ground in the market. Many users have doubts about the future flexibility of proprietary software, and many experience dependence on the supplier as an unwanted restriction. As they eye up the future of digital business and government services, companies such as Facebook and Google regard open source as indispensable; most providers are already using open source in various areas of their IT operations.
In particular, open source solutions provide a platform for customer-ready technology that can be customised for different products. Nevertheless, despite the growing acceptance of open source, companies still have concerns about liability and security. But what are the facts of the case?

The preconception that open source software is not secure is certainly not valid. The worldwide network of developers, architects and experts in the open source community is increasingly being recognised as an important resource. The community provides professional feedback from experts in the sector who can help companies produce more robust code and create patches faster and can develop innovations and improvements to new services. In a proprietary model the software is only as good as the small group of developers working on it. Companies that rely on third-party vendors for their proprietary software may feel safer, but they are labouring under an illusion: in the name of proprietary intellectual property producers can easily prevent business customers finding out whether there are security flaws in their code – until hackers exploit them. There have been numerous examples of this in the recent past, causing problems for many customers.

Because of the high level of transparency within the open source community, the work of this network of experts is of first-class quality; members attach great importance to maintaining an unblemished reputation. Nobody puts their professional credibility at risk when the whole community can view the code published under their name and comment on it. In consequence community members subject their newly compiled code to painstaking checks before they publish it. This should allay the unjustified fear of security flaws.

Commercial open source solutions – a give and take

Naturally companies want a development model that supports continuous improvement. The open source development model enables companies to support the project in a technologically appropriate way with code tailored to their requirements – and hence to give something back to the community. In commercial open source software all new code undergoes a strict quality assurance process to ensure the security of corporate clients and their end users. Changes that are of benefit to the wider body of corporate customers are checked and the community then adds them to its codebase.

To be able to utilise all the advantages of open source, there must be a close relationship with a provider of commercial open source solutions. This is essential in order to promote creativity and contributions within the community. Companies can also provide code to support their business. Providers of commercial open source solutions supply the support and the strict product development process, including the tests with databases, containers and quality assurance that typically form part of the development of proprietary software.

Open architecture plus unlimited scalability provides reliable solutions

Social media, the cloud, big data, mobility, virtualisation and the Internet of Things are constantly turning IT upside down. Existing technologies struggle to keep up with these changes. Companies and institutions must provide their services via numerous channels while ensuring complete data security. With rigid, proprietary systems this is virtually impossible to achieve, and the open source community demonstrates daily that open source code products are more than ready to take on important services. Apache is already the number one.
MySQL is on the way up; sooner or later OpenStack is highly likely to become the software of choice for the management of computing centres, and OpenAM is one of the best products for access rights based on digital identities. Companies that refuse to use open source are likely to fall behind in terms of function breadth and depth and are unable to offer their clients a comprehensive digital user experience.

The success of open source is measured by its ability to ensure a high level of security and innovation. If openly developed software were not safe, such security and innovation would not be possible. Open source thus provides security through transparency – something that does not apply to proprietary software. Companies would do well to keep a close eye on open source solutions.

Simon Moffatt is a Solutions Director at ForgeRock. The ForgeRock mission is to transform the way organizations approach identity and access management, so they can deliver better customer experiences, strengthen customer relationships, and ultimately, drive greater value and revenue. We make it happen with the best commercial open source identity stack for securing anything, anywhere, on any device.
<urn:uuid:0c0f0a22-27a0-4fd8-8e51-71dc43d77e9f>
CC-MAIN-2022-40
https://informationsecuritybuzz.com/articles/how-can-companies-benefit-from-embracing-open-source-software/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00632.warc.gz
en
0.951747
1,540
2.75
3
A MIME type identifies the type of a file. When a server sends resources to a web browser it includes the MIME type of each file, and web browsers use that information to determine how to handle files. For instance, when a browser gets a file with the MIME type text/html it knows it needs to render it as HTML. Or, when it sees the image/jpeg type it knows the file is a JPG (or JPEG) image. For some types it might need to perform some specific magic. PDFs (application/pdf), for instance, are typically opened in the browser's PDF viewer.

The media types standard is managed by IANA, and you can find a full list of MIME types on their website. MIME types have a type, a subtype and an optional parameter:

type/subtype;parameter=value

On Linux servers you can check the MIME type using the file utility with the --mime-type option. Here are some examples:

$ file --mime-type index.html
index.html: text/html
$ file --mime-type index.php
index.php: text/x-php
$ file --mime-type robots.txt
robots.txt: text/plain
$ file --mime-type logo.jpg
logo.jpg: image/jpeg
$ file --mime-type backup.zip
backup.zip: application/zip

The file utility looks at the file itself to determine the MIME type. For binary files, such as images, it checks the signature (also called a magic number). For instance, PNG files use the hexadecimal value 89 50 4E 47 0D 0A 1A 0A as their signature. You can view the value with the xxd utility:

$ xxd image.png | head -1
00000000: 8950 4e47 0d0a 1a0a 0000 000d 4948 4452 .PNG........IHDR
$ file --mime-type image.png
image.png: image/png

Plain text files don't have a magic number; to determine their type, the utility inspects the file's content.

One thing to be aware of is that file extensions have no meaning on Unix-like servers. File extensions exist purely for your convenience. It is perfectly fine to have, say, a PHP script without an extension – it doesn't change the file's MIME type. And you could give a PHP script an extension such as .txt or .png as well. In fact, this is sometimes the case when a website has been compromised – a malicious PHP script may be hiding as an innocent-looking plain text or image file.

On cPanel servers you can see all existing MIME types via Advanced » MIME Types. You can't change existing MIME types via the interface (they are managed at the server level). You do have the option to add a new MIME type. This can be useful if a new MIME type emerges that the server doesn't know about yet. In practice, it is extremely unlikely this will happen. If you suspect a MIME type is missing, please contact us and we will add it if needed.
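To tie the magic-number idea together, here is a small illustrative Python sketch of the same kind of check the file utility performs for binary files: read the first few bytes and compare them against known signatures. The signature table is deliberately tiny and the function is only a toy under those assumptions, not a replacement for file or a full MIME database.

# Toy magic-number lookup: a few well-known signatures mapped to MIME types.
MAGIC_NUMBERS = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"\xff\xd8\xff": "image/jpeg",
    b"%PDF-": "application/pdf",
    b"PK\x03\x04": "application/zip",
}

def guess_mime_type(path):
    """Return a MIME type based on the file's leading bytes, or a generic fallback."""
    with open(path, "rb") as f:
        header = f.read(16)
    for signature, mime in MAGIC_NUMBERS.items():
        if header.startswith(signature):
            return mime
    # No known signature: real tools keep inspecting the content,
    # e.g. to tell plain text apart from scripts; we just fall back.
    return "application/octet-stream"

print(guess_mime_type("image.png"))   # image/png, provided image.png is a real PNG file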
<urn:uuid:8bb420ba-dbd3-449b-a44d-30a99f2f5524>
CC-MAIN-2022-40
https://www.catalyst2.com/knowledgebase/dictionary/mime-types/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00032.warc.gz
en
0.82392
668
2.90625
3
The results showed that several psychological well-being measures gradually increased within participants from the beginning to the end of the course. That was especially true for life satisfaction, perceived well-being, self-awareness and emotional self-regulation. The participants in the study also reported a significant decrease in anxiety, perceived stress, negative thoughts, rumination and anger tendencies. The researchers simultaneously observed improvements in the positive aspects and a reduction of negative emotions, both in the short term and longitudinally throughout the program.

Nicola De Pisapia, a researcher in the Department of Psychology and Cognitive Sciences of the University of Trento and the study's scientific coordinator, explained the fundamental principles of the study: "The training that we proposed to the participants was inspired by the idea – present in both Western and Eastern philosophical traditions – that happiness is inextricably linked to the development of inner equilibrium, a kinder and more open perspective of self, others, and the world, towards a better understanding of the human mind and brain. In this training process we need on the one hand the theoretical study of philosophy and science, and on the other meditation practices".

The study was conducted over nine months (with seven theoretical/practical weekends and two meditation retreats) at the Lama Tzong Khapa Institute of Tibetan culture in Pomaia (Italy). For the theoretical part, the participants attended a series of presentations, watched some video courses, and took part in open discussions on topics from psychology, neuroscience, the history of Western thought and the Buddhist philosophy of life. The scientific topics included neuroplasticity, the brain circuits of attention and mind wandering, stress and anxiety, pain and pleasure, positive and negative emotions, desire and addiction, the sense of self, empathy and compassion. For the practical part, a series of exercises was proposed, taken from different Buddhist and Western contemplative traditions (for example, meditation on the breath, analytical meditation, and keeping a personal journal).

In recent years, setting aside the "recipes" that mistake happiness for hedonism and the New Age obsession with positive thinking, research has shown that meditation practices have important benefits for the mind, while studies on happiness and wisdom have been scarce. De Pisapia therefore concluded: "I believe that in times like these, full of changes and uncertainties, it is fundamental to scientifically study how Western and Eastern philosophical traditions, together with the most recent discoveries on the mind and the brain, can be integrated with contemplative practices in a secular way. The goal is to give healthy people the opportunity to work on themselves to develop authentic happiness, not hedonism or superficial happiness. With this study we wanted to take a small step in this direction".

Call for Greater Happiness

Most people want to be happy, and many of them look for opportunities to achieve a more satisfying life (Diener et al., 1998). This pursuit seems to be universal, but it is particularly pronounced in modern societies (Veenhoven, 2015). One reason for the heightened interest in happiness is the greater awareness that we have considerable control over our happiness.
Happiness is no longer considered a matter of fate (Nes and Røysamb, 2017), but rather a condition that can actively be pursued, developed, and sustained (Sezer and Can, 2019) and that is a personal responsibility (Elliott and Lemert, 2009). Sheldon and Lyubomirsky (2019) argued that 40% of one's level of happiness is a function of purposeful and intentional action, although that may be an overestimation (Brown and Rohrer, 2019). Another reason for the call for greater happiness is the rising evidence of the positive effects of happiness on other areas of life such as health (Veenhoven, 2008) and civil behavior (Guven, 2008). Employers are keen to raise happiness in their workforce, particularly in view of the evidence that life satisfaction fosters productivity more than job satisfaction (Gaucher and Veenhoven, 2020; Bergsma and Veenhoven, 2020).

The call for greater happiness is met in two ways: by improving external living conditions and by strengthening life skills that enable people to live in the upper range of their happiness potentials (Sheldon and Lyubomirsky, 2019). A new field of research and practice centers around structured training and educational initiatives designed to strengthen individuals' life skills. This field is aptly labeled "happiness education" and is comparable to, and often intertwined with, existing "health education." Happiness education can be found in a growing number of advisory books, on self-help websites, and at the mounting supply of (online) courses on happiness (Bergsma, 2008; Parks et al., 2013). Alongside such education, a practice of happiness coaching has developed (Grant and Spence, 2010; Freire, 2013). Professional life coaches offer advice on how to live a more rewarding life, and they have gained a greater share of the work of psychologists and social workers (Tarragona, 2015). These developments are inspired by the scientific fields of "positive psychology" and "positive education," which came into existence around the year 2000 and added scientific rigor to practices in the expanding training sector (Boniwell, 2012). Positive psychology interventions (PPIs) have been developed with the aim of strengthening people. These interventions typically consist of a combination of teaching and exercises. The common aims of such training techniques are to get individuals to see and seek meaning in their work and lives, to know who they are, and to foster positive feelings and self-reliance (Sin and Lyubomirsky, 2009).

Happiness Training Techniques

One kind of PPI focuses on increasing satisfaction with one's life. This kind is commonly presented as "happiness training" (Fordyce, 1977). These training techniques help an individual to gain insight into the sources of their happiness and to learn skills that are functional for living a happy life (Feicht et al., 2013). The focus of these training techniques is not on a specific life domain, such as work or marriage, but on one's life as a whole (Bergsma and Veenhoven, 2020). An advanced Google search on "happiness training" yielded 69,800 hits in December 2019. Some examples are the "Happiness Training Plan" (College of Well-being, n.d.), the Buddhist-inspired online course "A Life of Happiness and Fulfillment" (Indian School for Business, n.d.), and the Action for Happiness Course (Action for happiness, n.d.).

Doubts About the Effectiveness of Happiness Trainings

The majority of happiness training techniques focus on individuals.
Happiness training techniques applicable to organizational contexts are still underdeveloped and not often utilized (Nielsen et al., 2017), since organizations focus on work-related skills and engagement rather than on wider life skills (Ivandic et al., 2017; Donaldson et al., 2019a, b; Roll et al., 2019). One of the reasons for this could be existing doubts about the effectiveness of happiness training interventions (Donaldson et al., 2019b). These doubts are rooted in theories of happiness and in reservations about PPIs in general and about happiness training in particular.

Qualms About the Possibility of Greater Happiness

There are doubts that the level of individual happiness can be raised because, among other concerns, happiness is believed to depend on social comparison. In this view, people are happier if they think they are better off than others, making happiness a zero-sum game (Brickman and Campbell, 1971). Others claim that happiness is part of a fixed genetic disposition and therefore determined by personality traits that remain constant (e.g., Omerod, 2012). A third reason is that the conscious pursuit of happiness may be self-defeating because higher expectations of happiness will lead to frustration if not realized (e.g., Ford and Mauss, 2014), which implies that the use of a happiness training technique will decrease one's happiness. A fourth reason is that the pursuit of happiness stimulates people in individualistic societies to focus on individual goals, whereas more socially engaged ways to seek happiness are deemed more effective (Ford et al., 2015). Looking for happiness may even increase loneliness (Mauss et al., 2012), and valuing happiness may give rise to depression (Ford et al., 2014). Chasing happiness may also be self-defeating if people seek more positive effects directly, while, in contrast, aiming to fulfill the basic psychological needs of relatedness, autonomy, and competence may yield better results (Sheldon and Lyubomirsky, 2019). Although most of these doubts have been discarded in the scientific literature (Veenhoven, 2010), they still live on in public opinion. The dark sides of the pursuit of happiness, as well as the caveats and limitations, have a higher attention value for the media than the stories with a happy ending (Soroka and McAdams, 2015).

Limited Effects of Positive Psychological Interventions (PPIs) in General

Three major meta-analyses on the effectiveness of PPIs have not yielded impressive effects. Sin and Lyubomirsky (2009) reported a modest effect (mean r = +0.29, median r = +0.24) on "well-being." These numbers are difficult to interpret because the studies covered different notions of well-being, most of which belong in the life-ability quadrant of Figure 1 (see below). Bolier et al. (2013) report a smaller effect (d = +0.34) on subjective well-being that partly waned at follow-up (d = +0.22) and after the removal of outliers (d = +0.17). The authors were not very specific about the subjective well-being measures they included. Multi-component PPIs have a small to moderate effect on subjective well-being (Hedges' g = +0.34), but again, the authors were not very specific about the subjective well-being measures they included. The removal of outliers or low-quality studies lowered the effect on well-being (g = +0.24 without outliers, g = +0.26 for high-quality studies) (Hendriks et al., 2019). The modest effects of the meta-analyses we described may be too high because negative findings tend to be underreported in the scientific literature.
A recent re-analysis of the studies included in the first two meta-analyses mentioned above used an improved correction for small sample sizes and found an effect of 0.1 of PPIs on well-being (White et al., 2019).

Reservations About Happiness Training Techniques in Particular

In a recent Delphi study by Buettner et al. (2020), 14 leading scientists rated the effectiveness of "Ways to Greater Happiness" on a five-step scale. Their effectiveness rating for "Develop skills for greater happiness, using self-help or professional coaching" was 3.1, while their average rating for methods such as "Invest in friends and family" and "Get physical exercise" was about 4. The general public seems to have a mixed attitude toward happiness advice and training. There is much interest but also a lot of skepticism and grumbling about the "tyranny of positivity" (Held, 2002, 2018). One of the reasons may be that the term "happiness" is used to promote the particular trendy practices of the moment, such as meditation and veganism. This is part of the wider problem of the term "happiness" being increasingly used in sales communication as a "feel-good" term (e.g., Coca-Cola with its "Open Happiness" slogan). A shared definition of happiness is lacking, and this is another reason to question the message of happiness coaches and trainers.

Are these doubts about the effectiveness of happiness training techniques justified? In this study we seek to answer the following questions:

• Do happiness training techniques add to happiness?
  ○ If so, how strong is the effect?
  ○ If so, how long-lasting is the effect?
• What kind(s) of training techniques work best?
  ○ What nature of training techniques works best?
  ○ What modes of training techniques work best?
• What types of people profit most from joining a happiness training course?

Concept of Happiness

In answering these questions, we focus on happiness in the sense of "life satisfaction," which we will define in detail below. To our knowledge, the research literature on this subject has not been reviewed with that specific definition in mind.

Meanings of the Word

In a broad sense, the word happiness is used to denote a "good life" and used as a synonym for "quality of life" or "well-being." This meaning prevails in moral philosophy, where it serves as a starting point for speculations about what qualities make the best life, such as the importance of "wisdom" (McMahon, 2018). In contemporary social sciences the term is increasingly used for one particular quality of life, that is, how satisfying one's life is. Since this is a measurable phenomenon, its determinants can be identified inductively using empirical research (Diener et al., 2015).

Definition of Happiness

Happiness is defined as the degree to which individuals judge the overall quality of their life as a whole favorably (Veenhoven, 1984). This definition fits the utilitarian tradition and is most closely associated with Bentham's (1789) view of happiness, which is described as "the sum of pleasures and pains" (Veenhoven, 2009). This concept is central in the World Database of Happiness, which we draw from for this research synthesis.

Other Notions of Quality of Life and Satisfaction

We realize that some readers will associate "happiness" with other notions of well-being, in particular readers with a background in positive psychology, where the term "eudaimonic well-being" is currently used for positive mental health (Delle Fave et al., 2011).
Therefore, we are expanding on this difference using Veenhoven's (2000) classification of four qualities of life. This classification is based on two distinctions: vertical and horizontal. Vertically, there is a difference between opportunities and actual outcomes of life. This distinction is important because people can fail to use the life chances offered to them. The horizontal distinction refers to external qualities of the environment and internal qualities of the individual. Together, these two dichotomies produce four qualities of life, all of which have been denoted by the word "happiness."

In Figure 1, our concept of happiness is positioned in the bottom-right quadrant, as an inner outcome of life. Positive mental health (eudaimonic happiness) belongs in the top-right quadrant of Figure 1, that is, as a precondition for happiness. We only include measures of happiness that belong to the bottom-right quadrant. Our conceptual focus is sharper than that of earlier meta-analyses of positive psychological interventions, which included measures of well-being that also cover other quadrants of Figure 1. As such, our results are easier to interpret.

Components of Happiness

The overall evaluation of life draws on two sources of information: (1) how well we feel most of the time, and (2) to what extent we perceive that we are getting from life what we want from it. We refer to these sub-assessments as "components" of happiness, respectively called "hedonic level of affect" and "contentment" (Veenhoven, 1984). Diener et al. (1999) make a similar distinction between affective and cognitive appraisals of life, but do not conceptualize an overall evaluation in which these appraisals are merged. In this research synthesis we include all three variants: overall happiness and its two components.

Original Research: Open access. "The Art of Happiness: An Explorative Study of a Contemplative Program for Subjective Well-Being" by Nicola De Pisapia et al. Frontiers in Psychology
<urn:uuid:ad9cda75-039c-434d-b39b-daf29f9123d4>
CC-MAIN-2022-40
https://debuglies.com/2021/03/20/the-art-of-happiness-that-can-be-learned/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00032.warc.gz
en
0.940713
3,452
2.625
3
DECA and MoCA are both adapters used to connect a DirecTV box to your ISP connection in an efficient and reliable way. MoCA is an acronym for "Multimedia over Coax". This technology delivers a high-speed network connection similar to what a wired Ethernet network delivers, which makes the two roughly equivalent. The difference is that MoCA delivers the internet over the existing coaxial (cable TV) wiring already run to your room. DECA, on the other hand, is an internet adapter that attaches your DirecTV box to the internet. Both technologies extend your existing internet connection to more places and more devices. They work in broadly similar ways, but there are still several differences between them. In this article we will go through some points of comparison between DECA and MoCA that should clear up the confusion. Stay with us.

MoCA and DECA with DirecTV

The purpose of both MoCA and DECA is to enable more features over your existing coaxial Ethernet connections. DECA stands for DIRECTV Ethernet Coaxial Adapter; it attaches your DirecTV box to the Internet connection and gives access to many network features that you can use at any time. You can connect DECA to DVR boxes using the DECA-provided internet connection. Moreover, you can access extra Video On Demand (VoD) programming via the Internet connection. Multi-room viewing, a feature used in many offices and workplaces, is also enabled by DECA adapters.

Furthermore, DirecTV has indicated that MoCA can coexist with its coaxial network connection, but only in the frequency band below roughly 800 MHz, which is the range used for cable TV connections. Therefore, a MoCA bridge is only likely to work alongside a DirecTV installation if there is an entirely separate coaxial cable network for your ISP. Now that you have a fair understanding of how MoCA and DECA work with a DirecTV installation, let us look more closely at how they work and at the capabilities that set them apart. Keep on reading.

Difference Between DECA and MoCA

Even after understanding the definition and workings of both of these network-extending technologies, people remain quite confused about MoCA and DECA, which is why pointing out the differences between them is worthwhile.

DirecTV Uses DECA, Not MoCA. What Is Correct, What Is Not?

Many people say that DirecTV uses only DECA and not MoCA, claiming that DirecTV was never compatible with MoCA. Well, this is neither entirely true nor entirely false. We'll tell you how.

Standard for Networking

MoCA is a standard for "networking over coax cables" in the home or any comparable area, set up by the Multimedia over Coax Alliance. Such standards usually take their names from the groups that define them. MoCA, then, is a standard, while DECA is not a standard but a piece of hardware: a DirecTV Ethernet-to-Coaxial Adapter. DirecTV uses MoCA as the standard, while DECA is the network adapter required to connect the DirecTV receiver's RJ-45 (Cat5) Ethernet port to the MoCA coax network.

DirecTV Uses MoCA Cables and a DECA Adapter

DirecTV announced long ago that it can be compatible with MoCA as well. However, the developers of MoCA have made a few things clear. Here they are.
MoCA has two frequency bands at which the network can be operated:

- High-RF MoCA, used by cable MSOs and Verizon FiOS, from 850-1500 MHz.
- Mid-RF MoCA, used by DirecTV, from 500-850 MHz.

Most tech-savvy readers will know that cable TV broadcasts occupy the coax below 850 MHz, while satellite TV broadcasts sit above 950 MHz, so a MoCA installation needs two isolated RF bands to avoid interfering with the signals already on the line. This is the reason these two MoCA versions are deployed to enable the Multi-Room DVR feature available from DirecTV as well as other TV providers. DECA, by contrast, supports DirecTV without any special exceptions or conditions.

In the end, we could say that DECA is a much cheaper alternative to MoCA, especially given its ready availability and cost-effectiveness.
<urn:uuid:e8fd9a4c-b7c9-4714-b286-a5072c623bb1>
CC-MAIN-2022-40
https://internet-access-guide.com/deca-vs-moca/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00032.warc.gz
en
0.94588
971
2.703125
3
“Data deduplication is inarguably one of the most important new technologies in storage of the past decade,” says Gartner. So let’s take a detailed look at what it actually means.

Data deduplication, or single instancing, essentially refers to the elimination of redundant data. In the deduplication process, duplicate data is deleted, leaving only one copy (single instance) of the data to be stored. However, indexing of all data is still retained should that data ever be required. In the same way that the phrase “single instancing” turns the noun “single-instance storage” into a verb, the word “dedupe” becomes the verb for “deduplication.” For example: “After we deduped our critical business data with Druva’s inSync Backup, 90% of the storage space and bandwidth it was using opened up–giving us the breathing room we need to innovate!”

Example: A typical email system might contain 100 instances of the same 1 MB file attachment. If the email platform is backed up or archived, all 100 instances are saved, requiring 100 MB of storage space. With data deduplication, only one instance of the attachment is actually stored; each subsequent instance is just referenced back to the one saved copy, reducing the storage and bandwidth demand to only 1 MB.

The practical benefits of this technology depend upon various factors, including:

- Point of Application: Source vs. Target
- Time of Application: Inline vs. Post-Process
- Granularity: File vs. Sub-File level
- Algorithm: Fixed-size blocks vs. variable-length data segments

A simple relation between these factors can be explained using the diagram below.

Target- vs. Source-based Deduplication

Target-based deduplication acts on the target data storage media. In this case, the client is unmodified and is not aware of any deduplication. The deduplication engine can be embedded in the hardware array, which can be used as a NAS/SAN device with deduplication capabilities. Alternatively, the engine can be offered as an independent software or hardware appliance that acts as an intermediary between the backup server and the storage arrays. In both cases, it improves only storage utilization.

Source-based deduplication, in contrast, acts on the data at the source before it is moved. A deduplication-aware backup agent is installed on the client, which backs up only unique data. The result is improved bandwidth and storage utilization. However, this imposes additional computational load on the backup client.

Inline vs. Post-process Deduplication

In target-based deduplication, the deduplication engine can either process data for duplicates in real time (i.e., as and when data is sent to the target) or after it has been stored in the target storage. The former is called inline deduplication. The obvious advantages are:

- An increase in overall efficiency, as data is passed and processed only once
- The processed data is instantaneously available for post-storage processes, such as recovery and replication, reducing the RPO and RTO window

The disadvantages are:

- A decrease in write throughput
- The extent of deduplication is smaller; only the fixed-length block approach can be used

Inline deduplication only processes incoming raw blocks and has no knowledge of the files or file structure. This forces it to use the fixed-length block approach (discussed in detail later). Post-process deduplication acts asynchronously on the stored data, and it has exactly the opposite advantages and disadvantages to those of inline deduplication listed above.
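As a rough illustration of how an engine can avoid storing the same block twice, the sketch below models an inline write path: each fixed-size block is hashed before it is written, and only unseen blocks are stored. It is a simplified toy (in-memory index, SHA-256 over 4 KB blocks), not any vendor's implementation.

import hashlib

BLOCK_SIZE = 4096            # illustrative fixed block size
block_store = {}             # hash -> block bytes (the single stored instance)

def write_stream(data):
    """Split data into fixed-size blocks, store each unique block once,
    and return the list of block hashes that describes the stream."""
    recipe = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in block_store:       # inline check before writing
            block_store[digest] = block
        recipe.append(digest)
    return recipe

def read_stream(recipe):
    """Rebuild the original data from the stored blocks."""
    return b"".join(block_store[d] for d in recipe)

payload = b"A" * 10000 + b"B" * 10000
recipe = write_stream(payload)
assert read_stream(recipe) == payload
print(len(recipe), "blocks referenced,", len(block_store), "blocks stored")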
File vs. Sub-file Level Deduplication
The duplicate removal algorithm can be applied at the full-file or sub-file level. Full-file duplicates can easily be eliminated by calculating a single checksum of the complete file data and comparing it against the existing checksums of already-backed-up files. It’s simple and fast, but the extent of deduplication is smaller, as this process does not address the problem of duplicate content found inside different files or data sets (e.g., specific email messages). The sub-file level deduplication technique breaks the file into smaller fixed-size or variable-size blocks, and then uses a standard hash-based algorithm to find identical blocks.
Fixed-Length Blocks vs. Variable-Length Data Segments
A fixed-length block approach, as the name suggests, divides files into fixed-size blocks and uses a simple checksum-based approach (MD5, SHA, etc.) to find duplicates. Although it’s possible to look for repeated blocks, the approach provides very limited effectiveness. The reason is that the primary opportunity for data reduction lies in finding duplicate blocks in two transmitted datasets that are made up mostly, but not completely, of the same data segments. For example, similar data blocks may be present at different offsets in two different datasets; in other words, the block boundaries of similar data may differ. This is very common when some bytes are inserted into a file: when the changed file is processed again and divided into fixed-length blocks, all of the blocks appear to have changed. Therefore, two datasets with a small amount of difference are likely to have very few identical fixed-length blocks.
Variable-length data segment technology divides the data stream into variable-length segments using a method that can find the same block boundaries in different locations and contexts. This allows the boundaries to “float” within the data stream so that changes in one part of the dataset have little or no impact on the boundaries in other locations of the dataset.
Every organization has a certain capacity to generate data. The extent of the savings depends upon, but is not directly proportional to, the number of applications or end users generating data. Overall, the deduplication savings depend upon the following parameters:
- The number of applications or end users generating data
- Total data
- Daily change in data
- Type of data (email messages, documents, media, etc.)
- Backup policy (weekly full, daily incremental, or daily full)
- Retention period (90 days, 1 year, etc.)
- Deduplication technology in place
The actual benefits of deduplication are realized once the same dataset is processed multiple times over a span of time for weekly/daily backups. This is especially true for variable-length data segment technology, which is much better at dealing with arbitrary byte insertions. The deduplication ratio increases every time the same complete dataset passes through the deduplication engine. Compared against daily full backups, which, I think, are not widely used today, the ratios are close to 1:300. Most vendors use this figure as marketing jargon to attract customers, even though almost none of their customers actually run daily full backups. Compared against modern incremental backups, our customer statistics show results between 1:4 and 1:50 for source-based deduplication.
Want to learn more? Download our white paper, 8 Must-Have Features for Endpoint Backup. Curious about Druva and what we do?
Find out more information about our single dashboard for data governance by checking out these popular resources:
- View a video of how Druva inSync works
- Take a quick screenshot tour
- Watch an on-demand demo of Druva inSync
<urn:uuid:423b20d1-62a6-4347-9590-c77704effb64>
CC-MAIN-2022-40
https://www.druva.com/blog/understanding-data-deduplication/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00032.warc.gz
en
0.902194
1,595
2.90625
3
“By 2020, the number of passwords used by humans and machines worldwide is estimated to grow to 300 billion,” predicts Cyber Security Media. Passwords are still a problem, and this pain has been going on for generations. People have more applications and internet accounts than they can count, and they have to handle far too many logins. Because password security is vitally important, enterprises force employees to change passwords regularly and invest heavily in password management, yet it’s clear that many companies still struggle to manage passwords properly and prevent password-related attacks. Most people struggle to remember all their passwords: in a single month of 2017, Microsoft had to reset 868,000 passwords and spent 12 million on resetting users’ passwords. According to the 2017 Data Breach Investigations Report (DBIR), 81% of data breaches are caused by compromised, weak, and reused passwords. Not much has changed in 2020; over 80% of hacking-related breaches are still tied to passwords.
Passwords are an outmoded security mechanism that has existed since the 1960s. Over the decades, the pursuit of security has brought increasingly complex password authentication methods, which have gradually frustrated users and administrators with their complexity and frequent resets. Yet as technology advances, there is a strong incentive to combine the simplest possible verification with real security, or even to get rid of passwords altogether. Since the average business employee must keep track of 191 passwords, according to a report from password management firm LastPass, businesses and enterprises need a practical path toward passwordless authentication. It’s hard to eliminate passwords all at once, but we can start by reducing how heavily we rely on them. If we are going to get rid of passwords, we need mechanisms in place to validate trust in users. Passwordless doesn’t mean there is no authentication; it means strong, secure authentication that reduces friction.
What is Multi-Factor Authentication (MFA)?
The difference between two-factor authentication (2FA) and MFA is that 2FA combines a knowledge factor with either a possession factor or an inherence factor to verify identity, whereas MFA may use three or more checks.
“A password is something you know. A device is something you have. Biometrics is something you are,” says Stephen Cox, chief security architect of SecureAuth.
There are various authentication factors, the most common of which include the following:
A knowledge factor: something you know, such as a password or PIN code.
A possession factor: something you have, such as an ID card, a security token, a device you own, or an app on your mobile phone.
An inherence factor: something you are, also called a biometric factor, meaning an innate physical trait of the user, including fingerprints and facial or voice recognition.
Most people now use smart devices with biometric sensors, which can serve as one of the checks used to verify identity as part of MFA. Biometric verification is usually less hassle than OTP verification, so users can authenticate easily and securely. Implementing strong MFA allows users to reduce their reliance on passwords, change them less often, and require less frequent resets.
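To make the possession factor concrete, here is a minimal sketch of how a server might verify a time-based one-time password (TOTP) generated by an authenticator app, in the style of RFC 6238. It is a generic illustration using only the Python standard library, not AuthenTrend's or any vendor's implementation; the shared secret, six-digit length, and 30-second time step are assumptions.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // time_step                 # moving factor (RFC 6238)
    msg = struct.pack(">Q", counter)                        # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()      # HOTP inner HMAC (RFC 4226)
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


def verify(submitted: str, secret_b32: str) -> bool:
    # Constant-time comparison avoids leaking how many digits matched.
    return hmac.compare_digest(submitted, totp(secret_b32))


# Hypothetical secret shared with the user's authenticator app during enrollment:
# verify(user_input, "JBSWY3DPEHPK3PXP")
```

A production verifier would typically also accept the codes for adjacent time steps to tolerate clock drift, and would rate-limit attempts; the point here is simply that the code proves possession of the enrolled device rather than knowledge of a password.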
MFA and Biometric Authentication
With powerful MFA, people can gradually eliminate the risk of relying on passwords as the single means of authentication and curb credential theft by combining additional methods of identity verification, especially biometric authentication, which cannot easily be stolen or copied remotely. There will be more verification choices in the future, and we believe these alternative methods can let us carefully abandon passwords while balancing the strength of IT security with ease of use.
Today, AuthenTrend is trusted by the Microsoft Intelligent Security Association, FIDO, and RSA, and develops not only a variety of fingerprint security keys but also the first fingerprint crypto hardware wallet in Taiwan. Our flagship product, ATKey.Pro, is the slimmest, most compact security key with the best fingerprint experience; it supports FIDO2 and U2F, enabling users to leverage common devices to authenticate easily to online services in both mobile and desktop environments. Unlike traditional fingerprint devices, our patented standalone enrollment technology lets users enroll fingerprints directly on the cards or USB keys, with no app download required. Our fingerprint-enabled, card-type blockchain cold wallet, AT.Wallet, just passed the IP68 waterproof test and was named a CES 2020 Innovation Award Honoree. AuthenTrend is leading the way in authentication with biometric technology. Our mission is to replace passwords with fingerprints for higher security and convenience.
<urn:uuid:24e2d6b0-1e16-48b7-91ba-2bcbfb284ef3>
CC-MAIN-2022-40
https://authentrend.com/ja/the-trend-of-authentication-the-journey-toward-a-passwordless-world/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00032.warc.gz
en
0.942874
975
2.75
3
Recycled doesn’t always mean destroyed
Everyone should consider computer recycling. As technology continues to improve at a rapid pace, the demand to get our hands on the latest and greatest laptop or desktop computer keeps growing. If you keep up with trends in technology, you are more likely to have older electronics that have become useless to you. Unfortunately, you can’t just throw your old desktop or laptop computer out with the trash. That’s why computer recycling is a viable option.
Advantages of Computer Recycling:
There are several advantages to computer or electronic recycling, including health benefits, economic benefits, and the creation of more jobs. Here are just a few specific reasons why everyone should consider computer recycling:
Keeping Our Landfills Safe
Today, electronic waste (e-waste) is the fastest-growing category of trash in our society. Improper disposal can result in electronics ending up in our landfills, causing severe environmental issues. Computers contain chemicals and materials that are harmful to the environment, so recycling your computer helps to decrease the number of electronics that end up in our landfills.
Protects Us Against Stolen Information
Leaving your old computer at the dumpster puts you at major risk of identity theft. In today’s world of technology, data thieves and cyber-terrorists are clever enough to steal your personal information even if you think your computer is broken or the files have been deleted. Using a certified recycler to dispose of your electronics can protect your identity and confidential information.
Recovery of Resources
Some of the parts from an old computer can be recovered to make a new one. This means new resources can be conserved, while the time and energy used during the manufacturing process are decreased.
<urn:uuid:e7c014cf-a65a-483a-8971-3ef456c7bf09>
CC-MAIN-2022-40
https://datashieldcorp.com/data-destruction/computer-recycling/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00032.warc.gz
en
0.931022
372
2.703125
3
Will AI replace psychologists? Artificial intelligence (AI) is helping shape many industries and may even influence the future of psychology. Will AI replace psychologists altogether, though? Or will AI assist psychology professionals and their patients in creating long-term, effective solutions, while leaving room for human psychologists? These are the questions experts in both fields are considering. Psychology and artificial intelligence go hand in hand, after all. We’ve always built machines to be smarter and faster than us, but does that really mean AI will one day take psychologists’ jobs?
Wait… could AI really replace psychologists?
Imagine going in for your weekly therapy session and talking to, well, a robot. The robot psychologist would listen to your concerns, provide some helpful advice, and maybe even write you a prescription for medication. How would that make you feel? Would you feel comfortable with AI taking over as your psychologist?
It’s a little hard for most people to even imagine. We’re so used to the idea of talking to a human about our mental health that it’s difficult to imagine how artificial intelligence could help us in the same way. Because of this, it’s unlikely that AI will fully replace psychologists, at least any time soon. But that doesn’t mean this revolutionary technology won’t help shape the future of psychology. In fact, it already is. Demand for mental health services soared during the COVID-19 pandemic, and that demand will not go away soon. AI and mental health services will need to merge in order to keep up with demand and provide outcomes for patients.
AI technology in psychology—what does it involve?
Healthcare technology solutions come in many forms. Depending on the area of medicine, artificial intelligence can have several different uses. There are 4 main types of AI, each with a different level of sophistication. They are commonly described as:
- Reactive machines (level 1)
- Limited memory (level 2)
- Theory of mind (level 3)
- Self-aware AI (level 4)
Currently, researchers have been exploring the possibilities of the limited memory (level 2) stage. Humanity has long since mastered level 1 AI technology, but we are still a long way off from self-aware artificial intelligence that could take on the same work as human beings. An AI psychologist would have to be able to relate and empathize with their patients to be effective. Psychology and artificial intelligence are always developing, however. The field of AI is exploring many exciting technologies, including:
- Machine learning
- Deep learning
- Facial recognition
- Neural networks
- Natural language processing
- Evolutionary computation
Many of these technologies are still in their infancy. We have only just scratched the surface of level 3 technology, and it’s far from perfect at this stage. In the field of psychology, experts still rely on dataset analysis and predictive analytics to assist psychologists with AI.
AI psychology, data, and improvement of patients’ experiences
Combining psychology and artificial intelligence involves the use of large datasets, which can provide answers to psychologists’ biggest questions and streamline their daily responsibilities. Data analytics in healthcare has already proven crucial for improving patient outcomes and experience, and it has huge potential for helping people with mental illness who are seeking therapy as well. Data can be helpful in removing friction from the mental healthcare process. For instance, AI software development can involve creating tools for patients to find a therapist near them.
Finding a therapist easily and being able to choose from providers who take their insurance provides a much better experience for patients. Another example is in diagnostics. Mental healthcare providers already use data to make diagnostic decisions, often turning to flowcharts and other aids to determine what type of mental illness and specific patterns a patient is dealing with. Artificial intelligence and data simply make this process easier and more accurate for providers and patients alike. Artificial intelligence is key to delivering patient-centered healthcare. Although it might seem counterintuitive, bringing advanced technology in and removing subjectivity from the processes of finding a therapist and getting a diagnosis makes for a more “human” experience for patients. It’s all about knowing what patients want and need from their psychological care, and data is crucial for truly understanding that. AI psychologist vs. human therapy Until recently, there’s been no danger of knowing whether you’re talking to a real person. Artificial intelligence hadn’t progressed to where it would be difficult to tell the difference. However, as AI technology improves, we have to think critically about the difference between what human therapists currently offer and what a machine can provide. There may be a reckoning in the future, as more people prefer the experience of working with a robot therapist for most forms of therapy. Right now, a human therapist provides the human emotions and feedback most people appreciate in their mental health treatment. They can empathize as a fellow human, while a machine can only draw on the knowledge it has gained from data and learning. That’s not to say that AI therapy doesn’t have some advantages. Robot therapists never get tired or have a bad day. They don’t have biases shaped by cultural forces and they offer anonymity and convenience to patients. Psychology and AI-assisted diagnostics People self-diagnose their mental health concerns all the time, especially now that we have the internet at our fingertips anytime, anywhere. But if we could all diagnose our mental illnesses so effectively, there would be no need for human therapists. We still need trained professionals to diagnose mental health problems. With that said, diagnostics are still notoriously challenging, even for professionals. Symptoms may overlap and it takes a lot of experience and skill to determine the actual problems a patient is facing. The good news is that we now have enough data and knowledge of different mental health conditions to use AI in quickly and accurately providing diagnostic help for mental health professionals. By streamlining the diagnosis process, psychologists can handle larger caseloads and provide a better experience for their patients. Will psychologists be automated? In a sense, psychologists have already been automated in some ways. They are using advanced technology to remove the emotion from the diagnosis and treatment process, with excellent results. Emotions can be helpful in providing therapeutic benefits, but they have no place in evaluation. Even interventions can involve filling in the blanks based on best practices. That’s something AI can handle very well. Many people are now using apps based on AI as a supplement to their mental healthcare. They might use them to help implement the strategies provided by their therapist, but the apps do not require the supervision of a mental health professional to be beneficial. 
Empathy and emotional intelligence in psychology So, where do human emotions come into all of this? Well, it really boils down to how an actual human can make someone feel when they need to talk about their mental health. Talk therapy isn’t really something a robot can do effectively yet. People appreciate empathy and the lived experiences of others. A trained psychologist not only has in-depth knowledge of the therapeutic process but also can identify with the patient’s experiences. The thing about human experiences is that they’re difficult to quantify. It’s hard to put your finger on why a computer isn’t as effective at communicating with a human being as another human. We absorb so many small social cues, contextual exceptions, and other complex communication tools as children that it is difficult for artificial intelligence to mimic them all. And then, there’s the reality of a person sitting across from you, rather than a robot. A person looks like a human and their body language, eyes, and expression all add to our ability to communicate. Until it’s possible to incorporate these into the AI therapy experience, many people will probably prefer working with a human. There is a reason apps can only take people so far in their mental health journey right now. When someone needs to work one-on-one with a therapist, they need the support and comfort of another person sitting across from them. Are psychologists ready for AI to assist them? It’s not surprising that many professionals in the field have some reservations about welcoming AI technology into their work lives. Change is difficult, as we can see from the slow adoption of technologies such as Big Data and face recognition in healthcare. However, others are embracing technology as their workloads increase and they need help in managing their patients’ needs. Looking to AI as a tool for improving patient outcomes can help ensure that therapists will be on board with incorporating this technology. Artificial intelligence can be an “assistant” of sorts, removing friction within everyday workflows and freeing up psychologists to use their skills for working directly with patients, something that most AI tools can’t do effectively. Psychologists who don’t look ahead to the future of the psychology field may find themselves in trouble down the road. They need to be ready and willing to use these tools without reluctance or suspicion. Not only will it be necessary for modern workflows, but patients will come to expect the benefits of advanced technology. Some individuals may not be ready to embrace the AI revolution, but it’s coming. And by working with artificial intelligence instead of against it, AI will be less likely to replace psychologists in the future. Why still consider working as a psychologist? Does psychology HAVE a future? With all this talk of psychology and artificial intelligence, does it even make sense to get into the field at this point? Well, yes. There are a couple of reasons that psychology can still offer great career opportunities. First, psychology jobs pay well and are great for people who love to work with and help others. A psychologist’s job is to understand and provide insights that help people manage their lives more effectively. Even if you use artificial intelligence tools to accomplish those goals, you will be the one putting the pieces together, problem-solving, and ultimately helping that patient. The future of psychology and AI are closely linked. 
After all, we are essentially creating our AI systems to mimic our own intelligence. To do that, we need a deep understanding of how humans think and feel. Can psychologists work in artificial intelligence, in that case? Absolutely! Whether you want to work with patients directly or you’re excited about the possibilities of artificial intelligence as the psychology of the future, there are so many great employment opportunities. Until (or if) psychiatrists are replaced by AI, we will need human brains to help patients and develop systems that will best serve patients. The future of psychology careers is still bright. Will AI replace psychologist roles, including psychiatrists, in the future? It’s still debatable at this point, but probably not entirely, at least for a long time. Clearly, artificial intelligence and similar technologies are already impacting the industry. But we’re definitely a long way off from that hypothetical robot sitting in an office, listening to patients talk about what’s bothering them. Not only does technology have a long way to go, but people are naturally suspicious of change. Many feel very comfortable talking to a human therapist but wouldn’t feel the same about working with a robot. Societal change happens slowly, especially for mental health. With that said, we’ve seen how quickly AI and technologies like computer vision software have advanced. It’s not unbelievable to think that in another 50 years, we could see a very different future for psychology. With a Bachelor’s in Psychology along with an MBA, Sarah Daren has a wealth of knowledge within both the mental health and business sectors. Her expertise in scaling and identifying ways tech can improve the lives of others has led Sarah to be a consultant for several startup businesses, most prominently in the wellness industry, wearable technology, and health education. Have an idea for a revolutionary mental health app and need help with its development? Contact our team of AI developers and data scientists to discuss it in more detail.
<urn:uuid:60b2bcef-bf0e-453d-b4bc-7e62b3781de7>
CC-MAIN-2022-40
https://indatalabs.com/blog/will-ai-replace-psychologists
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00032.warc.gz
en
0.953336
2,484
2.5625
3
Microsoft Lobe is a new product from Microsoft that allows general users to create and train machine learning models without any coding. Lobe, which was acquired by Microsoft in 2018, is now available to download for Windows and Mac. After two years of work, Microsoft finally announced the public preview in its official blog post. The software's simple visual interface allows people to easily create intelligent apps that can understand hand gestures, hear music, read handwriting, and more, without writing a single line of code. Lobe uses open-source machine learning architectures and transfer learning to train custom machine learning models on the user's own machine. Lobe's current version learns to look at images using image classification, categorizing an image into a single overall label. Microsoft is also working on expanding to more types of problems and data in future versions. For people who have privacy concerns, Lobe does not require an internet connection or logins. It works as a standalone app on your system without even the need for a cloud service. For now, Lobe will only output the machine learning model. The Lobe team is also working on a collection of apps and tools to help run your model without any code. "Lobe is taking what is a sophisticated and complex piece of technology and making it actively fun. What we find is that it inspires people. It fills them with confidence that they can actually use machine learning. And when you have confidence, you become more creative and start looking around and asking, 'What other stuff can I do with this?'" said Bill Barnes, manager for Lobe.
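For readers curious about what Lobe automates, image-classification transfer learning typically looks something like the sketch below. This is a generic PyTorch/torchvision illustration (assuming a recent torchvision release), not Lobe's actual code; the choice of ResNet-18, the three example labels, the frozen backbone, and the learning rate are all assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and reuse its learned features.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                           # freeze the pretrained layers

num_labels = 3                                            # e.g. three hand gestures
model.fc = nn.Linear(model.fc.in_features, num_labels)    # new classification head only

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()


def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step over a batch of (image, label) pairs."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Lobe's value is that it hides exactly this kind of boilerplate, along with the data handling around it, behind a visual interface.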
<urn:uuid:d4067af3-3285-400c-b004-1b39c32092cb>
CC-MAIN-2022-40
https://www.ciobulletin.com/others/microsoft-lobe-lets-you-build-machine-learning-models
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00032.warc.gz
en
0.927089
373
2.59375
3
Solutions are a Visual Studio concept. A solution is a container holding one or more projects that work together to create an application. Splitting your solution into a number of projects has several advantages:
- Multiple users can work on a solution, as different users can work on different projects.
- Breaking a solution down into a number of smaller projects makes them easier to handle and quicker to build.
- The projects in a solution can be in different programming languages, so splitting your application into a number of projects enables you to take advantage of mixed-language programming.
- Solutions can contain projects where some are compiled as managed code and some as native COBOL. You can use the various interoperability techniques to combine native COBOL with managed COBOL.
A solution has the extension .sln and is a human-readable text file, which you could edit, though we recommend that you use the Visual Studio IDE to do so. A COBOL project has the extension .cblproj, and again is human readable. It is in MSBuild format, which is explained in the Visual Studio Help. Different types of project have different extensions; for example, a C# project has the extension .csproj.
Templates for different types of COBOL projects are supplied. Each template creates the appropriate file structure to hold the project and defines the appropriate build settings. See the Visual Studio Help for more information on solutions and projects.
<urn:uuid:d9c7f0bc-8ee0-4224-ba56-ddf5477e9769>
CC-MAIN-2022-40
https://www.microfocus.com/documentation/enterprise-developer/ed30/VS2013/H2VSVSSLNP.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00032.warc.gz
en
0.910224
307
2.90625
3
How can you prepare students to study remotely? This guide will look at how universities can enable remote study and the impact this has on students, the organization, faculty, and staff. Provide access to intranets, VLEs and/or LMSs: Any systems that contain course materials or otherwise must be consistently accessible. Ensure this is the case and perhaps consider halting any plans for routine maintenance on essential services if it may disrupt access to that service. This extends to being able to submit course work. Provide access to files and cloud storage: This is the same as above but relates to files/cloud storage solutions. Ensure faculty and students can access their files without having to go on-campus. Maintain clear communication with faculty and students: Keep communication open, concise and easily accessible for everyone, with clear signposting to the resources they require. Even with every resource available from home, if your users don’t know they’re available from home, they will not be able to access them. Try not to rely on your users finding the information; for example, if you create a web page with relevant information, try to put that page in front of your users by sending it out via email, or by adding an alert/banner to portals and your home page. What is distance learning and what is distance teaching? The phrases ‘distance learning’ and ‘distance teaching’ simply refer to the practises of learning and teaching without being present at a physical location. So, what is the difference between online learning and distance learning? - Online learning refers to situations where students and faculty may be either physically together in the same location or separated. - Distance learning applies when those involved are always separated. - Online learning tools may sometimes be referred to as cloud-based learning solutions. Why choose distance learning? Generally, the advantages of distance learning are based around versatility and the ability to overcome disruption. An agile approach to learning and teaching allows universities to adapt during difficult circumstances. So, what are the advantages of distance learning? Advantages of distance learning Minimizes impact of disruptions to routine — disruptions to teaching, studying and coursework-submission are dramatically reduced, allowing the activities to continue despite pupils and staff working off-campus. If online learning measures have been taken before interruptions to service hit, universities are able to respond in the timeliest manner possible, minimizing disruptions. Ability to learn and study effectively off-campus — being able to study as well as when on-campus provides flexibility and allows students to learn in the way they find most productive, maximizing their chance of achieving the highest grades they are capable of. Improves access to resources and teachers/experts — enabling distance learning can improve this communication. With systems in place and students more used to leveraging digital resources to learn, there is often a better line of communication between students and those they learn from. Saves money on facilities, hardware, travel infrastructure — with fewer students and staff on campus, hardware and specialist labs may be condensed and consolidated. Reducing square footage and necessary equipment can save money in the long run for universities. It can also save money for students when it comes to travel expenses, and save money for universities in providing travel infrastructure. 
Reduces commuting and equipment set-up time — with reduced equipment and travel needs, time can also be shaved off commuting and equipment set-up, leaving more time to learn, study and otherwise achieve. This also applies to university operations and can provide more time to approach strategic projects. Increases potential for partnership programmes — enabling distance learning can increase the potential for partnership, collaboration, and exchange programmes, thereby increasing the commercial potential for universities. The disadvantages of distance learning revolve around the limitations and negative effects of not being in the same physical place. The goal of enabling distance learning and remote study involves reducing or negating as many of these disadvantages as possible. Disadvantages of distance learning Longer response times — collaboration, feedback and resource sharing may be more difficult or time-consuming to access and arrange. The lead-time on getting a response to a question over email is greater than a question asked in person. Problems due to unreliable internet access — for those with power or internet issues, distance learning may be compromised or not possible. Whether this is down to the means available to the student/faculty or whether it’s down to what’s available in specific areas, it is worth considering. Limitations for course elements covered — there may be elements of a course that are not possible to cover through distance learning. For example, an examination that requires an invigilator will not be viable through distance learning. Another example lies in courses that require access to particularly specialist equipment such as science laboratories, or large medical equipment; for example, MRI machines. Entails isolation and requires self-motivation — without being in the environment students heavily associate with learning, it may be difficult for some to get into the correct mindset to learn and study. Isolation itself can also have a negative effect on motivation and some students may learn more effectively with others around to discuss challenges with. The costs and savings of distance learning Initially, for universities exploring distance learning, the associated costs may look daunting. VDI can quickly become expensive, hardware requirements are very different to those of on-site learning and big changes to systems and processes require work. However, as illustrated by the COVID-19 pandemic, the task of responding to disruption will almost definitely be more expensive, more time-consuming and less successful than accounting for it ahead of time. Being prepared to handle service disruptions pays off over the long-term. Continuity of service in times of disruption, when it comes to software delivery, is possible through: - application virtualization Implementing both solutions quickly and in emergencies may be difficult, as you may end up having to go through a vendor who doesn't offer the relevant application packaging services and/or training, such as the AppsAnywhere Packaging Subscription Service. In addition to this VDI is supremely expensive if it isn't implemented using certain principles and methodologies. Without proper planning of software provisioning, universities will need a 1:1 ratio of VDI/application virtualization licenses to users. Fast-tracking installations will incur extra costs with most suppliers, as will sourcing the hardware required in due time. 
It is for this reason we recommend application virtualization over VDI where possible. If a pandemic is to blame, a number of difficulties may arise: On-site installations. It may not be possible for professionals to come on-site to carry out installations and necessary training. Managing processes on a reduced timescale. The above point doesn’t even address the difficult task of managing these processes in the proper order on a reduced timescale. High demand. Moreover, in times of emergency, it is likely that demand will be high from multiple sources, pushing lead times ever higher or even making implementing a solution entirely impossible. The consequences of failing to provide education in disruptive times are not light. Given students are customers and the main source of revenue for universities, losing that revenue would be fatal to any university. Equally, failing to provide the service students are paying for may open organizations up to litigation, which could also be fatal on a large scale. Considerations for the solutions you require To understand what’s required to get started in enabling long-term remote working, we’ll discuss the facets of the particular context that users will be in. This will allow us to understand which technologies will be essential, which ones are not appropriate and what the immediate obstacles to overcome are. Once that is known, it’s easier to gain a clear understanding of what, specifically, is required to achieve remote lectures and home study. Students and faculty will most likely be using a non-managed machine, for example, their own: - Apple Mac - Linux device - With some technologies, it is also possible to deliver to any device with an HTML5 browser This tells us that traditional imaging will likely be impractical in terms of gaining physical access to non-managed machines, and we haven’t yet considered the sheer number of machines there are likely to be. You may, however, need to use imaging to image virtual desktops with key software, rather than physical machines. Device type is, understandably, key to how a software title should be delivered. The following are examples of some of the challenges related to device type: Windows machines are the simplest to deliver to as they are technically compatible with any key delivery method. Mac devices are more complex and will usually require a VDI/RDS solution such as Parallels RAS, or a specialized Mac deliver service such as Jamf Pro. Chromebooks are incapable of installing 3rd party software and are thin in computing power, which essentially renders all delivery methods inviable except for VDI, and, for lighter software titles, application virtualization solution, Cloudpaging. Obviously, users will be off-site. Many software licenses dictate that university software may only be run on-site, and to enforce this it is often necessary for a title to connect to an on-campus license server. VDI is a perfect solution to this as the virtual machine and, by extension, the software title itself will actually run and be executed on a server on-site and then pixel-streamed to the endpoint. Alternatively, universities can provide student and staff VPN, to get access to software titles that require connection to on-campus license servers. The software title itself will factor into how it is to be delivered. 
For any free and open-source software (FOSS) apps, it is logical and efficient to provide a download link at the vendors' website/download portal in order to avoid having to use up any desktop or application virtualization licenses. BYOD is the principle of allowing students and faculty to use their own devices to access university software, where possible using the same tools and with a consistent user experience. Short of providing a portable managed device or installing non-portable managed devices in the homes of your users, BYOD is essential to fully providing the ability to work and study remotely. What are the benefits of BYOD for students? Flexibility: BYOD allows students to work from wherever they work best. The ability to work in an environment they find productive is invaluable to students and allows them to approach work in the way in which they are most likely to achieve. Familiarity: BYOD permits students to use their own machines; machines with which they are more familiar and can achieve more efficient workflows. This also reduces the chance of losing files. Constant access to software: If all their university software is accessible from their own machine, students have constant access to their software and don’t need to worry about lab opening times and teaching schedules when looking for a place to work. What are the benefits of BYOD for the university, faculty and staff? All of the above: The benefits of BYOD for students all apply to faculty and staff. Increased enrolment: With the value added to the student experience with BYOD, universities can attract and retain more students through providing a better offering. Furthermore, students achieving higher grades due to better facilities will improve the reputation of a university and attract even more students! Reduced demand on support: With students being more familiar with their machines and the process of installing university software on them, the need for support in this area will be reduced, saving costs and freeing up time to address strategic goals. Reduced specialist labs: The more students use their own hardware, the lower the hardware investment required by the university is! This extends to maintenance costs and replacement at the end of the hardware’s life cycle. An added benefit is that, with fewer specialist labs, campus square footage can be reduced and consequentially, overhead costs. If BYOD is already enabled, how quickly can it be deployed? If service is disrupted to the degree that students and staff are unable to access university campuses, if BYOD is already enabled through the appropriate technologies, deploying and allowing constant access is essentially a simple matter of managing provisioning logic! The result is enabling home study and remote lectures in a matter of days, which is in stark contrast to the weeks or months it may take to source and install the required technologies from scratch! How does application virtualization help students study remotely? Application virtualization doesn’t require a constant internet connection once apps are fully virtualized as it is executed on end-point hardware. Application virtualization doesn’t require a constant internet connection once apps are fully virtualized: Because software is technically running using the endpoint’s hardware (despite being isolated inside a ‘virtual bubble’), a network connection doesn’t need to be maintained to continue running. 
This means that connection interruptions don’t prevent the app from being used and performance isn’t affected by drops in data speeds. Students are able to work on the move, in areas with spotty connections and they’ll never lose work from a disconnection. The full application doesn’t need to virtualize to become accessible: This allows students to begin using apps without waiting for a lengthy download to complete. This means students have: - quicker access to software - a better user experience - more time to spend on completing university work Software behaves as if locally installed: This means that context menus and local resources/storage can be utilized exactly the same way as if the software were physically installed on the machine. On-demand access: As soon as an app is required, it can be virtualized and used. There is no lead time on accessing an app other than the initial, essential portion of the download and no support from IT or university staff is required. No complex installation process: There is no manual installation process with application virtualization. This is less prone to human error resulting in 100% success rate of accessing applications while also saving time. How does VDI help enable distance learning? VDI can be run on low-spec machines as it doesn't rely on or utilize the hardware of the end device. Can be run on low-spec machines: Desktop virtualization doesn’t rely on the hardware of the end device. The result is that any device with a reliable internet connection can run any app, regardless of how demanding it is or of what hardware specs it may need. Cross-platform delivery: Because applications delivered using VDI are executed on a university-owned or third-party server, the pixel streamed to the endpoint, it doesn’t matter what the operating system of the endpoint is. You can even deliver heavyweight applications to Chromebooks. This lowers the required investment from students to be able to access all of their university software, making remote learning accessible to all students. Access ANY app: Once again, due to software being executed on a server on-site, any applications that require a connection to an on-campus license server can still be delivered. Students do not need to visit specific, physical locations to access specialist apps and are provided with the flexibility to truly engage in distance learning. In summary, making distance learning effective and viable for those who need it relies heavily on preparation and having components in place ahead of time. Nobody could have foreseen the events of the coronavirus pandemic; however, now we have seen the effects of a global pandemic on modern society, it is evident that long-term disruption to service is something that should be considered and planned for. "Engineering and building a VDI environment is complex. If you don’t currently have VDI, you’ll need to get the right experts in to help design and build it" Brian Madden, VMware blog In addition to this, consider the positive effects of enabling distance learning for your students and remote teaching for your faculty. Remote-working global experiment For many, the disruption of COVID-19 may become something of a proof-of-concept that remote working is viable, with the proper systems in place. Make note of any benefits you see in enabling remote lectures and study from home, how attitudes to learning are affected and how student outcomes may improve when students can study on their own terms, where appropriate.
<urn:uuid:8a2d0825-ca34-42fb-b200-5d4cb2a5eeb9>
CC-MAIN-2022-40
https://www.appsanywhere.com/resource-centre/byod/how-can-you-prepare-students-to-study-remotely
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00232.warc.gz
en
0.941299
3,398
2.734375
3
Virtual Local Area Networks (VLANs) and VLAN Tagging let you sub-divide or isolate your physical network into smaller virtual networks to provide better functionality or services, or to isolate or restrict traffic between networks. For example, an organization may want to isolate Voice-over-IP (VoIP) traffic from users' workstation data due to the different quality-of-service requirements for each.
VLAN Tagging is the ability of a network device to insert VLAN IDs, or VLAN tags, into data packets to distinguish traffic from different VLANs. VLAN Tagging is necessary when your multi-VLAN traffic spans trunks between switches that support IEEE 802.1q. As a packet goes through a switch supporting IEEE 802.1q and enters a trunk channel towards the next network device, the switch inserts a VLAN ID, or VLAN tag, into the data packet in order to identify the VLAN to which the packet belongs.
BlueCat DNS/DHCP Servers can be configured with multiple VLANs (each represented as a sub-interface). In this way, the DNS/DHCP Server can identify which packets belong to which VLAN and respond appropriately. To support VLAN tagging, you configure a sub-interface on top of a parent physical interface (either eth0 or bond0) on the DNS/DHCP Server and assign that sub-interface a specific VLAN ID. Any response on such a sub-interface will be broadcast to hosts and network devices in the corresponding VLAN.
- VLAN Tagging is supported on managed DNS/DHCP Server appliances or virtual machines using software version 8.3.1 or greater only
- VLAN Tagging can be configured on standalone DNS/DHCP Servers or xHA pairs
- VLAN Tagging can be used with port bonding to provide customers with NIC-level redundancy.
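As a side note for readers who want to see the sub-interface idea described above outside of Address Manager, the sketch below shows how an 802.1q-tagged sub-interface is typically created on a generic Linux host using the standard iproute2 commands. This is only an illustration of the concept, not BlueCat's procedure; the interface name, VLAN ID, and address are assumed values.

```python
import subprocess


def add_vlan_subinterface(parent: str, vlan_id: int, cidr: str) -> None:
    """Create an 802.1q sub-interface (e.g. eth0.100) and give it an address."""
    sub = f"{parent}.{vlan_id}"
    commands = [
        # Create the tagged sub-interface on top of the parent interface.
        ["ip", "link", "add", "link", parent, "name", sub, "type", "vlan", "id", str(vlan_id)],
        # Assign the address used to answer hosts on that VLAN.
        ["ip", "addr", "add", cidr, "dev", sub],
        # Bring the sub-interface up so tagged traffic is processed.
        ["ip", "link", "set", "dev", sub, "up"],
    ]
    for cmd in commands:
        subprocess.run(cmd, check=True)


# Hypothetical example: VLAN 100 carried on eth0
# add_vlan_subinterface("eth0", 100, "192.168.100.2/24")
```

On a managed DNS/DHCP Server, the equivalent settings are applied through the product's own configuration workflow rather than by hand; the sketch is only meant to show what a tagged sub-interface is.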
<urn:uuid:bf6a532f-4c25-4cc9-ac90-58e4fa1e6af0>
CC-MAIN-2022-40
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/VLAN-Tagging/9.1.0
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00232.warc.gz
en
0.827958
385
3.015625
3
One of the most popular buzzwords in tech today is the “Internet of Things” (IoT) but what exactly is it referring to? This week, we’ll explain the very basics of IoT and how it will continue to transform the way we communicate and conduct business in the 21st century. The phrase “Internet of Things” was first coined in 1999 by Kevin Ashton, executive director of the Auto-ID Center at MIT, and referred to the myriad of devices that would eventually be connected to the internet. In summary, the “internet of things” is the natural evolution of the network, whereby smart objects on the network have the ability to collect data, in addition to users having the ability to connect to those devices and issue commands, such as remotely setting a home or business’ security system or setting a home DVR to record a television show, from a remote location, miles away. Today, the “Internet of things” includes mobile phones, tablet computers, cars, fitness trackers, refrigerators, security systems and light bulbs, just to name a few. All of these devices collect data and exchange it through dedicated sensors, over the internet. In 2010, there were over 12.5 billion devices connected to the internet, even though the worldwide population was only 6.8 billion, marking the beginning of “the Internet of Things” – where devices on the network outnumber human beings on the network. The following are some of the ways that IoT will make our lives easier: Healthcare providers will have the ability to monitor patients without visiting them in person, reducing healthcare costs and significantly reducing the risk of errors and omissions. Medical devices may even, at some point, have the ability to administer medication, ensuring the patient is always comfortable, even when a doctor cannot be immediately reached. There are a myriad of ways that IoT will help modern medicine make significant advances. Google, for example, is currently working on a contact lens that has the ability to measure glucose, making life easier for diabetics. Self-driving cars may be the most cutting edge example of IoT. You can now buy a car that will drive you to and from work, suggest potential shortcuts based on real-time traffic conditions, notify you of weather forecasts and even suggest what type of music you might like to listen to on the commute. It’s only a matter of time before these vehicles are the rule, rather than the exception. Some cutting edge examples of smart housing include sensors that can detect human behavior and adjust the mood of the house accordingly by dimming lights and putting on soothing music, for example. Other examples may include the ability to draw a hot bath for the owner or brew a pot of coffee, automatically, based upon user input. Sales Analytics and Distribution Manufacturers can benefit from IoT by tracking their product from the initial sale to production to shipment, all while monitoring parts inventory levels and benefiting from AI that automatically orders new parts, as stock runs low, before any employees are even aware of the issue. As is the case in every technical revolution, the “IoT” may mean that certain tasks are handled by automation, allowing businesses to prosper in an age of Artificial intelligence and internet connectivity. If you would like more information about IoT, including which IoT platforms would best suit the needs of your product or solution, we invite you to fill out the form on this page and someone from our offices will get back with you.
<urn:uuid:8e7a7a5b-3e10-462c-a827-a31c8f18cdf8>
CC-MAIN-2022-40
https://www.clarusco.com/what-in-the-world-is-the-internet-of-things/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00232.warc.gz
en
0.956277
726
3.484375
3
The National Institute of Standards and Technology (NIST) has created a new method that organizations may use to protect themselves from phishing, a cyber attack that uses emails containing malicious links and, potentially, malware. The method, known as the Phish Scale, is designed to help explain the click rates of links found in phishing emails, NIST said Thursday. “The Phish Scale is intended to help provide a deeper understanding of whether a particular phishing email is harder or easier for a particular target audience to detect,” said Michelle Steves, a NIST researcher. The rating system bases its results on cues seen in a phishing email's content; these cues are the details that influence whether individuals perceive the email as legitimate. Steves and fellow researcher Kristen Greene said NIST needs other parties, including those outside the public sector, to provide more data for the Phish Scale's further development. The additional data would expand the scale's use to a wider range of operational scenarios.
<urn:uuid:4320486c-7bbe-4958-9c45-5be8b3051896>
CC-MAIN-2022-40
https://executivegov.com/2020/09/nist-introduces-new-method-to-assess-phishing-cases/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00232.warc.gz
en
0.940157
213
3.15625
3
Deciphering how to configure a Web server to meet your needs can seem like a daunting task. And, when faced with a new HTTP server that provides a rich set of highly configurable features, knowing where to start can be quite a challenge. To make that challenge more manageable, IBM’s latest incarnation of the HTTP Server (powered by Apache) for iSeries simplifies the approach to Web server configuration. A new wizard guides you through the common steps for configuring a new Web server in an understandable and repeatable manner! Let me take you on a guided tour that shows you how to create, configure, and use your first Apache server. The HTTP Server (powered by Apache) requires special software and authorities. In OS/400 Version 4 Release 5, the HTTP Server (product ID 5769-DG1) must be installed on your system. You must also order a new group PTF and make sure the Java Virtual Machine (JVM) 1.2 and the Java Developer Kit (JDK) are installed on the system. The IBM HTTP Server Documentation Center (www.iseries.ibm.com/products/http/docs/v4r5) provides you with all the information you need to get started. In the documentation center, click on the HTTP Server (powered by Apache) section to view the set of articles specifically for Apache. Read and follow the instructions in these articles: “Installing the server,” “Testing the installation,” and “Configuring a new server.” The “Configuring a new server” article will tell you to start two administration servers. In V4R5, the Web pages for configuring Apache require a second administration server called ADMINA. The article concludes by guiding you to the administration Web page, where new Apache servers can be created (see Figure 1). This is where our journey will begin. Configuring Your First Apache Server If you have experience creating original HTTP servers, you’ll find creating Apache servers much easier. To begin, click on Create HTTP Servers to start a wizard that will walk you through a series of Web pages. Each page in the series will provide brief information and will ask you to answer a question. The objective of the wizard is to prompt you for the necessary information required to easily get a basic Apache server configured with common server functions. Your path through the wizard will vary depending on how you answer the questions. The questions are straightforward, but I’ll walk you through them while you create your first Apache server. (Note: Click Next to advance to the next question. No server is created until you arrive at the last question and click Finish. Until then, you can use the Back button to go back to previous questions and answer them differently, if desired.) 1. The first Web page asks which type of HTTP server you want to create. Although the recommended type is Apache, you can use the wizard to create original servers as well. Select HTTP Server (powered by Apache). 2. You are asked for the name you would like to give your HTTP server. Choose a name that uniquely identifies the server and that reflects what the server will be used for. In the future, you will use the name to identify the server you want to configure. Customers of your Web site will not know or use this name. For this example, type TRYAPACHE as the name for your Apache server. 3. The next question asks whether you want your new Apache server configured just like an existing server. 
This is a handy feature of the wizard, because it can save you time if you require more than one server configured in a similar way (such as a test versus production server). If you answer No, the remaining questions will step you through the basic configuration setup. If you answer Yes, you have the chance to migrate the configuration of an existing original HTTP server and have that equivalent configuration used by the new Apache server. Converting or migrating original HTTP server configurations to Apache is an advanced task that requires its own separate discussion. Answer No to continue. 4. You will need to specify a parent directory where you want your server’s configuration and logging information stored. This is referred to as your server root. A default of /www/tryapache is displayed for your consideration. Use the default directory shown. 5. A document root needs to be specified next. The document root is the directory that contains or will contain the documents or Web pages to be served by your Apache server. A default directory of /www/tryapache/htdocs is provided. Use the default directory shown and continue to the next page. 6. The HTTP server must be told which of the IP addresses on the iSeries system to look for HTTP requests on. For this server, pick all IP addresses. A unique port number must be used by each server application that runs concurrently on the system. The port tells the system which server application handles the requests coming in on that port. If you don’t already have an HTTP server running on port 80, use 80, which is the preselected default. Otherwise, pick a unique port that is not being used on your system. To see the ports currently in use on your system, enter the command NETSTAT OPTION(*CNN) and press F14 to display the port numbers. Browse the numbers in the Local Port column to verify that the number you want to use is not already taken. 7. Most Web sites keep an audit trail of what their HTTP server is doing. Configuring your Apache server to log every request made to the server allows you to know which Web pages are being accessed via your server and which users are accessing those pages. To answer the question on the Web page, select Combined log file, which will enable both access and error logging. 8. After clicking Next, you will be presented with a summary of the initial configuration for your Apache server (see Figure 2). At this point, you must click the Finish button in order to actually create the Apache server. That’s all there is to creating and configuring your first Apache server. Starting Your Apache Server You are now ready to use your new Apache server! The final information shown by the wizard directs you to your next step. You can start your server and use it to view your documents in your Web browser, or you can continue to configure your Apache server by using the configuration forms to enable more advanced features. Continue by choosing the Manage button. Find your server in the list (it should already be preselected), then click Start. After the list of servers refreshes, it will show that your server has a status of running. To test that your new Apache server is functioning, enter http://myhostname:port/ in your Web browser where myhostname is the name that identifies your iSeries system (not your Apache server name), and port is the port number you selected for your Apache server. A default welcome page is displayed (this page was copied to your document root directory when your server was created by the wizard). 
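If you would rather test from a script than a browser, a short HTTP request accomplishes the same check. The sketch below is a generic, modern illustration run from any machine with Python that can reach your iSeries host; the host name and port are placeholders for the values you chose when creating TRYAPACHE.

```python
import urllib.request

# Hypothetical values: substitute your iSeries host name and the port you picked.
url = "http://myhostname:80/"

with urllib.request.urlopen(url, timeout=10) as response:
    print(response.status, response.reason)               # expect: 200 OK
    welcome = response.read().decode("utf-8", errors="replace")
    print(welcome[:200])                                   # start of the default welcome page
```

A 200 status and the beginning of the default welcome page confirm that the server is listening on the port you configured.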
Continuing Your Journey We’ve just touched the surface of what is possible for HTTP server configuration for Apache. The Configuration page has all the forms you’ll need to continue configuring your TRYAPACHE server, such as enabling features like directory browsing, document caching, or running CGI programs. Wizards are available for configuring more complex or advanced features. Because the Apache server is more highly configurable than the original HTTP server, focus on learning and configuring one feature at a time. I recommend you enable each new feature and then test the server to ensure it is working as you expect. Finding More Information If you have difficulties or see unexpected results when using your Apache server, read the IBM HTTP Server for iSeries home page (www.iseries.ibm.com/products/http/httpindex.htm) for information on the latest fixes, known problems, and workarounds. I would also suggest browsing the Concepts section in the HTTP Server Documentation Center to learn Apache-specific terminology and the Configuring section to learn more details about the features of the Apache server on iSeries. If you want to learn more about the technical details behind an Apache server configuration, you may be interested in looking at the raw configuration. To view the entire configuration you have just created, click Display Configuration File on the Configuration page. Each line contains a directive name and one or more values. At the beginning, don't worry about knowing and understanding the directives themselves. Use the Web-based forms to manage and configure your servers. Knowing the directives is not necessary for successfully configuring and managing basic Apache servers. Once you are ready for a more in-depth view of directives, the Reference section in the HTTP Server Documentation Center shows the list of supported directives, along with detailed descriptions and examples for HTTP servers powered by Apache on iSeries. Once you are comfortable with creating, configuring, and using Apache servers on iSeries, the next step for many of you will be to migrate your existing original HTTP servers to Apache. The knowledge you gain from practicing will be of great use when configuring an Apache HTTP server on iSeries for real production use. REFERENCES AND RELATED MATERIALS: • IBM HTTP Server Documentation Center: www.iseries.ibm.com/products/http/docs/v4r5 • IBM HTTP Server for iSeries home page: www.iseries.ibm.com/products/http/httpindex.htm Figure 1: This is the new interface for configuring both original and Apache servers. Figure 2: Here's a summary of your initial Apache server configuration.
<urn:uuid:8c904a86-735a-42f4-bcbc-0577682dd590>
CC-MAIN-2022-40
https://www.mcpressonline.com/it-infrastructure-other/application-servers/configuring-the-http-server-powered-by-apache/print
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00232.warc.gz
en
0.876895
2,010
2.609375
3
Teachers who want to make student engagement with the Common Core easier turn to technology. Teachers are facing two new pressures in the classroom. The first is to focus more and more centrally on Common Core Standards, and the second is to make sure that their students keep up with our increasingly technologically-dependent world. Some teachers have allowed the use of devices to ease the transition. Here's an example: teachers often struggle to get their students to annotate as they read. The teacher knows that writing notes will make it easier for students to comprehend and retain information as they read, but students can feel like it's a hassle to highlight and cram notes in the margin. As an alternative, teachers can have their students take notes on a tablet or computer. Most students today can type more quickly than they write, so this will make it easier for them to keep up their reading pace while also tracking what they've read. Let's look at another example. Students are in the science lab, performing an experiment. They might forget to take notes as they go, making it challenging to complete their write-ups. If, however, you give students the freedom to take pictures and capture video with their phones as they go, you empower them to collect the information they need. Plus, they can then easily share those lab "notes" via email with their peers, simplifying group project communication. Contact D&D Security by calling 800-453-4195.
<urn:uuid:098318c9-491c-479a-a9b8-392df88590b1>
CC-MAIN-2022-40
https://ddsecurity.com/2015/12/09/common-core-and-technology/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00232.warc.gz
en
0.970958
310
3.515625
4
What Is the Sarbanes Oxley Act (SOX)? The Sarbanes-Oxley Act (SOX) was passed by the Congress of the United States in 2002 and is designed to protect members of the public from being defrauded or falling victim to financial errors on the part of businesses or financial entities. SOX compliance is both a matter of staying in line with the law and making sure your organization engages in sound business principles that benefit both the company and its customers. History of SOX Compliance The Sarbanes-Oxley Act of 2002 was put forth by Senator Paul S. Sarbanes and Representative Michael G. Oxley. The bill came about in response to a series of high-profile incidents, such as those involving Enron, Tyco, and WorldCom—all of which involved large-scale accounting fraud and the misreporting of financial data. Sometimes referred to as the "Sarbanes-Oxley Act" or "Sarbanes Oxley," SOX levies criminal penalties on those who do not comply with its mandates. Who Is Required To Comply with SOX? SOX compliance is required of all companies that are traded publicly in the United States, as well as subsidiaries that are wholly owned. It also covers foreign companies that carry on business in the U.S. and accounting companies that perform audits on other businesses. The Requirements of SOX Compliance CEO and CFO Acknowledge Responsibility for Accuracy and Documentation The chief executive officer (CEO) and chief financial officer (CFO) have to affirm that all necessary documentation is submitted, filed with the Securities and Exchange Commission (SEC), and accurate. Generating an Internal Control Report The manner in which the company controls its financial reports has to be outlined, and any errors or issues need to be reported to executives. Formal Data Security Policies Data security policies need to be formalized and enforced. The ways in which the organization protects data need to be explicitly outlined. Documentation Proving SOX Compliance The documents that prove the organization is remaining in compliance need to be maintained and frequently updated. SOX Compliance Audits SOX compliance audits involve regular checkups to verify that the company is meeting the legislation's requirements. An organization may make use of SOX compliance software that can streamline the compliance-checking process, saving time while keeping the company in line with the necessary standards. Preparing for a SOX Compliance Audit To get ready for a SOX compliance audit, you should update all reporting and auditing systems, allowing you to pull reports that the auditor asks for. Also, you will want to make sure any SOX compliance software you are using is working well in case it is checked. A few things to keep in mind when preparing for a SOX audit:
- Access refers to how you control the digital and physical access to data.
- Have off-site backups of financial records that are compliant with SOX standards.
- Outline how you would manage changes without compromising security.
- To verify security, outline your plans for defending the system against breaches.
SOX Compliance Checklist Maintain Regular SOX Compliance Status Reports Make sure you have all compliance reports updated and ready for presentation. Verify Your SOX Compliance Software Is Up to Date and Clear Your SOX compliance software must have the most recent iteration of the standards and be functioning well to protect your business from falling out of compliance.
Report Security Breaches Reporting security breaches right away can help you remain in compliance with SOX standards, particularly because it prevents your company from looking like it is trying to conceal mistakes from the authorities or the public. Provide SOX Auditors with the Access Needed SOX auditors need unfettered access to your systems and records. Giving them clear access will show good faith on the part of your organization. Benefits of SOX Compliance Strengthening the Control Environment When you bolster the security measures and data protection solutions of the control environment, both the company and its customers are safer. Improving Documentation Documentation can be used to keep track of your progress toward achieving data security goals. It can also be an effective tool for educating newer employees and for reviewing the success of specific programs. Increasing Audit Committee Involvement Ultimately, the audit committee wants to improve the security of your organization. Remaining in compliance helps everyone work together towards this goal. Standardizing Processes Standardizing processes makes them easier to replicate and teach to others. Also, when the time comes to adjust the implementation of procedures, standardized processes remove uncertainties associated with the logic, structure, and reasoning behind each measure. Reducing Complexity Protecting an ever-expanding attack surface is more difficult if an organized compliance system is not in place. By staying in compliance, you can develop consistent procedures for protecting various types of data. Strengthening Weak Links The process of ensuring compliance will often reveal areas of improvement that would have otherwise gone unnoticed. As these weak links are strengthened, the entire security profile is elevated. Minimizing Human Error In the course of an average day, imperfect humans have to make dozens, if not hundreds, of decisions. Therefore, errors are common. The compliance process provides you with an opportunity to double-check to make sure human errors are not harming your organization or its customers. How Fortinet Can Help The Fortinet Public Cloud Security service helps organizations stay in line with SOX compliance standards by covering the whole attack surface. It can provide you with a Fortinet SOX report and integrates data aggregation and the sharing of information between the various security elements impacted by SOX. Notifications alerting the IT team to compliance issues, as well as tracking and reporting, make Fortinet Public Cloud Security an integral—and convenient—part of a SOX compliance initiative. What is SOX? The Sarbanes-Oxley Act (SOX) was passed by the Congress of the United States in 2002 and is designed to protect members of the public from being defrauded or falling victim to financial errors on the part of businesses or financial entities. Who is required to comply with SOX? SOX compliance is required of all companies that are traded publicly in the United States, as well as subsidiaries that are wholly owned. It also applies to foreign companies that carry on business in the United States. In addition, SOX applies to accounting companies that perform audits on other businesses. What are the requirements of SOX compliance? The requirements of SOX compliance include the CEO and CFO acknowledging responsibility for accuracy and documentation, generating an internal control report, formal data security policies, and documentation proving SOX compliance. What are the benefits of SOX compliance?
SOX compliance benefits include strengthening the control environment, improving documentation, increasing audit committee involvement, standardizing processes, reducing complexity, strengthening weak links, and minimizing human error.
<urn:uuid:a2b2f3f2-9d80-46f3-a73b-b8619591b269>
CC-MAIN-2022-40
https://www.fortinet.com/fr/resources/cyberglossary/sox-sarbanes-oxley-act
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00232.warc.gz
en
0.940388
1,448
2.765625
3
TLS (Transport Layer Security) is a cryptographic protocol that enables authenticated connections and secure data transport over the internet, most commonly for HTTP traffic (HTTPS). A direct evolution of Secure Sockets Layer (SSL), TLS has gone through a series of updates since its initial definition in January 1999. The most recent, TLS 1.3, added improvements in both performance and security, though its predecessor, TLS 1.2, remains in widespread use as well. Due to their shared history and similarities, the terms TLS and SSL are sometimes used interchangeably, and the same certificates can be used with both TLS and SSL. While TLS can be highly effective for ensuring data privacy, it can also have an unintended consequence for cybersecurity. By encrypting internet traffic, TLS not only renders data unreadable; it also does the same for malware and other threats. To close this security gap, organizations typically need to decrypt incoming traffic for TLS/SSL inspection by security devices and software, a solution that can incur significant penalties in cost, performance, and scalability. Encrypted traffic can allow threat actors to hide malware and other cyberattacks targeting an organization. A10 Networks Thunder® SSL Insight (SSLi®) eliminates this blind spot with a highly efficient approach to TLS/SSL decryption, allowing organizations to decrypt and inspect incoming traffic at scale without impacting performance. For example, Klein Independent School District now has full visibility into encrypted traffic across all ports and multiple protocols, enabling its security infrastructure to inspect previously invisible traffic and detect hidden threats. Because TLS provides privacy and security for connections between users and servers, encryption has become ubiquitous, to the point that over 90 percent of internet traffic is now encrypted. A10 Networks Thunder® Application Delivery Controller (ADC) provides SSL offload capabilities, which take care of the compute-intensive TLS/SSL decryption and encryption of application traffic, relieving the web servers of these duties and allowing them to function at optimal performance levels.
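As a rough illustration of what TLS provides at the connection level, the Python sketch below (standard library only, independent of any A10 Networks product) opens a TLS-protected connection to a public web server and prints the negotiated protocol version and cipher suite. The host name is just an example.

```python
import socket
import ssl

HOST = "example.com"  # any HTTPS-enabled host; used purely for illustration
PORT = 443

# A default client context verifies the server certificate against the
# system trust store and negotiates the strongest protocol version both
# sides support (TLS 1.2 or TLS 1.3 on modern systems).
context = ssl.create_default_context()

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())  # e.g. 'TLSv1.3'
        print("Cipher suite:", tls_sock.cipher())
```

The same handshake that authenticates the server and encrypts the traffic is also what hides malicious payloads from security tools, which is why inspection solutions must decrypt traffic before it can be analyzed.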
<urn:uuid:1d191c57-0c7e-4acb-8f90-5591990df5db>
CC-MAIN-2022-40
https://www.a10networks.com/glossary/what-is-tls-transport-layer-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00432.warc.gz
en
0.917409
415
2.84375
3
There has been a steady take-up of data virtualisation technology across industries over the last few years. This is largely due to the many benefits that the technology brings to data management and usage. In fact, data virtualisation is slowly being seen as a critical foundation for modern data architectures. As the adoption of data virtualisation grows, it is also discussed more frequently by analysts and vendors. However, the technology is not always described accurately. Often, data virtualisation is described as being similar to the early data federation systems of the late 1990’s or early 2000’s, which are extremely different from modern data virtualisation platforms. To provide a better understanding of data virtualisation, I will clarify some of the common myths about this technology. Myth 1: Data virtualisation is equivalent to data federation When data virtualisation was first introduced, data federation was one of its primary capabilities. Data federation involves the ability to answer queries by combining and transforming data from two or more data sources in real time. Similarly, the data-access layer established by data virtualisation contains the necessary metadata for accessing a variety of data sources and returning the results in a fraction of a second. However, this capability of data virtualisation has been broadened by leaps and bounds in recent times. Data virtualisation’s toolset now includes capabilities such as advanced query acceleration, which can improve the performance of slow data sources. Data virtualisation solutions also provide sophisticated data catalogues and can build rich semantic layers into the data-access layer so that different consumers can access the data in their chosen form. Myth 2: Data virtualisation overwhelms the network The data sources used in analytics architectures typically contain exceptionally large volumes of data. This is especially true as data generation has been increasing exponentially. One might think that data virtualisation platforms will always need to retrieve large data volumes through the network, especially when they are federating data from several data sources in the same query, which would heavily tax query performance. The fact is, the query acceleration capabilities of data virtualisation platforms, mentioned above, also minimise the amount of data flowing through the network. These techniques offer the dual advantage of improving performance while reducing network impact, freeing up the system to accommodate a heavier query workload. This is possible due to advances in the query execution engines of data virtualisation, which act like coordinators that delegate most of the work to the applicable data sources. If a given data source is capable of resolving a given query, then all of the work will be pushed down to that source. However, if the query needs data from multiple source systems, the query execution engine will automatically rewrite the query so that each source will perform the applicable calculations on its own data, before channelling the results to the data virtualisation platform. These results involve far less data being read over the network, compared with the early incarnations of federation tools. Myth 3: Data virtualisation means retrieving all data in real time With data virtualisation, the default mode for query execution is to obtain the required data in real time directly from the data source. This will often perform well, and this is the most common execution strategy used by our customers. 
However, advanced data virtualisation platforms also support additional execution methods to further improve performance and better accommodate slow data sources. For instance, data virtualisation can replicate specific virtual datasets in full. This can be useful for specific cases, such as providing data scientists with a data copy, which they can modify and work with without affecting the original data. Today, data scientists can decide between a range of options from zero, to partial, to full replication. Also, that decision is transparent to the data consumers, and it can be changed any time without affecting the original data source. Next-generation data virtualisation Data is fast becoming the lifeblood of modern-day businesses that operate in increasingly digitalised environments. Many businesses turn to data virtualisation as a foundational technology to drive business performance, operational agility, and overall resilience. Data virtualisation has evolved over the years to offer both advanced performance and advanced support. It now incorporates emerging technologies such as artificial intelligence, to automate manual functions and speed up data analysis. These capabilities effectively free up IT teams so that they can focus on innovation and other business objectives. Business and technology leaders need to understand the true benefits and possibilities of data virtualisation. Only then can they fully appreciate the potential that the technology brings to modern analytics.
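To make the pushdown idea from Myth 2 concrete, here is a deliberately simplified Python sketch, not taken from any particular data virtualisation product, of how a federated average can be answered by asking each source for partial aggregates instead of pulling raw rows across the network.

```python
# Hypothetical sources: in a real deployment these would be remote databases,
# and each partial aggregate would be a pushed-down query such as
# "SELECT SUM(amount), COUNT(*) FROM sales".
source_a = [120.0, 75.5, 98.2]   # rows held in system A
source_b = [60.0, 210.4]         # rows held in system B

def partial_aggregate(rows):
    """Work delegated to the source: return only (sum, count)."""
    return sum(rows), len(rows)

# The virtualisation layer combines the small partial results, so only two
# tuples cross the network instead of every underlying row.
partials = [partial_aggregate(source_a), partial_aggregate(source_b)]
total = sum(s for s, _ in partials)
count = sum(c for _, c in partials)
print("Federated average:", total / count)
```

The same principle applies to joins and filters: the more of the work that can be delegated to the sources, the less data has to travel over the network.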
<urn:uuid:d76c99c6-0679-43c0-8cfd-756fd7aa3419>
CC-MAIN-2022-40
https://www.frontier-enterprise.com/debunking-3-common-data-virtualisation-myths/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00432.warc.gz
en
0.921676
913
2.6875
3
AGAT Software provides unique compliance and productivity solutions for Webex Meetings, including Ethical Wall to control participants and activities during the meeting, eDiscovery to audit and store all meeting data, DLP to inspect meeting content by policy, and lastly, Virtual Assistant and Sentiment Analysis to improve productivity and performance. Continue reading to learn more about […] AI Sentiment Analysis of Chats & Meetings in MS Teams and Webex - What is sentiment analysis? - Why is sentiment analysis important? - What is sentiment analysis used for? - How does sentiment analysis work? What is sentiment analysis? AI sentiment analysis is the process of analyzing call recordings and chat messages to determine whether the underlying emotions are positive, negative, or neutral. In other words, sentiment analysis identifies how a person feels in a particular situation and classifies the emotions involved, such as joy, happiness, surprise, anger, disgust, or sadness. Why is sentiment analysis important? First and foremost, AI sentiment analysis is important because it helps spread positive behavior to other parts of the company. With the positive and negative communications documented in every employee's report, sentiment analysis can help businesses promote positive behavior in the workplace. This can be done by comparing employees' performances and encouraging everyone to improve. Secondly, businesses must ensure that customers are receiving excellent service. Sentiment analysis can help businesses identify negative behavior and detect any interaction that may have been negative. This will help in managing employees' negative behavior to provide the best customer experience. Thirdly, understanding customers' emotions can empower employees with knowledge that helps them provide better service. The customer-facing team can therefore offer proactive solutions to increase customer satisfaction. Fourthly, by analyzing employees' communications, companies can better understand how they feel, which in turn helps reduce employee turnover and increase overall productivity. Lastly, sentiment analysis can give visibility into employee communication with others while working remotely, which in turn helps employees stay connected to their team and improve their collaboration with others. What is sentiment analysis used for? Sentiment analysis can be useful in different business departments or divisions. Let's see in more detail how sentiment analysis benefits some of them. - Customer Success / Support Managers: Sentiment analysis is an extremely useful tool in the customer service field as it allows businesses to improve their direct communications with customers. It can also help businesses prioritize their customer support issues by identifying and handling the most negative feedback first, which increases customer retention and satisfaction by providing quick answers. - HR Manager: Sentiment analysis helps HR managers make decisions and organizational changes based on employee feedback and satisfaction to promote proactive action before any interview or conversation. Employee reports can help an employee objectively analyze their relationships with other colleagues within the organization, as well as their communication trends. How does sentiment analysis work? AI sentiment analysis employs natural language processing and machine learning algorithms to classify text and audio as positive, neutral, or negative.
- Natural Language Processing: NLP uses computer-based methods that analyze the human language used in communications. In order for machines to understand human text and speech, NLP techniques need to be put in place. These include tokenization, stemming, and part-of-speech (POS) tagging. After the natural language processing is completed, the text is ready for the machine learning classification step. - Machine Learning: Using existing data, machines are trained to recognize patterns in new data sets to predict the sentiment behind a given text and automatically classify it as positive, negative, or neutral. AGAT Software recently released its first AI sentiment analysis engine; to learn more about it, contact us today.
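To show the general shape of such a pipeline, here is a generic Python sketch using scikit-learn: simple tokenisation followed by a trained classifier. It is purely illustrative and is not AGAT's engine; the tiny training set exists only to make the example runnable, and a production system would be trained on large volumes of labelled chat and call data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set of labelled messages.
texts = [
    "Thank you, that solved my problem quickly",
    "Great meeting, really helpful discussion",
    "This is unacceptable, I am very disappointed",
    "The service was slow and the issue is still not fixed",
    "Okay, I will check and get back to you",
]
labels = ["positive", "positive", "negative", "negative", "neutral"]

# CountVectorizer handles the tokenisation; the Naive Bayes classifier
# learns which tokens are associated with each sentiment class.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["The support team was wonderful today"]))
print(model.predict(["I am frustrated with these delays"]))
```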
<urn:uuid:f23b68b2-4340-4c5b-aac3-59e55ca38df2>
CC-MAIN-2022-40
https://agatsoftware.com/blog/ai-sentiment-analysis-of-chats-meetings-in-ms-teams-and-webex/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00432.warc.gz
en
0.916728
780
2.546875
3
Two separate studies, one by UK-based artificial intelligence lab DeepMind and the other by researchers in Germany and Greece, illustrate the fascinating relationship between AI and neuroscience. As most scientists will tell you, we are still decades away from building artificial general intelligence, machines that can solve problems as efficiently as humans. On the path to creating general AI, the human brain, arguably the most complex creation of nature, is the best guide we have. Advances in neuroscience, the study of nervous systems, provide interesting insights into how the brain works, a key component for developing better AI systems. Reciprocally, the development of better AI systems can help drive neuroscience forward and further unlock the secrets of the brain. For instance, convolutional neural networks (CNN), one of the key contributors to recent advances in artificial intelligence, are largely inspired by neuroscience research on the visual cortex. On the other hand, neuroscientists leverage AI algorithms to study millions of signals from the brain and find patterns that would otherwise have gone unnoticed. The two fields are closely related and their synergies produce very interesting results. Recent discoveries in neuroscience show what we're doing right in AI, and what we've got wrong. DeepMind's AI research shows connections between dopamine and reinforcement learning A recent study by researchers at DeepMind proves that AI research (at least part of it) is headed in the right direction. Thanks to neuroscience, we know that one of the basic mechanisms through which humans and animals learn is rewards and punishments. Positive outcomes encourage us to repeat certain tasks (do sports, study for exams, etc.) while negative results deter us from repeating mistakes (touching a hot stove). The reward and punishment mechanism is best known from the experiments of Russian physiologist Ivan Pavlov, who trained dogs to expect food whenever they heard a bell. We also know that dopamine, a neurotransmitter chemical produced in the midbrain, plays a great role in regulating the reward functions of the brain. Reinforcement learning, one of the hottest areas of artificial intelligence research, has been roughly fashioned after the reward/punishment mechanism of the brain. In RL, an AI agent is set to explore a problem space and try different actions. For each action it performs, the agent receives a numerical reward or penalty. Through massive trial and error and by examining the outcome of its actions, the AI agent develops a mathematical model optimized to maximize rewards and avoid penalties. (In reality, it's a bit more complicated and involves dealing with exploration and exploitation and other challenges.) More recently, AI researchers have been focusing on distributional reinforcement learning to create better models. The basic idea behind distributional RL is to use multiple value predictors to estimate rewards and punishments across a spectrum of optimistic and pessimistic outlooks. Distributional reinforcement learning has been pivotal in creating AI agents that are more resilient to changes in their environments. The new research, jointly done by Harvard University and DeepMind and published in Nature last week, has found properties in the brain of mice that are very similar to those of distributional reinforcement learning. The researchers measured dopamine firing rates in the brain to examine the variance in reward predictions of biological neurons.
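To make the distributional idea a little more concrete, here is a toy Python sketch (not DeepMind's code, just an illustration under simplified assumptions) in which a family of value estimators learns from the same stream of rewards but weights positive and negative prediction errors asymmetrically, so that some estimators become optimistic and others pessimistic.

```python
import random

# Each estimator has an asymmetry parameter tau: a high tau amplifies positive
# prediction errors (optimism), a low tau amplifies negative ones (pessimism).
taus = [0.1, 0.25, 0.5, 0.75, 0.9]
estimates = [0.0 for _ in taus]
learning_rate = 0.02

def sample_reward():
    # A skewed reward distribution: usually small, occasionally large.
    return 10.0 if random.random() < 0.2 else 1.0

for _ in range(20_000):
    reward = sample_reward()
    for i, tau in enumerate(taus):
        error = reward - estimates[i]
        weight = tau if error > 0 else (1.0 - tau)
        estimates[i] += learning_rate * weight * error

for tau, value in zip(taus, estimates):
    print(f"tau={tau:.2f} -> learned value {value:.2f}")
```

Run long enough, the pessimistic estimators settle near the low end of the reward distribution and the optimistic ones near the high end, so together the family encodes the shape of the distribution rather than just its average.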
Interestingly, the same optimism and pessimism mechanism that AI scientists had programmed in distributional reinforcement learning models was found in the nervous system of mice. “In summary, we found that dopamine neurons in the brain were each tuned to different levels of pessimism or optimism,” DeepMind’s researchers wrote in a blog post published on the AI lab’s website. “In artificial reinforcement learning systems, this diverse tuning creates a richer training signal that greatly speeds learning in neural networks, and we speculate that the brain might use it for the same reason.” What makes this finding special is that while AI research usually takes inspiration from neuroscience discovery, in this case, neuroscience research has validated AI discoveries. “It gives us increased confidence that AI research is on the right track, since this algorithm is already being used in the most intelligent entity we’re aware of: the brain,” the researchers write. It will also lay the groundwork for further research in neuroscience, which will, in turn, benefit the field of AI. Neurons are not as dumb as we think While DeepMind’s new findings confirmed the work done in AI reinforcement learning research, another research by scientists in Berlin, this time published in Science in early January, proves that some of the fundamental assumptions we made about the brain are quite wrong. The general belief about the structure of the brain is that neurons, the basic component of the nervous system are simple integrators that calculate the weighted sum of their inputs. Artificial neural networks, a popular type of machine learning algorithm, have been designed based on this belief. Alone, an artificial neuron performs a very simple operation. It takes several inputs, multiplies them by predefined weights, sums them and runs them through an activation function. But when connecting thousands and millions (and billions) of artificial neurons in multiple layers, you obtain a very flexible mathematical function that can solve complex problems such as detecting objects in images or transcribing speech. Multi-layered networks of artificial neurons, generally called deep neural networks, are the main drive behind the deep learning revolution in the past decade. But the general perception of biological neurons being “dumb” calculators of basic math is overly simplistic. The recent findings of the German researchers, which were later corroborated by neuroscientists at a lab in Greece, proved that single neurons can perform XOR operations, a premise that was rejected by AI pioneers such as Marvin Minsky and Seymour Papert. While not all neurons have this capability, the implications of the finding are significant. For instance, it might mean that a single neuron might contain a deep network within itself. Konrad Kording, a computational neuroscientist at the University of Pennsylvania who was not involved in the research, told Quanta Magazine that the finding could mean “a single neuron may be able to compute truly complex functions. For example, it might, by itself, be able to recognize an object.” What does this mean for artificial intelligence research? At the very least, it means that we need to rethink our modeling of neurons. It might spur research in new artificial neuron structures and networks with different types of neurons. Maybe it might help free us from the trap of having to build extremely large neural networks and datasets to solve very simple problems. 
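For reference, the simple integrator view of a neuron described above, the model these findings complicate, can be written in a few lines of Python (a generic textbook sketch, not tied to any particular framework):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Classic artificial neuron: a weighted sum passed through an activation."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid activation

# Example: three inputs combined with fixed weights.
print(artificial_neuron([0.5, 0.1, 0.9], [0.8, -0.4, 0.3], bias=0.1))
```

A single unit of this kind cannot compute XOR on its own, which is exactly why evidence that individual biological neurons can is so striking.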
“The whole game—to come up with how you get smart cognition out of dumb neurons—might be wrong,” cognitive scientist Gary Marcus, who also spoke to Quanta, said in this regard.
<urn:uuid:11236b8a-60f0-471b-9880-bf050225f784>
CC-MAIN-2022-40
https://bdtechtalks.com/2020/01/20/neuroscience-artificial-intelligence-synergies/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00432.warc.gz
en
0.947841
1,374
3.609375
4
Now that 3D printers are cheaper to produce, experts predict it won't be long before they are common in our homes. Even today, more companies realise the potential for 3D-printed applications in their own businesses. Since the technology moved from the theoretical to the real, people are expanding the boundaries of what's possible to print, from very practical applications for manufacturing and medical devices to those just for fun. Here are just a few of the amazing real-world examples of 3D printing in 2018. What is 3D printing? 3D printing had its start in the 1980s when Chuck Hull designed and printed a small cup. Also known as additive manufacturing, 3D printing creates objects from a digital file with a printer that lays down successive layers of material until the object is complete. Each layer is a thinly sliced cross-section of the actual object. It uses less material than traditional manufacturing. Most materials used in 3D printing are thermoplastics—a type of plastic that becomes liquid when heated but solidifies when cooled without being weakened. However, as the technology matures, researchers are finding new materials—even edible ones—that can be 3D printed. Prosthetic limbs and other body parts From vets who have made a 3D-printed mask to help a dog recover from severe facial injuries to surgical guides, prosthetic limbs and models of body parts, the applications of 3D printing in medicine are vast. In an experiment conducted by Northwestern University Feinberg School of Medicine in Chicago, a mouse with 3D-printed ovaries gave birth to healthy pups, which could bode well for human interventions after more research is done. Homes and other buildings In less than 24 hours, a 400-square-foot house was constructed in a suburb of Moscow with 3D printing technology. The possibilities for quickly erecting houses and other structures with 3D printing are intriguing when time is critical, such as creating emergency shelters for areas hit by a natural disaster. Additionally, the potential for new architectural visions to be realised that weren't previously possible with current manufacturing methods will lead to design innovations. An entire two-storey house was 3D printed from concrete in Beijing in just 45 days from start to finish. Researchers from Germany even 3D-printed a house of glass—currently only available in miniature size—but they were the first to figure out how to 3D print with glass. When you think about traditional cake decorating techniques—pushing frosting through a tip to create designs—it's very similar to the 3D printing application process where material is pushed through a needle and formed one layer at a time. Just as it's done with 3D plastic printing, a chocolate 3D printer starts with a digital design that is sliced by a computer programme to create layers; then the object is created layer by layer. Since chocolate hardens quickly at room temperature, it's an ideal edible material for 3D printing, but companies have printed other edible creations from ice cream, cookie dough, marzipan and even hamburger patties. Defence Distributed was the first to create a 3D-printed firearm in 2013, called the Liberator. While there are 3D printers that can use metal, they are very expensive, so the Liberator was printed using plastic. The advances of 3D technology and the ability to print your own firearm from home have raised questions about how to address the technology in gun control regulations.
3D printing has many applications in manufacturing across several industries, including automotive and aerospace, from printing replacement parts for machinery and prototyping new products (with the added benefit of recycling the models after you're done) to creating moulds and jigs that improve the efficiency of the production process. The bodies of electric vehicles and other cars have been 3D printed. Manufacturers can use 3D printing to lower costs and produce products quicker. From an incredible 3Dvarius, inspired by a Stradivarius violin, to flutes and banjos, several musical instruments and parts of instruments such as mouthpieces have been created using a 3D printer. In fact, the world's first live concert with a 3D-printed band (drum, keyboard, and two guitars) took place at Lund University in Sweden. Anything your mind can imagine The extraordinary thing about 3D printing is that it can be used to create just about anything your mind can conjure up. It just requires the digital file and the right material. While experts are still troubleshooting how to incorporate 3D printing processes into all areas, weekend warriors are finding all kinds of clever hacks to create with their 3D printers including trash cans, cup holders, electric outlet plates and more.
<urn:uuid:9eba949f-8cce-4e62-a5e9-5d8dd0850296>
CC-MAIN-2022-40
https://bernardmarr.com/7-amazing-real-world-examples-of-3d-printing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00432.warc.gz
en
0.955321
999
3.203125
3
Healthcare is an industry that is currently being transformed using the latest technology, so it can meet the challenges it is facing in the 21st century. Technology can help healthcare organisations meet growing demand and efficiently operate to deliver better patient care. Here are 9 technology trends that will transform medicine and healthcare in 2020. AI and Machine Learning As the world population continues to grow and age, artificial intelligence and machine learning offer new and better ways to identify disease, diagnose conditions, crowdsource and develop treatment plans, monitor health epidemics, create efficiencies in medical research and clinical trials, and make operations more efficient to handle the increased demands on the healthcare system. By 2020, medical data will double every 73 days. McKinsey estimates that there could be $100 billion in annual savings for medicine and pharma by leaning on big data as well as the artificial intelligence and machine learning tools to process it. Artificial intelligence algorithms powered by recent advances in computational power learn from the data and can predict the probability of a condition to help doctors provide a diagnosis and treatment plans. Ultimately, AI and machine learning can assist with many clinical problems as long as governing and regulatory bodies can determine how to regulate the use of algorithms in healthcare. When it comes to life or death, would you trust a robot with yours? Currently, collaborative robots—such as the da Vinci surgical robot—are already assisting humans with tasks in the operating room. However, the potential for robots in healthcare expands beyond surgical uses. With tremendous growth expected in the industry—the global medical robotics market is expected to reach $20 billion by 2023—there's no doubt that robots used in healthcare will continue to conduct more varied tasks. These already include helping doctors examine and treat patients in rural areas via "telepresence," transporting medical supplies, disinfecting hospital rooms, helping patients with rehabilitation or with prosthetics, and automating labs and packaging medical devices. Other medical robots that are promising include a micro-bot that can target therapy to a specific part of the body, such as delivering radiation to a tumour or clearing bacterial infections. Computer and Machine Vision Training computers to "see" the world and understand visual input is no small feat. Since there has been significant progress in machine vision, there are more ways computers and machine vision are being used in medicine for diagnostics, viewing scans and medical images, surgery, and more. Machine vision is helping doctors definitively know how much blood a woman loses in childbirth so that appropriate care can be given to reduce the mortality of mothers from post-partum hemorrhaging. Computers provide accurate intel, while previously this was a guessing game. The applications where computers are being used to view CT scans to detect neurological and cardiovascular illnesses and spot tumours in X-ray images are growing rapidly. Wearable fitness technology can do much more than tell you how many steps you walk each day. With more than 80% of people willing to wear wearable tech, there are tremendous opportunities to use these devices for healthcare. Today's smartwatches can not only track your steps but can monitor your heart rhythms.
Other forms of wearable devices are ECG monitors that can detect atrial fibrillation and send reports to your doctor, blood pressure monitors, and self-adhesive biosensor patches that track your temperature, heart rate, and more. Wearable tech will help consumers proactively get health support if there are anomalies in their trackers. Artificial intelligence and machine learning help advance genomic medicine—when a person's genomic info is used to determine personalised treatment plans and clinical care. In pharmacology, oncology, infectious diseases, and more, genomic medicine is making an impact. Computers make the analysis of genes and gene mutations that cause medical conditions much quicker. This helps the medical community better understand how diseases occur, but also how to treat the condition or even eradicate it. There are many research projects in place covering such medical conditions as organ transplant rejection, cystic fibrosis, and cancers to determine how best to treat these conditions through personalised medicine. Just as it has done for other industries, 3D printing enables prototyping, customisation, research, and manufacturing for healthcare. Surgeons can replicate patient-specific organs with 3D printing to help prepare for procedures, and many medical devices and surgical tools can be 3D printed. 3D printing makes it easier to cost-effectively develop comfortable prosthetic limbs for patients and print tissues and organs for transplant. Also, 3D printing is used in dentistry and orthodontics. Extended Reality (Virtual, Augmented and Mixed Reality) Extended reality is not just for entertainment; it's being used for important purposes in healthcare. The VR/AR healthcare market should reach $5.1 billion by 2025. Not only is this technology extremely beneficial for training and surgery simulation, but it's also playing an important part in patient care and treatment. Virtual reality has helped patients with visual impairment, depression, cancer, and autism. Augmented reality helps provide another layer of support for healthcare practitioners and has aided physicians during brain surgery and reconnecting blood vessels. In mixed reality, the virtual and real worlds are intertwined, so it provides important education capabilities for medical professionals as well as helping patients understand their conditions or treatment plans. A digital twin is a near real-time replica of something in the physical world—in healthcare, that replica is the life-long data record of an individual. Digital twins can assist a doctor in determining the possibilities for a successful outcome of a procedure, help make therapy decisions, and manage chronic diseases. Ultimately, digital twins can help improve patient experience through effective, patient-centric care. The use of digital twins in healthcare is still in its early stages, but its potential is extraordinary. As the capability of healthcare centres to provide care in remote or under-served areas through telemedicine increases, the quality and speed of the network are imperative for positive outcomes. 5G can better support healthcare organisations by enabling the transmission of large imaging files so specialists can review and advise on care; allowing for the use of AI and Internet of Things technology; enhancing a doctor's ability to deliver treatments through AR, VR and mixed reality; and allowing for remote and reliable monitoring of patients.
These technologies offer incredible opportunities to provide better healthcare to billions of people and help our healthcare systems cope with the ever-increasing demands.
<urn:uuid:e0ea43a5-4485-4bf8-83f1-f3c9cad90afc>
CC-MAIN-2022-40
https://bernardmarr.com/the-9-biggest-technology-trends-that-will-transform-medicine-and-healthcare-in-2020/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00432.warc.gz
en
0.939158
1,330
3.03125
3
You can assign a summary function to columns that contain numeric values in your report. A summary function is a summarization technique that performs calculations on values in columns, groups, or in the entire report.
- Click the down arrow next to a report column that contains numeric values.
- Select Summary from the menu, then choose the summary type:
  - None: No summary function assigned
  - Average: Calculates the average value in a given column
  - Count: Counts the items in a group or report, but does not require a numeric value
  - Count Distinct: Counts the distinct occurrences of a certain value in a column; does not require a numeric value
  - Max: Identifies the highest or largest value in a column
  - Min: Identifies the lowest or smallest value in a column
  - Sum: Calculates a total sum of the group or report (group level, and running total in the report footer)
  The numeric values in the column update.
- Save the report.
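Outside the report designer, these summaries correspond to familiar aggregate operations. The Python/pandas sketch below is not part of the product; it simply illustrates, on a small made-up data set, what each summary type computes for a numeric column grouped by region.

```python
import pandas as pd

# Hypothetical data resembling a grouped report.
df = pd.DataFrame({
    "region": ["East", "East", "West", "West", "West"],
    "sales":  [100.0, 150.0, 100.0, 200.0, 50.0],
})

summary = df.groupby("region")["sales"].agg(
    average="mean",      # Average
    count="count",       # Count
    distinct="nunique",  # Count Distinct
    maximum="max",       # Max
    minimum="min",       # Min
    total="sum",         # Sum
)
print(summary)
```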
<urn:uuid:bff12a10-5350-4955-ae13-589f09bbdd5b>
CC-MAIN-2022-40
https://help.hitachivantara.com/Documentation/Pentaho/5.2/0L0/120/020/010/010
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00432.warc.gz
en
0.714393
191
2.609375
3
5G is here and is widely expected to be a transformative communications technology for the next decade. This new data network will enable never-before-seen data transfer speeds and high-performance remote computing capabilities. Such vast, fast networks will need dedicated tools and practices to be managed, including AI and machine learning processes that will ensure efficient management of network resources and flexibility to meet user demands. But first, let's examine how exactly this ultra-fast, robust data network will influence enterprises and private individuals. 5G is expected to have a significant impact on nearly every aspect of our daily lives. While the 5G network will eventually touch every corner of digital communication, these are the industries that will be immediately disrupted: - Automated factories: Increasingly automated industrial processes, production lines, supply chains and inventory optimization will herald an era of IoT that will hinge on 5G networks. - Automotive Industry: This industry is on the cusp of radical transformation. The electric vehicle is just the tip of the iceberg. 5G technologies will underpin developments in this industry: autonomous driving, vehicle-to-vehicle communication and in-vehicle media will all require high-performance, robust, reliable and low-latency 5G networks. - Energy & Water: With rising energy costs, water scarcity, increasing demand and changing business models, there will be an acute need for households, businesses, governments and suppliers to ensure intelligent management of infrastructure. 5G is expected to play a significant role in ensuring smart grids are built with robust connectivity and communication capability. - Healthcare: With an increasing population of senior citizens across much of the Western world, health care costs are expected to rise. New health management paradigms need to be adopted to sustain an appropriate quality of life. Remote health monitoring using various sensors and wearable devices will change doctor-patient interactions, and remote surgery will become mainstream. 5G is expected to play a significant role in providing high-quality, low-latency communications. - Media and Entertainment: The last three years have transformed this industry rapidly – on-demand streaming services, user-generated content and social media are pervasive. The next decade is expected to see increased consumption at higher bandwidths, across multiple platforms and devices, with higher resolution. To keep pace, 5G will soon become the norm. The 5G use cases can broadly be categorized in the following manner: - Enhanced Mobile Broadband – Hotspots, general broadband, public transport, special events - Connected Vehicles – Vehicle to vehicle, vehicle to infrastructure and vehicle to pedestrian communication - Enhanced Multimedia – Broadcast media, on demand and live TV, mobile TV - Massive Internet-of-Things – Sensors monitoring pollution, noise, temperature, etc. - Ultra-reliable, low-latency applications – Remote surgery, emergency communication, automated factories Fundamentally, all the above verticals need five key capabilities: mobility, low latency, high data rates, extreme reliability and vast scale.
The underlying network technologies required for 5G include: - Spectrally efficient, dense and ultra-reliable low-latency radio networks - Strong security and authentication framework - Network Function Virtualization – Cloud computing-based networks where network functions share resources dynamically and independently of geographical location - Software-Defined Networking to separate user plane and control plane functions and to support network slicing to enable creation of multiple virtual networks to cater for specific characteristics of the services being offered - Orchestration and management to fully automate fulfillment and assurance of services - Multi-access edge computing to bring services closer to the network edge This implies that 5G network engineering, operations and scaling have to be vastly different from all previous generations of mobile technologies. Cognitive Analytics using Machine Learning and AI 5G service requirements, user behavior and service creation will require AI/ML networks and applications that are closed loop, self-optimizing and self-healing. With this technology, often classified as cognitive analytics (and also referred to as AI analytics or augmented analytics), telcos can develop the two core solutions needed to keep pace with 5G networks: - Anomaly Detection - Forecasting demand & behavior Anomaly detection and forecasting will be critical enablers to manage the complexity of these networks and meet the critical service expectations required to serve transformative technologies such as autonomous vehicles, remote surgery and massive IoT. 5G networks and user applications/content are going to create billions of time series, logs and event data. Anomaly detection will be one of the key ingredients to ensure the underlying network and customer support systems can self-heal and self-organize to keep pace with the high level of service requirements (low latency, high reliability) that are required to support mission-critical applications. Forecasting demand and behavior will be key to enable service providers to automate the creation of network slices to support new usage patterns. Forecasting user demand will ensure sufficient network resources are provisioned, created and scaled ahead of usage. Shortening the path from service creation to service consumption in order to meet changing consumer demands will be the killer value proposition for 5G service providers. AI/ML-based anomaly detection can be used to detect latency problems for autonomous vehicles, initiating the orchestration layer to spin up new network slices in order to recover performance. Another example would be to use real-time forecasting of traffic to spin up and spin down network slices. Further, anomaly detection and forecasting will play a vital role in conjunction with Self-Organizing Network (SON) to dynamically manage 5G radio resources. In today's service provider landscape, in-house anomaly detection and forecasting are performed in silos. Multiple teams spend months organizing data, preparing models, experimenting with data, interpreting results and then trying to 'productionize' these models with a limited pool of data scientists, data engineers and analysts. In order to avoid these bottlenecks, service providers must support the development of these core modules (anomaly detection and forecasting) as an enterprise-wide AI/ML service.
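To ground the idea, here is a deliberately simple Python sketch of one way to flag anomalies in a network metric: a rolling mean and standard deviation with a z-score threshold, applied to a synthetic latency series. Production systems, including commercial products such as Anodot's, use far more sophisticated, self-tuning models; this only illustrates the basic mechanism of flagging a value that departs from its learned baseline.

```python
import random
import statistics

# Synthetic per-minute latency measurements (milliseconds) with one spike.
random.seed(7)
latency = [random.gauss(20.0, 1.5) for _ in range(120)]
latency[90] = 55.0  # injected anomaly, e.g. a congested network slice

WINDOW = 30      # minutes of history used as the baseline
THRESHOLD = 4.0  # z-score above which a point is flagged

for t in range(WINDOW, len(latency)):
    baseline = latency[t - WINDOW:t]
    mean = statistics.fmean(baseline)
    std = statistics.stdev(baseline)
    z = (latency[t] - mean) / std if std > 0 else 0.0
    if z > THRESHOLD:
        print(f"minute {t}: latency {latency[t]:.1f} ms (z-score {z:.1f}) -> anomaly")
```

In a live network, a signal like this would feed the orchestration layer, for example to trigger the scaling of a congested network slice.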
There's no doubt that 5G networks will present a major step forward in global tech and communications. As the use of mobile communication practices expands to new areas – and 5G becomes the superhighway on which all data travels – reliable, automated data-management tools will become a must-have for the majority of enterprises. With its AI- and machine learning-based anomaly detection and forecasting, Anodot is proving to be a prominent player in the implementation and utilization of 5G networks now and surely in the coming years. Learn how telcos are using Anodot to expand their services and improve coverage.
<urn:uuid:177164c5-8296-4593-a90a-6cb00e3b9a93>
CC-MAIN-2022-40
https://www.anodot.com/blog/ai-analytics-5g/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00432.warc.gz
en
0.909844
1,379
2.671875
3
The introduction of the General Data Protection Regulation (GDPR) on 25 May 2018 will regulate the way different member states of the EU deal with the protection of personal data of individuals in the EU. The GDPR will lead to a new level of uniformity in regard to the protection of the rights and freedoms of living individuals in the European Union (EU). Living individuals are referred to as data subjects in the regulation. It is important to note that the GDPR does not only apply to companies and organizations based within the EU, but also to companies and organizations based outside the EU that process the personal data of people located within the EU. The scope of the GDPR is summarised in Articles 3(1) and 3(2) below. Article 3(1) states: "This Regulation applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union, regardless of whether the processing takes place in the Union or not." Article 3(2) states: "This Regulation applies to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to: the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or the monitoring of their behaviour as far as their behaviour takes place within the Union." The GDPR in Article 3 details processing by a controller or processor. A "controller" under the GDPR is the organisation or company which determines the purposes and means of the processing of personal data, while a "processor" carries out the processing of the personal data on behalf of the controller. A "processor" can further engage "sub-processors", and the controller would have visibility and approval rights over these "sub-processors". What Categories of Personal Data does the GDPR detail? The GDPR provides an extensive definition of personal data in Article 4; in short, personal data is any information which relates to an identified or identifiable natural person. To process this personal data ("processing" means any operation or set of operations which is performed on personal data or on sets of personal data), a legal basis is required. With regard to this legal basis, the GDPR in Article 6 lists the available legal bases, which are: (1) consent of the data subject; (2) processing is necessary for the performance of a contract; (3) processing is in compliance with a legal obligation; (4) processing is necessary for protection of the vital interests of the data subject or another natural person; (5) processing of personal data is being carried out in the public interest; and (6) processing is carried out for the legitimate interests of the controller or a third party. The GDPR in Article 9 has additional requirements for special categories of personal data. Special categories of personal data cover the "processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation." These special categories of personal data have extra safeguards around their processing, also detailed in Article 9. Where consent is relied upon for special category personal data, it must be explicit consent (a signed form, for example).
The legal bases for processing special category personal data, as listed in Article 9, are:
- the data subject has given explicit consent to the processing of those personal data for one or more specified purposes;
- processing is necessary for the purposes of carrying out the obligations and exercising specific rights of the controller or of the data subject in the field of employment and social security and social protection law in so far as it is authorised by Union or Member State law or a collective agreement pursuant to Member State law providing for appropriate safeguards for the fundamental rights and the interests of the data subject;
- processing is necessary to protect the vital interests of the data subject or of another natural person where the data subject is physically or legally incapable of giving consent;
- processing is carried out in the course of its legitimate activities with appropriate safeguards by a foundation, association or any other not-for-profit body with a political, philosophical, religious or trade union aim and on condition that the processing relates solely to the members or to former members of the body or to persons who have regular contact with it in connection with its purposes and that the personal data are not disclosed outside that body without the consent of the data subjects;
- processing relates to personal data which are manifestly made public by the data subject;
- processing is necessary for the establishment, exercise or defence of legal claims or whenever courts are acting in their judicial capacity;
- processing is necessary for reasons of substantial public interest, on the basis of Union or Member State law which shall be proportionate to the aim pursued, respect the essence of the right to data protection and provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject;
- processing is necessary for the purposes of preventive or occupational medicine, for the assessment of the working capacity of the employee, medical diagnosis, the provision of health or social care or treatment or the management of health or social care systems and services on the basis of Union or Member State law or pursuant to contract with a health professional and subject to the conditions and safeguards referred to in paragraph 3 of Article 9;
- processing is necessary for reasons of public interest in the area of public health, such as protecting against serious cross-border threats to health or ensuring high standards of quality and safety of health care and of medicinal products or medical devices, on the basis of Union or Member State law which provides for suitable and specific measures to safeguard the rights and freedoms of the data subject, in particular professional secrecy;
- processing is necessary for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in accordance with Article 89(1) based on Union or Member State law which shall be proportionate to the aim pursued, respect the essence of the right to data protection and provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject.
This special category data is higher-risk data and it comes with extra safeguards. The special categories are often areas which may be, or have been in the past, used to discriminate against individuals or data subjects.
Financial data is not listed here as special category data; however, it has specific protection around its processing under financial regulations, and financial fraud is an area that poses a high risk to the rights and freedoms of data subjects.
Companies need to ensure that they process personal data in line with the new regulation. This will involve the completion of a Data Protection Impact Assessment (DPIA) examining the various items of personal data they hold and their processing procedures. Risk assessments are a mandatory step under the GDPR, which notes "the likelihood and severity of the risk to the rights and freedoms of the data subject should be determined by reference to the nature, scope, context and purposes of the processing. Risk should be evaluated on the basis of an objective assessment, by which it is established whether data processing operations involve a risk or a high risk".
Identifying data processing that is high risk
The GDPR describes high risk processing as processing which leads to a high risk to the rights and freedoms of natural persons by virtue of the nature, scope, context and purposes of the processing. The guidance from the European Data Protection Board (EDPB) is that when high risk processing takes place, a DPIA (Data Protection Impact Assessment) should be carried out before the processing starts. The EDPB advises that high risk processing areas that may necessitate a DPIA include processing that involves new technologies or AI, genetic or biometric data, decisions based on automated processing including the profiling of data subjects, any large scale processing or combination of data from different data sources, data obtained from third party sources, data which may be used to target children, and processing that could potentially bring harm to the data subject. This is not by any means an exhaustive list, and EU data protection authorities would advise carrying out DPIAs on areas outside the above as well.
One key step when dealing with high risk processing, therefore, is to perform a DPIA. The DPIA should examine whether or not prior consultation with the data protection authorities is necessary before processing takes place. The DPIA should also identify safeguards to lower any risk it uncovers; Recital 90 of the GDPR states "That impact assessment should include, in particular, the measures, safeguards and mechanisms envisaged for mitigating that risk, ensuring the protection of personal data and demonstrating compliance with this Regulation". Codes of conduct are also mentioned in Recital 98 as a mechanism to help controllers and processors apply the Regulation effectively.
Other guidelines around high risk in the GDPR
Regarding data breaches: where a data breach in a company or organisation poses a high risk to the rights and freedoms of data subjects, the breach must be disclosed to the appropriate data protection authority and also to the data subjects whose data has been breached.
Recital 75 and Recital 76
Recital 75 of the GDPR addresses the risk to the rights and freedoms of natural persons, or data subjects: "The risk to the rights and freedoms of natural persons, of varying likelihood and severity, may result from personal data processing which could lead to physical, material or non-material damage, in particular: where the processing may give rise to discrimination, identity theft or fraud, financial loss, damage to the reputation, loss of confidentiality of personal data protected by professional secrecy, unauthorised reversal of pseudonymisation, or any other significant economic or social disadvantage; where data subjects might be deprived of their rights and freedoms or prevented from exercising control over their personal data; where personal data are processed which reveal racial or ethnic origin, political opinions, religion or philosophical beliefs, trade union membership, and the processing of genetic data, data concerning health or data concerning sex life or criminal convictions and offences or related security measures; where personal aspects are evaluated, in particular analysing or predicting aspects concerning performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements, in order to create or use personal profiles; where personal data of vulnerable natural persons, in particular of children, are processed; or where processing involves a large amount of personal data and affects a large number of data subjects."
Recital 76 covers risk assessment: "The likelihood and severity of the risk to the rights and freedoms of the data subject should be determined by reference to the nature, scope, context and purposes of the processing. Risk should be evaluated on the basis of an objective assessment, by which it is established whether data processing operations involve a risk or a high risk."
Recital 77 goes on to state that risk guidelines can come from certifications and codes of conduct, and that the European Data Protection Board will also issue guidelines.
Data protection by Design and by Default
Article 25 of the GDPR addresses certification mechanisms as a means to keep data safe and secure, alongside the controller or processor taking the appropriate technical and organisational measures to safeguard personal data.
Security of Processing
Article 32 of the GDPR covers technical and organisational measures to safeguard personal data in line with the risk, and suggests using encryption and pseudonymisation of personal data. It highlights the importance of confidentiality, integrity, availability and resilience of processing systems and services. Article 32 also covers the ability to restore data in the event of a loss of data, and a process for testing the technical and organisational measures around the processing. Appropriate levels of security must be applied in accordance with the data processing taking place.
Risk and High Risk Extensively Covered
Risk and high risk are key concepts under the GDPR and are referenced throughout the regulation. Indeed, the regulation is risk based: companies and organisations are given guidelines on areas such as DPIAs, technical and organisational controls, and breaches. However, it is the company or organisation that must decide when to carry out a DPIA, what controls to implement, and when to communicate breaches to the data protection authorities and affected data subjects. Records to justify these decisions should be documented and maintained.
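As an illustration of the pseudonymisation that Article 32 suggests, the sketch below replaces a direct identifier with a keyed hash before a record is passed on for analysis. Nothing here is prescribed by the GDPR itself: the field names, the choice of HMAC-SHA-256, and the hard-coded key are assumptions made purely for the example, and a real deployment would also need proper key management and a documented re-identification procedure.

```python
import hmac
import hashlib

# Illustrative secret only; in practice this would live in a key vault,
# never in source code.
PSEUDO_KEY = b"replace-with-a-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a stable token."""
    return hmac.new(PSEUDO_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "data.subject@example.com", "visits": 12}
safe_record = {"subject_token": pseudonymise(record["email"]), "visits": record["visits"]}
print(safe_record)  # downstream analytics sees a token, not the email address
```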
The GDPR is very clear on the difference between personal data and the special categories of personal data, and processing of the special categories has additional safeguards. There is guidance available from the EU concerning what may be considered risky processing activities under the GDPR. This can be sought from the European Data Protection Board, a board created by the GDPR itself in order to facilitate compliance. Individual supervisory authorities are also required to create and publish lists of data processing activities that will require DPIAs. Companies should pay attention to this guidance and the information it provides about the harm that could result from high risk and very high risk processing activities. In doing so, they may come across best practices or other relevant information that will help them to complete their DPIAs as efficiently and as thoroughly as possible.
High risk processing cannot be specifically defined overall, but it can more easily be identified through consideration of a set group of criteria, including security of data, potential for a security breach, assurance of privacy, limitation of purpose, and the fairness of the processing involved. Large scale data processing and processing of sensitive data may also present higher risks. It should be noted that merely using new technology should not be classified as high risk on its own; it needs to be considered in conjunction with other factors. Each piece or area of data should be considered in its own context, as what might be considered high risk in one area might not be in another.
Once the assessment has been completed, companies are required to mitigate the risks that have been identified. If mitigation does not seem possible, then they must consult the relevant Data Protection Authority (DPA) before any unmitigated high risk processing is attempted.
As far as the GDPR is concerned, identifying high risk and very high risk processing is all about considering areas such as scope, reliability and security, as well as the potential harm that could result from problems due to the nature of the data or the amount being used. Companies then need to take steps to mitigate these risks as much as is reasonably possible in order to ensure they meet GDPR requirements. It will be important to document the findings of any DPIA, as well as the corrective or mitigating actions that the organization has taken. This documentation will be a key factor in the group's ability to demonstrate to authorities that it is complying with the GDPR.
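To make the screening step above concrete, here is a minimal sketch of a DPIA trigger check. The indicator names are paraphrased from the EDPB-style criteria mentioned in this article, and the "two or more indicators suggests high risk" rule is a common reading of the guidance rather than a legal test; treat the whole thing as an illustration, not as compliance advice.

```python
# Paraphrased high-risk indicators; the exact wording and the threshold are assumptions.
HIGH_RISK_INDICATORS = {
    "new_technology_or_ai",
    "genetic_or_biometric_data",
    "automated_decisions_or_profiling",
    "large_scale_processing",
    "combining_data_sources",
    "third_party_sourced_data",
    "data_about_children",
    "potential_harm_to_data_subject",
}

def dpia_recommended(activity_indicators: set[str]) -> bool:
    """Flag a processing activity for a DPIA when it matches two or more indicators."""
    matches = activity_indicators & HIGH_RISK_INDICATORS
    return len(matches) >= 2

activity = {"large_scale_processing", "genetic_or_biometric_data"}
print("DPIA recommended:", dpia_recommended(activity))  # True
```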
<urn:uuid:48efc825-cd68-44dc-acfe-5f60d4a02457>
CC-MAIN-2022-40
https://www.compliancejunction.com/high-high-risk-gdpr/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00632.warc.gz
en
0.933332
3,033
2.609375
3
Pretty much the first thing anyone does when they start having problems with their internet connection is reboot their router. After all, rebooting your router usually fixes the problem – but why?
Why power cycle?
What is it about a reset (power cycle) that fixes problems? Well, there are several things. For starters, a router is similar to a computer. It has a power supply, a processor (CPU), memory (RAM), and even an operating system (firmware). Just like your computer needs a good reboot from time to time, so does a router. Some routers are better built than others and don't require a reset as often, but for the most part any consumer-grade router is going to need a power cycle occasionally. Consumer routers are generally built with lower quality hardware, slower components, and less rigorous software engineering standards compared to the routers that your ISP uses. As a result, they generally can't go as long without a reset as their business-grade counterparts can.
Drain the electricity from the circuitry
A proper power cycle involves disconnecting power from the device for 5-10 seconds, which allows all of the circuitry in the router to fully discharge. There are capacitors inside the router that take a few seconds to discharge, during which time, if you were to reapply power, the device would start back up but could continue to have issues since it wasn't fully reset.
Memory in a computer system (or a router, in this case) gets fully erased when the power is cut. This is called volatile memory. Don't worry – there are also non-volatile memory types, which is why we don't lose the configuration on our router when it is power cycled. When power is re-applied, the router's operating system boots from a fresh state, with fresh memory, and is completely re-initialized so it can run at maximum capacity again.
IP address issues
After rebooting, the router also verifies that its current IP address from your ISP is valid (called renewing the IP) or, if necessary, it requests a new IP address from your ISP. Sometimes there is a bit of a disconnect between the IP address your ISP is providing and the IP address your router is using – the reboot will synchronize your router with your ISP again.
Bandwidth hogs are shut down (at least temporarily)
Sometimes, an internet connection is not working well because of a bandwidth hog. A bandwidth hog is a person or device on your network that is uploading or downloading a large amount of data. It could be something like a roommate downloading a new game on their Xbox, or something more systematic, like a computer downloading an automatic update. This large upload or download creates a data contention issue, where other users' data is slowed down because of the lack of bandwidth. During a router reboot, bandwidth hogs lose the internet connection along with everyone else. They will usually resume their upload or download once the internet connection becomes available again, but sometimes (as in the case of a software update) they will wait a while to resume the data transfer. Sometimes this delay is all you need to finish what you were doing online.
Wi-Fi frequencies are re-scanned
Some routers have a dynamic channel allocation feature where they survey the other nearby Wi-Fi networks to see what channels are in use, and then they pick the channel that is least populated or has the least amount of interference.
Power cycling your router will force it to perform this adjustment as soon as it has finished its reboot, as opposed to waiting for the router to do it on its own.
What are your options?
If you are otherwise fairly happy with your current router, you may wish to simply continue putting up with the minor inconvenience of occasionally resetting it. You could also automate the resets so that you don't have to worry about doing it yourself. Depending on the model of your router, you may be able to schedule it to reboot at the same time daily or weekly. I do this with my router – I have scheduled therapeutic reboots to occur every day at 2:00 AM, when everyone in the house is sleeping and won't notice the brief interruption associated with the reboot. If your router doesn't support scheduled reboots, you can also get smart power switches that can turn the power off or on depending on the time of day. You could obtain one of these switches, connect your router through it, and accomplish the same goal.
Get a new router
You could also just consider getting a new router. Here is a recommendation on a router that gets overwhelmingly positive reviews, and most people report that it doesn't need rebooting.
- Minimize ping and maximize performance with four 1 Gigabit Ethernet ports for lag-free, wired connectivity and 1.7 GHz dual-core processor network efficiency
- Amp up your WiFi with an AC2600 dual-band router that delivers blazing fast speeds up to 2.6 Gbps
- Put your gaming traffic in a designated express lane with advanced quality of service, bypassing network congestion and reducing lag spikes, jumps and jitters
- Make every millisecond count by using geo-filtering to connect to the closest servers and players so you can respond and dominate
- Monitor your network and game ping in real time so you can see who's hogging the bandwidth by device and application
Andrew Namder is an experienced Network Engineer with 20+ years of experience in IT. He loves technology in general, but is truly passionate about computer networking and sharing his knowledge with others. He is a Cisco Certified Network Professional (CCNP) and is working towards achieving the coveted CCIE certification. He can be reached at [email protected].
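As a footnote to the scheduling advice above, here is a rough sketch of what an automated nightly power cycle could look like if your router or smart plug exposes a local HTTP call for it. The address and endpoint below are placeholders, not a real API; check your own device's documentation before trying anything like this.

```python
import time
import urllib.request
from datetime import datetime, timedelta

# Placeholder address: many smart plugs and some routers offer a local reboot
# call, but the path below is hypothetical and will differ per device.
REBOOT_URL = "http://192.168.1.1/api/reboot"

def seconds_until(hour: int, minute: int = 0) -> float:
    """Seconds from now until the next occurrence of hour:minute local time."""
    now = datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)
    return (target - now).total_seconds()

while True:
    time.sleep(seconds_until(2))  # wait until 2:00 AM, when nobody will notice
    try:
        urllib.request.urlopen(REBOOT_URL, timeout=10)
        print("Reboot request sent at", datetime.now())
    except OSError as exc:
        print("Reboot request failed:", exc)
```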
<urn:uuid:a9a90cb3-de25-44c0-992e-2b2462eda96f>
CC-MAIN-2022-40
https://www.infravio.com/why-do-i-have-to-keep-resetting-my-router/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00632.warc.gz
en
0.953993
1,219
2.90625
3
Food is an essential part of everyone's lives. It gives us the energy and nutrients to grow and develop. Foods such as fruits and vegetables contain not only the vitamins and minerals that are often found in supplements, but also other naturally occurring substances that may help protect against disease.
When it comes to organic foods, the term "organic" refers to the process by which certain foods are produced. Organic food is produced by methods that comply with the standards of organic farming. One of the biggest studies into organic food found that it is more nutritious than ordinary produce and may help to lengthen people's lives.
Organic farming avoids the use of man-made fertilizers, pesticides, growth regulators and livestock feed additives. Irradiation and the use of genetically modified organisms (GMOs) or products produced from or by GMOs are generally prohibited by organic legislation. In the U.S., organic crops must be grown without the use of synthetic pesticides, bio-engineered genes, petroleum-based fertilizers, and sewage sludge-based fertilizers.
Organic agriculture is a way of farming that pays close attention to nature. It means fewer chemicals on the land, such as artificial fertilizers, which can pollute waterways. Organic farming can also offer benefits for animal welfare, as animals are required to be kept in more natural, free conditions. The farming tends to improve soil quality and the conservation of groundwater.
| Organic produce | Conventionally-grown produce |
| --- | --- |
| Grown with natural fertilizers (manure, compost). | Grown with synthetic or chemical fertilizers. |
| Weeds are controlled naturally (crop rotation, hand weeding, mulching, and tilling). Pests are controlled using natural methods (birds, insects, traps) and naturally-derived pesticides. | Weeds are controlled with chemical herbicides. Pests are controlled with synthetic pesticides. |
| Organic meat, dairy, eggs | Conventionally-raised meat, dairy, eggs |
| --- | --- |
| Livestock are given all-organic, hormone- and GMO-free feed. | Livestock are given growth hormones for faster growth, as well as non-organic, GMO feed. |
| Disease is prevented with natural methods such as clean housing, rotational grazing, and a healthy diet. | Antibiotics and medications are used to prevent livestock disease. |
| Livestock must have access to the outdoors. | Livestock may or may not have access to the outdoors. |
To be labeled organic, a food product must be free of artificial food additives. This includes artificial sweeteners, preservatives, coloring, flavoring and monosodium glutamate (MSG). The most commonly purchased organic foods are fruits, vegetables, grains, dairy products and meat. Nowadays there are also many processed organic products available, such as sodas, cookies and breakfast cereals.
Currently, the United States, European countries, Canada, Mexico, Japan, and many other countries require producers to obtain special certification in order to market food as organic within their borders. In the context of these regulations, organic food is produced in a way that complies with organic standards set by regional organizations, national governments and international organizations, such as the US Department of Agriculture (USDA) or the European Commission (EC).
Benefits of organic food for the body
Organic foods often have more beneficial nutrients, such as antioxidants, than their conventionally-grown counterparts, and people with allergies to foods, chemicals, or preservatives often find their symptoms lessen or go away when they eat only organic foods.
When choosing organic products, most consumers start with organic food items such as fresh produce and milk. However, there is a wide diversity of other organic products available, including organic apparel and beds/bedding, cleaning and household products, nutritional supplements, organic flowers, and even organic pet food.
Organic milk has been found to contain higher concentrations of heart-healthy omega-3 fatty acids compared to milk from cows raised on conventionally managed dairy farms. The milk tested in the study also had less saturated fat than non-organic milk. Organic soybeans have a healthier nutritional profile than conventionally grown or genetically modified Roundup Ready soybeans.
Organic tomatoes are produced in an environment that has a lower nutrient supply, and they show a higher formation of antioxidants, such as quercetin (79% higher) and kaempferol (97% higher). Antioxidants are good for health and help in reducing heart disease and the chances of developing cancer.
In a recent six-year study, the researchers found that organic onions had about 20% higher antioxidant content than conventionally grown onions. They also theorized that earlier analyses that found no difference between conventional and organic antioxidant levels may have been skewed by too-short study periods and confounding variables like weather.
An increased amount of time grazing on grass also increases the amounts of CLA (conjugated linoleic acid) found in animal products. CLA is a heart-healthy fatty acid that can boost cardiovascular protection, and it is found in higher quantities in breast milk and in meat from animals that have been raised free range or cage-free.
Organic food sales growth
Organic food sales are increasing by double digits annually. More than 80 percent of parents reported buying organic food for their families last year. While there is great momentum for organic sales, the main reason people give for not purchasing it is that it is too expensive. Sales of organic products in the U.S. totaled around $47 billion in 2016. Organic food now accounts for more than five percent of total food sales in the U.S. Organic food sales increased by 8.4 percent from last year, blowing past the stagnant 0.6 percent growth rate in the overall food market. Sales of organic non-food products were up 8.8% in 2016. All organic non-food products are produced without the use of toxic and persistent pesticides or synthetic fertilizers.
Organic products are more expensive than conventional ones. Harvard research reveals that pregnant women, young children, the elderly and people suffering from allergies may benefit the most from choosing organically produced foods. However, there are many ways families can enjoy all-organic meals every day affordably.
More information: [Journal of Agricultural and Food Chemistry]
<urn:uuid:a8ef6b96-a428-4333-9f48-b4eebad87304>
CC-MAIN-2022-40
https://areflect.com/2017/10/22/health-benefits-of-organic-food-and-its-demand/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00632.warc.gz
en
0.94541
1,303
3.609375
4
Using our semi-formal, semi-quantitative approach, we’ll need a way to measure cyber risk in order to use data to manage it. Because we’re taking a managerial approach to our risks, as opposed to a very technical one, we’ll need measurements that facilitate management thinking and action which will open the door to making useful changes. We also need to be able to measure the true nature of security. Let me tell you what I mean. The True Nature of Security Most people believe that you can never have too much money. However, it is possible to have too much security (or too little). Look at the left side of the diagram. You can see that as we go from left to right along the x-axis, we’re spending more and more trying to reduce risk. Notice that risk does go down rather quickly as we begin to manage it. As you move to the right and enter the green zone, the curve goes lower, and risk levels drop to an acceptable level. However, as you continue to spend money and add more controls, the risk increases again as you move further to the right and out of the green zone. Why is that? Well, past a certain point, security gets to be so difficult that people begin to look for ways to go around the controls, which can create a false sense of security for the people responsible for managing risk. False Sense of Security In other words, risk managers may be using more resources than are required and getting a risk level that’s much worse than they need in return. I’m sure you’ve experienced a situation where there was too much security required to get your job done. Example of Level Ten Security I’ve seen remote network access systems that were so secure it required four separate, two-factor authentications to reach your data! It was so complicated and time-consuming, most people didn’t use it, which reduced that organization’s productivity. And it caused them to spend a lot of money on a remote access solution that was operating far under capacity. So, the challenge with security, as with most things in life, is to find a good balance between protection and usefulness. Now, let’s create a score key that captures these three security states and the need to find balance. Score Key Explained Starting on the left: The scores zero through four, colored in yellow, represent various levels of insecurity. From no security at all to some. The scores from five through eight, colored in green, represent a range from minimally acceptable security to fully optimized. And scores nine and ten represent too much security, which is wasteful of time, money, and morale, just like the remote access solution I mentioned. Granularity of Scores Notice there are five possible scores for insecurity, four possible scores for balanced security, and two possible scores for excessive security. This reflects my experience that we often need less granularity to measure and improve situations that are too secure as opposed to the other two possible states. Only Two Colors Also notice there are only two colors: yellow and green. This is a result of my emphasis on simplicity. What do I mean by that? When it comes to risk management, I’ve noticed people tend to make things complicated. But too much complexity becomes counter-productive to creating clarity and moving at a brisk pace. After all, cyber risk is already an abstract and difficult thing for most people to understand, especially executives who set priorities and control your budget. 
So, do what you can to keep your risk management work as simple as possible without getting so simplistic you can’t deliver results! Cyber Risk Opportunities provides middle market companies with cost-effective Cyber Risk Managed Programs to prioritize and reduce your top cyber risks, including the specific requirements of PCI, HIPAA, SOC2, ISO 27001, DFARS, and more.
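The zero-through-ten score key described above maps naturally onto a small helper that a team could drop into a risk register or a spreadsheet export. The thresholds below simply encode the yellow/green bands from the article; the labels and wording are assumptions made for this sketch.

```python
def classify_security_score(score: int) -> str:
    """Map a 0-10 security score onto the three states in the score key."""
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score <= 4:
        return "yellow - insecure (from no security at all up to some)"
    if score <= 8:
        return "green - balanced (minimally acceptable to fully optimized)"
    return "yellow - excessive (wasteful of time, money, and morale)"

for s in (2, 6, 10):
    print(f"score {s}: {classify_security_score(s)}")
```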
<urn:uuid:e1be8c89-328e-4191-9014-a93d90245828>
CC-MAIN-2022-40
https://www.cyberriskopportunities.com/how-to-measure-cyber-risks-using-a-zero-through-ten-scale/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00632.warc.gz
en
0.952856
837
2.671875
3
Today In History June 3
- June 3, 1958: Singapore adopts a constitution
June 3, 1958 is a very important day for the citizens of Singapore. It was on this day that Singapore moved to self-governance from the British, with the formulation of a new constitution providing for self-government. It took the place of the Singapore Colony Order in Council 1955, more popularly known as the Rendel Constitution, which had come into force on 8 February 1955. The new constitution was the work of the British Colonial Office and 13 members of the Singapore Legislative Assembly between the years 1956 and 1958. This constitution gave Singapore full power over its internal affairs. The Government was to be formed after a free and fair election to a 51-seat Legislative Assembly. The Governor's post was replaced by that of the Yang di-Pertuan Negara as the constitutional head of state.
- June 3, 1956: 3rd class travel on British Railways ends
There was nowhere in the world that defined class more eloquently than Britain. It was seen in jobs, churches and clothes, and of course it was quite prevalent in the railway system as well. British Railways had three classes. First Class was for the elite upper classes of society, who usually travelled in style. Second Class was for moderately rich people, mostly middle class. Third class was for the working class, who came from the lower strata of society. The third class was the least comfortable but the cheapest as well. The tickets were tax free and were very affordable for most people. In 1884 the idea of giving basic dignity to people travelling in third class was discussed in Parliament. As the years went by and living standards rose in the 20th century, the Government decided to abolish third class on June 3, 1956. The change was a gradual process. It started with the issuance of new tickets and the repainting and reconversion of third-class carriages to second class. Yet the class system continued to haunt the railways, and in 1987 British Railways (by then branded British Rail) renamed second class as the more egalitarian "standard class". There was later speculation that third class would return to Britain after a document suggesting as much was leaked, but the Government outright denied it.
- June 3, 1784: US army officially established by the Congress of the Confederation
The creation of the USA's military forces was astonishingly slow and prolonged for such a big and powerful country. After the American Revolution, the Continental Army was done away with due to maintenance costs and because the Congress at that time felt it was unnecessary. But as the borders kept growing wider and threats kept increasing, the Congress was forced to establish an army to protect its borders, people and other interests. The states chosen for the recruitment of the army were Pennsylvania, New Jersey, New York and Connecticut, where a total of 700 men were recruited. This included eight infantry and two artillery companies to join the regiment. As the state of Pennsylvania had the highest allotment of troops to fill, it had the added advantage of choosing the regimental commander from among its own. Josiah Harmar, on the recommendation of the then President of the Congress, Thomas Mifflin, was made lieutenant colonel commandant.
<urn:uuid:e61314b9-8fa6-4f43-b18e-cf79441be943>
CC-MAIN-2022-40
https://areflect.com/2020/06/03/today-in-history-june-3/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00632.warc.gz
en
0.987666
706
3.578125
4
Don't plug smartphones into medical machines. Log off your computer. Keep your password to yourself. The medical community has plenty of room to improve when it comes to keeping critical data safe. Health professionals looking to implement effective security measures on a budget might want to stick to the KISS principle. The reason, as Michael McCoy, chief health information officer at the Department of Health and Human Services’ Office of the National Coordinator for Health Information Technology, put it: “Common sense is uncommon.” Speaking alongside other HHS higher-ups, McCoy and his colleagues shared their simple prescriptions for better security as part of the July 31 Health Summit sponsored by AFCEA’s D.C. chapter. Start, and maybe stay, small Rose-Marie Nsahlai, a senior adviser in the same HHS office as McCoy, said she has witnessed myriad breaches of basic cyber hygiene in hospital systems. Most organizations don’t have insider-threat training, they don’t terminate old user accounts or monitor employee activities, and plenty of hospital staff neglect to log off computers and even actively share passwords. Suzanne Schwartz, of HHS’s emergency preparedness outfit, said she has seen nurses plug their smart phones into medical devices to charge them – without thinking of the potential vulnerabilities that could open. “There’s a whole laundry list out there [of problems to address],” Nsahlai said. “We can’t address everything but we can address the big areas that need protection.” “Simple, basic hygiene can mitigate the potential for intrusions,” added Schwartz. And it’s good that basic steps could yield big results, because money is tight. “A big data encryption scheme is tremendously expensive,” noted NATO cyber adviser Curtis Levinson as he bemoaned a lack of federal funding for medical system security. Privacy has become a direct function of cryptography, he argued, saying, “unless you encrypt, you become vulnerable to violation.” (It’s worth noting, as McCoy did at the Health Summit and others have before, that encryption wouldn’t have helped in the Office of Personnel Management breaches.) Beyond basic cyber hygiene, the medical system professionals offered a range of security suggestions. Nsahlai decried the lack of user behavior analytics in health care and called for a switch from SHA-1 hashing to the stronger SHA-2. McCoy recommended national health ID numbers, totally separate from other identifying information, to prevent hackers from having access to the demographic data “gold” we currently send with medical transactions. It’s something HHS is barred, for now, from implementing he noted, saying, “Congress will have to take the handcuffs off HHS.” McCoy cautioned against overburdening practitioners with security measures. He recalled a small faith-based, mission-driven care center, constantly cash-strapped, and wondered whether the security measures they could afford would leave them spending 30 or 60 seconds each time logging on to their systems. “I am for privacy and I am for security, but I’m also for seeing the patients and getting the work done,” he said. “There’s a balance.” Steven Hernandez, CISO for HHS’s inspector general, advised a combination of technical and cultural change, and disagreed with McCoy on the burdens of added security. Employees might complain at first about new security measures, but they’ll get used to them. “People get on with their day to day,” he said. “[T]hey put in the PIV cards, they get on with it.”
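On the SHA-1 to SHA-2 recommendation mentioned above, the switch is usually a configuration or library change rather than a redesign. The snippet below just contrasts the two digests on the same input using Python's standard library; the sample record is made up for illustration.

```python
import hashlib

record = b"patient-id:12345|visit:2015-07-31"  # illustrative data only

print("SHA-1  :", hashlib.sha1(record).hexdigest())    # 160-bit digest, being phased out
print("SHA-256:", hashlib.sha256(record).hexdigest())  # SHA-2 family, 256-bit digest
```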
<urn:uuid:d4a86fc1-06d0-411d-ba73-d9ef3a62c115>
CC-MAIN-2022-40
https://fcw.com/security/2015/07/hhs-security-goes-back-to-basics/207776/?oref=fcw-next-story
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00632.warc.gz
en
0.951683
780
2.59375
3
Social media has completely changed the dynamics of how people communicate with one another. While some users might only keep in touch with friends through it, others–including businesses and brands–use it to promote their products. In an age where a picture can appear on thousands of devices all over the world, privacy has become a major concern for anyone using social media. Social media was built on the foundation of users sharing parts of themselves through the Internet, and in a world dominated by mobile devices, it’s not surprising that the majority of today’s computing is done through mobile platforms. One study found that in 2018, 69 percent of all American adults over 18-years-old used social media regularly over the subsequent year; this does not include YouTube as a social media platform. This number grows ever larger, particularly in regard to seniors. Nearly 40 percent of them use some form of social media–a number that has increased by about 200 percent since 2012. Due to this increase, there is also a much larger group of individuals out there to steal money, information, and identities. Privacy concerns are prevalent in today’s social media environment, and users must be aware of how they are putting their data in harm’s way. Most people cite social media as a place where they can share their civil and political views, personal health information, learn scientific information, engage in job, familial, and society-related activities, and where they get most of their news. Role of Privacy As always, privacy will depend on how much an individual prioritizes the security of his or her personal information. If someone wants to keep a semblance of themselves private, they have to avoid placing that information in a public space like social media. As social media usage increases, the issue only grows larger. Add in the functionality that a lot of developers integrate into these websites and, before you know it, control over personal data is suddenly a problem. Obviously, these platforms require you to give over some of your personal information to them in order to use the service, but when you begin to lose control over who has your data, and what data has been shared, negative situations can arise. A 2014 survey suggested that 91 percent of Americans have lost control over their data, and that advertisers and social media companies are taking more of their data than they even know. Half of Americans know, and largely understand, the problems they face by having their information fall into the wrong hands. This leads them to be more proactive about securing their personal information. An issue everyone runs into, however, is that in order to use social media (or e-commerce for that matter), companies demand access to more personal information than necessary. By mining all this data, they then have carte blanche to do with it as they please, which can become a problem if that data is scraped by odious sources. Why Stay on Social Media If They Are Stealing from You? If you are at the beach and a professional lifeguard were to tell you that you need to get out of the water because there is a good chance you will be bitten by a shark, would you wade around in waist-deep water trying to spot the sharks? No chance. That’s why we scratch our heads when we see companies openly take our client’s personal information, the information they share, and their user histories to create a consumer profiles that will be sold for profit to advertisers. 
We constantly warn people to protect their personal information, and they consistently don’t. We understand… maybe you use social media for marketing. Maybe you are one of the ones that are careful what they share with these sites. Maybe, you are comfortable with it and are one of the millions of people that trade their privacy for convenience. Whatever the reason is, if social media has become an important part of your life, you most likely have made some privacy concessions, knowingly-or-not, in order to use it. Between social media and online commerce, more personally identifiable information is shared with corporations than you would ever knowingly share with your best friends. This speaks to just how oblivious the typical user is about their own personal information. People find value in social media. In fact, there are businesses that provide their staff with regular social media breaks as to not interrupt organizational productivity with social media. When you consider 30 percent of all online time is spent on social media (which only increases when people go mobile), you begin to understand that it carries value for hundreds of millions of people. Are you concerned about your private information being tracked and shared by Internet-based services? Do you have a good idea about who has your personal information and where it is going? Leave your thoughts about this issue in the comments.
<urn:uuid:252fde92-edc2-4ba1-b38e-49831fc266b8>
CC-MAIN-2022-40
https://www.activeco.com/social-media-users-should-consider-their-personal-information/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00632.warc.gz
en
0.966748
974
3.265625
3
“Wearable devices are here to stay, and they'll only get more sophisticated and effective as they evolve. Until now, most of us have made our health and fitness decisions based on what we think we know about ourselves. Advancements in technology - wearables and otherwise - will eventually take much of the guesswork out of healthy living.” — Michael Dell
Wearables are generally thought of as fitness trackers that just monitor blood pressure or heart rate, or as the elite devices that athletes wear to monitor their performance during games. However, with Internet of Things (IoT) integration, these devices are becoming smarter and expanding their application areas, which means these technologies are likely to be around for a long time. These devices are helping to make lives better with quicker and safer payment options, better digital and physical interactions, and multiple ways of communicating.
Two decades ago nobody could believe that a palm-sized, mobile, wireless device - the smartphone - would be in the hands of nearly 80% of the world's population. Likewise, wearables such as smartwatches are expected to gain popularity among consumers owing to the rising inclination toward, and necessity of, being connected.
IoT-enabled wearables, also known as smart wearable devices, collect rich data that can help medical professionals track and monitor medical conditions, diseases, and treatments effectively and eliminate unseen health risks. For instance, conditions such as cardiovascular disease and epileptic seizures can be monitored, and cancer treatment tracked, remotely through connected devices. Recently, cardiac surgeons successfully conducted coronary revascularization surgery using Google AR Glass to navigate CT scans in hands-free mode.
Owing to awareness of these benefits, products such as rings, fitness trackers, smart posture trainers, gaming simulators, smart jewelry, shoes, clothes, nicotine patches, and patches that help blind people navigate freely are seeing robust acceptance. Shifting consumer demand toward easy access to health data whenever required is boosting smart wearables demand significantly. Additionally, their convenience is making these devices more mainstream, especially for healthcare purposes.
Author Name: Nitish P.
<urn:uuid:1fadae62-8bec-441c-8c0a-eeabf3d8266d>
CC-MAIN-2022-40
https://www.alltheresearch.com/white-paper/iot-in-wearables-to-achieve-customer-satisfaction
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00632.warc.gz
en
0.956599
417
2.65625
3
This is the sixth tutorial in my 'Practical Introduction to Cloud Computing' course. Click here to enrol in the complete course for free!
Cloud Broad Network Access – Video Tutorial
Cloud Broad Network Access
The next essential characteristic of cloud computing to cover is Broad Network Access. As usual we'll have a look at the NIST definition first: 'Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms e.g. mobile phones, tablets, laptops and workstations'.
The cloud characteristic of Broad Network Access means that the service can be accessed from multiple locations (such as from a corporate office or from home) using multiple different types of client (such as a Windows PC or an Android mobile).
On Premises and Cloud Comparison
Let's compare this with a traditional on-premises solution. For this example there's a small company with a rack at their headquarters which is populated with a mail server, database server and web server. They also have other branch offices and teleworkers, so they need Wide Area Network connectivity. The connection between the branch office and the main site could be a Virtual Private Network over MPLS or the internet, or maybe it's a direct leased line between the two offices.
In the branch office, the users are working on a mix of Windows PCs, Linux machines and Macs. It doesn't really matter which kind of desktop they're using; they're all able to access the servers in the company data centre at the main site. The teleworkers typically work from home or a hotel and could be on a laptop, tablet or mobile. Again it doesn't really matter: as long as the applications running in the main site support the different types of clients, and as long as we've got network connectivity everywhere, then everything's going to work just fine.
Okay, so that's how it works with a traditional on-premises solution. Now let's take a look at how it works with a Cloud-based solution:
You can see we have exactly the same network topology. The only difference is that the servers are now in a Cloud data centre provided by a Cloud service provider, rather than in our own on-premises data centre. As far as the network connectivity goes, it's exactly the same.
If you're a network engineer who's going to be doing the design as you transition from on-premises to a Cloud-based solution, this is great news because you don't need to learn anything new. You just do the network design exactly the same way as before. There's not a new skill set to learn; you can apply your existing knowledge.
<urn:uuid:ae139aa6-13a9-413a-bb67-d7ad83311c71>
CC-MAIN-2022-40
https://www.flackbox.com/cloud-broad-network-access-tutorial
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00632.warc.gz
en
0.924977
587
2.578125
3
Practical TCP Series - The Connection Setup
This is a 5-part series on the most practical things you need to know about TCP. These topics were selected because they are the ones most frequently asked about by readers and customers who have needed help with protocol analysis. Ok, now on with the article.
The network is slow. Ok, we've all heard it a million times. But what is the real problem? Oftentimes when going in to troubleshoot a slow application, we find the network guys pointing fingers at the application guys, and the app guys pointing at the network guys. In several cases though, the root problem can be found in the transport layer, which neither team really wants to own up to. So in this article series we are going to discuss the details behind several TCP topics, why you care, and how you find these problems on your network. In the 5-part series we will cover:
- The TCP Connection
- The TCP Window
- Sequence and Acknowledgement Numbers
- Connection Teardown (Expected and Unexpected)
The TCP Connection
In order for data to move between two machines using TCP, they must first establish a connection, sometimes also referred to as a socket. A connection is established by a three-way handshake, involving a unique port on each machine. If a client is connecting to a server, it will open one of its ports – a dynamic port – and send a connection request to the server's IP, with a specific server port. If this connection gets established, the client and server must continue to use this IP:port pairing to communicate.
The three-way handshake is very simple, and can tell you quite a bit about the underlying network as well. Below is a bounce chart taken from ClearSight which does a great job of displaying the packets involved.
The first packet is the client sending a SYN to the server. The client will inform the server which sequence number it is starting on (sequence number A). Note: Wireshark typically starts these numbers off at zero automatically. This makes it easier to see how much data has been transferred over the connection.
The server replies with a SYN-ACK packet. In the TCP header, the server will choose its own sequence number (B) and will acknowledge the client's sequence number by returning A+1 in the ACK field.
Last, the client sends an ACK back to the server. The sequence number is set to A+1 and the acknowledgement number is set to B+1.
Why we care about the TCP Connection
Bottom line, if there are problems in the connection setup, the result will be slow application performance or none at all. If there is packet loss at this stage, TCP will take up to 3 seconds to retransmit, delaying the connection. Use the TCP connection info as a benchmark for network round-trip time when troubleshooting application performance problems.
In the next article we will look into the header flags and define them one at a time. We will discuss when you should see them, and when they may be a symptom of a problem. If you have any questions, comments, answers, or other TCP topics you would like to see, shoot me an email! Thanks for tuning in.
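Since the article suggests using the connection setup as a benchmark for network round-trip time, here is a small example of timing the three-way handshake from a client, using only the Python standard library. The host and port are placeholders, and the measurement includes DNS resolution unless you pass an IP address, so treat it as a rough sketch rather than a precise tool.

```python
import socket
import time

def handshake_time_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Time the TCP three-way handshake (SYN, SYN-ACK, ACK) to host:port."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # once the connection object exists, the handshake has completed
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    print(f"Handshake took {handshake_time_ms('example.com', 80):.1f} ms")
```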
<urn:uuid:339b3006-25ab-4abb-9753-651a612f315c>
CC-MAIN-2022-40
https://www.networkdatapedia.com/post/2016/09/08/practical-tcp-series-the-connection-setup
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00632.warc.gz
en
0.939414
674
3.015625
3
Social Engineering Attack Social engineering is a deception method that takes advantage of the human mistake to get sensitive information, access, or assets. These "human hacking" schemes in cybercrime tend to entice unwary individuals into disclosing data, spreading malware, or granting access to restricted systems. Attacks can occur online, in person, or through other contacts. Social engineering scams are designed to exploit how individuals think and act. As a result, social engineering assaults are very effective in manipulating a user's behavior. Once an attacker learns what motivates a user's activities, they may easily deceive and influence the user. Furthermore, hackers attempt to exploit a user's lack of expertise. Because of the rapid pace of technology, many customers and employees are unaware of hazards such as drive-by downloads. Users may also underestimate the significance of personal information such as their phone number. As a result, many users are confused about how to safeguard themselves and their data effectively. In general, social engineering attackers have one of two objectives: - Sabotage: The intentional disruption or corruption of data to cause harm or discomfort. - Theft: Obtaining goods such as knowledge, access, or money by deception. This social engineering concept may be enhanced by understanding how it works. What Is the Process of Social Engineering Attack? Most social engineering attempts rely on direct communication between the attackers and the victims. Rather than forcefully penetrating your data, the attacker will usually try to persuade the user to compromise themselves. The assault cycle provides these crooks with a consistent method of misleading you. The following are typical steps in the social engineering assault cycle: Prepare by acquiring background information on yourself or a bigger organization in which you are involved. Infiltrate by forming a relationship or beginning an encounter that starts with trust. To escalate the assault, exploit the target after trust and weakness have been developed. Once the user has completed the specified activity, disconnect. This procedure can be completed in a single email or over many months through social media talks. It may even be a face-to-face encounter. However, it all comes down to your decision, such as providing information or exposing yourself to infection. It's critical to avoid using social engineering to cause misunderstanding. Many employees and customers are unaware that hackers may access many networks and accounts with only a few pieces of information. They get your personal information by impersonating real users to IT support professionals. It includes your name, date of birth, and address. It's, therefore, a simple affair to change passwords and acquire nearly unrestricted access. They can steal money, distribute social engineering malware, etc. Characteristics Of Social Engineering Attacks The attacker's persuasion and confidence are fundamental to social engineering attempts. When exposed to these strategies, you are more likely to do acts you would not normally take. Among most assaults, you will be misled into the following behaviors: - Increased emotions: Emotional manipulation provides attackers with an advantage in every engagement. When you are in a high emotional state, you are significantly more prone to conduct illogical or unsafe activities. Fear, Excitement, Curiosity, Anger, Guilt., and Sadness — These emotions are employed in equal amounts to persuade you. 
- Urgency: Another reliable tool in an attacker's armory is time-sensitive opportunities or demands. Under the pretext of an acute crisis requiring quick treatment, you may be persuaded to compromise. - Alternatively, you may be presented with a prize or incentive that will vanish if you do not respond immediately. Either method trumps your capacity to think critically. - Trust: Believability is vital in a social engineering attack. Because the attacker is ultimately lying to you, confidence is essential. They've done enough study on you to come up with a story that's simple to trust and unlikely to raise suspicions. There are a few exceptions to these characteristics. In certain circumstances, attackers utilize more basic social engineering techniques to acquire network or computer access. A hacker, for example, would frequent a major office building's public food court and "shoulder surf" customers using tablets or laptops. 5 Methods of Social Engineering Attack Social engineering assaults may take many forms and can be carried out everywhere there is human contact. The five most popular types of digital social engineering attacks are as follows. Baiting assaults, as the term indicates, employ a false promise to spark a victim's avarice or interest. They trick people into falling into a trap that takes their personal information or infects their computers with malware. The most despised type of baiting uses tangible material to disseminate malware. For example, attackers may place the bait (usually malware-infected flash drives) in high-traffic places where potential victims are bound to notice them (e.g., bathrooms, elevators, the parking lot of a targeted company). The bait has a legitimate appearance, such as a label identifying it as a company's payroll list. Scareware bombards victims with false alerts and phony threats. Users are duped into believing their system is infected with malware, encouraging them to install software that serves no purpose (other than to profit the offender) or is malware itself. Scareware is also known as Ruse Software, Rogue Scanning Software, and Fraudware. A frequent form of scareware is the appearance of legitimate-looking pop-up advertisements in your browser while surfing the web. It either offers to install the utility (frequently tainted with malware) for you or directs you to a malicious site where your machine becomes infected. An example of the same is - "Your machine may be infected with nasty spyware applications." Pretexting is when an attacker gets information by telling a series of well-designed falsehoods. A perpetrator would frequently commence the scam by professing to need sensitive information from a victim to complete an essential activity. Typically, the attacker begins by gaining confidence with their target by impersonating coworkers, police, bank and tax authorities, or other individuals with right-to-know power. The pretext asks necessary inquiries to validate the victim's identity, allowing them to obtain sensitive personal information. This fraud collects all kinds of relevant information and data, including social security numbers, personal addresses, phone numbers, phone records, employee vacation dates, bank records, and even security information relating to a physical plant. Phishing scams, one of the most common forms of social engineering attacks, are email and text message campaigns designed to instill fear, interest, or urgency in victims. 
They then prod victims into disclosing personal information, visiting dangerous websites, or opening malware-laden attachments. An example is an email sent to subscribers of an online service informing them of a policy violation that requires prompt action on their part, such as a password change. It contains a link to an illegitimate website that looks virtually identical to the official version, inviting the unwary user to enter their existing credentials and a new password. The information is delivered to the attacker upon form submission.

Spear Phishing

Spear phishing is a more focused variation of the phishing scam in which the perpetrator targets specific persons or businesses. The attackers customize their messages to their victims' characteristics, job positions, and contacts to make the attack less conspicuous. Spear phishing takes far more work on the attacker's part and might take weeks or months to complete. Such attacks are far more difficult to detect and have a higher success rate when done correctly. In a spear phishing scenario, an attacker sends an email to one or more employees while impersonating the organization's IT consultant. It is phrased and signed exactly as the consultant typically writes, leading recipients to believe it is a legitimate message.

4 Ways of Preventing Social Engineering Attacks

Social engineers exploit human emotions such as curiosity or fear to carry out their schemes and lure victims into their traps. Be cautious, therefore, whenever you receive an alarming email, are drawn to an offer displayed on a website, or come across loose digital media lying around. Staying vigilant can help you defend yourself against most social engineering attempts in the digital arena. In addition, the following pointers can help increase your alertness to social engineering hacks.

- Do not open emails or attachments from unknown senders: If you do not know the sender, you do not need to respond to the email. Even if you know them and are skeptical of their message, double-check and validate the information through other channels, such as by phone or directly from the service provider's website. Remember that email addresses are easily spoofed; an attacker may have sent an email that appears to come from a trusted source.
- Implement multifactor authentication: User credentials are among the most valuable pieces of information attackers seek. Using multifactor authentication helps secure your account in the event of a system compromise.
- Be careful with attractive offers: If an offer sounds too good to be true, think twice before accepting it. A quick web search on the topic can help you determine whether you are dealing with a genuine offer or a trap.
- Keep your antivirus/antimalware software up to date: Enable automatic updates, or make it a routine to download the latest signatures first thing each day. Check periodically that the updates have been applied, and scan your system for infections.
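As a rough illustration of the kind of checks the first pointer above describes, the short Python sketch below flags a few common phishing tell-tales: a display name that does not match the sending address, links that point away from the sender's domain, and pressure phrases in the body. Every name, phrase, and address here is invented for illustration; this is a triage aid to make the idea concrete, not a real mail filter.

```python
import re

PRESSURE_PHRASES = ("verify your account", "urgent", "password expires", "act immediately")

def screen_email(sender: str, display_name: str, body: str, links: list[str]) -> list[str]:
    """Return human-readable warnings; a triage aid, not a verdict."""
    warnings = []
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    if display_name and display_name.split()[0].lower() not in sender.lower():
        warnings.append("display name does not match the sending address")
    for link in links:
        match = re.match(r"https?://([^/]+)", link)
        if match and sender_domain not in match.group(1).lower():
            warnings.append(f"link points to {match.group(1)}, not the sender's domain")
    lowered = body.lower()
    warnings += [f"pressure phrase found: '{p}'" for p in PRESSURE_PHRASES if p in lowered]
    return warnings

# Hypothetical suspicious message
print(screen_email("it-support@paypa1-billing.example", "PayPal Support",
                   "Urgent: verify your account or it will be closed.",
                   ["http://paypa1-billing.example/login"]))
```

None of these checks is decisive on its own; the point is that a moment of mechanical skepticism, whether by a script or by the reader, catches many of the lures described above.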
<urn:uuid:443b461d-7176-4427-bed1-099371b9afd5>
CC-MAIN-2022-40
https://www.appknox.com/cyber-security-jargons/social-engineering-attack
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00632.warc.gz
en
0.92917
1,932
3.40625
3
We are all familiar with the basic idea of what a modem does. It's that handy little device that gets you to the internet. From a 56K modem (our condolences if you are still using one) to a cable modem, they can go just about as fast as you need them to. But what remains obscure to many is how that bridge to the internet actually gets built.

How a Modem Works

The term modem actually comes from the two processes it performs: modulation and demodulation. Modulation is the process of turning a digital signal (sent from the sending computer) into an analog signal (so that it can be transmitted as electrical pulses). On the flip side we have demodulation, which takes an analog signal and turns it back into a digital signal that can be read by the receiving computer.

Early modems provided point-to-point connections. When a dialup connection was set up between two computers, that connection was shared only between them. That means the transmission medium (the phone line) didn't have to be shared with other resources, only with the two communicating computers. But as you may well know, this type of dialup connection isn't very desirable. Connections made through a phone line are much slower than counterparts such as Ethernet or fiber optic cable. But since the original modems used phone lines, they became vastly popular. After all, if you have a global telephone line system, why not take advantage of it and use it for internet access?

As far as Cisco exams go, just remember how modulation and demodulation work. Cisco doesn't include a lot of material or exam questions on modems, so there isn't much of a need to go into further detail.
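Still, a toy example makes the modulate/demodulate round trip concrete. The sketch below maps bits onto two audio tones and recovers them by checking which tone carries more energy in each bit slot. The sample rate, baud rate, and tone frequencies are arbitrary choices for illustration, not the parameters of any real modem standard.

```python
import numpy as np

RATE, BAUD = 8000, 100          # samples per second, bits per second
F0, F1 = 1200, 2200             # illustrative tones for a 0 bit and a 1 bit
N = RATE // BAUD                # samples per bit
t = np.arange(N) / RATE

def modulate(bits):
    """Digital -> analog: each bit becomes a short burst of one of two tones."""
    return np.concatenate([np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

def demodulate(signal):
    """Analog -> digital: pick the tone with more energy in each bit slot."""
    bits = []
    for k in range(len(signal) // N):
        chunk = signal[k * N:(k + 1) * N]
        e0 = abs(np.dot(chunk, np.exp(-2j * np.pi * F0 * t)))
        e1 = abs(np.dot(chunk, np.exp(-2j * np.pi * F1 * t)))
        bits.append(int(e1 > e0))
    return bits

data = [1, 0, 1, 1, 0, 0, 1, 0]
assert demodulate(modulate(data)) == data   # the round trip recovers the original bits
```

Real modems layer error correction, compression, and far denser signal constellations on top of this, but the digital-to-analog-and-back idea is the same.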
<urn:uuid:49b4d3bc-332e-449d-96cb-777b3818eafd>
CC-MAIN-2022-40
https://www.itprc.com/modem-modulationdemodulation-definition/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00632.warc.gz
en
0.968907
365
3.9375
4
Published Monday, Aug 03 2020, by Adnan Kayyali

Novoic, a startup founded by Oxford and Cambridge researchers, is developing a COVID-19 screening app, aptly named "Coughvid", that can screen people for COVID-19 by listening to the sound of their cough. The company, which previously used speech analysis technology to detect cognitive impairments and diseases such as Parkinson's and Alzheimer's, is now turning its attention to COVID-19.

The app works by recording a person's coughing, breathing, and voice patterns and analyzing them with a machine learning algorithm. Additionally, the app collects users' demographic and medical history data and asks whether a participant has been tested for COVID-19.

Currently the COVID-19 screening app has only a 70% accuracy rate, but the developers are far from done. To 'teach' the AI, Novoic will ask 1 million volunteers to "donate their cough" so the algorithm can be refined into a reliable screening process that distinguishes between infected and non-infected coughs more decisively.

"Different people's voices of course sound different from each other, including when they're healthy," explains Emil Fristed, co-founder and CEO of Novoic. "To build accurate algorithms that work for everyone, we need a lot of data, which is why we are calling upon the public to step forward. If we capture enough cough sounds, we believe this could be the answer to cheap, accessible testing."

Because researchers have few respiratory sound data sets to work with, collecting these samples may also help diagnostic research beyond COVID-19. "The data will be stored on University servers and be used solely for research purposes," said Cambridge University. To reassure those with privacy and data security concerns, the company said that user locations will only be known while the app is actively in use.

The researchers say, however, that the COVID-19 screening app is not a substitute for a medical exam. It is made to bolster the capabilities of mass testing in a non-intrusive, social-distancing-friendly way.
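As a very rough illustration of the kind of pipeline such an app might use, and emphatically not Novoic's actual model, the sketch below turns each recording into a handful of spectral features and fits an off-the-shelf classifier. The audio here is random noise standing in for real cough recordings and the labels are placeholders, so the output is meaningless; it only shows the general shape of the approach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def band_energies(audio: np.ndarray, bands: int = 16) -> np.ndarray:
    """Very crude spectral fingerprint: average FFT magnitude in a few frequency bands."""
    spectrum = np.abs(np.fft.rfft(audio))
    return np.array([chunk.mean() for chunk in np.array_split(spectrum, bands)])

# Stand-in training set: 40 one-second 'recordings' of random noise at 16 kHz.
rng = np.random.default_rng(0)
recordings = [rng.standard_normal(16000) for _ in range(40)]
labels = np.array([0, 1] * 20)          # placeholder labels (1 = positive test)

X = np.stack([band_energies(r) for r in recordings])
model = LogisticRegression(max_iter=1000).fit(X, labels)
print(model.predict_proba(X[:1]))       # a screening score for one recording
```

A production system would use far richer acoustic features, much more data, and careful clinical validation, which is exactly why the researchers are asking a million volunteers to contribute.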
<urn:uuid:b22c8388-1903-4499-b51b-4514cef56e26>
CC-MAIN-2022-40
https://insidetelecom.com/covid-19-screening-app-that-analyzes-breathing-coughing-and-voice-patterns/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00032.warc.gz
en
0.943283
573
2.703125
3
Genomics, the study of genes, is a field of biology that relies on computing. While the ability to sequence – effectively, read – the human genome has gained much attention, researchers have been quietly working to use the same techniques to track and analyse diseases. This work stepped into the limelight in 2020 by focusing on Sars-Cov-2, the virus that causes Covid-19. The UK's work on this has taken place through the Covid-19 Genomics UK Consortium (Cog-UK), which as of 12 April 2021 had sequenced 428,056 samples. Data from the global repository GISAID suggests that only the US has come close to this. Emma Hodcroft, a molecular epidemiologist at the University of Bern in Switzerland, described the UK's sequencing work to the New York Times as "the moonshot of the pandemic". Genomic sequencing of viruses allows researchers to track mutations as they reproduce, allowing authorities to change strategies accordingly. The B117 variant of Sars-Cov-2, which is more transmissible than earlier strains, was first sequenced in September 2020 and formally identified as being of concern by Public Health England in December, contributing to the lockdown that month. Within the UK, B117 is often called the Kent variant, although other countries tend to call it the UK or British variant.

Origins of Cog-UK

Cog-UK was set up quickly, but it relies on technology and expertise developed over the years. Following a request from the UK government's chief scientific adviser, Patrick Vallance, and a series of emails and phone calls, a group of about 20 people met at the Wellcome Trust in London on 11 March 2020. "Most of the objectives and framework for Cog-UK were negotiated by the end of the meeting," writes Sharon Peacock, professor of public health and microbiology at the University of Cambridge and executive director of the consortium. The previous largest genomic viral dataset, from the Ebola outbreak in west Africa in 2014-16, contained about 1,500 samples. "Cog-UK surpassed this total within the first month and has continued to push viral genome surveillance on to an entirely different scale ever since," says Peacock. The project launched with £20m of UK government funding on 23 March 2020. Peacock describes Cog-UK as "a coalition of the willing" involving the UK government, the UK's four public health agencies and a range of academic, NHS and public health organisations. Through 16 hubs, members sequence positive samples from people with Covid-19, with the Wellcome Sanger Institute in Cambridgeshire – which co-led the first sequencing of the human genome two decades ago – acting as the central sequencing hub. The institute built on its previous work with malaria genomics to set up a highly automated pipeline process for Sars-Cov-2 that involves standardised file formats, quality control checks and editing to remove parts of the sequencing that are not required. The institute runs its own datacentre, effectively a flexible private cloud with high-performance compute and storage. Peter Clapham, team leader for the high-performance computing (HPC) informatics support group, says a lot of the institute's work involves large projects, including the UK Biobank, which tracks genomic and health data on 500,000 people, and the Tree of Life project, which aims to sequence DNA from all 70,000 organisms with a nucleus in the British Isles. "We designed very early on a flexible system with our informatics customers that would allow us to adapt to what is needed," says Clapham.
For Cog-UK, it repurposed existing technology infrastructure rather than buying new equipment. “This has been a really good confirmation of the hybrid nature of what we’ve got, the flexibility we’ve managed to maintain and develop,” he adds. Although the sequencing work is distributed, Cog-UK needed a central computing platform to hold the resulting data and allow analysis. Thomas Connor, professor in Cardiff University’s school of biosciences, attended the 11 March meeting with his colleague Nick Loman, professor of microbial genomics and bioinformatics at the University of Birmingham. Their universities, along with Swansea and Warwick, have collaborated on the Cloud Infrastructure for Microbial Bioinformatics (Climb) since 2014. Climb provides microbiologists with the computing power, storage and tools required to carry out analysis of genomic data, with both universities having between 3,000 and 4,000 virtual CPUs available to support research using open source software including OpenStack for cloud computing and Ceph for storage. “It’s probably the largest dedicated system for microbiology of its type in the world,” says Connor. For Cog-UK, Connor, Loman and colleagues set up Climb-Covid, a walled garden within Climb’s existing systems at Birmingham and Cardiff universities’ on-premise datacentres. This took about three days and uses only a small fraction of Climb’s capacity with research on other pathogens continuing. “This is the advantage of having a cloud to play on,” says Connor, adding that the project has had a different impact on his own capacity. “My last year has been Covid.” With 30,000 base pairs – effectively bits of genomic information – Sars-Cov-2 is a minnow compared with the 3.1 billion in human DNA. But the three sequencing machines used by Public Health Wales process genomes in blocks of just 400 base pairs, producing up to 120Gb of data a day. “The computational challenge is taking that jigsaw and rebuilding it,” says Connor, who also works for the Welsh agency. The system also needs to handle metadata, including demographic details, location and information on how the sample was processed, and it has to do this quickly for it to be useful. Public Health Wales typically processes samples in five days, rather than the months that would be normal for scientific research. This is easier to do in Wales than in England. The country sequences Sars-Cov-2 from about two-thirds of positive lab-processed tests for Covid-19, discarding those with low levels of the virus because they are less likely to be viable. The Welsh NHS is more centralised than England’s, with a single laboratory information management system for pathology, making it easier to gather metadata. “We can do things very rapidly here,” says Connor. “In England, things are a little more fragmented. Climb is providing a way to integrate that data.” The two universities used Cog-UK funding to buy solid-state drives (SSDs) to increase Climb’s speed, bringing its storage capacity to 1.5PB of SSD and 2.8PB of disk. Connor says he is grateful for the way in which Cardiff’s supplier Dell and Birmingham’s supplier Lenovo rushed new equipment to them, as well as the support of HPC colleagues Simon Thompson at Birmingham and Christine Kitchen and Martyn Guest at Cardiff. Repurposing existing work As with generating and storing the genomic data, repurposing existing work is key to Cog-UK’s software-based analysis. 
David Aanensen, professor and senior group leader in genomic surveillance at the University of Oxford's Big Data Institute, is also director of the Centre for Genomic Pathogen Surveillance, which is based at the Big Data Institute and the Wellcome Genome Campus, also the home of the Wellcome Sanger Institute. The centre, founded in 2015, already had its software widely used to gather and analyse genomic data on diseases in poorer countries. Aanensen and his team started working on Covid-19 as early as January 2020, mostly using existing funding as well as grants from the National Institute of Health Research. "All the partners have volunteered time and leveraged existing infrastructure and grants," he says of Cog-UK. Two of the centre's existing software packages, Data-flo and Microreact, have been used extensively by Cog-UK partners. There are local instances of Data-flo, which manages epidemiological data pipelines, at Public Health Wales and Health Protection Scotland. These allow the agencies to use the open source software to link and visualise genomic data with personal and commercial information, including patient records and names of care homes. Microreact, developed over the last five years with Wellcome funding to visualise and share data on genomic epidemiology, has been particularly widely used. The centre has installed local instances for Public Health Wales and Health Protection Scotland, but also the US Centres for Disease Control and Prevention and the European Centre for Disease Prevention and Control. It has also been used by other health authorities in Europe, as well as organisations in Argentina, Brazil, Colombia and New Zealand. "The impact is huge, and we want data tools and ways of bringing high-quality information together to inform policy and action to be scaled," says Aanensen. "Freely available software and an open data ethos is something we hold close to our hearts." As well as supporting its existing applications, the centre has created and adapted software during the pandemic. This includes a system that enables Cog-UK's sequencing sites to upload spreadsheet-format metadata on samples to Climb-Covid using a drag-and-drop interface, as well as ensuring validity. It also produced a web wrapper for Pangolin (Phylogenetic Assignment of Named Global Outbreak Lineages), software developed by a team led by Andrew Rambaut, professor of molecular evolution at the University of Edinburgh, that assigns Sars-Cov-2 genomes to lineages. This makes Pangolin easier to access, allowing it to process hundreds of thousands of samples and enabling users to view the global distribution of specific lineages, such as the B117 variant. Coping with the volume of data collected through Cog-UK also meant increasing the capacity of the centre's computational and visual algorithms. For example, the tree viewer used to visualise relationships between genomes was moved from Canvas to WebGL, with an algorithm to reduce detail from a large number of samples. "Now we can display trees of several million, even though we're not there yet," says Aanensen. This work fits with the centre's aim of not developing software that is narrowly defined, with most of the focus on existing products. "Lots of processes have been accelerated," says Aanensen of its work during the pandemic.
This was primarily achieved through everyone doing more: “Essentially, we just doubled our workload.” Aanensen says that having a number of sequencing labs joined up with computing has been a key strength of Cog-UK, an approach he sums up as “decentralised sequencing with centralised analysis”. He adds: “You have to deliver value at local sites, but contextualise local data in the broader picture.” It has been refreshing to work with organisations across the UK, all fired up quickly and focused on delivery, he says. Although Cog-UK’s work on the pandemic is not yet completed, those involved are excited about how future projects can build on it to go further. “This could be applied to any pathogen you care to look at,” says Thomas Connor at Cardiff University. Samples of tuberculosis and gastro pathogens are already sequenced but rarely shared, and there is potential to sequence other infectious diseases, he says. “The value of sharing this kind of data fast has been demonstrated. That’s a really important legacy.”
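The spreadsheet-metadata upload step described earlier, where sample records are checked before they reach Climb-Covid, is at heart a validation pass over tabular data. The Python sketch below shows the general shape of such a check; the field names, identifier format, and example rows are invented for illustration and are not the real Cog-UK schema.

```python
import csv, io, re
from datetime import datetime

REQUIRED = ("sample_id", "collection_date", "sequencing_centre")   # hypothetical schema

def validate_row(row: dict) -> list[str]:
    """Return a list of problems with one metadata row (empty list means it passes)."""
    problems = [f"missing {field}" for field in REQUIRED if not row.get(field, "").strip()]
    date = row.get("collection_date", "").strip()
    if date:
        try:
            datetime.strptime(date, "%Y-%m-%d")
        except ValueError:
            problems.append(f"bad date '{date}' (expected YYYY-MM-DD)")
    sample = row.get("sample_id", "").strip()
    if sample and not re.fullmatch(r"[A-Z]+-[A-Z0-9]+", sample):
        problems.append(f"unexpected sample_id format '{sample}'")
    return problems

uploaded = io.StringIO(                      # stand-in for an uploaded spreadsheet export
    "sample_id,collection_date,sequencing_centre\n"
    "WALES-ABC123,2021-02-14,PHW\n"
    "bad id,14/02/2021,\n"
)
for line_no, row in enumerate(csv.DictReader(uploaded), start=2):
    for problem in validate_row(row):
        print(f"row {line_no}: {problem}")
```

Rejecting bad rows at upload time is what keeps a five-day turnaround possible: errors surface to the submitting lab immediately instead of stalling analysis downstream.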
<urn:uuid:e260b5db-4bf6-4cdc-96ed-e8771c61a0ce>
CC-MAIN-2022-40
https://www.computerweekly.com/feature/How-the-Covid-19-Genomics-UK-Consortium-sequenced-Sars-Cov-2
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00032.warc.gz
en
0.949621
2,509
3.484375
3
The Risk of Identity Theft Does Not Stop with Biographic Data As the world increasingly moves towards assuring each individual’s identity, so does the need to collect and manage more biometric data as a means to uniquely enroll and identify individuals. But sharing our biometric data means that we are exposed to the same identity theft risks as when we share other important data, such as financial account data, address history, employer history and more. Several data breach laws around the world have already been enacted requiring companies to report on breaches after the fact. But those measures are reactive and come into effect after the damage is already done. They do not allow individuals to control who has their data or what is done with it after the company has no further need for it. The EU as a Protector of Their Citizen’s Data As a result, several regulatory measures have been put in place to give individuals control over the data that a company collects, whether you are a consumer of goods or services, or even as an employee of a company that requires the collection of biometric data for background checks. The first broad protections came as a result of the European Union’s General Data Protection Regulation, otherwise known as the GDPR. The provisions of the GDPR came into effect two years ago, providing rights to EU citizens and permanent residents about how their data is used. The regulations were put into place to prevent citizen data from being shared without consent. That also includes EU citizens that do not live within the European Union, making this a global concern. The GDPR and other acts, such as the California Consumer Privacy Act, specifically make provisions for the protection of biometric data. This means that companies collecting biometric data must take measures to protect it in the same fashion that they would any other personally identifiable information. Each individual may also be given rights, such as: - The ability to have access to the data (understand and view the data that has been collected) - A way to request that it be removed (otherwise known as the right to be forgotten) - The ability to receive a copy of their data so they may re-use it (if applicable) - The right to determine how the data can or cannot be shared - The right to be notified in the event of a data breach or misuse of their data Companies and organizations must now be prepared to deal with those requests as defined by the individual regulations or be prepared to deal with stiff fines. Reviewing Biometric Data Usage As biometrics plays a bigger role in each individuals’ routine transactions, organizations making use of that data will need to address and adopt new policies and procedures in relation to these regulations at the national and local levels to determine how best to meet the requirements. Reviewing each regulation and how it applies is incumbent to managing and executing a successful and sustainable system. More and more countries around the globe are taking notice, and developing their own regulations based on these ground-breaking privacy laws. Together governments, public and private organizations as well as individual citizens recognize the need to better protect our unique, individual data, particularly biometric data with its greater capacity to identify us and make it easier to safely and conveniently play a bigger role in our day-to-day transactions. Learn how our tenprint readers perform best-in-class biometric captures for accurate background checks.
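To make the rights listed above slightly more concrete, the sketch below shows the bare skeleton of an access-and-erasure handler over an in-memory store. Everything here, from the record layout to the audit approach, is hypothetical; a real deployment would involve a database, authenticated requests, and retention rules defined by legal counsel.

```python
from datetime import datetime, timezone

# Hypothetical in-memory store of biometric enrolment records.
records = {
    "subject-001": {"name": "A. Example", "fingerprint_template": b"...", "consent": "2023-05-01"},
}
audit_log = []   # every rights request is logged, even after the data is gone

def export_subject(subject_id: str) -> dict:
    """Right of access / portability: hand back everything held on the person."""
    audit_log.append((datetime.now(timezone.utc), "export", subject_id))
    return dict(records.get(subject_id, {}))

def erase_subject(subject_id: str) -> bool:
    """Right to erasure: remove the biometric record, keeping only a minimal audit trail."""
    audit_log.append((datetime.now(timezone.utc), "erase", subject_id))
    return records.pop(subject_id, None) is not None

print(export_subject("subject-001"))
print(erase_subject("subject-001"), erase_subject("subject-001"))   # True, then False
```

The point of the audit trail is that an organization must be able to demonstrate, not merely assert, that it honored a request within the regulation's deadlines.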
<urn:uuid:f45160f7-9d8b-4080-a6e7-5dc9b5dec4ef>
CC-MAIN-2022-40
https://blog.hidglobal.com/ja/node/37182
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00032.warc.gz
en
0.954416
685
2.765625
3
Bletchley Park, the Enigma, and the Bombe

The German government used the Enigma machine to encrypt and decrypt its military communications during World War Two. The Enigma was an electro-mechanical cipher machine based on a commercial design that was widely used by corporations and governments starting in the early 1920s; the German military used specially modified versions.

The UK and the US knew about the Enigma system, but had concluded that it was impractical to break. Then some Polish cryptanalysts arrived in the UK bearing the solution: they had reverse-engineered the rotor wiring patterns using pencil and paper, and they had designed a code-breaking machine. They in turn were using some information gathered by French intelligence. Given a workable solution, the British devised systems to break the Enigma traffic. This cryptanalysis and intelligence program, known as Ultra, was a huge aid to the Allied war effort. The Allies carried out the work at Bletchley Park, a former country estate in England. The official historian of British intelligence during the war has estimated that while the Allies would have defeated Germany anyway, Ultra shortened the war by one to four years and saved millions of lives.

The Allies knew how the system worked, as it was based on a commercial design. The Enigma used a set of rotors, each of which applied a unique encryption operation. The initial design had five rotors (the naval system had six). The encryption and decryption process was based on selecting three of them in some order. Later designs had more rotors; by the end of the war the naval system used four rotors out of a set of eight.

Enigma wasn't the only German machine cipher. The UK used "Fish" as a code word for a series of German teleprinter stream ciphers, implemented with the Lorenz SZ machines and similar equipment. They had called the first one "Tunny", as in tuna.

The rotors were discs about 10 cm in diameter with 26 brass, spring-loaded contact pins in a circle on one face, and 26 brass contact plates on the opposite face. Each rotor performs a substitution cipher based on the wiring connecting the two sets of contacts. But the rotors were wired differently than in the commercial version. A set of 26 letters in alphabetical order appears around the rim of the rotor. A gear-like plate on the pin face allows the user to rotate the rotor into one of 26 positions. A ratchet disc on the pin face allows the machine to advance the rotor one position at a time.

The first picture below shows a later 4-rotor naval Enigma. The four rotors appear near the top; they are currently rotated into position RDKP. The keyboard is toward the bottom, and between it and the rotors is an array of lamps. When you press a key, an electrical signal is sent through the stack of rotors to light one of the lamps. The second picture above shows a partially disassembled rotor. The spring-loaded contact pins are at far right. The green wires connect them to the circle of contact plates. The lettered rim and gear-like setting ring are at far left.

Each position of a single rotor applies a unique substitution cipher based on its internal wiring. The machine is loaded with the specified order of three or four rotors from the set of five to eight, each rotated to the specified starting position.
Pressing one of the typewriter-like keys sends an electrical signal through the stack of rotors, through a reflector plate at the end (which adds its own substitution), and back through the stack of rotors to light one of the indicator lamps.

The setting for the Enigma machine, analogous to the key in a modern cipher algorithm, consists of the selection and order of rotors (what the Germans called the Walzenlage), the "roll-over" ring setting (or Ringstellung) for each rotor, and the cabling of the plugboard (or Steckerverbindungen) on its front panel, which applied further permutations. Each "XXX" in the diagram below represents one substitution cipher operation.

Any one configuration applies a unique monoalphabetic substitution cipher, and such a system could be broken with a few dozen characters. However, the rotor positions change with every letter, meaning that a different substitution cipher is applied at each letter through the message. The right-most rotor moves once with each letter. With each full rotation, a rotor advances the rotor to its left by one position, and the "roll-over" or Ringstellung setting specifies just where in its travel that carry happens. Enigma therefore applies a polyalphabetic substitution cipher with a period of about 17,000 letters before its rotors return to their original positions and apply the same substitution as used for the first letter. Messages, however, were no more than a few hundred letters each.

The result is that the simplest Enigma, where you select three out of five rotors, has 158,962,555,217,826,360,000 possible initial settings. The electromechanical Enigma machine is analogous to today's cipher algorithms, making the combined details of its setting analogous to an encryption key. The number of settings of a three-of-five rotor Enigma system is a little larger than the number of 67-bit keys:
2^67 = 147,573,952,589,676,412,928
2^68 = 295,147,905,179,352,825,856
However, the Enigma implemented a polyalphabetic substitution cipher while today's cipher algorithms combine substitution (or confusion) with permutation (or diffusion), meaning that you can't simply estimate difficulty of attack by comparing the keyspace.
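Still, the raw settings count quoted above is easy to reproduce. The short sketch below follows the usual counting convention behind that figure (rotor order, rotor starting positions, and a plugboard with exactly ten cables; ring settings are left out of this particular count); it is an illustration added here, not part of the original page.

```python
from math import factorial, log2, perm

rotor_orders = perm(5, 3)        # choose and order 3 of the 5 available rotors: 60
start_positions = 26 ** 3        # each of the 3 rotors can start at any of 26 letters
# Plugboard: 10 cables pairing off 20 of the 26 letters.
plugboard_cablings = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)

total = rotor_orders * start_positions * plugboard_cablings
assert total == 158_962_555_217_826_360_000
print(f"{total:,} settings, about 2^{log2(total):.1f}")   # ~2^67.1
```

Almost all of that number comes from the plugboard; the rotors alone contribute only 60 × 17,576, about a million possibilities, which is part of why a machine search of rotor orders and positions was even thinkable.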
The idea of attacking this system would seem to be pretty hopeless. However, French intelligence had collected information on the procedures used to encrypt messages, and they passed these along to Polish intelligence and their Biuro Szyfrów or Cipher Bureau. The Polish cryptanalysts Marian Rejewski, Jerzy Różycki and Henryk Zygalski were able to reverse-engineer the wiring of the rotors through theoretical mathematics, and they started breaking Enigma-encrypted messages in December 1932. They also designed the Kryptologiczna Bomba or Cryptologic Bomb, a device used to search for Enigma settings.

In late July 1939, just five weeks before Germany began World War Two by invading Poland, staff of the Biuro Szyfrów met with cryptanalysts of France and Britain and described how they had been breaking Enigma messages. They explained their procedures and the design of Rejewski's Kryptologiczna Bomba. The British cryptanalysts came from the Government Code and Cypher School.

The Government Code and Cypher School was based at Bletchley Park, 80 kilometers north of London. This is the Mansion at Bletchley Park. The records on the place go back to the Domesday Book of 1086. The present mansion, an odd mix of architectural styles, was built after its purchase by Sir Herbert Samuel Leon in 1883. It passed from the family's hands to a builder who was planning to tear it down for a housing estate. Then Admiral Sir Hugh Sinclair, director of British Naval Intelligence 1919-1921 and later head of SIS, commonly known as MI6, purchased the mansion and surrounding 23 hectares of land with his own money.

The site was within a short walking distance of Bletchley railway station, with frequent connections to London. There were high-volume communication links via telephone and telegraph to London. The main road from London to the north and northwest passed nearby. Bletchley was also on the east-west railway connecting Cambridge and Oxford, universities expected to supply many of the code-breaking staff formally trained in mathematics. They joined a staff with diverse skills including linguistics, chess, and crossword puzzle solving.

The mansion was soon surrounded by hastily-built wooden structures referred to as "huts" and simply numbered. Later, several numbered "blocks" were built from brick. Above you see the end of the mansion and part of Hut 4. Hut 4 was occupied by Naval Intelligence; it was where they analyzed decrypts of German Naval messages.

Messages were intercepted at radio listening stations. These would be apparently random 5-letter groups transmitted by Morse code. Initially, the sheets of intercepted messages were all delivered by motorcycle dispatch riders. Later in the war, some listening stations were connected by teletype to Bletchley Park. When an intercepted message arrived, it was first processed for traffic analysis using the meta-information about the message such as time, frequency, length, apparent transmitter location based on signal strength and direction finding, and possible recognition of the Morse operator's "fist". That's the distinctive Morse "accent", the small variations in how that operator forms Morse characters. It then went to cryptanalysis, in which various techniques were applied to try to find the Enigma settings and decrypt the message. If it could be decrypted, it went to the appropriate intelligence group for analysis of whatever meaning could be derived. The finished product was then sent to MI6 or the Secret Intelligence Service and to intelligence chiefs in the relevant ministries. Later in the war, intelligence was sent directly to some high-level commanders in the field. The source would be obscured for that last category, with a cover story giving credit to some other source. Some deliberately visible scouting missions were sent to "discover" German positions already known from decrypted traffic. The name "Ultra" came from this program being classified as "Ultra Secret", above Top Secret, and the Allies imposed strict limits on the use of the information to protect the source. If the Germans realized that the Allies could read Enigma traffic, they would change their procedures for using it and work on a replacement system.

Huts and Blocks

Hut 12, seen below, was used for Intelligence Exchange and Education. Here, left to right, are Huts 3, 6, and 1. Hut 3 (red) housed Intelligence. This is where they translated and analyzed decrypts of German Army and Air Force messages. Hut 6, at center above and shown in the two pictures below, housed Cryptanalysis of German Army and Air Force ciphergrams. Hut 1, at right above, was the first hut built. It went up in 1939 and initially housed a radio intercept station. This was called Station X, a name sometimes used later to refer to the entire Bletchley Park operation.
The wireless station was moved to the nearby village of Whaddon so as to avoid drawing attention to its large wire antennas. Hut 1 was later used for administrative purposes. The first Bombe was initially installed in Hut 1. This is Hut 6. Below is Hut 3, where the Intelligence division translated and analyzed decrypts of German Army and Air Force messages. Hut 6, the grey structure beyond the red Hut 3, housed Cryptanalysis of German Army and Air Force ciphergrams. Below is Hut 8, where Naval Intelligence analyzed the decrypts of German naval messages. Here we're looking into a window at the end of Hut 8. Hut 6, at left below, housed cryptanalysis of German Army and Air Force cryptograms. Alan Turing lived in "the Cottage", seen here. The Cottage sits beside the carriage house and the Mansion. The turret with the windows at top led to his room. Dilly Knox, John Jeffreys, and others lived in other rooms of the Cottage. While GCHQ, the Government Communications Headquarters, had mostly moved out in 1946, they continued to use this training facility into, I think, the early to mid 1980s. It looks like a medium-sized battered elementary school. This is up the rise beyond H block. H Block, seen here, is in worse shape. Below we are walking from H Block down between D Block (visible to the left) and E Block (off to the right). D Block housed Traffic Analysis, Intelligence Exchange, and Naval Intelligence. Here we see B Block at left leading into E Block. B Block housed Air Intelligence and Naval Intelligence. E Block handled communication in and out of Bletchley Park. It also housed BRUSA, the British-U.S. liaison program. B Block now contains several exhibits, including the reconstructed German wireless station seen here. The British Y Service intercept operators used these HF receivers to collect the intercepts attacked by the cryptanalysts. In June 1941 the German military began using the Lorenz SZ or Schlüsselzusatz (cipher attachment) systems. These attached to a teletype machine and applied a Vernam stream cipher to its output data stream. The Lorenz SZ machine had twelve rotors, they generated a pseudorandom key stream which was exclusive-ORed with the 5-bit-per-character bit stream generated by the teletype machine. The output cipher stream modulated a frequency-shift keying radio transmitter. Below is a Lorenz SZ42 machine at left, and the associated Lorenz T32 teleprinter at right. The Lorenz machines were built in small numbers, needed only at major command sites, and so only a few survive today. The Lorenz SZ system was used beginning in the middle of 1942 for communication between German High Command in Berlin and the Army Commands across Europe. On 30 August 1941, the Army Command in Athens sent a Lorenz-encrypted 4,000 character message to Vienna. It was not received correctly, and Vienna sent a cleartext message asking for a retransmission. The operator in Athens made a number of changes in the message, including replacing some words with abbreviations and therefore slightly shortening the message. Then the big error: the second message was transmitted with the same key settings. The cleartext request from Vienna alerted Bletchley Park to pay special attention to the pair of messages. The error in Athens provided a depth that allowed Bletchley Park staff to discover the two plaintexts and the key. From then through the following January cryptanalysts worked to reverse engineer the Lorenz SZ machine design. 
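The "depth" created by that Athens retransmission can be illustrated with a toy Vernam cipher. The sketch below is not the real SZ40/42 wheel logic; it simply uses a seeded pseudorandom key stream to show why sending two different messages under the same settings is fatal: XORing the two ciphertexts cancels the key stream completely, leaving only the XOR of the two plaintexts for the analyst to tease apart. The plaintexts here are invented.

```python
import random

def keystream(length: int, settings: int) -> bytes:
    """Stand-in for the SZ machine's wheels: a deterministic pseudorandom stream."""
    rng = random.Random(settings)
    return bytes(rng.randrange(256) for _ in range(length))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"ATTACK AT DAWN ON THE NORTHERN SECTOR"      # first transmission
p2 = b"ATTACK AT NOON ON THE SOUTHERN SECTOR"      # the edited retransmission
key = keystream(len(p1), settings=1941)            # same settings reused: the fatal error

c1, c2 = xor(p1, key), xor(p2, key)
assert xor(c1, c2) == xor(p1, p2)                  # the key has vanished from the problem
# Guessing part of one plaintext now reveals the other at the same positions:
assert xor(xor(c1, c2), p1) == p2
```

Once both plaintexts were recovered from the real pair of messages, XORing either one against its ciphertext gave Bletchley Park the key stream itself, which is what made reverse engineering the machine's wheel structure possible.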
That reverse-engineering effort has been described as one of the greatest intellectual feats of the entire war. The British had code-named this system "Tunny", and they then built the Colossus computer to break Tunny. This was the world's first programmable electronic computer. It used thermionic valves (called vacuum tubes in American English) to perform calculations and Boolean operations. The first Colossus was operational by December 1943, and ten of them were in operation by the end of the war.

The British cryptanalysts built a functional replica of the German Enigma machine, following the work done by the Poles before them. The goal was to discover the daily setting (or key) for a specific German military network. The device was called a Bombe, and two explanations are given for its name. First, and probably most likely, this was the name given by the Polish mathematicians turned cryptanalysts as they discussed the problem over an ice cream dish known as a bomba. A less convincing claim is that the name had to do with the ticking sound made by the machine as it worked through the possible keys, stopping and ringing a bell when it found a solution.

Here are pictures of the rear of a Bombe:

The front of each Bombe held 108 drums. Each drum simulates a rotor, so a column of three drums simulates one Enigma-equivalent device. Each Bombe then simulates 36 Enigma machines, or, given the point of the Bombe, 36 possible initial settings for a three-rotor Enigma. The U.S. also built Bombes, specializing in 4-rotor systems to attack the German naval Enigma. National Cash Register of Dayton, Ohio, was the main manufacturer for this project.

The Bletchley Park code breaking operation was moved to RAF Eastcote in 1946, where some Bombes were already based. Government Code and Cypher School was renamed Government Communications Headquarters, or GCHQ, in 1952. It remained at Eastcote until 1954, when GCHQ moved to Cheltenham. Today you can visit Bletchley Park; it's open daily. The National Radio Centre is located within Bletchley Park, and the National Museum of Computing is nearby.

How Were the Allies Able to Break Enigma, Lorenz, and other Axis Cipher Systems?

There were minor cryptographic weaknesses in the Axis cipher systems. However, key management is always the most difficult part of using cryptographic systems. Despite the German reputation for careful design and meticulously following procedures, even the German military failed to design a good key schedule, and then it continuously made operational errors. If the German military can't do key management, it must be awfully difficult.

Weakness in the Cipher Design

The cryptographic weakness in the Enigma design is that an output character must differ from the corresponding input character. Put simply, if the ciphertext output at some position is the letter E, then we know that the cleartext input at that position could not have been E. The ciphertext looks highly random, but it does leak some information: character by character, it tells us what the cleartext could not be. However, this is a very minor weakness, and if the Germans had used the system correctly, the Allies could not have discovered the keys and decrypted messages.
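That never-encrypts-to-itself property sounds trivial, but it is exactly what let cryptanalysts position their cribs, the probable plaintext fragments discussed further below. The sketch slides a suspected fragment along an intercept and discards every alignment where any letter coincides with the ciphertext; both strings here are made up purely for illustration, not taken from a real intercept.

```python
def possible_crib_positions(ciphertext: str, crib: str) -> list[int]:
    """Enigma never maps a letter to itself, so any alignment with a match is impossible."""
    positions = []
    for start in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[start:start + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            positions.append(start)
    return positions

intercept = "QFZWRWIVTYRESXBFOGKUHQBAISEZLXP"   # invented ciphertext
crib = "WETTERBERICHT"                          # 'weather report', a classic guess
print(possible_crib_positions(intercept, crib))
```

Each surviving alignment could then be turned into a Bombe "menu" of letter pairings to test; the excluded alignments never needed to be tested at all.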
There were some systematic key design weaknesses, and even more significantly, systematic operational errors. The Nazis had a good cryptographic system, but they did not know how to use it correctly. As far as strength against brute-force attack, the Enigma would have been adequately strong against the technology available to the Allies if used correctly.

The U.S. SIGABA cipher system (also known as the M-134-C, ECM Mark II, CSP 888/889, and ASAM VI) had a much larger key space: about 2^48.4 settings as commonly used, and about 2^95.6 as used on the POTUS-PRIME link between US President Franklin Delano Roosevelt and UK Prime Minister Winston Churchill. There was concern about SIGABA technology falling into Axis hands, not because it would provide any advantage for breaking the SIGABA cipher (see Kerckhoffs' principle) but because it might suggest improvements in the Enigma system.
- "SIGABA: Cryptanalysis of the Full Keyspace", Mark Stamp and Wing On Chan, Cryptologia, vol 31, pp 202-222, 2007
- "Cryptanalysis of SIGABA", Wing On Chan, Master's thesis, San Jose State University, 2007
- "Cryptanalysis of the SIGABA", M Lee, Master's thesis, University of California Santa Barbara, 2003
- "La cryptographie militaire", Auguste Kerckhoffs (1835-1903), Journal des sciences militaires, vol IX, pp 5-38, Jan 1883, in which he proved that if the security of a cryptosystem relies on the secrecy of the algorithm, it is weak and could be improved.

Key Design Flaw #1

The first key design flaw was that the Germans specified that no rotor order should repeat within one month. So, that component of their key was not really random, and the number of possible keys decreased during the month. If an attacker can do enough brute-force work to break keys early in the month, or otherwise discover early keys (espionage, operator error), the required amount of work decreases as the month progresses. Constraining the key in this way is as foolish as believing that a random system (throwing dice, flipping coins, observing fairly shuffled cards) somehow has a memory of past events! The probability of each random event is independent of previous events, and so today's key should not be constrained by those of the past one to thirty days.

Key Design Flaw #2

Even more significantly, and still on the rotor order, the Germans specified that no rotor could occupy the same position two days in a row. This also made things much easier for the attackers. Consider the Army / Air Force system, where the rotor order was three rotors out of a set of five. Given free choice of all three rotors, as the Germans allowed themselves only on the first day of the month, you have 5 choices for the first rotor, 4 remaining choices for the second rotor, and 3 remaining choices for the third and final rotor, for a total of 5×4×3 = 60 possible rotor orders. On any day of the month after the first, however, the German rule sharply limits those choices. Label yesterday's rotor sequence as 123 and the two unused rotors as 4 and 5. Listing all 60 possible orders for today and striking out every order that keeps some rotor in the position it occupied yesterday leaves at most 32 allowed rotor sequences rather than 60. Of course, as the month progresses, some of the remaining 32 rotor sequences allowed by this rule would also be disallowed by key design flaw #1 described above, so an attacker has even fewer rotor sequences to test.
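The figure of 32 is easy to confirm by brute force. The sketch below, added here as an illustration, enumerates every order of three rotors drawn from five and discards the ones the German rule forbids; the rotor labels are arbitrary.

```python
from itertools import permutations

rotors = "12345"
yesterday = ("1", "2", "3")          # yesterday's rotor order

all_orders = list(permutations(rotors, 3))
allowed_today = [
    order for order in all_orders
    if all(order[slot] != yesterday[slot] for slot in range(3))   # no rotor keeps its slot
]
print(len(all_orders), len(allowed_today))   # prints: 60 32
```

Applying the same filter day after day, on top of the no-repeats-in-a-month rule, is what let attackers cross ever more rotor orders off the list as each month wore on.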
Key Design Flaw #3

The Germans always used 10 plugboard cables. This decreased the possible key space by a factor of between four and five and significantly simplified an attack by an even greater factor.

Operational Error #1

Recall that the operator was to choose two random 3-character sequences, the indicator setting and the message setting. However, humans are very bad at choosing random sequences. The indicator setting and message setting were often predictable, making a search of the key space significantly easier. Operators frequently fell back on repeated letters or simple keyboard sequences rather than anything genuinely random. Predictable settings were referred to as "Cillies", because one German cipher clerk with a girlfriend named Cilia kept using letters taken from her name.

Operational Error #2

The Germans used predictable cleartext sequences, providing "cribs" or probable known-plaintext attacks. Part of this was the lexicon, the terms used in the language. Long distinctive words appeared frequently: Obergruppenführer, Untersturmbannführer, Obersturmbannführer, and other military ranks, plus common military terms including Kriegsgefangene (prisoner of war) and gefangengenommen (captured).

Another part was the predictable structure of routine messages. For example, weather stations encrypted a message every day with an identical beginning, such as the German for:
TODAYS WEATHER IN THE BALTIC SEA WILL BE ...

Similarly, Italian forces in North Africa used a standardized surrender message sent back to Italian headquarters immediately before surrender, sent in Italian, of course:
WE WILL DEFEND THE SACRED SOIL OF AFRICA WITH OUR LAST DROP OF BLOOD X LONG LIVE THE KING X LONG LIVE IL DUCE X
This not only provided a crib, but a message of this length from a field unit very likely contained the ciphertext version of this standard message. This meant two benefits for Allied forces: a crib for cryptanalysis, and a clear indication that the sending unit was ready to surrender immediately and could be captured without a fight. HFDF (High-Frequency Direction Finding) was crucial to figure out just which unit had sent it.

Similarly, the daily garrison medical report was useful. Hemorrhoids are a common ailment in desert service, but the Italians had no code word for that, so it had to be spelled out, LE EMORROIDI in Italian, providing a likely crib. [For the Italian details see The Sigint Secrets: The Signals Intelligence War, 1900 to Today, Nigel West, 1988, ISBN 0-688-07652-1, pg 231.]

Operational Error #3

Stronger systems were deployed incrementally. When the Navy first introduced the four-rotor Enigma, this meant an additional three rotors with unknown wiring. But the same message would be encrypted both with the four-rotor and three-rotor systems, so those vessels and stations without the four-rotor system could receive the message. Since the three-rotor system was broken, this allowed the discovery of the wiring of the three new rotors in the naval 4-rotor system.

Operational Error #4

Identical messages were encrypted with multiple keys. When submarines surfaced, they typically requested the transmission of all messages sent while they had been submerged. These old messages would be encrypted with this day's key. By matching messages based on character length, this frequently provided a match to a broadcast message that had been broken previously. This provided a complete known-plaintext attack against this day's keys.

Operational Error #5

Some messages were also sent by insecure means, providing known-plaintext attacks. A prominent example was the provision of much information, including military cables, to the Japanese military attache in Berlin. General Hiroshi Ōshima then encrypted these messages for transmission to Tokyo using a compromised Japanese cipher. Since the Japanese system had been broken, the message could be read from the Berlin-to-Tokyo link.
This provided the message itself, which might be of enormous use on its own, but it also provided a known-plaintext attack to more easily discover that day's Enigma key. US General George C. Marshall described Ōshima as "our main basis of information regarding Hitler's intentions in Europe." And, as was only declassified in 1995, the break extended beyond Ōshima's reports to Japanese diplomatic traffic in general. That gave the Allies an insight into Japanese strategic thinking. This included the fact that the Emperor had no intention of surrender and was not bothered by the prospect of millions of Japanese civilians dying in a suicidal defense of the home islands. This made the two atomic bombs the far less deadly choice for forcing the end of the war.

Operational Error #6

Some messages were sent in response to Allied actions, providing partial chosen-plaintext attacks. The British referred to this as "gardening". The Allies would do something at a specific time and place, like an artillery barrage, the laying of naval mines, or an obvious surveillance overflight, in order to get a German unit to report it using that day's code. This revealed code words for specific locations, navigational references, and the cryptographic keys. Also see how the U.S. enticed the Japanese into reporting on a non-existent problem with water desalination systems on Midway Island, in order to determine the Japanese code word for that island.

The Allies were careful not to make too much use of the ULTRA data, so as not to encourage the Germans to suspect their system had been broken. But even when the U-boat operations in the Atlantic were largely shut down, Dönitz refused to believe that it was because of cryptanalysis; he continued to believe that any communications insecurity was due to espionage. The Allies interrogated a number of German cryptologists after the war. Several of them said that they knew that the Enigma system had vulnerabilities and had reported this up the chain of command. However, it was thought that the Allies would not expend the effort needed to break the system.

Japan's Imperial Fleet used codes rather than ciphers, such as the several JN25 code variants. Their Foreign Ministry used cipher machines, called PURPLE by the Allies. Japan made similar mistakes, including, perhaps most significantly, the belief that their system was unbreakable.

The Imitation Game

The 2014 film The Imitation Game was loosely based on the book Alan Turing: The Enigma by Andrew Hodges. I think that as movies go, it did a reasonable job of showing the history. It correctly showed some of the technical details, like the use of partial known-plaintext "cribs" and the "menu" diagrams. Also some relatively minor details, like showing intercept operators writing down the actual Morse code groups we hear on the soundtrack. My biggest complaints about historical inaccuracies are:
- Portraying Alan Turing with severe psychological and social problems, surrounded by conservative, almost stuffy, co-workers who barely tolerate him. Turing wasn't nearly the most eccentric person at Bletchley Park.
- Conversely, portraying Alastair Denniston as the Frank Burns of British military intelligence. Bletchley Park had a fairly academic atmosphere with several rather eccentric people in crucial roles.
- Showing four young workers making major decisions about naval missions.
Interception, decryption, translation, and analysis were separate stages, and the resulting intelligence was sent to high-ranking military officers and political leaders for decision-making. Other than that, I thought it was good, and more accurate than many supposedly historical movies. Others have compiled significantly longer and more detailed lists of complaints about its historical accuracy.
<urn:uuid:1a499b14-f1b9-409d-b033-644f94d190e3>
CC-MAIN-2022-40
https://cromwell-intl.com/travel/uk/bletchley-park/Index.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00032.warc.gz
en
0.956492
6,285
3.859375
4
A recent report by The Economic Times (CISO) revealed that more than 600 twitter accounts, emails and websites pertaining to the government of India were compromised in the last five years. CERT-In, the official government body that notifies the affected entities along with remedies, have also confirmed these shocking statistics. The ‘Digital India’ drive, announced by the Government of India, has received lots of applause and appreciation from both in India as well as the rest of the world. In order to fructify this grand initiative, it is important to safeguard the messaging and communications infrastructure though. Still, many entities have suffered cyber attacks due to inadequate cyber security measures, poor IT security policies, lack of knowledge from end-users among many other shortcomings. So what steps can be taken to ensure that similar statistics, as mentioned above, do not recur? In the information age, social media plays a pivotal role to stay connected – both personally and professionally. Socialization is no more a ‘trend’ – it has turned into a ‘necessity’. Today, not only government organizations but organizations and businesses of all shapes and sizes count on different social media platforms to stay connected with the audience to disseminate information. There are millions of Facebook, Twitter, LinkedIn and other social media accounts across the globe. How do government officials ensure that the social media accounts and the admin accounts are safe from misuse? Likewise, government organizations and government authorized agencies require intensive monitoring over hundreds or maybe thousands of emails coming in the inbox every now and then. Unless verified and secured, there could be threats of unwanted access through phishing emails. ARCON has discussed the increasing vulnerabilities associated with government organizations in an exclusive whitepaper on essential IT safeguards in government organizations. ARCON has discussed the IT security limitations and how to reinforce end-user behaviour monitoring to keep anomalous user profiles at bay. In this backdrop, cyber threats on social media applications are not just restricted to government organizations. For instance, a twitter account of a government department could definitely be at risk if there are multiple users accessing the account for sharing different updates. Moreover, shared passwords always pose an additional threat of unauthorized access and hacking. Post analysis, it has been evident that organizations lack answers to the below: - Are the passwords of the accounts frequently changed or randomized? - Is there any dedicated access control mechanism to monitor social media accounts? - How many shared credentials are used to manage and control social media accounts? - Do the employees have a casual attitude towards following IT security policies? How to address the Risks of Emails and Social Media Accounts Compromise? During this crucial juncture of digital transformation happening everywhere, ARCON would like to discuss some basic measures to ensure comprehensive security of the official email accounts and social media accounts. Security of Email Servers: Every email coming to the admin inbox of critical government departments should be verified by DNS (Domain Name System) server and MAC (Medium Access Control) sublayer. DNS validates the authenticity of the IP address and MAC ensures secure transmission of the data through emails. 
Secondly, deployment of a robust Privileged Access Management (PAM) solution can ensure email servers are protected from unauthorized access. SMTP (Simple Mail Transfer Protocol) servers, especially in government departments, require multi-level authentication so that suspicious emails (mainly phishing emails) are restricted and deleted immediately after detection.

Protection of Social Media Applications: Many organizations engage third-party agencies to manage and control their social media activities. In the case of government authorities, social media accounts are even more vulnerable to unauthorized access. Due to the lack of a centralized governance framework and poor end-user visibility, critical data assets are always at risk from malicious insiders or unknown third-party users. With solutions such as Endpoint Privilege Management (EPM), government organizations can enforce a centralized policy for all social media admins and grant access to social media applications on a just-in-time basis.

In the age of digitalization, data security, data integrity and data privacy are the top priorities. Cyber criminals always look to target the communications infrastructure because it offers big rewards. Hence, it is critical to protect emails and social media applications.
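One concrete way to reduce the shared-password problem described above is to stop reusing static credentials altogether and randomize them on a schedule. The Python sketch below is a toy illustration of that idea, not a feature of ARCON's or any other PAM product; the account name and rotation period are invented.

```python
import secrets
import string
from datetime import datetime, timedelta, timezone

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length: int = 24) -> str:
    """Cryptographically random password, never chosen or memorized by a human."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

class SharedAccount:
    """Toy vault entry: the credential is randomized on a schedule, never reused."""
    def __init__(self, name: str, rotation_days: int = 7):
        self.name = name
        self.rotation = timedelta(days=rotation_days)
        self.rotated_at = datetime.now(timezone.utc)
        self.password = new_password()

    def checkout(self) -> str:
        # Rotate automatically if the credential is older than the policy allows.
        if datetime.now(timezone.utc) - self.rotated_at > self.rotation:
            self.password = new_password()
            self.rotated_at = datetime.now(timezone.utc)
        return self.password

twitter_admin = SharedAccount("dept-press-office-twitter")
print(twitter_admin.checkout())
```

In a real PAM deployment the vault would also log who checked the credential out and when, which is exactly the end-user visibility the article argues is missing in many government setups.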
<urn:uuid:3edf2494-fbde-4d35-a5b1-46373a1fce9d>
CC-MAIN-2022-40
https://arconnet.com/risks-to-watch/unauthorized-access-its-not-just-about-databases/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00032.warc.gz
en
0.92777
885
2.53125
3
Are there key differences between information assurance and cybersecurity? Many references to the two terms mix them up, to the point where people believe they mean the same thing and treat the concepts as interchangeable. However, there are fundamental similarities and differences between information assurance and cybersecurity, as described in this article.
What is Information Assurance?
Information assurance is the practice of ensuring that information systems perform as needed, prevent unauthorized access and remain accessible to legitimate users. The term refers to the technical and managerial measures designed to ensure the confidentiality, integrity, control, availability, and utility of information and computer systems. Techopedia highlights the five pillars that encompass information assurance: integrity of information, availability, authentication, confidentiality, and nonrepudiation. Information assurance processes protect computer systems by maintaining these five system qualities.
Information assurance has been around much longer than cybersecurity, effectively giving the field a broader scope. A post published by Lewis University states that information assurance is closely linked with risk management. A business identifies its information assets and the systems and applications that store, process, and communicate them. Information assurance professionals then estimate the susceptibility of those assets to cyber threats and attacks, including disclosure, modification, or disruption that result in loss of confidentiality, integrity, or availability. The information assurance process then quantifies the effect of such unwanted occurrences on the assets and guides the organization in devoting resources, personnel, and best practices to protecting them.
Putting data protection controls in place is only the start of information assurance. The practice calls for adopting various assessment frameworks and security audits, which help a business understand how well its controls mitigate risks. Robust information assurance involves planning, assessment, information risk management, governance, and the use of cybersecurity measures to protect information assets.
What is Cybersecurity?
The United States FEMA's Ready.gov defines cybersecurity as the process of preventing, detecting, and responding to security breaches and cyber attacks. Such attacks can have wide-ranging effects at the individual, organizational, community, and national levels. Cybersecurity encompasses the technologies, processes, and practices that individuals and organizations design and develop to protect information assets, including networks, devices, programs, services, and data, from attack, damage, or unauthorized access.
In cybersecurity, enterprises analyze and determine the risk levels of potential threats to computer networks. An important part of a cybersecurity expert's work is preventing cyber attacks on information assets. Digital Guardian further states that an effective cybersecurity strategy incorporates elements such as network security to protect the network from intrusions and data security to protect sensitive information from unauthorized access. Other cybersecurity components include application security, to constantly update and test applications for safety, and endpoint security, to protect system and data access from the devices used to reach them.
Identity management is also essential for understanding the access that users and entities have within an organization. Cybersecurity also covers database and infrastructure security, cloud security, mobile security, restoration of information systems, business continuity planning, and physical security. That is to say, cybersecurity professionals focus primarily on defending the infrastructure of computer systems, including computers, networks, and communications, and secondarily on protecting the information and data within the cyber domain. Strictly speaking, then, cybersecurity does not include protecting information assets outside the cyber domain, which information assurance covers.
How Cybersecurity Relates to Information Assurance
An article published by the University of San Diego asserts that information assurance and cybersecurity both involve risk management and the maintenance and safeguarding of the high-tech information systems used across different industries to store, process, and distribute crucial data. Chiefly, information assurance and cybersecurity both consider the value of information. The two fields prioritize different forms of information, physical and digital, based on its criticality: the more crucial the information, the more layers of security and assurance it receives.
From the descriptions above, cybersecurity can be considered a subset of information assurance, which encompasses higher-level concerns such as strategy, law, policy, risk management, and training. Information assurance is a broader strategic initiative comprising a range of processes, including cybersecurity activities. An organization achieves its information assurance objectives partly by implementing cybersecurity measures that protect information and functional computer systems, including networks, online services, critical infrastructure, and IoT devices.
Both fields employ tools, practices, and strategies such as firewalls, user education, penetration testing, endpoint protection tools, and other high-tech systems to eliminate threats and maintain desired service levels. There is also an overlap between the two fields in terms of work qualifications: both require a thorough understanding of the security issues and technologies involved in protecting information assets, and information assurance managers also include cybersecurity controls in their roles.
Information Assurance vs Cybersecurity
The term information assurance has spread from government use into common parlance, where it is often used synonymously with cybersecurity. The two terms nevertheless have distinguishing differences. How does information assurance differ from cybersecurity?
- Information assurance is an old field that existed before the digital age. Cybersecurity, by contrast, is a newer field that keeps pace with rapidly changing technology and an ever-evolving threat landscape.
- Information assurance processes focus on protecting both physical information assets (such as data on hard drives and personal computers) and digital ones. Cybersecurity concentrates on safeguarding digital information assets and managing the risks that target them.
- Information assurance is more strategic in nature, dealing with policy development and implementation to keep information assets secure. Cybersecurity deals with the practical reality of setting up security controls and tools to keep information safe.
- A cybersecurity career requires strong technical skills and typically a cybersecurity degree. Other relevant qualifications for information security professionals and chief security officers include a master's or bachelor's degree in information technology, computer science, or computer engineering, and a computer network architect also makes a strong candidate for a cybersecurity specialist role. Information assurance ordinarily draws on many of the same academic programs as cybersecurity, and may also involve an information assurance degree with additional coursework in data analysis, cryptography, and data protection.
- An information assurance professional protects physical data, digital information, and electronic hardware by instituting, updating, and maintaining the policies and controls that protect valuable assets. Cybersecurity experts, managers, and information security analysts, on the other hand, concentrate on defeating cyber adversaries targeting digital information and information systems.
It is essential that the terminology used in the IT world clearly reflects what we do. Comparing and differentiating information assurance and cybersecurity helps avoid conflict, inefficiency, violated expectations, and gaps in the measures, processes, and technologies that government agencies and organizations implement and maintain to meet the goals of both fields. By understanding the similarities and differences between the two, individuals can better select the educational and career paths that match their passion, skills, interests, and goals.
Finally, information assurance versus cybersecurity is not an either/or choice for protecting an organization and its customers. Businesses handle sensitive and confidential information such as credit card transactions, confidential data, and communications via email, phone, and mail. Information assurance is therefore a necessity, and cybersecurity falls under the umbrella of this practice.
<urn:uuid:962f6524-c72c-46dd-a8f4-553cf076bc93>
CC-MAIN-2022-40
https://cyberexperts.com/information-assurance-vs-cybersecurity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00233.warc.gz
en
0.916776
1,479
3.171875
3
Active content is software that can automatically carry out or trigger actions without the explicit intervention of a user. When you visit a webpage on the Internet, you occasionally encounter videos, animations or polls that are implemented to improve the website's functionality. These are examples of active content. It is entirely possible for hackers to embed malicious scripts into active content, which may install malware onto your device.
What Does This Mean For An SMB?
Your business needs to take proactive measures today: first, to reduce its chances of being hit by ransomware, phishing, or other cybersecurity attacks; and second, to validate that backups and disaster recovery plans are current and functioning in case you are hit with ransomware anyway. CyberHoot recommends the following best practices to avoid, prepare for, and prevent damage from these attacks:
- Adopt two-factor authentication on all critical Internet-accessible services
- Adopt a password manager for better personal/work password hygiene
- Require 14+ character passwords in your governance policies
- Follow a 3-2-1 backup method for all critical and sensitive data
- Train employees to spot and avoid email-based phishing attacks
- Check that employees can spot and avoid phishing emails by testing them
- Document and test Business Continuity and Disaster Recovery (BCDR) plans
- Perform a risk assessment every two to three years
Start building your robust, defense-in-depth cybersecurity plan at CyberHoot.
Source: CNSSI 4009
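For SMBs that also operate their own websites, one widely used control against malicious active content is a Content-Security-Policy header that tells browsers which scripts may run. A minimal, hedged sketch (Flask is used purely for illustration, and the policy string is an example to be tuned per application):

```python
# Illustrative only: attach a restrictive Content-Security-Policy header so
# browsers refuse to run scripts from unexpected origins.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp(response):
    # Allow scripts and other resources only from this site itself and
    # block plugin content; adjust the policy for your own application.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'self'; object-src 'none'"
    )
    return response

@app.route("/")
def index():
    return "Hello, the CSP header is applied to every response."
```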
<urn:uuid:7f792037-9673-4559-9473-384c62db3664>
CC-MAIN-2022-40
https://cyberhoot.com/cybrary/active-content/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00233.warc.gz
en
0.900077
306
2.703125
3
In today's world of hybrid cars and wind farms, consuming energy wisely is on every individual's agenda. Companies and their IT departments should be no different. Statistics aside, there is no doubt that data centers collectively have a huge "carbon footprint" and are among the biggest consumers of energy in every company. You have already heard that it is only a matter of time before companies are forced to make data centers green as part of their effort to reduce their carbon penalty. If you think this issue is going away, think again. Companies have already started planning for data centers built on alternative energy sources, such as solar and wind power.
While it may be a while before those options become commonplace, there is no reason why companies cannot start taking baby steps. Efforts to use current technology and methodologies that reduce the amount of electricity consumed and the amount of heat generated (hence reducing your air-conditioning bills) can be combined with other, subtler efforts that directly and indirectly change the way you think about energy in the data center. A lot of "out of the box" server and storage technologies, such as virtualization, thin provisioning and low-consumption hard drives, give companies a jump-start on this effort, so that when the time comes to really put this into action, you will be ahead of the crowd.
However, for companies to measure their success, they first need to establish a baseline and create a roadmap; i.e., they need to know where to start and what to accomplish, in a prioritized manner. Once a company starts its journey, it can measure its success along the way. The key thing to keep in mind is that this is not an overnight change, but rather a multi-year journey that delivers benefits incrementally as initiatives are accomplished.
A data center energy audit, in short, is a good way to start this initiative. Several companies are beginning to offer this service as part of their core portfolio. The focus of the service is to make data centers as efficient as possible and to provide prioritized, actionable recommendations for reaching that target state. There are many misconceptions and pitfalls in this field, and a wrong move could easily lead to a wasted investment. Starting with such a service should therefore help companies ensure that their investment is made in the right direction. Most data center audits include various proficiency analyses. One that clearly applies to green initiatives is cost proficiency (i.e., a way to identify, consolidate and streamline cost savings). At the same time, a green proficiency analysis provides a "blueprint" for achieving a more efficient, greener IT environment based on industry standards such as those established by organizations like the Green Grid.
Data Center Infrastructure Efficiency and Power Usage Effectiveness
The Green Grid defines these two metrics as ways to determine the operational efficiency of a data center. One is the reciprocal of the other, and together they compare total facility power with IT equipment power. Total facility power is the amount of power consumed by the data center as a whole and includes everything housed in it.
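IT equipment power is defined in the next paragraph; once both figures have been measured, the arithmetic behind the two metrics is simple. A quick sketch with made-up numbers (illustrative only, not benchmarks):

```python
# Illustrative only: PUE and DCiE computed from hypothetical meter readings.
total_facility_power_kw = 1500.0   # everything housed in the data center
it_equipment_power_kw = 900.0      # servers, storage and network gear only

pue = total_facility_power_kw / it_equipment_power_kw           # lower is better; 1.0 is the ideal
dcie = (it_equipment_power_kw / total_facility_power_kw) * 100  # higher is better, as a percentage

print(f"PUE  = {pue:.2f}")    # 1.67 for these sample numbers
print(f"DCiE = {dcie:.1f}%")  # 60.0% for these sample numbers
```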
IT equipment power is the sum of all the power consumed by systems, storage, network equipment and any other devices that directly form part of IT in the company and are housed in the data center. It is therefore safe to say that IT equipment power is a subset of total facility power.
These metrics can be used to improve the operational efficiency of a data center, and when used in a normalized manner (i.e., accounting for per-data-center specifics such as location), they can be used to compare data centers against one another and establish a baseline. If a company has multiple data centers, these values indicate which are the most efficient and which are not. They also help in determining the impact of the other green initiatives mentioned above. For example, a virtualization initiative can reduce IT equipment power consumption, thereby improving these metrics.
Data Center Space Optimization
While this metric is open to interpretation, it is an important measure of "how full" the data center is. The lower the value, the more inefficient the data center, because it implies the company is paying to cool and light a lot of empty space. DCIE (data center infrastructure efficiency) and PUE (power usage effectiveness) numbers should indirectly lead you to this conclusion, but keep in mind that those values are not a function of physical space. Allowing a certain amount of space for growth, companies should strive to keep data center occupancy at healthy levels. Too many data centers with low space utilization should be immediate grounds for data center consolidation.
The effort should not stop there. Rack utilization and efficiency are also important, and they go hand in hand with how the data center has been designed for rack placement (to optimize cooling and power). Rack utilization should aim for the highest average rack unit (RU) consumption in the data center. So if a rack holds 44 rack units, the average consumption should be as close to that theoretical maximum as possible; the higher the overhead, the lower the efficiency. If the overhead is high, examine why. Is it due to cooling? Poor cable management or rack rails? Or simply poor planning of how and where to place systems?
Another indirect interpretation of the DCIE and PUE metrics is the efficiency of the data center infrastructure itself. Cooling tends to have the biggest impact, as the biggest "emission" from IT equipment is heat. If the cooling infrastructure is not well matched to the data center profile (in terms of heat generation, air flow in and around the racks, and the placement of cooling units), the company is generally at high risk of being either under-protected or overpaying. The quality of the equipment used is a key element here: older equipment tends to consume more energy, so data center operations should include life cycle management of facilities equipment as well.
Another key area of data center infrastructure is standby power. The UPS (uninterruptible power supply) and standby power-generating capabilities should be matched to the data center's power consumption requirements. The amount of time the data center needs to stay on standby UPS power, and subsequently on generator power, should be dictated by the company's DR strategy and RPO/RTO requirements.
If the company has a warm standby or active disaster recovery facility, it can afford smaller standby power reserves, thereby minimizing ongoing consumption.
Cable management can present another big challenge in a data center. An efficient data center should be designed to minimize wastage. Wastage often comes from "home run" cabling, because such cables have higher failure rates due to human accidents; when a cable fails, the old run typically stays in place while a new one is pulled to replace it. Over time this adds up to a lot of wasted copper or fiber.
IT Equipment Efficiency
An article on data center efficiency cannot be complete without a discussion of the efficiency of the equipment that consumes so much of the data center's resources. There is no doubt that every company should examine and implement some form of systems virtualization. This article does not deal with the various types of systems virtualization; it is enough to say that many different products exist and that an effort should be made to determine which option best suits a company's needs. The goal of virtualization should be simple: to reduce the physical server footprint in the data center and therefore the amount of energy consumed.
However, the buck does not stop there. Newer storage technologies such as thin provisioning allow companies to improve storage utilization, reducing the amount of idle, spinning disk that is deployed. Similarly, technologies such as MAID (massive array of idle disks) ensure that disks can be spun down (and powered off) when not in use. Storage tiering ensures that not all data is stored on the fastest disks available (faster generally means more power consumption). Explore solid state drives; they are here to stay, and their prices will only go down.
What about storage reclamation? If you cannot answer this question, consider this: the average storage actually consumed by a server is around 20 percent of what is allocated to it, and beyond that the space is wasted. If the server no longer needs the storage, is there a way to reclaim it? The answer is yes, and doing so leads directly to better storage management.
Retiring old equipment in a timely manner should be on everyone's list of things to do. A data center audit, performed periodically, should expose equipment and servers that are no longer in use. Such equipment should be powered off and hauled off the data center floor as quickly as possible.
IT budgets are not increasing, but companies face ever-increasing demand and the need to support a growing business and new initiatives. A green initiative should not be viewed as something separate, but rather as one of the core drivers for initiatives such as streamlining IT infrastructure and reducing operational costs and/or capital expenditures. Being able to measure the impact of this driver before and after is the right way to get started.
Ashish Nadkarni is a principal consultant at GlassHouse Technologies.
<urn:uuid:23a27592-3bde-4395-bf1d-18f2d8c8e7e3>
CC-MAIN-2022-40
https://www.ecommercetimes.com/story/keys-to-an-even-greener-data-center-67778.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00233.warc.gz
en
0.949962
1,988
2.515625
3
SQL Server Standard Uses All Available Memory
When using SQL Server Standard, SQL Server uses all or most of the available memory on the server. In order to run databases as fast as possible, SQL Server caches as much of the database and its queries in memory as possible. SQL Server does this by design in order to increase performance. While this should not cause any problems, if you would like to limit the amount of RAM that SQL Server can use on your dedicated server, please follow the steps below:
1) Log into your server through Remote Desktop.
2) Open SQL Server Management Studio and log in.
3) In Management Studio, right-click on the server and select Properties.
4) Click on the Memory section on the left-hand side.
5) Specify the maximum amount of memory that you want SQL Server to use (1GB = 1024MB).
6) In Management Studio, right-click on the server and select Restart.
Article ID: 501
Created: April 10, 2012 at 7:07 AM
Modified: August 26, 2014 at 9:17 AM
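For administrators who prefer not to click through the dialog, the same cap can be set with sp_configure. A hedged sketch driven from Python via pyodbc (the driver name, connection details and the 4096 MB value are placeholders, not recommendations):

```python
# Illustrative only: cap SQL Server's "max server memory" at 4 GB using
# sp_configure instead of the Management Studio dialog.
# Connection string values are placeholders for your own server.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "Trusted_Connection=yes;",
    autocommit=True,  # RECONFIGURE should not run inside an open transaction
)
cur = conn.cursor()

cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
cur.execute("EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;")

cur.close()
conn.close()
```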
<urn:uuid:7ec3dd05-a49f-4d6f-bb31-22e7c643897f>
CC-MAIN-2022-40
https://support.managed.com/kb/a501/sql-server-standard-uses-all-available-memory.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00233.warc.gz
en
0.799331
355
2.734375
3
In my last post on Small and Medium Business (SMB) I touched on creating unique IDs for your employees, contractors, and services. In this blog I would like to expand on access controls for these IDs by briefly describing passwords and the other authenticators we can use to secure the use of IDs.
What is Authentication
In cybersecurity, authentication is the process of ensuring that an individual using an ID is the person or process they claim to be. Assume we have a group of files that we want Jan from accounting to be able to access. Prior to accessing the files, Jan will need to enter her ID to assert to the system that she is Jan. However, to be secure, Jan should also be required to provide some proof that it is, in fact, Jan trying to access the files, and not someone else who knows her ID. The process of soliciting and checking factors that can verify the identity of an entity using an ID is called authentication, and it is a critical part of controlling access to your systems and data.
Authenticators are what we use to ensure an individual using an ID is who they say they are. The most well-known authenticator is the password. Authenticators can take the form of "something you know" (like a password or PIN), "something you have" (like a token or smartcard), or "something you are" (like a biometric scan of certain aspects of your hand, face, or eye).
Passwords are the most ubiquitous form of authenticator. Weak passwords pose a lot of risk to your company; strong passwords, however, can actually be quite secure, particularly when used in conjunction with another authenticator. In a large enough organization, it is a near certainty that at least one person is using a password that is a combination of their pet's name, their child's birthday, and an exclamation mark. Because of this, I recommend that you develop criteria for passwords that govern their length, complexity, reuse, and age.
The longer your passwords are, the longer it will take for an attacker to break them. You can increase the time it takes to break your passwords by introducing complexity, such as requirements for capital and lowercase letters, special characters (i.e. !,@,#,$,%,^,&,*), and numbers. There are mixed opinions about how long you should allow passwords to remain unchanged: some people argue that passwords should be changed frequently, while others argue that changing passwords too frequently makes users write them down or store them in a similarly insecure manner. I recommend using a secure password generator to generate a 32-character password that includes uppercase, lowercase, special characters, and numbers, and using an encrypted password manager to store it. Because such a password is so long and so complex, I feel you can go longer before changing it. I also recommend that you limit the number of times a password can be reused, as well as the number of password iterations that must occur before any given password can be reused. Furthermore, you should enforce a minimum password age so that people don't change their passwords a bunch of times in quick succession just so they can reuse their favorite password.
Passwords are an important authenticator that can be made stronger by requiring an additional authenticator. It's generally best practice to augment your password (something you know) with an authenticator that is "something you have" or "something you are".
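Before turning to those additional factors, here is a minimal sketch of the kind of long, random password generation recommended above, using only Python's standard library (the length and symbol set are illustrative choices, not mandates):

```python
# Illustrative sketch: generate a long random password with the standard
# library's cryptographically secure "secrets" module.
import secrets
import string

def generate_password(length=32):
    alphabet = (
        string.ascii_lowercase + string.ascii_uppercase + string.digits + "!@#$%^&*"
    )
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # store the result in an encrypted password manager
```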
“Something you have” authenticators include hard tokens that look like key fobs, and soft tokens that run on your phone and let the user prove possession by entering a one-time code they generate. Many applications are preconfigured to use a variety of hard and soft tokens. Another strong type of “something you have” authentication relies on chips that store certificates and transmit encrypted keys to authenticate the user. This functionality can be found in smart cards, keys that are inserted into computers, and NFC devices.
The other authentication method that can augment passwords is “something you are,” which generally refers to biometrics. Common biometrics include fingerprint scans on mobile devices, retina scans and facial recognition on computers and mobile devices with forward-facing cameras, and palm scans on physical access systems. Biometric authenticators have issues with false positives and negatives, so be sure to check the error rates of the biometric you would like to use to make sure it is within the tolerances for your purpose.
Thank you for reading this installment of my blog for small and medium businesses. Please check back next month when I'll discuss configuration and patch management.
<urn:uuid:40c67127-4402-49b9-8352-5784051305bd>
CC-MAIN-2022-40
https://www.cybergrx.com/resources/research-and-insights/blog/smb-security-series-why-you-need-authentication-cybergrx
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00233.warc.gz
en
0.925743
1,035
3.1875
3
What is a virus signature? Earlier in the year, we discussed the different types of malware, and one of the most prevalent is a virus. To recap, a virus infects a legitimate program such as a Microsoft Word file to spread and replicate itself, and ultimately to perform some nefarious act such as deleting files or sending out spam email. But how does one detect a virus? The simple answer, and the most common way, is to produce a "virus signature" and then search a computer for that signature. If the signature is found, the infected file or program is "cleaned"; in other words, the offending code is removed. Most people will be familiar with this procedure because it is exactly how anti-virus software from McAfee, Sophos, Norton and others works.
A virus signature is best thought of as a sort of "fingerprint" of the virus. It is a set of unique data, or bits of code, that allows the virus to be identified. The challenge, of course, is to identify these signatures before the virus can do too much damage. Anti-virus companies must marshal considerable research resources to keep up with (or at least not fall too far behind) malware developers. They use a variety of techniques to find signatures, including honeypots, which we have discussed in the past.
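To make the idea concrete, here is a deliberately simplified sketch of signature matching (the signature names, byte patterns and paths are invented, and real anti-virus engines are far more sophisticated):

```python
# Illustrative only: the core idea behind signature scanning, reduced to a toy
# example that looks for known byte patterns in files under a directory.
from pathlib import Path

SIGNATURES = {
    "Example.Virus.A": b"\xde\xad\xbe\xef\x13\x37",  # hypothetical byte pattern
    "Example.Virus.B": b"MALICIOUS_MARKER_42",        # hypothetical byte pattern
}

def scan_file(path):
    data = Path(path).read_bytes()
    return [name for name, sig in SIGNATURES.items() if sig in data]

def scan_tree(root):
    for p in Path(root).rglob("*"):
        if p.is_file():
            hits = scan_file(p)
            if hits:
                print(f"{p}: matched {', '.join(hits)}")

# scan_tree("/home/user/downloads")  # hypothetical directory to scan
```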
<urn:uuid:53ed4d74-b574-449a-b98e-a0718b22c15e>
CC-MAIN-2022-40
https://accoladetechnology.com/what-is-a-virus-signature/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00233.warc.gz
en
0.947064
268
3.484375
3
By Amanda Dudley: They say a stitch in time saves nine; with emergency guides, that stitch can save billions of dollars. The very nature of an emergency is that it occurs unexpectedly, and the severity or impact of a situation may change at any moment. In the workplace, adequate preparation and planning are essential for preventing harm to people, the environment, and property. Writing an emergency guide is a proven strategy for managing disasters. Written emergency guides can save employees' lives in dire situations, and in the case of property loss they can facilitate business continuity and disaster recovery. This article analyzes how emergency guides make a difference in extreme situations.
The best response to a disaster is not to allow it to happen in the first place. But with events like natural disasters, there is little we can do to stop mother nature. Unavoidable emergencies are the reason contingency plans are necessary in the workplace. Rebound, avoidance, responsiveness, preparedness, and management are the five phases of a contingency plan. Under the preparedness phase, organizations of all sizes and sectors must set up systems to handle workplace emergencies professionally and in a timely manner. Companies can engage an essay writing service to write practical emergency guides for staff, with clearly outlined disaster responses. These guides will prepare employees for crises like earthquakes and direct their reactions as first responders until the earthquake authority arrives.
Why does every company need an emergency guide?
An emergency guide teaches employees how to initiate safety procedures when a calamity happens. It covers safety systems, such as installing shutters and constructing barriers at and around the workplace. When followed correctly, these guides can reduce damage to buildings, inventories, and equipment, protect people, and make it easier for them to return to their normal activities as soon as possible.
How can emergency guides by companies save lives?
1. Emergency guides help staff make better decisions during disasters without waiting for commands
An emergency often forces us to think on our feet and make several significant decisions within a short period. These emergency decisions must be swift and decisive to be effective. Yet the standard chain of command may not be available due to the constraints of time and circumstance, and this can be a problem when there is no leader around. Even if leaders are present, the increased stress can easily overwhelm them, leading to heavy losses for the company. Luckily, emergency guides take away the need to wait for orders from higher-ups before making a decision. They direct employees on what to do in specific situations, shortening their reaction time and raising their chances of survival.
2. Emergency guides reduce human errors during emergencies
To effectively manage a disaster, it is necessary to create an emergency guide. When seconds count, it is essential that your workers know they always have handy solutions for any given situation. If people know they have the right resources to manage a problem, they don't give in to fear, which increases their likelihood of making better choices. Emergency guides show people the best ways to handle crises. Since a lot of research, testing and revision goes into writing an emergency guide, there is a good chance that human errors were considered and safety measures around them well articulated.
So in situations where a worker might have made a bad call while panicking, simply following the written procedure can save the day.
3. Emergency guides help company workers evacuate quickly during disasters
Companies should inform staff and partners of what to do in the event of a fire and where to seek shelter during an earthquake, a tornado, or forced exposure to other dangers. People should be ready to abandon their offices and seek refuge at a muster point or in general shelters. When companies take these safety steps, their workers know where to turn for basic medical needs in an emergency.
Well-written emergency guides can save businesses from lawsuits and insurance issues. But most of all, these guides can direct workers to safety, especially when they prioritize human life. The contents of an emergency guide, such as evacuation procedures during a fire, the strongest and weakest parts of the building, emergency exits and detailed first-aid steps, can save lives.
Emergency guides are crucial determinants of how well a company reacts to disasters, lives through them and bounces back. It is therefore essential that everyone gets proper training and understands what they should do when the chips are down. Everyone who runs a company should ideally invest in emergency guidelines. These written lifesavers will keep workers safe, facilitate the process of saving lives during difficult situations and security breaches, and prevent loss of life and property.
About the Author: Amanda Dudley is a writer and a lecturer with a PhD in History from Stanford University. When she is not lecturing and helping students with complex assignments, she works as a part-time essay writer, providing top-quality essay writing services and academic projects. An efficient writer, she delivers projects on time, ensuring that her clients are satisfied and content.
<urn:uuid:658a8562-59ff-4008-ac77-64ca345c1a48>
CC-MAIN-2022-40
https://continuityinsights.com/how-writing-emergency-guides-by-companies-helps-in-urgency-three-points/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00233.warc.gz
en
0.955499
1,005
2.78125
3
Security researchers from three universities have found seven types of vulnerabilities that affect the iOS mobile operating system. These issues are related to the sandbox feature and include exploits that could be used to mount a variety of attacks against iPhone or iPad users.
The iOS Sandbox Has Become a Security Minefield
The sandbox feature of Apple's iOS mobile operating system (used on iPhone and iPad devices) has been found to contain numerous security issues that can be exploited against victim devices. The discoveries were made by researchers from three independent universities. The sandbox allows users and administrators to constrain malicious or third-party applications installed on the mobile devices. The discovered flaws can allow third-party developers to perform multiple unauthorized actions on host systems. Examples include exploiting privacy settings, looking up location search history, accessing system file metadata and others. The problems are severe, as some of the issues also allow apps to block access to specific system resources. The tested code runs on recent versions of the operating system, including 9.0.2.
The discoveries were made in a proof-of-concept test that aimed to validate some of the claims about the heightened security of Apple's closed-source system. The researchers created a tool named "SandScout" that extracted the sandbox profile and reverse engineered it into a readable form. By doing this, they were able to build a platform that performs automated queries on the code and scans for possible abuse scenarios.
The security implications are devastating for a target device. Not only can third-party applications bypass the security measures, they can also perform other malicious actions such as consuming disk storage space, obtaining profile information and media library files, and more.
Full information will be published in an upcoming paper titled "SandScout: Automatic Detection of Flaws in iOS Sandbox Profiles". The paper will appear at the ACM Conference on Computer and Communications Security in Vienna this October. Until it is published, the team does not want to reveal many details, as they are working with Apple on fixing the issues.
<urn:uuid:0ef1e782-008a-4092-ae86-3a38aa047522>
CC-MAIN-2022-40
https://bestsecuritysearch.com/security-experts-find-plethora-issues-ios-sandbox-feature/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00233.warc.gz
en
0.949916
432
2.5625
3
Why Do I Need A Surge Protector?
Utility power supplied to electrical outlets is sometimes inconsistent. The short-duration voltage surges or spikes that occasionally occur can damage components in electronic devices such as computers and workstations. In addition to equipment damage, irretrievable data loss may occur. In the U.S., the nominal or standard voltage supplied to household and office wiring is 120 volts. A voltage surge or spike can cause electronic components to overheat, either immediately destroying them or causing permanent damage that leads to premature failure.
A surge protector or surge suppressor provides protection against power surges. This device sits in the power circuit between the utility power outlet and the connected electronic equipment. Surge protectors work by diverting excess voltage to ground, allowing only the nominal voltage to travel through the wiring to connected devices. This is accomplished using a variable-resistance component in the surge protector called a metal oxide varistor (MOV). Under normal voltage conditions, the MOV's resistance is high enough that it conducts essentially no current. As the utility voltage rises beyond nominal, the MOV's resistance decreases accordingly, which shunts the unwanted overvoltage to ground and maintains a constant flow of nominal voltage to sensitive electronic equipment.
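As a rough mental model of the clamping behavior described above (a deliberate simplification; the 330 V let-through figure is only a commonly cited example rating for 120 V circuits, and real MOVs have a smooth, nonlinear response rather than a hard cutoff):

```python
# Simplified model: below the clamping threshold the line voltage passes
# through unchanged; above it, the excess is diverted so downstream
# equipment never sees more than the let-through level.
NOMINAL_V = 120
CLAMP_V = 330  # example let-through rating, not a universal value

def output_voltage(line_voltage):
    return min(line_voltage, CLAMP_V)

for v in (120, 170, 400, 6000):  # normal, minor swell, surge, large spike
    print(f"line {v:>5} V -> equipment sees {output_voltage(v)} V")
```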
<urn:uuid:bfb255d8-938e-40ef-ba08-80bbdbe8ff21>
CC-MAIN-2022-40
https://www.cyberpowersystems.com/feature_focus/why-do-i-need-a-surge-protector/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00233.warc.gz
en
0.921072
243
3.578125
4
Little did you know, but your great-great-great-grandparents owned a lucrative mining operation in Nigeria, and a law firm in Lagos has been trying to track you down for the past five years to settle your inheritance. You probably haven't seen an email like this for the past few years, but a quick look in your spam folder will still reveal endless 419 scams. Spam filtering technology has made huge improvements, but just because your inbox isn't flooded with promises of lost lottery winnings doesn't mean you're no longer at risk from a social engineering attack. If anything, these threats are evolving with twists and turns designed to take advantage of the main cause of data breaches: you. As IT systems gain more sophisticated defenses, it is these layer-eight threats that are difficult to defend against.
Dutch industrialist J.C. Van Marken first coined the term "sociale ingenieurs" in the 19th century; he thought society needed engineers who could deal with human factors, not just machines or circuits. But it wasn't until the authoritarian propaganda regimes of the early 20th century that we saw a practical demonstration of suggestive techniques intentionally deployed against the masses.
By the late 20th century, most people had their first experience of social engineering through their email account. Originally this was often a POP3 affair, with email accounts provided by whichever dial-up ISP the user subscribed to and messages downloaded to a local client. Threats were easy for the tech-savvy consumer to identify. Then consumers started to trust and use the Internet for e-commerce and became more likely to enter their address and credit card information online. And now mobile devices have opened new doors for scammers to again prey on the inability of a user to tell the genuine article from a fake.
This year we've seen traditional phishing become more sophisticated by taking aim at enterprises via Business Email Compromise, or BEC, attacks. And now attackers are going one step further by attempting to use information you've posted on social media to make their communications seem like they are authentically coming from a friend.
A modern social engineering attack needs three things:
- A trigger for the attack. This can come in the form of an email, SMS, iMessage, etc., but the user has to trust, or at least not suspect, that the message is malicious.
- Target synergy. The attacker must be phishing for resources to which the victim readily has access. It's no good asking for Bank of America credentials if the victim only banks at Wells Fargo.
- Cloak and dagger. The spoof must be good enough to fool the victim into giving up the required credentials or information. Ask for too much information and they might become suspicious or simply not have it to hand; ask for too little and it will be of little use to the attacker.
Slow Down and Think
Even the most cyber-savvy individuals can get tripped up by an attack. But users can trip up a threat at any one of these stages simply by being vigilant. Above all, when people use the Internet, they need to slow down and think. Are you on a trusted network in a secure location? Today, even hardware-based attacks that log the keystrokes of people nearby are possible.
While rushing between tasks, people often click a link or download an attachment without a second thought, which can quickly lead to inadvertently installing spyware or a virus. These messages often link to a webpage designed to look like your bank or credit card company. If an email from, say, a financial institution insists you follow a link to change your password because of a recent breach, go directly to the institution's own website instead and see whether the same change is being requested there. Also, many financial institutions now require multifactor authentication, sending you a text with a verification code after you input a password; if this doesn't happen, it could be a sign of a spoofed website.
With password hacks there is often more than meets the eye, since the modern Web surfer typically uses the same login credentials everywhere. It's easy to see why: the average Internet user these days has 27 different accounts, and 37% forget a password at least once a week. In the past year alone Yahoo, Dropbox, and LinkedIn, to name a few, were all hit by attacks requiring their users to create new passwords. This leaves you vulnerable to an across-the-board information breach, where your information from an unrelated account could be used to access your credit card accounts.
No PC Necessary
Modern attacks don't only come from your desktop. An increasing number of attacks focus on mobile phones and tablets. Threats to iOS devices increased 82% in 2013, and Android devices are targeted nearly 6,000 times a day. If your phone is losing battery extremely quickly or you are suddenly burning through your data, it may be a sign of an infection, which could have come from an SMS link or through a downloaded app.
Some threats are less technical and come under the pretense of phone calls from imposter IT help desks, termed quid pro quo attacks. This unsolicited "help" is playing a numbers game: call enough employees saying you will help with the issue they reported, and one is bound to have actually reported an issue recently. Be it a phone call asking you for a password or an email asking you to click on a link to update your software, employees must take care to verify the source asking for this information and question why they might be asking for credentials. Companies can train employees and their IT departments to use features like encrypted email to relay sensitive information.
Malicious attacks that target users by gaining their trust have a long history and are not going away. If Mark Zuckerberg's data can be breached, we can all fall victim. Vigilance is key to creating a culture of data security intelligence where individuals feel empowered to identify a threat.
<urn:uuid:6eeb3268-8459-48e8-a11d-9a38d53d9657>
CC-MAIN-2022-40
https://www.darkreading.com/endpoint/stay-vigilant-to-the-evolving-threat-of-social-engineering
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00433.warc.gz
en
0.954447
1,265
2.734375
3
…about the IP address of your default router (default gateway in IPv4-speak).
It's tough to argue against the fact that most IPv6 addresses are not much fun to type. Being four times longer than IPv4 addresses and expressed in hexadecimal means things can get ugly on the keyboard pretty quickly. For people in the IT field, one very common way of testing IP connectivity is to ping the default gateway, and in IPv4 networks the default gateway is different for every layer-3 network. A thousand bajillion times in my career I have either asked someone, or told someone, what the default gateway is for a host that is having connectivity problems.
In IPv6 the ability (or inability) to ping the default router is just as helpful as it ever was in IPv4. But there are a few apparent problems/challenges:
- IPv6 default router addresses can be painfully long. Something like fe80::21a:a0ff:fe97:9ad3 is not unusual. That's a lot to type, and if the ping fails you will always have to double-check that you didn't fat-finger the address ("Can I really not ping the gateway, or is it that I just can't type?").
- Assuming you have more than one interface in your device (an Ethernet card and a WLAN card, for instance) you will need to specify the interface when pinging the link-local IPv6 address of the default router. Because every interface has a link-local IPv6 address, the system won't know out which interface to send the packet unless you tell it.
- For Windows: ping fe80::21a:a0ff:fe97:9ad3%15 (where %15 identifies the interface number)
- For Linux: ping -I eth0 -c 5 fe80::21a:a0ff:fe97:9ad3 will do it.
By all outward appearances the days of simply pinging your default gateway are gone. But wait! Not so! What may appear at first glance to be an unwelcome addition of complexity can actually be much simpler than anything IPv4 could offer. Consider these facts:
- Unless otherwise configured, most routers will automatically advertise themselves as a default router on each network segment they support. This means that your devices never need to be configured with a default router; they learn it automatically by listening to the router's advertisements. Effort required by IT staff: zero.
- Odd as it seems, the IPv6 address of the default router is usually a link-local IPv6 address. Link-local addresses are only relevant and useful on the local network segment (hence the name). They have no meaning on other interfaces, even when those interfaces are on the same device. This means that the link-local IPv6 address on interface fa0/1 of your router has nothing to do with the link-local IPv6 address on interface fa0/2 of the same router. And this is true even though the addresses are technically on the same logical subnet (fe80::/10). In IPv4, router admins would be getting errors about overlapping networks, but not so with IPv6. The magic here is that the link-local IPv6 addresses on fa0/1 and fa0/2 are not overlapping or conflicting because they are not on the same network. They can even have the exact same IPv6 address and not conflict with each other! That would be outright blasphemy in IPv4! And this is exactly what I suggest you give some thought to doing: make the link-local IPv6 address for every router interface in your whole internal network the exact same address (something simple, like fe80::1111, would do nicely).
- If every router interface has the same link-local IPv6 address, the answer to the "what is the default router's address" question is never again going to be a mystery; it's the same address for every single computer in your enterprise, no matter what network/VLAN they are currently connected to.
In the diagram below the two PCs are on different physical networks (which translates to different logical layer-3 networks as well). Both have a link-local IPv6 address that allows them to communicate with other nodes on their local LAN segment. They cannot communicate with each other using these addresses; they will need a Unique-Local or Global Unicast address if they want to exchange packets. Each device has the same default router… or so it seems. In actuality they each have a different default router; the address just happens to be the same. The node on the left side of the diagram communicates with the link-local address configured on fa0/1 of the router. The node on the right communicates with the link-local address configured on fa0/2 of the router. The fact that both of those interfaces happen to have the same address is not relevant; the addresses are link-local.
Once you come to terms with functionality like this you begin to understand how IPv6 can take networking to a new level while sometimes, just sometimes, making things simpler in the process.
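To make the idea concrete, here is roughly what that could look like in Cisco IOS-style configuration (the interface names and the 2001:db8 documentation prefixes are illustrative, and other platforms use different syntax):

```
! Hedged sketch: give every interface the same well-known link-local address.
ipv6 unicast-routing
!
interface FastEthernet0/1
 ipv6 address 2001:db8:0:1::1/64
 ipv6 address fe80::1111 link-local
!
interface FastEthernet0/2
 ipv6 address 2001:db8:0:2::1/64
 ipv6 address fe80::1111 link-local
```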
<urn:uuid:fa6b5310-2a54-46cb-8b75-6530e0ed7836>
CC-MAIN-2022-40
https://www.itdojo.com/ipv6-means-never-again-having-to-wonder/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00433.warc.gz
en
0.934952
1,098
2.625
3
The very first Olympics in ancient Greece mainly consisted of what we would today call track and field sports: running, jumping, and throwing competitions. Today, of course, the Olympics is a bit different, and so is the sport of track and field, with one of the most profound changes being the emergence of big data and analytics.
Track and field data, therefore, consists of historical and current athletic performance data. The running disciplines are sprints, middle distance, long distance, and relay races. Jumping covers the long jump, triple jump, high jump, and pole vault. Throwing covers the shot put, javelin throw, discus throw, and hammer throw. Notably, this data category may also include cross-country, racewalking, and road running data, since these all sit under the sport of athletics umbrella along with track and field.
The main sources of this data include the World Athletics organization, college-affiliated clubs, Olympic teams, and sport media. Other sources may include medical research facilities and open-source databases. These databases receive regular contributions from amateur athletes who, due to the nature of most of the disciplines in this sport, can participate without joining local clubs when none are available. Wearable technologies have also made a major impact on this sport, with the first wearable devices (fitness watches) ideally suited to collecting running and racing data.
Each discipline generates, and uses, unique data. However, basic data attributes include time of race completion, distance jumped, distance a javelin was thrown, and so on. Other common attributes are the sex and age bracket of participants. One common aspect of this data category is world records; not every sport regularly tracks world records, nor can every sport present world records as examples of the limits of human capability.
Additional sources of data include weather and GPS data. Since most track and field events take place outdoors, weather impacts athlete performance, and weather data provides valuable insight for athletes, coaches, and fans. Further, if a dataset includes cross-country racing, GPS or course-condition data may also appear. Finally, a track and field dataset may also include meet attendance and fan and athlete demographics; companies catering to these population segments can make great use of this information.
Coaches, athletes, businesses, and fans all use this data. Coaches and athletes use it to measure and improve performance. Businesses use it to market products to athletes and fans. Fans use it to discuss athlete performance, place bets, ponder the limits of humanity's physical abilities, and so on.
A quality dataset is accurate, complete, comprehensive, up-to-date, and relevant to the needs of the researcher, and track and field data is no different. For those who want to collect this data to improve their own performance as amateur athletes, a number of wearable athletic performance tracking devices provide personalized data as well as immediate analysis and visualizations. Those who want to track more general information, however, may have a harder time with data collection and analysis. Using a wide range of data sources and ensuring the resulting dataset is clean and standardized will go a long way toward ensuring the data is ultimately of good quality.
The problem is that once you step off the treadmill into the real world, the relationship [between mechanical and metabolic power] changes.
When you head uphill, for example, your stride gets less bouncy and as a result you get less free energy from your tendons. …When you go from level ground to a 10 percent uphill gradient, your efficiency drops from roughly 60 percent to 50 percent. At a steeper gradient of 20 percent, efficiency drops even further, to 40 percent.
Opta Sports provides granular, real-time data and analytics on a range of sports, including data on players, teams, managers, and on-field action. The Opta Sports basketball data set, for example, collects basketball-specific stats like dunks and 3-pointers. Further, while their data feeds, widgets, and other services suffice for most users, Opta also offers expert help in crafting bespoke data solutions.
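For readers assembling their own datasets, here is a small, hedged sketch of the kind of summary analysis described above (the file name, column names and events are hypothetical, and pandas is just one convenient tool):

```python
# Illustrative sketch: summarize a hypothetical file of meet results.
import pandas as pd

# Assumed columns: athlete, sex, age_bracket, event, mark
results = pd.read_csv("race_results.csv")

# Best (lowest) mark per event and age bracket for some running disciplines
running = results[results["event"].isin(["100m", "800m", "5000m"])]
print(running.groupby(["event", "age_bracket"])["mark"].min())

# Average mark by sex for a field event, e.g. long jump distance in metres
long_jump = results[results["event"] == "long jump"]
print(long_jump.groupby("sex")["mark"].mean())
```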
<urn:uuid:770d4dbf-a251-4965-b7d2-8559802fa38d>
CC-MAIN-2022-40
https://www.data-hunters.com/category/sports-entertainment/track-and-field-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00433.warc.gz
en
0.943645
835
3.703125
4
As we all know, human progress has accelerated over the past few decades. The advent of new disruptive technologies has made our lives easier and more convenient. So, what will our work look like ten years from now? What will happen to the projects and jobs we work on today? In this post, I would like to share my thoughts on the newest trends in technology that will change the way global businesses currently operate.
What is Disruptive Technology?
“A technology which creates major (positive) disruption in the way society functions, or a new technology that changes the current way of approaching a particular problem or issue.”
- Cloud computing disrupted the market for mainframe computers, Sun Microsystems workstations, etc.
- 3D printing disrupted the market for offset printing and Xerox plain-paper copiers
- Blockchain is a distributed database that allows for secure, transparent, and tamper-proof transactions, and it may disrupt the market for traditional databases, especially current banking technology
- The introduction of AI in the workplace is just one example of how technology has changed the way humans work. It is now possible for machines to do some of the tasks that were once done by humans, and AI-powered tools are already being used by many companies around the world to make their work more efficient and accurate
Top Technologies for the Next Decade
Serverless Architecture and Cloud Migration
Serverless differs from other cloud computing models in that the cloud provider is responsible for managing both the cloud infrastructure and the scaling of apps. Serverless apps are deployed in containers that automatically launch on demand when called. Serverless technologies feature automatic scaling, built-in high availability, and a pay-for-use billing model to increase agility and optimize costs.
Hyper Automation via AI/ML
Hyper Automation provides a high-speed route to engaging everyone in transforming the business, supported by automating more and more complex work that relies on knowledge input from people. Hyper Automation involves the orchestrated use of multiple technologies, tools, or platforms, including artificial intelligence (AI) and machine learning (ML).
Mixed Reality (VR+AR)
MR brings together real-world and digital elements. In mixed reality, you interact with and manipulate both physical and virtual items and environments, using next-generation sensing and imaging technologies.
Internet of Things (IoT)
The Internet of Things, or IoT, refers to the billions of physical devices around the world that are now connected to the internet, all collecting and sharing data in real time through embedded sensors. Thermostats, cars, lights, refrigerators, and many other appliances can be connected to the IoT.
Cybersecurity is the protection of internet-connected systems, including hardware, software, and data, from cyber threats.
5G is the fifth-generation technology standard for broadband cellular networks, which cellular phone companies began deploying worldwide in 2019.
Data science is the field of study that combines domain expertise, programming skills, and knowledge of mathematics and statistics to extract meaningful insights from data.
Advanced Robotics (RPA)
Robotic process automation (RPA) is a software technology that makes it easy to build, deploy, and manage software robots that emulate human actions when interacting with digital systems and software.
Autonomous or Near-Autonomous Vehicles
An autonomous vehicle, or driverless vehicle, is one that is able to operate itself and perform the necessary functions without any human intervention, through its ability to sense its surroundings.

Next-generation sequencing (NGS) is a massively parallel sequencing technology that offers ultra-high throughput, scalability, and speed. The technology is used to determine the order of nucleotides in entire genomes or in targeted regions of DNA or RNA.

Next-generation storage technology is a state-of-the-art technology that caters to the growing need for improved data storage and management across industry verticals, including banking, financial services and insurance (BFSI), retail, IT, telecommunications, government, healthcare, manufacturing, and others.

3D printing, also known as additive manufacturing, is a method of creating a three-dimensional object layer by layer from a computer-created design. In recent years, 3D printing has developed significantly and can now play crucial roles in many applications, the most common being manufacturing, medicine, architecture, custom art, and design.

A renewable electricity generation technology harnesses a naturally existing energy flux, such as wind, sun, heat, or tides, and converts that flux to electricity.

Finally, we have only experienced a small part of what is yet to come. Technology will continue to play a major role in changing society and the world as we know it:
- The cloud will play an important role in the global accessibility of services and data.
- 5G networks will open a new chapter in the history of IT.
- The future is just as likely to bring opportunities and positive developments as it is to bring disruption and uncertainty.

As technology continues to advance, it is important to consider what skills we need to learn to grow with the changing trends in IT and be ready for future projects in the areas above. Tomorrow is hard to predict and the skills needed for upcoming challenges are constantly changing, so be ready for what's coming.
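As promised above, here is a minimal sketch of the serverless model: a small, stateless function that the platform invokes per event, with scaling and billing handled by the provider. The handler(event, context) signature and the event fields ("bucket", "key") are illustrative assumptions in the style of common cloud function platforms, not any specific provider's API.

```python
# Minimal sketch of a serverless-style, event-driven function.
# Assumptions: the platform invokes handler(event, context) per request and
# bills only for the time the function actually runs; the event fields below
# are hypothetical examples.
import json

def handler(event, context=None):
    # The platform scales instances of this function up and down on demand;
    # there is no server for the application team to provision or patch.
    bucket = event.get("bucket")
    key = event.get("key")

    # Hypothetical unit of work: acknowledge an uploaded object.
    result = {"status": "processed", "object": f"{bucket}/{key}"}
    return {"statusCode": 200, "body": json.dumps(result)}

if __name__ == "__main__":
    # Local smoke test with a fake event.
    print(handler({"bucket": "photos", "key": "cat.jpg"}))
```

The point of the sketch is the shape of the code: a short function triggered by an event, with scaling, availability, and billing left to the platform.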
<urn:uuid:753ad63c-781c-4d39-b76f-9a3ca0659b61>
CC-MAIN-2022-40
https://blog.miraclesoft.com/disruptive-technology-and-its-impact-on-enterprises-and-employees/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00433.warc.gz
en
0.932443
1,139
2.546875
3
To say that email is generally insecure is clearly an understatement, given the number of proven threats and invasive practices that have transpired, mainly due to violation of its original intent. Email messages have their intended recipients, and when some other party is able to eavesdrop, certain risks arise. These include identity theft, invasion of privacy, modification of messages, false messages, repudiation, replay of messages, and unprotected back-ups. An email is sent using the Simple Mail Transfer Protocol (SMTP): the sending client submits the message to an SMTP server, which relays it toward the recipient's mail server. When the recipient's SMTP server cannot be contacted, the sender's server will try to contact back-up servers when they are available, and it will keep trying to reach the intended recipient's server for a number of days before it finally gives up. The message becomes available for reading once it is received by the recipient's server. The amount of time an email message takes to travel from sender to recipient varies depending on the servers' traffic load. This travel time is the most critical phase of the process in terms of exposure to risk. Potential risks can be lessened through the use of encryption. One way is symmetric encryption, in which the sender and recipient share a secret key: plain text is converted into ciphertext, which appears meaningless to anyone who does not have the key, and the message must be decrypted before it can be understood. Asymmetric encryption uses a key pair consisting of a public key and a private key; the private key must be kept secret by its holder for asymmetric encryption to retain its security. Most email traffic is additionally protected in transit by the Secure Sockets Layer (SSL) or its successor, Transport Layer Security (TLS).
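As a rough illustration of the symmetric model described above, the sketch below uses the Python "cryptography" package (Fernet, which combines AES encryption with an integrity check). It is a generic demonstration of shared-key encryption, not a description of how any particular mail client protects messages.

```python
# Illustrative sketch of symmetric (shared-key) encryption using the
# Python "cryptography" package. Both sender and recipient must hold
# the same secret key; anyone without it sees only ciphertext.
from cryptography.fernet import Fernet

# Assumption: this key is shared between the two parties over a secure channel.
shared_key = Fernet.generate_key()

def encrypt_message(plaintext: str, key: bytes) -> bytes:
    """Sender side: turn plain text into ciphertext."""
    return Fernet(key).encrypt(plaintext.encode("utf-8"))

def decrypt_message(ciphertext: bytes, key: bytes) -> str:
    """Recipient side: recover the plain text with the same key."""
    return Fernet(key).decrypt(ciphertext).decode("utf-8")

token = encrypt_message("Meeting moved to 3 pm.", shared_key)
print(token)                                # unreadable without the key
print(decrypt_message(token, shared_key))   # "Meeting moved to 3 pm."
```

In the asymmetric model, the recipient would instead publish a public key for encryption and keep the matching private key for decryption, which removes the need to agree on a shared secret in advance.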
<urn:uuid:98ae293c-2e75-4efb-8e29-a8ebf8b0fbc0>
CC-MAIN-2022-40
https://www.it-security-blog.com/tag/email-risks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00633.warc.gz
en
0.925045
365
3.34375
3
Artificial Intelligence (AI) is here to stay, and many people are not happy about it. After all, it is hard to embrace something that could displace about 40 per cent of human jobs in the next 15 years. In an interview for CBS's 60 Minutes, Kai-Fu Lee (a Chinese AI expert) also mentioned truck drivers, chauffeurs, waiters, and chefs as some of the professions that will be disrupted. That right there must have hit a nerve. However, everything is about to change, because this article highlights some of the reasons you should not fear AI. And even if you were to ask the experts, they would unwaveringly confirm that, regardless of all the noise, AI is here to benefit us all. Case in point: an executive briefing by the McKinsey Global Institute revealed that AI and automation are creating opportunities for the economy, society, and business. That said, it's time to put to rest the widespread idea of artificial intelligence taking jobs. So, let's highlight some of the useful developments you can expect from this technological phenomenon.

AI will create jobs
While the relationship between artificial intelligence and jobs is a matter of hot debate, it is still safe to say that AI will indeed offer new opportunities. According to a World Economic Forum report, robots and AI will create as many jobs as they displace. This conclusion is entirely plausible, as it is easy to identify some of the many careers in artificial intelligence, for example data scientists, who evaluate the decisions made by AI algorithms to eliminate any biases. Apart from that, some other AI occupations include:
- Transparency analysts: people tasked with classifying the various types of opacity in algorithms.
- Smart-system interaction modelers: experts who develop machine behaviour based on employee behaviour.
- Machine-relations managers: people who champion the greater use of algorithms that perform well.

As far as the competition for jobs between humans and robots goes, it is worth noting that there are jobs that AI can't replace. Roles that require leadership, empathy, and delegation are examples of the many jobs that are safe from automation.

AI will eliminate bias and diversity challenges at work
Automation will stir positive change in the workplace. When AI is used during recruitment or even performance management, all workers will be evaluated in an unbiased, fact-based manner. In turn, Human Resources managers can concentrate on other essential strategic undertakings that ensure balance in the workplace. AI can help HR departments use machine learning (ML) to discover where issues such as bias stem from and help them act on them faster. ML shines in identifying instances of bias, and this in turn will promote fairness and diversity in the work setting.

AI in the workplace will steer business-outcome strategies
The impact of artificial intelligence on business is already being felt, and this is expected to continue into the future. A few years from now, AI-oriented architecture is forecast to take the lead in helping businesses carry out operations in more comprehensive ways, shifting them away from traditional data science and machine learning models. It will be necessary to move to business outcomes because AI will play an essential role in multiple aspects of the business. While there is no way of telling the future of artificial intelligence in business for sure, it makes sense for owners to keep up with the evolution to avoid being left behind.
AI will boost innovation in the workplace of the future
The workforce of the future will lean more towards innovation and creativity. Businesses have spent the better part of the last few years studying AI automation and how they can leverage it to achieve results fast. With statistics showing that workers spend up to 40 per cent of their hours at work performing repetitive tasks, every business should consider automating any functions that can be automated. Automation is not new; machines have been replacing human labour in different areas for decades. However, in the coming years, mundane daily tasks will become fully automated. Already, 39 per cent of organisations were completely reliant on automation in 2018. With repetitive tasks taken care of, employees can focus their energies on high-value, customer-oriented tasks and collaboration. The designs of workplaces and workflows will also change with the implementation of AI technology. More people will begin to work more closely with machines as companies strive to become more agile. Companies which implement AI in their business strategy will experience dramatic improvements in their customer experiences, and their employees will be more motivated. Encouraging creativity rather than the performance of repetitive tasks gives workers more fulfilment in their jobs as well.

Together with other technologies, AI will positively impact the world
With the Internet of Things and artificial intelligence working hand in hand, identifying trends and solving problems in the business world will become more convenient and also more sustainable. AI, together with other technologies, is expected to change the world by impacting the way businesses run. With time, we will be able to combine human and artificial intelligence to find solutions to major global problems. It will also be easier to foresee problems with more accuracy and nip them in the bud with the assistance of AI. But these positive impacts can only be felt if stakeholders are transparent and mindful in their use of the technology for the greater good of everyone else.

Robotics and AI will boost productivity
According to a report by PwC, 54 per cent of businesses confirm that implementation of AI-driven solutions in their companies has already improved productivity. AI and automation, even when they are implemented partially, have unlimited potential for any business. Workers' skills, attitude, training, command chain, and workflow protocols are some of the leading productivity challenges that companies face. Apart from increasing productivity, AI systems will help businesses to cut down on costs, improve the quality of their products or services, and create better customer profiles. As a result, companies will also earn higher profits, which can be shared among stakeholders as dividends or reinvested in the business. Improved productivity also means that firms will be able to sell their products at lower prices, thereby creating more demand among customers and more job opportunities for workers. Businesses can, therefore, use human labour to take up those jobs that have been created by AI and cannot be automated.

Final thoughts on the impact of artificial intelligence on jobs
Artificial intelligence is not showing any signs of slowing down. Soon, it will become another necessity of life, just like the Internet. But for now, more and more businesses are beginning to realise how invaluable AI automation and data interpretation are.
Though machines will take some jobs in the beginning, the jobs created by automation will also keep soaring in the next few years. Will we have reached the point of increasing human intelligence by artificial means? Who knows? What we know, however, is that those who will be sought after in artificial intelligence and employment are the individuals who have the relevant skills. There's work that not even the machines can do; so make yourself as valuable as you can be in your field and you will be irreplaceable. Alice Berg, blogger and career advisor, Skillroads
<urn:uuid:1b5a9887-a4d8-465f-9761-8ccffe110667>
CC-MAIN-2022-40
https://www.itproportal.com/features/ai-and-the-future-of-work-main-changes-to-expect-in-the-next-years/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00633.warc.gz
en
0.957269
1,457
2.625
3
Fire and explosion risks due to Lithium-ion batteries are increasing and are impacting supply chains
- Published: Thursday, 01 September 2022 08:34
Lithium-ion (Li-ion) batteries are increasingly impacting shipping and supply chain safety, as demonstrated by a number of fires on vessels such as roll-on roll-off (ro-ro) car carriers and container ships. Given the many difficulties involved in suppressing such incidents, particularly at sea, focusing on loss prevention measures is crucial, whether batteries are transported within electric vehicles (EVs) or as standalone cargo, according to a new report from marine insurer Allianz Global Corporate & Specialty (AGCS). The report 'Lithium-ion batteries: Fire risks and loss prevention measures in shipping' highlights four main hazards: fire (Li-ion batteries contain electrolyte, an ignitable liquid); explosion (resulting from the release of ignitable vapor/gases in a confined space); thermal runaway (a rapid self-heating fire that can cause an explosion); and the toxic gases that these hazards can produce. The most common causes of these hazards are substandard manufacturing of battery cells/devices; over-charging of the battery cells; over-temperature caused by short circuiting; and damaged battery cells or devices, which, among other causes, can result from poor packing and handling or cargo shift in rough seas if not adequately secured. The primary focus must be on loss prevention, and in the report AGCS experts highlight a number of recommendations for companies to consider, focusing on two areas in particular: storage and transit. Among others, recommendations to mitigate the fire risk that can potentially result from Li-ion batteries during the transportation of EVs on car carriers and within freight containers include: ensuring staff are trained to follow correct packing and handling procedures and that seafarers have had Li-ion battery firefighting training; checking that the battery's state of charge (SOC) is at the optimal level for transportation where possible; ensuring that EVs with low ground clearance are labelled, as this can present loading/discharging challenges; and checking that all EVs are properly secured to prevent any shifting during transportation. In transit, anything that can aid early detection is critical, including watchkeeping/fire rounds and utilizing thermal scanners, gas detectors, heat/smoke detectors, and CCTV cameras. The report also highlights a number of measures that can help ensure the safe storage of Li-ion batteries in warehouses, noting that large-format batteries, such as those used in EVs, ignite more quickly in a warehouse fire than the smaller batteries used in smartphones and laptops. Among others, recommendations include: training staff in appropriate packing and handling procedures; establishing an emergency response plan to tackle damaged/overheating batteries and a hazard control plan to manage receiving, storage, dispatch and supervision of packaged Li-ion batteries; preventing the exposure of batteries to high temperatures and ensuring separation from other combustible materials; as well as the prompt removal of damaged or defective Li-ion batteries. Read the report (PDF).
<urn:uuid:452b8be4-6602-4cd2-a70e-178395e91ccd>
CC-MAIN-2022-40
https://continuitycentral.com/index.php/news/resilience-news/7652-fire-and-explosion-risks-due-to-lithium-ion-batteries-are-increasing-and-are-impacting-supply-chains
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00633.warc.gz
en
0.924645
620
2.6875
3
Starlink has been helping tens of thousands of Ukrainian citizens stay connected to the Internet during what political scientists call “the biggest war in Europe since World War Two.” Starlink is a satellite Internet constellation of over 2,000 satellites in low Earth orbit operated by SpaceX. It provides Internet for people in remote locations where cell towers don’t reach. In Ukraine, the technology is crucial, with over 150,000 daily users relying on the service to stay online during the war. Since the beginning of the invasion, the Starlink app climbed to the top place among all downloads in Ukraine. Thousands of Starlink terminals were reportedly delivered to Ukraine after the country’s Vice Prime Minister Mykhailo Fedorov expressed concerns that Russia’s aggression may disrupt the country’s Internet connection. However, there is more to Starlink than robust Internet services. The Pentagon has claimed that SpaceX is capable of countering Russian cyberattacks on Ukraine’s networks quicker than the US military. Evidently, in March, Musk said they had “resisted all hacking and jamming attempts,” shifting the attention to focusing on counter measures. As such, only a day after news of Russia’s jamming attempts broke, SpaceX managed to update its network with new code to address the issue. Dave Tremper, Director of Electronic Warfare for the Office of the Secretary of Defense, praised Starlink for how quickly it can upgrade against threats. "There's a really interesting case study to look at the agility that Starlink has in their ability to tackle that problem," Tremper commented. Unsurprisingly, not everyone is happy with Starlink’s progress in Ukraine. In a tweet supposedly from Dmitry Rogozin’s message to Russian media, Director General of Russia’s Roscosmos space agency accuses Musk (who runs SpaceX) of aiding “fascist forces in Ukraine with military communication equipment” that was “delivered by Pentagon.” Earlier in March, Musk warned that it might be necessary to camouflage the antennas required to access Starlink as they are very distinctive. ‘Starlink is the only non-Russian communications system still working in some parts of Ukraine, so probability of being targeted is high. Please use with caution,’ Musk tweeted. Despite the visible progress, it’s not always possible to counter Russia’s cyber aggression. Musk has addressed the recent cyberattack on ViaSat's satellite KA-SAT network meant "to disrupt Ukrainian command and control during the invasion…[which] actions had spillover impacts into other European countries." According to Musk, despite Starlink’s efforts, Russian hackers are ramping up their efforts. More from Cybernews: Subscribe to our newsletter
<urn:uuid:807018d0-6fa0-4f02-b102-dbbb615f51ea>
CC-MAIN-2022-40
https://cybernews.com/cyber-war/starlink-fighting-for-ukraine-on-the-cyber-front/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00633.warc.gz
en
0.940442
576
2.59375
3
Social media reliance is so rampant in today’s landscape that it has come to drive some of our best and worst impulses and experiences. Outside of cyberbullying and poor treatment, there are much more serious implications—like when a community turns these platforms into a weapon to smear and systematically destroy the lives of others. It’s a form of weaponization that turns a relatively good-natured platform into something more sinister. But there are real-world implications for the improper and unethical use of social networks. Some recent examples include the backlash and subsequent firing of James Gunn for old jokes, the purported Russian interference during major U.S. elections, and the use of targeted fake news by domestic and foreign actors. How social media can be weaponized Believe it or not, there are several ways in which social media can be used to sabotage, disenfranchise or misinform others. One common form is a wave of angry people — not necessarily even a majority — who mob together and form a community of outrage. The resulting noise can be filled with allegations, both true and false, that do considerable damage to a person’s career or health. Often referred to as “cyberbullying,” this comes with a particularly aggressive form of verbal abuse that can take its toll on a person’s mental and physical health, potentially even leading to suicide. Alarmingly, it also seems to be happening more often these days. To combat this phenomenon — which has cost lives in the U.S., India and elsewhere — WhatsApp has started adding a specific label to forwarded messages. Another pointed use for social media is to further the rapid spread of disinformation. This can be on purpose, with the intent to develop a particularly nasty rumor, or it can be born out of sheer ignorance. People sometimes believe what they’re sharing is true and important when, in fact, it is not. This was done in increasing numbers during the last major U.S. election by various sources. Finally, there’s a more direct form of weaponization where the networks are used on a grand scale to influence action. For example, the way in which Israel and Hamas have leveraged social media and public sentiment in their war against one another. Real world implications: Russian interference and fake news Such dastardly uses of social media can have very real implications, both on society at large and on a more personal level. People can be wrongfully and willfully hung out to dry, have their careers damaged and sometimes even have their lives and their health threatened. In fact, this happens quite often when the mob mentality fires up and a community bands together to attack a particular target. In the case of foreign influence on social media, damaging forms of propaganda can be lobbed into the social sphere with the intention to create dissent. Contrary to popular belief, the goal isn’t to swing politically more to the left or right. It’s merely to cause a significant disturbance, as much as possible, in any way that will manifest. It ranges from the more outlandish conspiracy theories to the more believable calls for James Comey to resign or George Soros to be arrested. One Russian bot even pushed the agenda that cops had increased their attacks on civilians, merely due to Colin Kaepernick’s protests. When shared in vast amounts and repeatedly, such disparaging views and ideas can spread lightning-fast whether they’re true or not. 
What you can do to protect yourself It falls to us, as individuals, to learn to weed out the true from the false. While this is certainly not the easiest thing to do, it is almost always possible with the right research. A good rule to follow is to always look for credible sources when reading information — and to actually check the provided sources. This is an incredibly simple thing to fake, at a glance, when you add a faux hyperlink or author. Don’t take what you read on social media as fact, especially if it’s shared by others or within a community and there’s no visible source. Memes, for instance, can serve as a common and recycled form of disinformation. As far as protecting yourself against mob mentality, the best thing to do is to keep your opinions and more sensitive dealings close to the chest. If something is, or could potentially be, divisive, then try to avoid talking on it altogether, especially publicly (and permanently) online. This includes not just social media but all online forums, instant messaging and chat apps and even text messages, since conversations can be screenshotted easily and saved for later. If you’ve already published such things, which may be the case with tweets and social posts, learn how to delete or remove the resulting data. It’s best to use tools for this, especially if you aim to delete the bulk of your tweets or posts. Sure, you could sort through your profile and delete things one by one, but that would take forever. Instead, you’d be better off using a tool like TweetDelete, which allows you to filter and delete up to 3,000 tweets at a time. If you have a particular attachment to your social data, you can always back up the archives. Just make sure you store it in a secure location. When all is said and done, the best thing we can do to protect ourselves in the age of disinformation and mob mentalities is to take an active approach to sourcing our own information. Don’t merely take what another user or even a friend says at face value. If you’re truly passionate, spend time doing a little research and fact-checking.
<urn:uuid:87aaf453-4578-44cc-8aa5-b95ee419cc52>
CC-MAIN-2022-40
https://bdtechtalks.com/2018/09/13/weaponized-social-media-how-protect-yourself/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00633.warc.gz
en
0.950965
1,165
2.875
3
09 Dec
Transceiver testing and quality requirements
Optical transceiver modules are the main end-to-end components that allow optical communication through optical fiber. Transceiver manufacturing technologies have developed over time to become faster and less susceptible to errors. Like any other high-tech electrical appliance, an optical transceiver undergoes rigorous testing and quality-check procedures during its manufacture. These procedures are applied at each stage of the transceiver manufacturing process to ensure the best outcome, and if any procedure fails, the item is rejected and returned to the production line for repeated adjustment (in the case of a critical failure, the transceiver is removed and dismantled). The optical transceiver manufacturing process involves many steps, but the most important ones (not all are covered here), which greatly affect the quality of the end product, are as follows:
- Calibration – Tx, Rx, eye-diagram and voltage measurements;
- Aging test;
- Switch test;
- Wavelength and spectrum check;
- Lens cleaning.

Calibration – Tx, Rx, Eye-diagram, Voltage measurements:
Transmitter and receiver tuning, eye-diagram and voltage level adjustment are the most crucial steps in the optical transceiver manufacturing process. These steps initially set up the transceiver for its best working parameters, which must comply with quality and MSA standard requirements. The transmitter power, receiver sensitivity, transmitter eye-diagram, voltage and temperature calibration and debugging process is shown in diagram 1-1. In this process the optical transceiver is described as the device under test (DUT); it is connected to a testing board with the appropriate electrical interface for the specific transceiver form factor (SFP, XFP, QSFP, etc.). The DUT transmitter is connected to a de-multiplexing component (DEMAX). With the help of optical prisms, this unit separates the optical wavelength signals sent from the DUT, in this case a QSFP LR4 optical transceiver that uses four CWDM lanes at 1270, 1290, 1310 and 1330 nm (EDGE Optics, PN: 40G-QSFP-10). If the DUT is an SFP/SFP+ transceiver with a single output wavelength, the DEMAX unit and optical path control switch are not used. The optical path control switch is a unit that simply allows any required wavelength to be selected from the input port and forwarded to the output port; it uses an optical switching principle that introduces a low (well-characterized) insertion loss. The signal is then split: one portion is sent to an oscilloscope (for the eye-diagram), and the other portion to a power meter unit, which after measurement forwards it to the DUT's receiving port. The DUT transmit power is measured with the power meter and controlled so that it stays within the required range. Transceiver voltage measurements are taken directly on the test board and the results are shown on the controller PC. A temperature (hot and cold impact) testing unit sits in parallel with the other measurement units and runs DUT temperature-change scenarios. In practice, this unit heats the DUT up to its maximum operating temperature so that Tx, Rx and eye-diagram measurements can be taken there, and then cools the transceiver down to its minimum operating temperature so that the same measurements can be taken again. These tests can indicate temperature-related problems in the DUT; if they are not passed, additional calibration is performed on the affected part. Meanwhile, the electrical signaling side of the testing board is connected to a Bit Error Rate Tester.
This tester generates a random signal pattern that is sent through the DUT and later analyzed with the oscilloscope for eye-diagram purposes. Eye-mask measurement and adjustment is an important phase on the transceiver's path to guaranteed signal quality, in line with MSA standard requirements. Eye-mask definitions specify transmitter output performance in terms of normalized amplitude and time, in such a way as to ensure that far-end receivers can consistently tell the difference between one and zero levels in the presence of timing noise and jitter. Eye-mask measurement results indicate the quality of the digital signal, but do not indicate protocol or logic problems. The quality of digital signals is simple to see with an eye diagram: the bit error rate (BER) degrades with eye closure. The eye opening is indicated with yellow arrows in diagram 1-2. The MSA standard for optical transceivers defines a precise eye-diagram mask (the grey rhombus drawn under the yellow arrows) that must not be crossed by the respective 0 and 1 signals (blue lines) or their transitions. If the test signal lines cross the eye mask, the transceiver fails the test and has to undergo additional calibration. In general, the more open the eye (yellow arrows), the lower the chance that the receiver in the transmission system will mistake a logical 1 bit for a logical 0 bit, or vice versa.

Wavelength and Spectrum Test:
Optical transceivers have to emit a precise wavelength in order to communicate successfully with their counterpart transceivers. For example, the simplest 10 Gbps 10-kilometer SFP transceiver (EDGE SKU: 10G-SFP-10) has a transmission wavelength of 1310 nm with a possible deviation of +/- 50 nm, and the multi-mode 10 Gbps 300-meter version (EDGE SKU: 10G-SFP-300) transmits at 850 nm with a possible deviation of +/- 10 nm. But a transceiver such as the 10 Gbps 40-kilometer DWDM SFP (EDGE SKU: DWDM-10G-SFP-40-21) has a 1560.61 nm wavelength with only +/- 0.8 nm deviation. This DWDM transceiver in particular must have a precise laser output wavelength, because by system design it must not crosstalk with neighbouring channels. During optical transceiver manufacturing, wavelength precision is measured with a spectrometer (drawing 1-3). The transceiver is plugged into an electrical power source; in a factory environment this is usually a special powered PCB or, as in this example, a network switch. The graph represents a 1 Gbps 10-kilometer transceiver (EDGE SKU: 1.25G-SFP-10D). On the spectrograph, the X axis is wavelength and the Y axis is power. The power peak has a sharp shape and is located at 1310.56 nm (at -3.16 dBm), which is very close to the MSA nominal value of 1310.00 nm, so we can say this module has passed the spectral test. If the DUT has its power peak at a different wavelength that is not consistent with the MSA regulation, the transceiver is discarded as defective.

During the transceiver manufacturing process, after each testing step the optical transceiver lenses are checked for dirt and scratches, and cleaned if necessary. This is because each testing procedure involves connecting equipment to the transceiver's optical parts, so they are susceptible to dirt. Before the cleaning procedure, each transceiver lens is thoroughly checked under a microscope. A picture of the microscope test output can be seen in diagram 1-4. If there are no scratches or dirt on the lens core and its cladding, the test is positive; otherwise the cleaning procedure is performed. The cleaning procedure removes dirt, oils and other foreign bodies and substances, and after cleaning another microscope test is performed.
Of course, if the core is damaged, for example scratched, then the transceiver is rejected and dismantled.

The optical transceiver manufacturing process involves many important steps. The most important ones come at the first stage of manufacturing, when the key elements are soldered and powered on for the first time for calibration. The calibration stage is crucial because it determines how the transceiver will perform for the rest of its life; if a transceiver delivers poor performance at the calibration stage, the safest step is to discard the unit. The other tests performed on the transceiver after the calibration stage can indicate potential problems and weak points. The aging test and the switch test are well suited to indicating whether a given transceiver will have problems in the long term. Transceiver lens cleanliness is an important factor because a lens contaminated by dirt or oil, or scratched, can cause issues in the long term, for example increased laser deterioration and burnout.
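To illustrate the kind of pass/fail decision described in the wavelength and spectrum test above, here is a small sketch that compares a measured center wavelength and transmit power against a nominal value and tolerance. The specification table and the measured numbers are illustrative assumptions, not EDGE Optics or MSA data.

```python
# Illustrative pass/fail check for a transceiver's measured center wavelength
# and transmit power against a nominal spec. The spec entries and measured
# values below are assumptions for demonstration only.

SPECS = {
    # part name: (nominal wavelength nm, tolerance nm, min Tx dBm, max Tx dBm)
    "10G-SFP-10":  (1310.0, 50.0, -8.2, 0.5),
    "DWDM-SFP-40": (1560.61, 0.8, -1.0, 3.0),
}

def check_transceiver(part: str, wavelength_nm: float, tx_power_dbm: float) -> bool:
    nominal, tol, p_min, p_max = SPECS[part]
    wavelength_ok = abs(wavelength_nm - nominal) <= tol
    power_ok = p_min <= tx_power_dbm <= p_max
    if not wavelength_ok:
        print(f"{part}: FAIL - peak at {wavelength_nm} nm, "
              f"outside {nominal} +/- {tol} nm")
    if not power_ok:
        print(f"{part}: FAIL - Tx power {tx_power_dbm} dBm outside "
              f"[{p_min}, {p_max}] dBm")
    return wavelength_ok and power_ok

# Example run with made-up measurements similar to the article's spectrograph.
print(check_transceiver("10G-SFP-10", 1310.56, -3.16))   # True: within spec
print(check_transceiver("DWDM-SFP-40", 1562.10, 1.2))    # False: wavelength off
```

In a real test station this comparison would of course be driven by the spectrometer and power meter readings rather than hard-coded values.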
<urn:uuid:10feb95d-65c2-4da9-b094-f3f03bbee220>
CC-MAIN-2022-40
https://edgeoptic.com/transceiver-testing-and-quality-requirements/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00633.warc.gz
en
0.917614
1,706
2.515625
3
Cryptocurrency enthusiasts are familiar with blockchain technology. Bitcoins are attracting investors' attention due to their sudden rise in value, and in the small business environment, entrepreneurs are seeking opportunities to incorporate Bitcoin into their operations. How does blockchain work, and what exactly is it? The following guide will provide you with a simple explanation. Here are some of the points it covers:
- 21st century's greatest innovation: blockchain
- Exactly what is it?
- How it works

21st Century's Most Important Innovation: Blockchain
Blockchain is considered the most important innovation of the 21st century since it can distribute data without it being copied. As the next big thing in the digital world after the internet, it is set to take over the world. Blockchain was invented to resolve the double-spending problem in cryptocurrencies. The purpose of this guide is to help you understand the basics of blockchain technology, its essential features, how it works, and how it can be used to your advantage. To learn more about blockchain and cryptocurrencies, check out our resources on Bitcoin and other cryptocurrencies.

Exactly what is Blockchain?
Blockchain, in its most basic form, is a series of time-stamped and immutable data that is periodically updated across a network of computers rather than by a single, centralized server. The network uses cryptography to secure its data, and this decentralized system cannot be controlled by a central authority. A further advantage is that the data can't be altered, the network is transparent, and the data can't be deleted or duplicated. All network members are able to see the data transactions. Let's look at a simple example to understand it better.

So, How Does It Work?
Have you ever used an internet-based shared spreadsheet? Imagine that thousands of computers are sharing a spreadsheet: changes made by any user on the network are visible to everyone. Let's examine each of the three pillars of blockchain in more detail.

Blockchain: The Three Pillars
Transparency, immutability, and decentralization are the three features that make blockchain technology popular across different industries.

Many people today are not aware of how important transparency is when it comes to blockchain technology, and the relationship between transparency and privacy confuses them, so it is worth clarifying. All transactions in the network can be seen thanks to the transparency feature. Cryptographic identity, however, makes it practically impossible to determine who made a transaction: if you check a person's transaction history, you will see his or her public address instead of a name.

Decentralization is the next intriguing feature. Centralized services were the only ones we knew before Bitcoin and BitTorrent: in such a network, all information flow is controlled by a central entity, and you must interact with that entity in order to get information from its database. Traditional banking systems operate in this centralized manner; the only way to transfer money is through banks, so we don't have much control over it. Decentralized systems, on the other hand, allow you to transact securely with other users without the involvement of a third party such as a bank. The introduction of this feature opened up the financial world to cryptocurrencies.

Blockchain's immutability makes it even more valuable: once something has been added to the blockchain network, it cannot be changed.
Immutability is one of blockchain's best features and is what gives it such versatility in the finance industry. It is underpinned by cryptographic hashing: a hash function takes an input string of any length and gives out a fixed-length output string, so even a tiny change to a block's contents produces a completely different hash. Due to its decentralized nature and shared network, the blockchain is truly a public network, and because it is decentralized it is extremely hard for hackers to corrupt. The technology has therefore become a popular way to keep track of financial transactions.
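As a minimal sketch of the immutability idea, the example below chains blocks together with SHA-256 hashes: each block stores the hash of the previous one, so editing any earlier block breaks every hash that follows. This is a toy illustration under simplifying assumptions, not how any production blockchain is implemented (there is no consensus, networking, or proof-of-work here).

```python
# Toy hash-chained ledger illustrating immutability.
# Each block commits to the previous block's hash, so tampering with any
# earlier block invalidates the rest of the chain.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "time": time.time(),
             "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(verify(chain))          # True

chain[0]["data"] = "Alice pays Bob 500"   # tamper with history
print(verify(chain))          # False - the chain no longer verifies
```

The point is that the data and the hashes are bound together: you cannot quietly rewrite history without every participant's copy of the chain failing to verify.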
<urn:uuid:9f611e73-0ca7-44a5-a373-79a3c9461bde>
CC-MAIN-2022-40
https://www.akibia.com/how-does-blockchain-technology-work-a-guide-for-dummies/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00033.warc.gz
en
0.931825
806
3.015625
3
“Indicator of compromise (IOC) in computer forensics is an artifact observed on a network or in an operating system that with high confidence indicates a computer intrusion. Typical IOCs are virus signatures and IP addresses, MD5 hashes of malware files or URLs or domain names of botnet command and control servers. After IOCs have been identified in a process of incident response and computer forensics, they can be used for early detection of future attack attempts using intrusion detection systems and antivirus software.” – Wikipedia

Hello w0rld! In this post I am planning to give a brief introduction to network forensics and how network monitoring can be used to identify successful attacks. Network monitoring is essential for identifying reconnaissance activities such as port scans, but also for identifying successful attacks such as planted malware (for example ransomware) or spear-phishing. Generally, when doing network forensics the network footprint is of significant importance since it allows us to reconstruct the timeline of events. That said, the network footprint can still be obscured or hidden by cryptographic means such as point-to-point encryption. Even if you can't see the actual traffic because it is encrypted, what you can see is the bandwidth load, which might itself be an IoC.

In incident response, the first hurdle is the time needed to realize that an attack has taken place; if the attack is never noticed, then of course there is no 'incident response' (doh!). There is a list of things the analyst should go over to try to identify whether an attack was successful. The list is not definitive, and there are far more things that need to be checked than those discussed here. In terms of Indicators of Compromise (IoC), each attack vector can be briefly described by the network footprint it will leave:
- IP addresses
- domain names
- DNS resolve requests/responses

There are also indicators that come out of behavioural analysis. For example, malware that contacts a Command & Control (C&C) server will usually 'beacon' in a timely fashion. This beaconing behaviour can be identified by monitoring spikes of specific traffic or the bandwidth utilisation of a host. It can also be spotted by monitoring out-of-hours behaviour, since a host shouldn't be sending data except for certain legitimate types, or shouldn't be sending any data at all.

Ransomware will encrypt all accessible filesystems/mounted drives and will ask (guess what!?) for money! Most likely it will be downloaded somehow or dropped by exploit kits or other malware; sometimes it is delivered through email attachments (if the mail administrator has no clue!). As a stand-alone 'version', ransomware comes in portable executable (PE file) format, although variants of Cryptolocker even employ PowerShell. In order to detect them we need a way to extract the files from the network dump. There are a couple of tools that do this, such as foremost, but it is also possible to do it 'manually' through Wireshark by exporting the objects. This assumes that the file transfer happened over an unencrypted channel and not under SSL.

Malware might serve many different purposes, such as stealing data, utilizing bandwidth for DDoS, or acting as a 'dropper' through which ransomware is pushed. One of the more concerning is turning a compromised host into a zombie computer. Fast flux malware has numerous IPs associated with a single FQDN, whereas domain flux malware has multiple FQDNs per single IP.
The latter is not ideal for malware authors, since that single IP will be easily identified and its traffic dropped (a bit more about 'sinkholing' in the next paragraph!). Assuming that we are after fast flux malware that uses a C&C, there are ways to locate it by looking for beaconing. Quite often this malware makes use of DGAs (Domain Generation Algorithms), which basically hide the C&C IP behind a series of different domain names. Malware that uses a DGA is actively trying to avoid 'sinkholing', the technique that allows ISPs to identify the malicious C&C IP and 'blackhole' its traffic, shunning the infected system's communication with it. An infected host will attempt to resolve (through DNS) a series of domain names produced by the DGA. This behaviour will lead to lots of 'Non-Existent Domain' (NXDOMAIN) responses from the name server back to the infected machine. Monitoring the number of NXDOMAIN responses might help us identify infected systems, and monitoring the DNS queries themselves should also help. In a later post I will publish a small script that I am using for looking for IoC.
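In the meantime, here is a rough sketch of the kind of check described above: counting NXDOMAIN responses per client in a packet capture. It uses the Scapy library; the capture file name and the alert threshold are placeholders, and a real deployment would more likely read DNS server logs or a live capture instead.

```python
# Rough sketch: flag hosts that receive an unusually high number of
# NXDOMAIN answers, a possible sign of DGA-based malware.
# Requires scapy (pip install scapy); "capture.pcap" and the threshold
# of 50 are illustrative placeholders.
from collections import Counter
from scapy.all import rdpcap, DNS, IP

NXDOMAIN = 3          # DNS rcode 3 = Non-Existent Domain
THRESHOLD = 50        # tune to your environment

def nxdomain_counts(pcap_path: str) -> Counter:
    counts = Counter()
    for pkt in rdpcap(pcap_path):
        if pkt.haslayer(DNS) and pkt.haslayer(IP):
            dns = pkt[DNS]
            # qr == 1 means this packet is a response from the name server
            if dns.qr == 1 and dns.rcode == NXDOMAIN:
                counts[pkt[IP].dst] += 1   # dst = the client that asked
    return counts

if __name__ == "__main__":
    for host, count in nxdomain_counts("capture.pcap").most_common():
        if count >= THRESHOLD:
            print(f"{host}: {count} NXDOMAIN responses - possible DGA beaconing")
```

A spike for a single host, especially out of hours, is exactly the sort of behavioural IoC worth investigating further.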
<urn:uuid:1e34a5ea-15ad-403c-9593-951caea9d1cb>
CC-MAIN-2022-40
https://labs.jumpsec.com/short-introduction-network-forensics-indicators-compromise-ioc/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00033.warc.gz
en
0.933203
1,103
2.796875
3
What's a business continuity plan? A business continuity plan (BCP) is a document that outlines how a business will continue operating during an unplanned disruption in service. It’s more comprehensive than a disaster recovery plan and contains contingencies for business processes, assets, human resources and business partners – every aspect of the business that might be affected. Plans typically contain a checklist that includes supplies and equipment, data backups and backup site locations. Plans can also identify plan administrators and include contact information for emergency responders, key personnel and backup site providers. Plans may provide detailed strategies on how business operations can be maintained for both short-term and long-term outages. A key component of a business continuity plan (BCP) is a disaster recovery plan that contains strategies for handling IT disruptions to networks, servers, personal computers and mobile devices. The plan should cover how to reestablish office productivity and enterprise software so that key business needs can be met. Manual workarounds should be outlined in the plan, so operations can continue until computer systems can be restored. There are three primary aspects to a business continuity plan for key applications and processes: - High availability: Provide for the capability and processes so that a business has access to applications regardless of local failures. These failures might be in the business processes, in the physical facilities or in the IT hardware or software. - Continuous operations: Safeguard the ability to keep things running during a disruption, as well as during planned outages such as scheduled backups or planned maintenance. - Disaster recovery: Establish a way to recover a data center at a different site if a disaster destroys the primary site or otherwise renders it inoperable. Evolution of business continuity plans Business continuity planning emerged from disaster recovery planning in the early 1970s. Financial organizations, such as banks and insurance companies, invested in alternative sites. Backup tapes were stored at protected sites away from computers. Recovery efforts were almost always triggered by a fire, flood, storm or other physical devastation. The 1980s saw the growth of commercial recovery sites offering computer services on a shared basis, but the emphasis was still only on IT recovery. The 1990s brought a sharp increase in corporate globalization and the pervasiveness of data access. Businesses thought beyond disaster recovery and more holistically about the entire business continuity process. Companies realized that without a thorough business continuity plan they might lose customers and their competitive advantage. At the same time, business continuity planning was becoming more complex because it had to consider application architectures such as distributed applications, distributed processing, distributed data and hybrid computing environments. Organizations today are increasingly aware of their vulnerability to cyber attacks that can cripple a business or permanently destroy its IT systems. Also, digital transformation and hyper-convergence creates unintended gateways to risks, vulnerabilities, attacks and failures. Business continuity plans are having to include a cyber resilience strategy that can help a business withstand disruptive cyber incidents. 
The plans typically include ways to defend against those risks, protect critical applications and data and recover from breach or failure in a controlled, measurable way. There’s also the issue of exponentially increasing data volumes. Applications such as decision support, data warehousing, data mining and customer resource management can require petabyte-size investments in online storage. Data recovery no longer lends itself to a one-dimensional approach. The complex IT infrastructure of most installations has exceeded the ability of most shops to respond in the way they did just a few years ago. Research studies have shown that without proper planning, businesses that somehow recovered from an immediate disaster event frequently didn’t survive in the medium term. Why is a business continuity plan important? It’s important to have a business continuity plan in place to identify and address resiliency synchronization between business processes, applications and IT infrastructure. According to IDC, on average, an infrastructure failure can cost USD $100,000 an hour and a critical application failure can cost USD $500,000 to USD $1 million per hour. To withstand and thrive during these many threats, businesses have realized that they need to do more than create a reliable infrastructure that supports growth and protects data. Companies are now developing holistic business continuity plans that can keep your business up and running, protect data, safeguard the brand, retain customers – and ultimately help reduce total operating costs over the long term. Having a business continuity plan in place can minimize downtime and achieve sustainable improvements in business continuity, IT disaster recovery, corporate crisis management capabilities and regulatory compliance. Yet developing a comprehensive business continuity plan has become more difficult because systems are increasingly integrated and distributed across hybrid IT environments – creating potential vulnerabilities. Linking more critical systems together to manage higher expectations complicates business continuity planning – along with disaster recovery, resiliency, regulatory compliance and security. When one link in the chain breaks or comes under attack, the impact can ripple throughout the business. An organization can face revenue loss and eroded customer trust if it fails to maintain business resiliency while rapidly adapting and responding to risks and opportunities. Using consulting, software and cloud-based solutions for a business continuity plan Many companies struggle to evolve their resiliency strategies quickly enough to address today’s hybrid IT environments and changing business demands. In an always-on, 24x7 world, global companies can gain a competitive advantage – or lose market share – depending on how reliably IT resources serve core business needs. Some organizations use external business continuity management consulting services to help identify and address resiliency synchronization between business processes, applications and IT infrastructure. Consultants can provide flexible business continuity and disaster recovery consulting to address a company’s needs – including assessments, planning and design, implementation, testing and full business continuity management. There are proactive services, such as Kyndryl IT Infrastructure Recovery Services to help businesses identify risks and ensure they are prepared to detect, react and recover from a disruption. 
With the growth of cyber attacks, companies are moving from a traditional/manual recovery approach to an automated and software-defined resiliency approach. The Kyndryl Cyber Resilience Services approach uses advanced technologies and best practices to help assess risks, prioritize and protect business-critical applications and data. These services can also help business rapidly recover IT during and after a cyber attack. Other companies turn to cloud-based backup services, such as Kyndryl Disaster Recovery as a Service (DRaaS) to provide continuous replication of critical applications, infrastructure, data and systems for rapid recovery after an IT outage. There are also virtual server options, such as Kyndryl Cloud Virtualized Server Recovery to protect critical servers in real-time. This enables rapid recovery of your applications at a Kyndryl Resiliency Center to keep businesses operational during periods of maintenance or unexpected downtime. For a growing number of organizations, the answer is with resiliency orchestration, a cloud-based approach that uses disaster recovery automation and a suite of business continuity management tools designed specifically for hybrid-IT environments. For instance, Kyndryl Resiliency Orchestration helps protect business process dependencies across applications, data and infrastructure components. It increases the availability of business applications so that companies can access necessary high-level or in-depth intelligence regarding Recovery Point Objective (RPO), Recovery Time Objective (RTO) and the overall health of IT continuity from a centralized dashboard. Key features of an effective business continuity plan (BCP) The components of business continuity are: - Strategy: Objects that are related to the strategies used by the business to complete day-to day activities while ensuring continuous operations - Organization: Objects that are related to the structure, skills, communications and responsibilities of its employees - Applications and data: Objects that are related to the software necessary to enable business operations, as well as the method to provide high availability that is used to implement that software - Processes: Objects that are related to the critical business process necessary to run the business, as well as the IT processes used to ensure smooth operations - Technology: Objects that are related to the systems, network and industry-specific technology necessary to enable continuous operations and backups for applications and data - Facilities: Objects that are related to providing a disaster recovery site if the primary site is destroyed The business continuity plan becomes a source reference at the time of a business continuity event or crisis and the blueprint for strategy and tactics to deal with the event or crisis. The following figure illustrates a business continuity planning process used by Kyndryl Global Technology Services. It’s a closed loop that supports continuing iteration and improvement as the objective. There are three major sections to the planning process: - Business prioritization: Identify various risks, threats and vulnerabilities, and establish priorities. - Integration into IT: Take the input from business prioritization and perform an overall business continuity program design. - Manage: Administer what has been assessed and designed.
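The RPO and RTO figures mentioned above lend themselves to a simple automated check. The sketch below compares measured recovery metrics against targets; the application names and numbers are illustrative assumptions, not drawn from any Kyndryl tooling.

```python
# Illustrative RTO/RPO compliance check for a continuity dashboard.
# All targets and measured values are made-up examples.

targets = {
    # app: (RTO target minutes, RPO target minutes)
    "payments":  (30, 5),
    "reporting": (240, 60),
}

measured = {
    # app: (last tested recovery time, data loss window), in minutes
    "payments":  (42, 4),
    "reporting": (180, 30),
}

def continuity_report(targets: dict, measured: dict) -> None:
    for app, (rto, rpo) in targets.items():
        rec_time, data_loss = measured[app]
        status = "OK" if (rec_time <= rto and data_loss <= rpo) else "AT RISK"
        print(f"{app:10s} RTO {rec_time}/{rto} min  "
              f"RPO {data_loss}/{rpo} min  -> {status}")

continuity_report(targets, measured)
# "payments" would show AT RISK here because recovery took longer than its RTO.
```

The value of such a check is that recovery objectives stop being static statements in a plan document and become metrics that are re-tested and reported after every drill.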
<urn:uuid:163ecda6-4967-436b-a1d4-75b74264b533>
CC-MAIN-2022-40
https://www.kyndryl.com/au/en/learn/plan
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00033.warc.gz
en
0.937447
1,820
2.953125
3
Today in Military History – May 7th – Sinking of LusitaniaRich RMS Lusitania was an ocean liner owned by the Cunard Line and built by John Brown and Company of Clydebank, Scotland. She was torpedoed by the SM U-20, a German U-boat on 7 May 1915 and sank in eighteen minutes, eight miles (15 km) off the Old Head of Kinsale, Ireland, killing 1,198 of the 1,959 people aboard. The sinking turned public opinion in many countries against Germany, and was instrumental in bringing the United States into World War I. The sinking of the Lusitania caused great controversy, which persists to this day. Lusitania was approximately 30 miles (48 km) from Cape Clear Island when she encountered fog and reduced speed to 18 knots. She was making for the port of Queenstown (now Cobh), Ireland, 70 kilometres (43 mi) from the Old Head of Kinsale when the liner crossed in front of U-20 at 14:10. One story states that when Lieutenant Schwieger of the U-20 gave the order to fire, his quartermaster, Charles Voegele, would not take part in an attack on women and children, and refused to pass on the order to the torpedo room — a decision for which he was court-martialed and served three years in prison at Kiel. From roughly 750 yards, a single torpedo was fired at 14:10 hours. Leslie Morton, an eighteen year old lookout on the bow, spotted thin lines of foam racing toward the ship. “Torpedoes coming on the starboard side!” he shouted through a megaphone, thinking the bubbles came from two projectiles. The torpedo struck Lusitania under the bridge, sending a plume of debris, steel plating and water upward and knocking Lifeboat number five off of its davits. It sounded like a “million ton hammer hitting a steam boiler a hundred feet high,” one passenger said. A second, more powerful explosion followed, sending a geyser of water, coal, and debris high above the deck. Schwieger’s log entries attest that he only launched one torpedo, but some doubt the validity of this claim, contending that the German government subsequently doctored Schwieger’s log, but accounts from other U-20 crew members corroborate it. In Schwieger’s own words, recorded in the log of U-20: Torpedo hits starboard side right behind the bridge. An unusually heavy explosion takes place with a very strong explosive cloud. The explosion of the torpedo must have been followed by a second one [boiler or coal or powder?]… The ship stops immediately and heels over to starboard very quickly, immersing simultaneously at the bow… the name Lusitania becomes visible in golden letters. Lusitania’s wireless operator sent out an immediate SOS and Captain Turner gave the order to abandon ship. Water had flooded the ship’s starboard longitudinal compartments, causing a 15-degree list to starboard. Captain Turner tried turning the ship toward the Irish coast in the hope of beaching her, but the helm would not respond as the torpedo had knocked out the steam lines to the steering motor. Meanwhile, the ship’s propellers continued to drive the ship at 18 knots (33 km/h), forcing more water into her hull. The U-20’s torpedo officer, Raimund Weisbach, also viewed the destruction through the vessel’s periscope and felt the explosion was unusually severe. Within six minutes, Lusitania’s forecastle began to submerge. Lusitania’s severe starboard list complicated the launch of her lifeboats. Ten minutes after the torpedoing, when she had slowed enough to start putting boats in the water the lifeboats on the starboard side swung out too far to step aboard safely. 
While it was still possible to board the lifeboats on the port side, lowering them presented a different problem. As was typical for the period, the hull plates of the Lusitania were riveted, and as the lifeboats were lowered they dragged on the rivets, which threatened to seriously damage the boats before they landed in the water. Many lifeboats overturned while loading or lowering, spilling passengers into the sea; others were overturned by the ship’s motion when they hit the water. It has been claimed that some boats, due to the negligence of some officers, crashed down onto the deck, crushing other passengers, and sliding down towards the bridge. This has been refuted in various articles and by passenger and crew testimony. Crewmen would lose their grip on the falls—ropes used to lower the lifeboats—while trying to lower the boats into the ocean, and this caused the passengers from the boat to “spill into the sea like rag dolls.” Others would tip on launch as some panicking people jumped into the boat. Lusitania had 48 lifeboats, more than enough for all the crew and passengers, but only six were successfully lowered, all from the starboard side. A few of her collapsible lifeboats washed off her decks as she sank and provided refuge for many of those in the water. Despite Turner’s efforts to beach the liner and reduce her speed, Lusitania no longer answered the helm. There was panic and disorder on the decks. Schwieger had been observing this through U-20’s periscope, and by 14:25, he dropped the periscope and headed out to sea. Later in the war, Schwieger was killed in action when, as commander of U-88, he was chased by HMS Stonecrop, hit a British mine, and sank on 5 September 1917, north of Terschelling. There were no survivors from U-88’s sinking.
<urn:uuid:01aac7a9-e221-42e0-a573-d24f9df3336b>
CC-MAIN-2022-40
https://blog.cedsolutions.com/673/today-in-military-history-may-7th-sinking-of-lusitania/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00033.warc.gz
en
0.972693
1,256
3.28125
3
In this, the next article in our ‘Storage Basics’ series, we take a look at a technology called Network Attached Storage (NAS).

The most widely used method of storage on today’s networks is direct attached storage (DAS). In a DAS configuration, storage devices such as hard disks, CD-ROM drives and tape devices are attached directly to the system through which they are accessed. These systems are normally a network server running an operating system such as Microsoft Windows, Novell NetWare, Linux or Unix. The term ‘direct’ is used to describe the connection between device and server because it is exactly that. The connection is normally achieved using a storage device interface such as Integrated Drive Electronics (IDE), or more commonly in a server environment via the Small Computer Systems Interface (SCSI). DAS devices can be physically inside the server to which they are attached or they can be in external housings that are connected to the server by means of a cable. Of the two device interface standards discussed, only SCSI supports external devices in this way.

DAS is a very mature technology that can be implemented at relatively low cost. It does, however, have its drawbacks. The first is that the storage devices must be accessed via the server, thereby requiring and using valuable system resources. There is also the issue that, by placing the information on a server system, a licensed connection is needed to access the data. Last, and perhaps least, it restricts the amount of disk space that can be used. The solution to all of these problems is to take the storage devices away from the server and connect them directly to the network media. This is where NAS comes in.

Before we continue our discussion, however, we should point out that you must not confuse NAS with a storage area network (SAN). While both allow storage devices to be moved away from the server, SANs are mini networks dedicated to storage devices, while a NAS device is simply a storage subsystem that is connected to the network media.

NAS devices are very streamlined and dedicated to a single purpose: making data available to all clients in a heterogeneous network. Because NAS devices are dedicated to a single purpose, their hardware components, software and firmware are tightly integrated, leading to greater reliability than a traditional file server. With NAS devices, application incompatibilities that can cause a system to crash are a thing of the past, and with fewer hardware devices there is less to go wrong on that front as well. NAS devices operate independently of network servers and communicate directly with the client, which means that in the event of a network server failure, clients will still be able to access files stored on a NAS device. The NAS device maintains its own file system and accommodates industry standard network protocols such as TCP/IP and IPX/SPX to allow clients to communicate with it over the network. To facilitate the actual file access, NAS devices will accommodate one, a couple, or all of the common file access protocols such as SMB, CIFS, NCP, HTTP and NFS.

One of the major considerations for file serving is of course file system security. NAS devices tackle this issue by either providing file system security capabilities of their own, or by allowing user databases on a network operating system (NOS) to be used for authentication purposes. By providing multiple authentication measures, the flexibility of NAS solutions is further enhanced. Apart from those already discussed, NAS also has other benefits.
NAS allows devices to be placed close to the users who use them. Not only can this have the effect of reducing overall network traffic, it also allows users to physically access the NAS device if appropriate. Perhaps the best example of such a situation might be a CD Jukebox NAS system. Users could swap CD’s in and out of the jukebox to make them available on the network. Although there may still be security issues that need addressing, such a situation is far safer than giving a user access to the server room to change the contents of a CD jukebox. Perhaps one of the biggest advantages of NAS is that it offers platform independent access, an increasingly common consideration in today’s heterogeneous networking environments. Because of the fact that many environments now use more than one operating system platform, NAS provides a mechanism that allows users to access data irrespective of what NOS is used to authorize them on the network. One common misconception about NAS is that it is considerably faster than DAS, which is not the case. In terms of data retrieval from a storage device, the bottleneck is rarely the speed of storage devices or the server to which they are attached. Far more likely is that the speed of the network is the restricting factor. Consider that storage device throughput is generally measured in Megabytes per second whereas network media is measured in Megabits per second and you’ll see what we mean. So how easy is it to start using NAS? Well, easier than you might think. NAS devices from a number of companies now offer a great way for you to expand your storage infrastructure. Starting from just a few hundred dollars and going up from there, NAS devices are available as free-standing or rack-mounted units that can accommodate storage devices of all types and all capacities. Many NAS devices also incorporate technologies such as RAID and provide UPS capabilities as well. When you need a storage solution that is ‘outside of the box’ but you don’t want to get involved with the technology and expense of a SAN, NAS is the way to go.
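The megabytes-versus-megabits point above is easy to check with a quick back-of-the-envelope calculation. Below is a minimal sketch in Python; the throughput figures are illustrative assumptions, not measurements of any particular disk or network.

```python
# Rough throughput comparison: storage (MB/s) vs. network links (Mb/s).
# All numbers are assumed example figures, not benchmarks of real hardware.

DISK_MB_PER_S = 150  # hypothetical sustained throughput of a storage subsystem

network_links_mbps = {
    "Fast Ethernet (100 Mb/s)": 100,
    "Gigabit Ethernet (1,000 Mb/s)": 1000,
}

for name, mbps in network_links_mbps.items():
    mb_per_s = mbps / 8  # 8 bits per byte: megabits/s -> megabytes/s
    bottleneck = "network" if mb_per_s < DISK_MB_PER_S else "storage"
    print(f"{name}: at most ~{mb_per_s:.0f} MB/s on the wire -> likely bottleneck: {bottleneck}")
```

On Fast Ethernet the wire tops out around 12 MB/s, well below what even a modest disk subsystem can deliver, which is why moving storage onto the network rarely makes file access faster.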
<urn:uuid:eb5aa1b6-fea4-4ccc-ab27-f196b7c4061e>
CC-MAIN-2022-40
https://www.enterprisestorageforum.com/hardware/storage-basics-network-attached-storage/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00033.warc.gz
en
0.957209
1,124
3.375
3
How Close Is Ordinary (Classically Entangled) Light To Doing Quantum Computing? (IEEE Spectrum)

Using a simple laser of the sort found in undergraduate optics labs, physicists may be able to perform some of the same calculations as a hard-to-handle quantum computer operating at ultracold temperatures. The trick is to use classically entangled light, a phenomenon that has some of the same properties as the traditional entanglement spun out of quantum mechanics. Researchers Yijie Shen, from Tsinghua University, Beijing, China, and the University of Southampton, UK, and Andrew Forbes, from the University of the Witwatersrand, Johannesburg, South Africa, showed they could create a light beam with multiple entanglements.

Classically entangled light, sometimes called “structured light,” is not a new concept. But until now no one had entangled more than two properties at once. Forbes says his group’s method can entangle a potentially infinite number of pathways, though the limitations of their equipment might impose a practical cap. In their paper, his group demonstrated eight degrees of freedom within one beam. They do it simply by changing the spacing between mirrors in the laser cavity. “There’s an infinite number of paths that you can take—up, down, left, right, diagonal,” Forbes says. “Not only could we make light that took many different paths at once, but we could encode information into those paths to make it look like we were holding a high-dimensional multi-photon quantum state.” Because quantum computing relies on particles existing in multiple states, some of the algorithms developed for it could be run using classically entangled light instead, Forbes says.
<urn:uuid:bc0df6ec-efbd-4816-a5a3-4367238a2fff>
CC-MAIN-2022-40
https://www.insidequantumtechnology.com/news-archive/how-close-is-ordinary-classically-entangled-light-to-doing-quantum-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00033.warc.gz
en
0.944616
356
3.8125
4
The Threat the Internet of Things Poses to your Brand

In 1999, Kevin Ashton, a British entrepreneur, coined the term the Internet of Things (IoT) to describe things in our everyday lives which are embedded with electronics of some type that allow them to exchange information over the Internet. A 2014 paper from Brand Perfect stated: “When you buy things that are embedded in the internet of things, it changes your relationship with the company you buy it from.” This directly ties to branding and the relationship between the company and the consumer.

A Matter of Trust

It really boils down to trust. When a consumer buys a smart product, they are giving up control of the device to some algorithm which will drive the device to behave on the customer's behalf. It's the same when you're a passenger in one of those driverless cars: you gain a deep understanding of the complexity. The sensors that exist in the vehicle's infrastructure, as well as sensors installed in other vehicles, would plan the best and safest route, updating the route as needed and notifying the passenger of the estimated time of arrival.

No Room for Mistakes

For example, Target thought they had identified the signs related to someone having a baby. They just made the mistake of sending a 17-year-old congratulations, which was intercepted by her parents. They were not too happy. In the age of big data, companies need to be extremely careful what they do with it and how they use it to engage consumers. Companies that do a good job with this data will be winners; those that slip up (and the word will spread) will have a difficult time with customers. Data needs to build trust, and brands need to know what they want to stand for.

A brand that ventures into the world of IoT needs to be extremely cautious, and there are certain concerns for those wanting to venture into this space. When consumers buy a product that is embedded within the IoT, it changes their relationship with the company they bought it from. You might have trusted them to this point with buying a toaster or a clock radio, but with IoT things get a tad more personal. They now have access to user data about you. They'll uncover everything from how you go about your day, your schedule, your heart rate, to your location at certain times. It really can't be any more invasive. Consumers will feel as if it is more of an intrusion upon their lives than anything, and when that happens they are likely to trust a brand less. Brands that are doing it the wrong way may notice the churn effect, as more fans leave than come on board.

Take insurance providers, for example. Insurers are offering rebates, discounts and low premiums if they can monitor your driving and your habits: everything from your tire pressure to your mileage, from your average speed to how many right turns you make. But pretty soon, cars will enter the IoT world in such a big way that these monitoring devices won't even be needed. They'll just know. How do consumers deal with something like that? Who will have control of the data? As smart as the IoT can be, it is up to us humans to be smarter and protect our data.
<urn:uuid:4027f71c-ef93-4320-8331-8d907eb4038f>
CC-MAIN-2022-40
https://blog.brandshield.com/the-threat-the-internet-of-things-poses-to-your-brand
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00033.warc.gz
en
0.974039
660
2.578125
3
The internet can help people find useful information as well as contact each other through online text messages or even calls. You even have the option to store your data online by backing it up or uploading it to drives. You can then share this data with your colleagues, friends, and even family. This is a really simple process that allows you to keep your memories safe. However, sometimes your devices might get infected by viruses. To prevent this from happening, people install antivirus programs on their devices. These ensure that the devices are kept safe and secure. One of the best antivirus programs around is Bitdefender. While it might be a great program, some Bitdefender users have reported that they are getting a slow internet connection. If this has happened to you as well, then here are some steps that should help you get rid of this problem.

Bitdefender Slow Internet Connection

- Disable Firewall

Most antivirus programs, including Bitdefender, come with a firewall system installed into them. This protects their users from any malicious activity while browsing the internet, although in some cases it might end up slowing down the speed of your connection. There can be many reasons for this; for instance, the firewall might be identifying the website you are using as insecure. Whatever the reason might be, if you are getting this problem then one way to fix it is by disabling your firewall. You can easily access this by opening up your antivirus program. After this, locate the option labeled ‘protection’. This should be in the left tab in most versions of the program. After opening it up, you will see that all the features provided by this program are listed in front of you. Find the firewall panel and open it up. You can now disable this feature to confirm whether the firewall was causing the problem.

- Change Firewall Settings

If you are uncomfortable with keeping your firewall disabled, then you can change its settings to fix the problem. This will allow you to use the internet at full speed without your program interfering with the connection. To do so, start by going back to the firewall settings. You can follow the steps mentioned above to open them. Now, instead of switching off the firewall, locate and open up the settings. You should notice that a pop-up will appear on your screen. Proceed to open up the tab labeled ‘settings’ again. Now go to ‘Port scan protection’ and switch off this feature. You can now change the stealth settings of your program. Open these up and click on edit. Locate the network adapter you are using to connect to the internet and switch it off here. Finally, move on to open up the network adapters tab. Change the settings for your adapter to home and office use from here and then save all of your settings. Now give your system a reboot so that these changes are successfully applied to the program. You should notice that the speed of your connection has now improved.

- Delete VPN Program

Sometimes people use VPN programs to have access to websites that have been region locked. Alternatively, these programs might also be used to keep the user secure. However, one thing to note is that most VPN software and programs on your system will end up slowing down your internet speed. With that in mind, if you are using one then that might be the reason for your slow connection. To fix this, you can disable the program that you are using or even uninstall it from your device.
If you don’t want to disable or uninstall it, you can try to set up an exception for your VPN in the firewall. While this may not give you the full possible speed of your connection, it might still improve the speed a little. To set up the exception, go to your firewall settings, locate the tab for ‘application exceptions’ and open it up. You can now either find your VPN in this list or browse for it manually from the system. Save these settings after adding your VPN to the exceptions list.
<urn:uuid:e6c543a2-0feb-44d1-8965-63dbf8105847>
CC-MAIN-2022-40
https://internet-access-guide.com/slow-internet-connection-bitdefender/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00033.warc.gz
en
0.942471
832
2.53125
3
As cyberattackers continue to take advantage of vulnerable people, processes, and technology, they are also expanding their operations beyond the usual targets. Nothing appears to be outside of their jurisdiction, and no one is 100% safe from their malicious campaigns. Although organizations are making progress in protecting themselves, as soon as one attack vector is thwarted, another quickly becomes exposed. Today's adversaries are focusing on APIs in particular, which are quickly becoming the new attack frontier. Recent reports suggest that by 2022, API abuses will be the vector most responsible for data breaches within enterprise web applications. This is primarily due to the extensive growth of API implementations worldwide, providing a new target that hasn't been widely exploited yet. With this, protecting APIs is becoming more important. Although the concepts of API security are somewhat new, the attacks that can be performed through them are not. Most organizations have been experiencing similar threats targeting their networks and Internet-facing applications for years. Now, they must focus their efforts on mobile apps, APIs, and back-end servers being targeted by similar methods as seen in the past. Before discussing the risks associated with today's APIs, we must first understand exactly what makes them unique and vulnerable. API-Based Apps vs. Traditional Apps API-based apps are significantly different from traditional applications. In the past, users/visitors would access a web server via a browser, for example, and most of the "data processing" was performed on the server itself. As client devices became more varied and increasingly powerful — with faster CPUs, extensive memory, more bandwidth, etc. — much of the logic moved away from being performed on back-end servers to the front end (that is, on the client device itself) as highlighted in the graphic below. In the modern application at the bottom of this graphic, the downstream server acts more like a proxy for the data consumed by the API-based app. The rendering component in this instance is the client that consumes the raw data, not the server itself. Many will remember the early days of using smartphones and traditional websites when trying to reserve a flight, for example. People would open a browser on their phone and attempt to use an airline website that was designed for a large computer monitor, not a small smartphone screen. This didn't work too well, and companies began to update their websites by making them more smartphone-friendly. Although this improved the customer experience, navigating a website and completing an airline reservation was still quite cumbersome. As a result, airlines, hotels, car rental companies, etc., began to develop their own mobile apps. Instead of trying to reserve a flight using a mobile-friendly version of the airline's website via a browser on their phone, people now download and install the airline's mobile app and use it exclusively when reserving flights directly from their smartphones. So, how is this different? When making a flight reservation using an airline mobile app, the app uses API calls that are interacting with back-end servers primarily to retrieve data about flight schedules, availability, pricing, seats, etc. The app is also interacting with the user, allowing the customer to specify travel dates, departure and arrival cities, seat selection, and purchase options. 
In this case, the smartphone is performing almost all of the processing load of the flight reservation within the mobile app itself, without the use of a browser. Although this has tremendously improved the flight reservation experience overall when using a smartphone, it raises the question: Are APIs just as vulnerable to cyberattacks as browser-based applications?

The Risks Associated with APIs

Unfortunately, APIs are also exposed to attacks and, at a very high level, API security issues exist that are similar to those of their browser-based counterparts. However, since APIs expose the underlying implementation of a mobile app, the user's state is usually maintained and monitored by the client app, and more parameters are sent in each HTTP request (object IDs, filters, etc.), some of the security issues surrounding APIs are unique. For the most part, these issues lead to vulnerabilities that can be categorized into three areas of concern:

- Exposing sensitive data
- Intercepted communications
- Launching denial-of-service (DoS) attacks against back-end servers

A Good Project with a Noble Cause

As a result of a broadening threat landscape and the ever-increasing usage of APIs, I, along with Inon Shkedy, head of security research at Traceable.ai, have been spearheading the OWASP API Security Top 10 Project. The project is designed to help organizations, developers, and application security teams become more aware of the risks of APIs. Here's what makes the project important: According to the project's site, "a foundational element of innovation in today’s app-driven world is the API. From banks, retail, and transportation to IoT, autonomous vehicles, and smart cities, APIs are a critical part of modern mobile, SaaS, and web applications, and APIs can be found in customer facing, partner facing, and internal applications. By nature, APIs expose application logic and sensitive data such as Personally Identifiable Information (PII) and because of this, have increasingly become a target for attackers. Without secure APIs, rapid innovation would be impossible."

The OWASP API Security Project aims to develop, release, and track an ongoing top 10 list of the risks that organizations face concerning their use of APIs, similar to the OWASP Top 10 Most Critical Web Application Security Risks. From broken object-level authorization to insufficient logging and monitoring, this list rounds up the most critical API risks facing businesses while also providing example attack scenarios and recommendations for mitigating these threats. IT teams, security professionals, and developers alike would be well-advised to carefully read through this list to better understand the benefits of APIs, as well as the potential risks presented through their implementation as adversaries set their sights on this emerging target.

- 7 Steps to Web App Security
- APIs Get Their Own Top 10 Security List
- What Are the First Signs of a Cloud Data Leak?
- Five Common Cloud Configuration Mistakes

Check out The Edge, Dark Reading's new section for features, threat data, and in-depth perspectives. Today's top story: "The Beginner's Guide to Denial-of-Service Attacks: A Breakdown of Shutdowns."
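To make the top-ranked risk on that list, broken object-level authorization, more concrete, here is a minimal sketch in Python using Flask. The route names, data model, and helper function are hypothetical and are not taken from the article or from any real API; the point is only the per-object ownership check.

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Hypothetical in-memory data store: invoice_id -> owning user and amount.
INVOICES = {101: {"owner": "alice", "amount": 250}, 102: {"owner": "bob", "amount": 90}}

def current_user():
    # Placeholder: a real API would derive this from a validated token or session.
    return "alice"

# Vulnerable pattern: the handler trusts whatever object ID the client supplies.
@app.route("/api/v1/invoices/<int:invoice_id>")
def get_invoice_insecure(invoice_id):
    invoice = INVOICES.get(invoice_id) or abort(404)
    return jsonify(invoice)  # any authenticated caller can read any invoice

# Safer pattern: verify that the requested object actually belongs to the caller.
@app.route("/api/v2/invoices/<int:invoice_id>")
def get_invoice_checked(invoice_id):
    invoice = INVOICES.get(invoice_id) or abort(404)
    if invoice["owner"] != current_user():
        abort(403)  # object-level authorization check
    return jsonify(invoice)
```

Under this sketch, a request for invoice 102 succeeds on the v1 route but is rejected with 403 on the v2 route, which is the behavior the OWASP entry is asking for.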
<urn:uuid:1571dc32-3ad5-439d-b8d7-d997a72ee64b>
CC-MAIN-2022-40
https://www.darkreading.com/application-security/why-you-need-to-think-about-api-security
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00033.warc.gz
en
0.952125
1,295
2.703125
3
The WISE Choice in Engineering: One Female Engineer's Journey Breaking Stereotypes I started my career in engineering, unaware that less than 10% of UK engineers are women. This is a fact that really shocked me when I embarked upon my engineering career, and has set a pathway and drive inside me to become a role model for the next generation of female engineers in order to improve gender parity in the industry. Gender parity in the STEM (Science, Technology, Engineering and Mathematics) industries has been deliberated for more than 100 years. The Women’s Engineering Society (WES) was founded in 1919 by a group of women whose mission was to resist the pressure for men to reclaim their engineering jobs following World War I, and to promote engineering as a rewarding job for women. As STEM industries evolve, the world we live in develops with the introduction of new medicines, inventions, technologies, and mechanisms, and there is more work and effort focused on gender parity within industry. There is a need within all industries for different genders and mindsets to interact together—creating better ideas and solving problems more efficiently using different points of view. According to the website Female First, “Having women present at every stage of the design process is key to ensuring that both male and female perspectives are taken into account, allowing for diversity of thought and better products being produced.” Choosing a career path starts early on in a person’s life, beginning with GCSE choices, onto A-Level choices, and then choice of degree. Currently you are expected to choose your GCSE options as early as age 15, which starts the ‘what job will I do in the future’ cog turning. This was the typical pathway that was portrayed to me as a child, but today there are many alternative career paths available. Apprenticeships, full-time jobs, part-time degrees, HNC’s, and foundation-degree courses are all on offer to suit all types of learning. As the world advances, these decisions can be influenced earlier on in life, even as young as nine years old—during primary education. Teachers and STEM Ambassadors can show how fun and exciting a job within the industry can be, and the range of opportunities available within the STEM industry. As found on the website Careers Advice for Parents, “Research in 2012 by the UK Commission for Employment and Skills (UKCES) found that the share of full-time learners at 16-17 years old who combine work with their learning has been declining steadily from 40% in the late 1990’s to around 20% in 2011. This indicates that young people in the UK are leaving education increasingly less experienced in the working world.” And, according to Renishaw, a leading engineering and scientific technology company, “Employment surveys have found that a large number of employers rate previous job experience as critical for new recruits. It’s estimated that we need to be at least doubling the number of engineering apprenticeships and graduates each year just to keep up with demand.” Ten years ago, I was a 15-year-old girl deciding upon the career I was going to venture into in the future. I was advised to choose my options carefully and make sure I really enjoyed those subjects. However, being at an age where I had a lack of life experience, it was difficult to decide without the experience in different roles and industries. For me it was trial and error discovering what career I was interested in. 
Luckily, I had heaps of support from my parents who were able to find me a range of different work-experience placements (web design, chemical engineering, medicine, and pathology) which were fundamental in deciding what career I wanted to pursue. I believe that influences in our lives guide us, give us purpose, and impact on our career choice. I travelled around the world from a very young age and have also valued such influences as my grandad being an engineer and my father having spent 25 years of his working life in the aviation industry. This created my passion for aviation, adventures, and fixing things. I wanted a job where I could apply my passion for problem solving, mathematics, and my creative flair and design. Really there was only one obvious solution looking back—engineering! I started my engineering career with a degree in Aeronautical Engineering, including an industrial placement year at Technocover, a mechanical engineering company who are specialists in the security industry. Thereafter, I successfully completed a Knowledge Transfer Partnership (KTP) with Technocover & the University of Salford, revolutionising the design process of security products. Whilst completing the KTP I also gained a Masters by Research (MSc) in Mechanical & Aeronautical Engineering, a CMI Level 5 Award in Management, Lean Six Sigma Yellow & Green Belt, and became an Incorporated Engineer affiliated with the Institute of Mechanical Engineers. There is a need to break the mould of a stereotypical engineer, to show how fun and rewarding a role in engineering can be. Engineering was, and still is, seen to be a male-dominated environment, with most people thinking of jobs such as plumbers, electricians, mechanics, etc. More work needs to be done to promote the variety in engineering and the number of engineering jobs available—with engineering skills readily transferable from one industry to another. To achieve my goal of becoming a role model and inspiring the next generation of female engineers, I started my own engineering consultancy company, Techwuman Ltd, which specialises in design engineering. The company’s mission is to demonstrate that engineering is not just for men, and that if your dream is to become an engineer, follow it and reach for the stars! We will be helping to inspire and guide the next generation into a career in a STEM role, starting from primary school all the way up to university. One of our company's aims is to promote more women into STEM industries. Whilst the theory behind STEM subjects is important for pupils to understand, at primary level the easiest way to communicate concepts with students is often through practical applications—doing, making, touching, and playing. This can be achieved by: - holding focused assemblies - practical activities on Engineering/STEM topics - introducing role models within the STEM industries If you would like to follow my progress, you can keep up to date by following the social media channels below and also connecting with me on Linkedin.
<urn:uuid:4f03e55f-9a1a-46f6-95c2-15ae1090d9c7>
CC-MAIN-2022-40
https://www.ivanti.com/en-gb/blog/the-wise-choice-in-engineering
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00033.warc.gz
en
0.973198
1,325
2.78125
3
The Health Insurance Portability and Accountability Act (HIPAA) is an important piece of legislation, first introduced in 1996. But why is HIPAA so important? How has HIPAA helped to improve the healthcare industry and the care given to patients?

HIPAA was designed to address one issue in particular: insurance coverage for individuals who are “between jobs”. Without HIPAA, employees risked being left uninsured as they moved between jobs, and the legislation was created to protect employees in these situations. HIPAA was also designed to prevent healthcare fraud, to ensure that all ‘protected health information’ was appropriately secured, and to restrict access to health data to authorized individuals only.

HIPAA and Healthcare Organizations

HIPAA introduced several important benefits for the healthcare industry to help with the transition from paper records to electronic copies of health information. The legislation has helped to streamline administrative healthcare functions and improve efficiency in the healthcare industry. Furthermore, it has helped ensure protected health information is shared securely. The setting of standards for recording health data and electronic transactions ensures that patients’ private information will always be treated in the same careful manner, no matter which healthcare organization they attend. Since all HIPAA-covered entities must use the same code sets and nationally recognized identifiers, this helps enormously with the transfer of electronic health information between healthcare providers, health plans, and other entities.

HIPAA and Patients

HIPAA is important for patients as it ensures healthcare providers, health plans, healthcare clearinghouses, and business associates of HIPAA-covered entities must implement multiple safeguards to protect sensitive personal and health information. Although healthcare organizations are likely to take measures of their own accord to prevent sensitive data from being exposed or health information from being stolen, without HIPAA there would be no requirement for healthcare organizations to safeguard data – and, importantly, no repercussions if they failed to do so.

HIPAA established rules that require healthcare organizations to control who has access to health data. This restricted who can view health information, and who that information can be shared with. HIPAA helps to ensure that any information disclosed to healthcare providers and health plans, or information that is created by them, transmitted, or stored by them, is subject to strict security controls. Patients are also given control over who their information is released to and who it is shared with, unlike in times pre-HIPAA.

HIPAA is important for patients who want to take a more active role in their healthcare and want to obtain copies of their health information. Even with great care, healthcare organizations can make mistakes when recording health information. If patients can obtain copies, they can check for errors in their records and ensure these mistakes are corrected, such that they get the best possible treatment available to them. Obtaining copies of health information also helps patients when they seek treatment from new healthcare providers – information can be passed on, tests do not need to be repeated, and new healthcare providers have the entire health history of a patient to inform their decisions.
Prior to the introduction of the HIPAA Privacy Rule, there were no requirements for healthcare organizations to release copies of patients’ health information. This made the process of switching healthcare providers extremely difficult for the patient.
<urn:uuid:7ee7ad9c-41e8-4ad9-9a02-d9fd1ec2d413>
CC-MAIN-2022-40
https://www.hipaanswers.com/why-is-hipaa-important/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00233.warc.gz
en
0.957275
654
3.515625
4
(The series continues with a look at the relationship between security and design in web-related technologies prior to HTML5. Look for part 3 on Monday.) Security From Design The web has had mixed success with software design and security. Before we dive into HTML5 consider some other web-related examples: PHP superglobals and the register_globals setting exemplify the long road to creating something that’s “default secure.” PHP 4.0 appeared 12 years ago in May 2000, just on the heels of HTML4 becoming official. PHP allowed a class of global variables to be set from user-influenced values like cookies, GET, and POST parameters. What this meant was that if a variable wasn’t initialized in a PHP file, a hacker could set its initial value just by including the variable name in a URL parameter. (Leading to outcomes like bypassing security checks, SQL injection, and accessing other users’ accounts.) Another problem of register_globals was that it was a run-time configuration controllable by the system administrator. In other words, code secure in one environment (secure, but poorly written) became insecure (and exploitable) simply by a site administrator switching register_globals on in the server’s php.ini file. Security-aware developers tried to influence the setting from their code, but that created new conflicts. You’d run into situations where one app depended on register_global behavior where another one required it to be off. Secure design is far easier to discuss than it is to deploy. It took two years for PHP to switch the default value to off. It took another seven to deprecate it. Not until this year was it finally abolished. One reason for this glacial pace was PHP’s extraordinary success. Changing default behavior or removing an API is difficult when so many sites depend upon it and programmers expect it. (Keep this in mind when we get to HTML5. PHP drives many sites on the web; HTML is the web.) Another reason for this delay was resistance by some developers who argued that register_globals isn’t inherently bad, it just makes already bad code worse. Kind of like saying that bit of iceberg above the surface over there doesn’t look so big. Such attitudes allow certain designs, once recognized as poor, to resurface in new and interesting ways. Thus, “default insecure” endures. The Ruby on Rails “mass assignment” feature is a recent example. Mass assignment is an integral part of Ruby’s data model. Warnings about the potential insecurity were raised as early as 2005 – in Rails’ security documentation no less. Seven years later in March 2012 a developer demonstrated the hack against the Rails paragon, GitHub, by showing that he could add his public key to any project and therefore impact its code. The hack provided an exercise for GitHub to embrace a positive attitude towards bug disclosure (eventually). It finally led to a change in the defaults for Ruby on Rails. SQL injection has to be mentioned if we’re going to talk about vulns and design. Prepared statements are the easy, recommended countermeasure for this vuln. You can pretty much design it out of your application. Sure, implementation mistakes happen and a bug or two might appear here and there, but that’s the kind of programming error that happens because we’re humans who make mistakes. Avoiding prepared statements is nothing more than advanced persistent ignorance of at least six years of web programming. A tool like sqlmap stays alive for so long because developers don’t adopt basic security design. 
SQL injection should be a thing of the past. Yet “developer insecure” is eternal. The Same Origin Policy is a core of browser security. One of its drawbacks is that it permits pages to request any resource – which is why the web works in the first place, but also why we have problems like CSRF. Another drawback is that sites had to accept its all-or-nothing approach – which is why problems like XSS are so worrisome.
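The examples discussed above are PHP and Ruby, but the prepared-statement countermeasure looks much the same in any language. Here is a minimal, illustrative sketch using Python's standard-library sqlite3 module; the table, column, and payload are made up for demonstration.

```python
import sqlite3

# Throwaway in-memory database with a made-up schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the input rewrite the query itself.
query = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # returns rows it should not

# Safe: a parameterized (prepared) statement treats the input purely as data.
print(conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall())  # []
```

The parameterized form never interprets the payload as SQL, which is the whole of the design fix the article is arguing for.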
<urn:uuid:824de2f5-2de1-4b28-8bf2-e49e87b5fc95>
CC-MAIN-2022-40
https://deadliestwebattacks.com/presentation-notes/2012-05-25-html5-unbound-part-2-of-4
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00233.warc.gz
en
0.930284
1,443
2.609375
3
Choosing between SQL and NoSQL databases is often a decision developers have to make when building software or applications. Although they are decades old, they are still widely used today. The SQL database was first developed in the 1970s by IBM researchers, according to Business News Daily’s SQL database breakdown. Meanwhile, the NoSQL database emerged in the 2000s when developers wanted to work with a more flexible model. Having been around for years, these databases have served many different purposes. Here, we are going to break down their key differences.

Having fast authoritative DNS servers is very important today. Not only do they help web pages load and make browsing websites faster, but every email that gets sent by your domain also has to be resolved quickly. In the early days of the internet, all a mail server had to do was look up MX records and send the mail. Now a sending mail server has to resolve DKIM, SPF, DMARC and maybe even BIMI records, not to mention checking blacklists. Not only that, but every receiving email server has to resolve the same information: it checks whether the DKIM keys are correct, whether the sender was authorized to send (SPF), and whether anything needs to be reported (DMARC). To eliminate errors in the DNS for your domain, you need to host your DNS on a fast Anycast DNS platform.

Most organizations today have more than one domain registered. They may have a yourcompany.com domain, but then have similar domains registered such as a .net or .org domain name as well. Today, these other domains need to be protected from spoofing as well. Domains that do not send emails can still be used in email spoofing or phishing attacks. You can protect your domain by adding SPF, DKIM and DMARC records that specifically tell other mail servers to reject mail from the domain being spoofed. This will make it significantly more difficult for attackers to exploit these parked domains your organization may have.

Online DIG tools are a great way to do a quick check of DNS records to see if you have any errors in your configuration. If you are a Linux user you can simply open up a terminal window and run a DIG command; however, there are some really good tools online that can make it even easier. If you do a search for "online DIG tools", you will find that the most popular are Google Toolbox, digwebinterface.com and diggui.com. You will notice that none of these tools has a dedicated option to check DMARC. The following video will show you how to check any DMARC record really easily using these tools.

The abuse and postmaster email accounts for your domain are very important. If you send out any amount of email, you probably would like to monitor and see that your email is flowing properly. To monitor abuse and postmaster messages, add the people who should receive reports as group members, and set their subscription options. For example, you might want to add your legal or marketing team to the abuse group.

Recently the Asustor NAS devices have been hit by an ugly ransomware attack called Deadbolt. How can you protect yourself from attacks such as the Deadbolt ransomware attack? Asustor is not the first to get hit with these types of attacks: WD MyBookLive and QNAP have also been hit recently, and Synology is also susceptible to these types of attacks.

A lot of small organizations and companies use the free email services of Gmail, Yahoo or Outlook.com. However, you should be driving customers to your domain email for a number of reasons.
This video and article will show you how to set up domain email forwarding properly, while still making use of the free email service of your choice.

The proper implementation of DMARC is very important, as it can be used to make your email flow better, give you the reports you require, and provide you with better security. It is a necessity today for a number of reasons. If you have not implemented it yet, you should.

As you may know, we at Clustered Networks are big promoters of open source software, and we too use a number of free open source office suite products in our office. I am sure most of you are aware that Google apps such as Docs, Sheets, Slides and Calendar are free if you have a Gmail account. LibreOffice is a great option that we use as well. But did you know that even MS Office and Apple offer free versions of their popular office suite products? Microsoft offers "Office on the Web", and you can create an Apple ID which will give you the Apple iWork apps, even if you do not own an Apple product.

We have been using Rsync for nearly 20 years here at Clustered Networks. Rsync is an excellent tool to synchronize data between a computer and a server or from server to server. Rclone, on the other hand, has some unique features that make it a very useful utility. At Clustered Networks we use both Rsync and Rclone in our scripts that we run on a daily basis.
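For readers who would rather script these DNS checks than rely on the online dig tools mentioned above, the DMARC lookup is simply a TXT query against the _dmarc label. Below is a small, hedged sketch using the third-party dnspython package; example.com is a placeholder domain, and this is not the exact workflow shown in the videos.

```python
import dns.resolver  # third-party package: pip install dnspython

domain = "example.com"  # placeholder; substitute your own domain

queries = {
    "SPF (TXT on the domain itself)": domain,
    "DMARC (TXT on _dmarc.<domain>)": f"_dmarc.{domain}",
}

for label, qname in queries.items():
    try:
        answers = dns.resolver.resolve(qname, "TXT")
        records = [b"".join(r.strings).decode() for r in answers]
        # Keep only the mail-authentication records we care about.
        relevant = [t for t in records if t.startswith(("v=spf1", "v=DMARC1"))]
        print(label, "->", relevant or records)
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(label, "-> no TXT record found for", qname)
```

Running the same two queries through dig, Google Toolbox, or digwebinterface.com should return the same TXT strings; the script just makes it easy to check many domains, including parked ones, in one pass.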
<urn:uuid:67c6ea1e-b6f3-40b4-a58b-c9cb445e69fe>
CC-MAIN-2022-40
https://www.clusterednetworks.com/blog/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00233.warc.gz
en
0.95331
1,075
2.6875
3
In the past, there has been much excitement over research that purported to show a link between changes in a woman’s cycle and how attracted she was to men behaving in different ways. However, research at the University of Göttingen using the largest sample size to date questions these results. The new research showed that shifts in women’s cycles did not affect their preferences for men’s behaviour. The researchers found, however, that when fertile, women found all men slightly more attractive and, irrespective of their hormone cycle, flirtier men were evaluated as being more attractive for sexual relationships but less attractive for long-term relationships. The results were published in Psychological Science.

According to the good genes ovulatory shift hypothesis (known as GGOSH), women’s preferences for certain behaviours, presumed to indicate men’s genetic fitness, should differ according to their fertility. To test this, the researchers studied 157 female participants who met strict criteria – including being 18 to 35 years old, heterosexual and having a natural, regular cycle. The participants watched videos showing a man getting to know a woman who was out of shot. In four separate testing sessions, the female participants rated the men both on sexual attractiveness for a short-term relationship without commitment, and on attractiveness for a long-term relationship. The participants were asked to focus on the way the men behaved. The researchers used saliva samples to analyse current hormone levels and highly sensitive urine ovulation tests for validating the ovulation date, and in particular the fertile period.

They found there was no evidence that a woman’s mate preference changes across the ovulatory cycle. Rather, women seem to perceive or evaluate every man as slightly more attractive when fertile compared to other cycle phases. They also found that men who act in a more competitive manner and show more courtship behaviour (for instance flirting) were evaluated as being more attractive for short-term sexual relations but less attractive for long-term relationships, independent of cycle phase or hormone levels.

First author Dr Julia Stern from the University of Göttingen’s Institute of Psychology said, “There is a lot of research on women’s mate preferences, so at first we were surprised that we didn’t see the same effects. However, our new results are in line with other recent studies using more rigorous methods than previous studies.” Stern added, “The finding that ratings of attractiveness increase in the fertile phase, independent of men’s behaviour, is new and indicates that women’s mating motivation is likely to be higher in the fertile phase.” As well as using a large sample that met strict criteria, the researchers followed rigorous methods, for instance by preregistering their study before data collection and employing “open science” practices such as making their data and analyses publicly available.

One of the most noteworthy differences between humans and other closely related primates is the absence of clear advertisements of fertility within the ovulatory cycle (Dixson, 1998). Recent evidence has suggested, however, that there are subtle ovulatory cues in humans. Roberts et al.
(2004) showed facial photographs of women taken during the follicular and luteal phases to male and female judges. On average, follicular phase images were judged more attractive approximately 54% of the time. Similarly, relative to those from other cycle phases, women’s body scents near ovulation are judged as more attractive by men (Doty et al., 1975; Singh and Bronstad, 2001; Thornhill et al., 2003) and women’s sexual desires vary across the cycle (Bullivant et al., 2004; Gangestad et al., 2002; Haselton and Gangestad, 2006). Thus, human ovulation may not be completely concealed.

In the last decade, the literature on cyclic shifts in women’s social motivations has grown rapidly. For ancestral women, the time required to collect food could have been considerable; thus, Fessler (2003) reasoned that there likely were tradeoffs across the cycle between feeding and other activities such as mating. Fessler (2003) compiled and reviewed evidence that women’s appetites decrease near ovulation, and he hypothesized that this decrease in appetite at high fertility reflects an adaptation in women designed to decrease the motivational salience of goals that compete with efforts devoted to mating. As additional evidence supporting the hypothesis, Fessler reviewed studies showing that women’s ranging activities, such as locomotion and volunteering for social activities, tend to increase near ovulation.

Other lines of evidence also indicate cyclic shifts in women’s mating motivations. In a daily report study, Haselton and Gangestad (2006) found that on high fertility days of the cycle women report a greater desire to go to clubs and parties where they might meet men. Macrae et al. (2002) found that women’s ability to categorize male faces and male stereotypic words is faster near ovulation, suggesting increased attentiveness to “maleness” at high fertility. Other research shows that women’s preferences for masculine features (e.g., masculine facial structure) increase near ovulation (reviewed in Gangestad et al., 2005a). Several rigorous within-subjects studies have found that women’s attraction to and flirtation with men other than their primary partner is higher near ovulation than in other phases of the cycle (Gangestad et al., 2002; Haselton and Gangestad, 2006; also see Bullivant et al., 2004). Finally, Fisher (2004) found that women tested near midcycle, compared with those tested in other cycle phases, tend to give lower attractiveness ratings to photographs of female faces—an effect Fisher interpreted as evidence that women are more intrasexually competitive near ovulation. In sum, a variety of data sources indicate that women’s social motivations – in particular, their sexual motivations – increase near ovulation.

Hypothesis: ovulation and ornamentation

We hypothesize that changes in women’s motivations manifest themselves in changes in self-ornamentation through attentive personal grooming and attractive choice of dress. Ornamentation in non-humans, including bright plumage, long tails, and large bodies, is generally presumed to be the product of sexual selection (Andersson, 1994). These traits are effective in attracting mates, either because they indicate fitness (e.g., due to costs of their maintenance) or due to pre-existing sensory biases (Daly and Wilson, 1983; Parsons, 1995; Zahavi, 1975). Although rare, animals occasionally employ behavioral ornamentation, as opposed to morphological ornamentation, in the effort to attract mates.
Male bowerbirds, for example, found in Australia and New Guinea, build elaborate structures ornamented with brightly colored flowers and fruits in order to attract mates. Male bowerbirds will often also pick up a brightly colored object in their beaks while displaying to a female, thus effectively ornamenting themselves (Diamond, 1982; Gilliard, 1969). The purpose of these traits, both morphological and behavioral, is to attract reproductive partners, and animals do not expend energy producing these displays when mating is not likely. The bowerbird dismantles its bower and abandons its territory during the non-breeding months (Pruett-Jones and Pruett-Jones, 1982), and even birds that rely on morphological ornamentation, such as brightly colored feathers or bills, may exhibit sexual dimorphism only seasonally (Badyaev and Duckworth, 2003; Peters et al., 2004).

In humans too, ornamentation may serve the purpose of attracting mates, at least in part. In a recent study, Grammer et al. (2005) interviewed women at a discotheque; those who rated their clothing as “sexy” and “bold” also reported that their intention for the evening was to flirt or find a sex partner. Although the direction of causality is unclear, these findings suggest that women’s clothing choices are linked with their motivations.

Prior research also has shown that men’s behaviors toward their partners shift across the cycle. Three studies have shown that, in the fertile relative to the luteal phase of the cycle, men are more attentive and loving toward their partners (Gangestad et al., 2002; Haselton and Gangestad, 2006; Pillsworth and Haselton, 2006) and two have shown that men are more jealous and possessive (Gangestad et al., 2002; Haselton and Gangestad, 2006). It is not yet known what cues drive these changes in men’s behavior. One possibility is that men attend to differences in female behavior. For example, Haselton and Gangestad (2006) and Gangestad et al. (2002) found that women’s reports of flirtatiousness with men other than their primary partner were higher when assessed during the late follicular as compared with the luteal phase of the cycle. In both studies, ovulatory increases in flirtatiousness statistically predicted ovulatory increases in male mate retention effort but did not fully account for them, leaving open the possibility that other ovulatory cues affect men’s behavior—including the ornamentation effect we predict.

In this study, we measure an overt, readily observable behavior in women that we propose will be linked with ovulation. Specifically, we predict that women engage in self-ornamentation during the high fertility phase of the ovulatory cycle, thus placing themselves in the foreground of the social array.

University of Göttingen
<urn:uuid:2c318e7e-9b71-43ca-a020-5db08803429b>
CC-MAIN-2022-40
https://debuglies.com/2020/03/08/women-tend-to-perceive-men-as-slightly-more-attractive-when-they-are-fertile/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00233.warc.gz
en
0.951667
2,059
2.515625
3
The 21st century marks the rise of artificial intelligence (AI) and machine learning (ML) capabilities for mass consumption. Five years ago, there were virtually no cybersecurity vendors leveraging AI, but nowadays everyone seems to be using these 'magic' words. According to Gartner Research, the total market for all security will exceed $120B in 2019. However, the current hype around AI and ML has created a certain amount of confusion in the marketplace. Here is my take and what I consider important when considering an AI solution, as applied to cybersecurity. The Two Types of AI: - Strong artificial intelligence is a term used to describe a certain mindset of artificial intelligence development. Strong AI's goal is to develop artificial intelligence to the point where the machine's intellectual capability is functionally equal to a human's. The cybersecurity industry is not there (yet?). Strong AI’s possible existence in the future is often questioned. - Weak artificial intelligence is a form of AI specifically designed to be focused on a narrow task and to seem very intelligent at it. It contrasts with strong AI, in which an AI is capable of all and any cognitive functions that a human may have and is in essence no different than a real human mind. Today, most successful research and advancements have come from the sub-discipline of AI called machine learning, which focuses on teaching machines to learn by applying algorithms to data. Often, the terms AI and ML are used interchangeably, generating additional confusion in the marketplace. What to Consider and Ask for Clarification On Features: Features are critical to any ML model because they determine what and how information is exposed. Besides the important question of what information to include, how the information is encoded also matters. Data Sets: Effective machine learning requires vast quantities of data to ‘train’ on, which is one of the hurdles 21st century innovation has already overcome. Big Data and the Internet of Things (IoT) are now producing data at an unprecedented scale. The data used to train and evaluate the ML model fundamentally and hugely impacts its performance. If the data used to train the model are not representative of the real world, then the model will fail to do well in the field. I voluntarily left out data hygiene, unbalanced data, label, label noise etc… if you would like to learn more about the importance of Features and Data Sets, you can read our 173 pages of pure science produced by the Cylance Data Science Team: Introduction to Artificial Intelligence for Security Professionals – and here is the example code for our book on GitHub.) Computing Power: Where is the Training Done? Machine learning requires a massive amount of data to process, and it needs equally massive compute processing. Knowing where the training is done can be a good pre-indicator of the model’s robustness and its aptness of fit without testing the solution. Which Technology is Used and to What End? Is ML used to automate human tasks such as generating signatures faster, or is it used to develop an entirely new approach? How long has the model been trained for? A solution that does not change the existing paradigm will most likely not protect you against the ever-increasing sophistication of adversaries. 
In addition, rushing a model to market to jump on the bandwagon of AI may also expose companies to using training information that has not been thoroughly scrubbed of anomalous data points, resulting in fail delivery in the field. How Are ML Engines Trained? To give an example from my own experience, here at Cylance, we have created an Endpoint Protection Platform (EPP) solution that utilizes ML to prevent malware from executing on your system. The algorithm deployed on your endpoints allows you to defend them before malware has a chance to execute. To create our first ML engine, our data scientists initially fed the engine a large number of samples (≃ 500 million). Half of those samples were malicious, and the other half were non-malicious. The initial algorithms produced to prevent the malicious samples from executing were moderately successful. At this point it was obvious that the engine needed more training, and more importantly, a larger set of data to train from. We thus continued to feed the ML engine with more and more files (both malicious and non-malicious), training it over time to recognize the difference between a good file (that would be allowed to execute) and a bad file (that would be prevented from executing), by analyzing and understanding the intrinsic nature and intentions of each file at the binary level. Most importantly, the engine needed to be able to tell the difference before the file was allowed to run (pre-execution). Over time, the efficacy of our ML engine continued to increase. Our engine was and is still learning, growing and becoming more and more accurate over time. Today, after many years of intensive training, the algorithms our engine produces are at around 99% efficacy. When we started to dig deeper into the algorithms, we noticed that the engine was now looking at well over six million features of a binary; considering that the average human malware reverse engineer is looking in the hundreds, this can be considered a huge improvement in accuracy and scope. Below is a diagram showing how we are developing our ML engine: Which Generation of Machine Learning are We Talking About? Cybersecurity ML generations are distinguished from one another according to five primary factors, which reflect the intersection of data science and cybersecurity platforms. Each generation builds on the last one. Early generations can block "basic" attacks but will struggle or fail to block the most advanced ones. The dataset size and the number of features grows substantially in each generation. - Runtime: Where does the ML training and prediction occur (e.g. in the cloud, or locally on the endpoint)? - Features: How many features are generated? How are they pre-processed and evaluated? - Datasets: How is trust handled in the process of data curation? How are labels generated, sourced, and validated? - Human Interaction: How do people understand the model’s decisions and provide feedback? How are models overseen and monitored? - Quality of Fit: How well does the model reflect the datasets? How often does it need to be updated? The following table summarizes the characteristics of the generations according to the achievement within the factors just mentioned: Third-Generation Machine Learning: Deep Learning The cloud model complements and protects the local model. Decisions are explained by the model in a way that reflects its decision process. Models are evaluated and designed to be hardened against attacks. Concept drift is mitigated by great generalization. 
Deep learning reduces the amount of human time needed. Fourth-Generation Machine Learning: Adaptive Learning These models learn from local data without needing to upload observations. Features are designed by strategic interactions between humans and models. New features and models are constantly evaluated by ongoing experiments. Humans can provide feedback to correct and guide the model. Most are robust to well-known ML attacks. Fifth-Generation Machine Learning: Supervision becomes optional. Models learn in a distributed, semi-supervised environment. Human analysis is guided by model-provided insights. Models can be monitored and audited for tampering, and support deception capabilities for detecting ML attacks. If you wish to learn more about all the generations of ML, you can read our Generation of ML whitepaper here. Obviously, Ask to Test for Yourself Ideally, test in a production environment to see the solution in the field. Any testing should ideally be transparent, and you should be able to test the solution in any way you want to without restrictions being imposed by the vendor. Understand the Limitations of the Solution ML cannot entirely replace humans. It can assist humans, change a paradigm, automate multiple tasks etc., but in the end, no solution can nor will protect you 100%. Human-machine teams are key to solving the most complex cybersecurity challenges. Malicious AI Report states that as AI capabilities become more powerful and widespread, there will be an expansion of the introduction of new threats, as attackers exploit vulnerabilities in AI systems used by defenders (“Adversarial ML”) and increase the effectiveness of their existing attacks through (e.g.) automation. Furthermore, attackers are expected to use the ability of AI to learn from experience to craft attacks that current technical systems and IT professionals may be ill-prepared to deal with. This further emphasizes the need to educate and train employees to avoid potential cybersecurity disasters. In conclusion, it is important to understand that AI is not a 'magic tool' that solves all problems. ML solutions can only help address some of today's cybersecurity problems and give your business an advantage when facing a cyberattack. All ML solutions are not born nor developed equal and it is of prime importance to understand the strengths and limitations of each one. Knowing what a solution can and cannot do will help you better build and manage your SOC, and lower your overall risk profile.
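To make the training-and-prediction loop described in this article concrete, here is a deliberately simplified sketch using scikit-learn. It is not Cylance's model or data: the "file features" are random stand-ins, the labels are synthetic, and a production system would train on millions of real samples with far richer feature extraction.

```python
# Toy illustration of training a malware/benign classifier on file features.
# Everything here is synthetic; it only mirrors the shape of the workflow.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_samples, n_features = 5000, 50                 # real systems: millions of samples and features
X = rng.normal(size=(n_samples, n_features))     # stand-in for static features of binaries
y = (X[:, :5].sum(axis=1) > 0).astype(int)       # stand-in label: 1 = "malicious", 0 = "benign"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Pre-execution verdict for a new, unseen file's feature vector:
new_file_features = rng.normal(size=(1, n_features))
print("block" if model.predict(new_file_features)[0] == 1 else "allow")
```

The hold-out evaluation step is where the generational factors above (dataset size, feature quality, concept drift) show up in practice: a model that scores well only on data resembling its training set will degrade quickly in the field.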
<urn:uuid:4585c5eb-6b02-4494-8dfd-b5f82b8e0884>
CC-MAIN-2022-40
https://blogs.blackberry.com/en/2018/11/all-ai-is-not-equal-choosing-a-proper-ai-security-solution
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00233.warc.gz
en
0.949886
1,850
2.75
3