text (stringlengths 234-589k) | id (stringlengths 47) | dump (stringclasses, 62 values) | url (stringlengths 16-734) | date (stringlengths 20 ⌀) | file_path (stringlengths 109-155) | language (stringclasses, 1 value) | language_score (float64 0.65-1) | token_count (int64 57-124k) | score (float64 2.52-4.91) | int_score (int64 3-5) |
---|---|---|---|---|---|---|---|---|---|---|
Complementary Metal-oxide Semiconductor (CMOS) Radio Frequency (RF) Power Amplifiers (PAs)
Complementary metal-oxide semiconductor (CMOS) radio frequency (RF) power amplifiers (PAs) are used in mobile devices and are capable of providing up to two watts of output power (roughly 28 to 33 dBm, i.e. decibels relative to one milliwatt). They are used primarily in mobile phones and PC data cards. The technology currently appears mainly in basic Global System for Mobile Communications (GSM)/general packet radio service (GPRS) handsets, and greater adoption in third-generation (3G) handsets is likely in the future because of 3G's lower output power requirement and its need for power control. Key benefits of the technology, compared with the alternatives, are lower cost and greater integration, which lead to a smaller footprint in the mobile device.
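As a rough cross-check on those figures, output power in watts converts to dBm with 10 * log10(P / 1 mW). A minimal Python sketch, with wattage values chosen purely for illustration rather than taken from any product datasheet:

```python
import math

def watts_to_dbm(power_watts: float) -> float:
    """Convert output power in watts to dBm (decibels relative to 1 milliwatt)."""
    return 10 * math.log10(power_watts * 1000)  # 1 W = 1000 mW

# Illustrative values spanning the quoted 28-33 dBm range
for watts in (0.63, 1.0, 2.0):
    print(f"{watts:.2f} W -> {watts_to_dbm(watts):.1f} dBm")
# 0.63 W -> 28.0 dBm, 1.00 W -> 30.0 dBm, 2.00 W -> 33.0 dBm
```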
| <urn:uuid:ebfcc347-aebd-4654-a25b-e85b6ec14546> | CC-MAIN-2022-40 | https://www.gartner.com/en/information-technology/glossary/complementary-metal-oxide-semiconductor-cmos-radio-frequency-rf- | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00249.warc.gz | en | 0.883262 | 183 | 3.046875 | 3 |
One of the primary draws to virtualization is the resilient nature of virtual machines. They can be transported across hosts without downtime. The files for a backed-up virtual machine can be used on any host running the same hypervisor as the original. When combined with Microsoft Failover Clustering, Hyper-V can use a multi-host strategy to increase this resiliency even further.
Understanding High Availability
Despite widespread usage in the infrastructure technology industry, many people do not quite understand what the term “high availability” means. It is not a technology or a group of technologies. In the strictest sense, it’s not really even a class of technologies. “High availability” is an arbitrary target metric.
If a single server is running an application that isn’t offline for more than a total of five minutes in a calendar year, then it has a 99.999% uptime rating (known as 5 nines) and would, therefore, qualify as being “highly available” by most standards. The fact that it isn’t using anything normally recognized as a “high availability” technology does not matter. Only the time that it is available is important.
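The arithmetic behind the "nines" is simple enough to sketch in a few lines of Python; the five-minute figure is the example from the paragraph above, and the other downtime budgets are added only for comparison:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap calendar year

def availability(downtime_minutes: float) -> float:
    """Return uptime as a fraction of a calendar year, given total downtime."""
    return 1 - downtime_minutes / MINUTES_PER_YEAR

# Roughly five minutes of downtime per year is the classic "five nines" target
for downtime in (5, 53, 526, 5256):
    print(f"{downtime:>5} minutes down per year -> {availability(downtime) * 100:.4f}% uptime")
```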
It is up to each organization to create its own definition of “high availability”. Sometimes, these are formal metrics. A hosting company may enter into a contractual agreement with its customers to provide a solution and guarantee a certain level of availability; such an agreement is typically known as a Service Level Agreement (SLA).
An IT department can also make similar agreements with other business units within the same organization; these are often called Operational Level Agreements (OLAs). Agreements vary in scope and complexity. Some count only unexpected downtime against the metric, while others count any downtime. Some specify monetary reimbursement, some provide for services compensation, and others provide for no remuneration at all.
Not everyone will formalize their agreements. This would simply be known as “best effort” high availability. This is common in smaller organizations where the goal isn’t necessarily the ability to measure success but to ensure that systems are architected with availability as a priority.
A term often confused with high availability is “fault tolerance”. Fault tolerance is a feature of some software and hardware components that allows them to continue operating even when there is a failure in a subsystem. For instance, a RAID array that continues operating when a member disk fails is considered “fault tolerant”.
Hyper-V and Fault Tolerance
Hyper-V itself has no true fault tolerance capability. It is built on the Windows Server kernel which can compensate for some hardware failures, but overall, it relies on hardware features to provide true fault tolerance. The main thing to note is that Hyper-V has no ability to continue running a virtual machine on a host that crashes without interruption to the virtual machine.
Hyper-V and High Availability
Failover Clustering is a technology that Microsoft developed to protect a variety of applications, including third party software and even custom scripts. Hyper-V is one of the applications that is natively supported. As with most other technologies, Failover Clustering provides Hyper-V with a “single active node, one or more passive nodes” protection scheme. However, it provides this protection at the virtual machine level. Any particular virtual machine can only be running on a single node at any given time, but every node in a failover cluster can actively run virtual machines simultaneously.
Failover clustering contributes to high availability by reducing the impact of host down time on guests. If a non-clustered Hyper-V host fails, its guests are offline until the host can be recovered or its contents restored to a replacement system. If a clustered host fails and there is sufficient capacity across the surviving nodes, all guests are re-activated within seconds. This cuts the unexpected downtime of a virtual machine to a fraction of the expectation of a physical system.
Appropriateness and Requirements of Clustering
Not every organization should cluster Hyper-V. Compared to standalone hosts, clustering is expensive in both equipment and maintenance overhead. Most of the maintenance can be automated, but configuring and maintaining such automation presents its own burdens.
Host Hardware Requirements
It is not necessary for all of the nodes in a cluster to be identical. Windows Server/Hyper-V Server will allow you to create a cluster with any members running the same version of the management operating system; it will even allow members running different editions. However, the further you stray from a common build, the less likely you are to achieve a supportable cluster. Running in an unsupported configuration may not provide any benefits at all.
Even if the system allows a cluster, these items should be considered the gold standard for a successful cluster:
- Use equipment listed on Windows Server Catalog. Be aware that Microsoft does not specifically certify for a cluster configuration.
- Use hosts with the same processor manufacturer. You will be completely unable to Live Migrate, Quick Migrate, or resume a saved guest across systems from different manufacturers. They will need to be completely turned off first. This requirement breaks the automated failover routines.
- Use hosts with processors from the same family. There is a setting to force CPU compatibility, but this causes the guest to use the lowest common denominator of processor capabilities, reducing its run-time effectiveness simply to increase the odds of a smooth failover.
Shared Storage Requirements
A failover cluster of Hyper-V hosts requires independent shared storage. This can be iSCSI, Fibre Channel, or SMB 3+. None of the Hyper-V cluster nodes can directly provide the storage for clustered virtual machines (although they can provide storage for local, non-clustered guests).
Hyper-V Cluster Networking Requirements
A standalone Hyper-V system can work perfectly well with a single network card. A Hyper-V cluster node cannot. The introduction of converged networking has greatly reduced the earlier requirement of at least three physical network adapters (four if iSCSI is used), but two should still be considered the absolute minimum.
Whether physical adapters or virtual adapters, a Hyper-V cluster node requires the following distinct IP networks:
- Host management
- Cluster communications
- iSCSI (if iSCSI is used to store virtual machines) and/or SMB (if SMB is used to store virtual machines)
To minimize congestion issues, especially when using 1GbE adapters, a fourth IP network for Live Migration traffic is highly recommended. In order for clustering to use them all appropriately, each separate adapter (again, physical or virtual) must exist in a distinct IP network. The clustering service will use only one adapter per IP network per physical host for any given network.
No matter what, use the Cluster Validation wizard as explained in section 4.1 to verify that your cluster meets all expectations.
How to Determine if a Cluster is an Appropriate Solution
There is no universal rule to guide whether or not you should build a cluster. The leap from internal storage to external, shared storage is not insignificant. Costs are much higher, staff may need to be trained, additional time for maintenance will be necessary, and additional monitoring capability will need to be added. Not only do you need to acquire the storage equipment, but networking capability may also need to be expanded.
In addition to storage, most clusters are expected to add some resiliency. Generally, clusters will be expected to operate in at least an N + 1 capacity, which means that you will have sufficient nodes to run all virtual machines plus an additional node to handle the failover load. While you can certainly run on all nodes when they’re operational, it’s still generally expected that the cluster can lose at least one and continue to run all guests. This extra node has its own hardware costs and incurs additional licensing for guest operating systems.
It is up to you to weigh the costs of clustering against its benefits. If you’re not certain, seek out assistance from a reputable consultant or engage Microsoft Consulting Services.
Failover Clustering Concepts
As explained in the Hyper-V virtual networking configuration and best practices article, a cluster needs a network just for cluster communications. One of the things that will occur on this network is a “heartbeat” operation. All nodes will attempt to maintain almost constant contact with every other node and, in most cases, a non-node arbiter called a quorum disk or quorum share. When communications do not work as expected, the cluster takes protective action. The exact response depends upon a number of factors.
The central controlling feature of the cluster is quorum. Ultimately, this feature has only one purpose: to prevent a split-brain situation. Because each node in the cluster refers to virtual machines running from one or more common storage locations, it would be possible for more than one node to attempt to activate the same virtual machine. If this were to occur, the virtual machine would be said to have a split brain. Furthermore, if multiple nodes were unable to reach each other but the cluster continued to operate on all of them, the cluster itself would be considered to be in split-brain, as a cluster should be a singular unit.
There are multiple quorum modes and advanced configurations, but the simplest explanation is that quorum requires a majority of votes to be reachable in order for the cluster to function. By outward appearances, quorum seems to work as an agreement among active nodes, but it is somewhat more correct to say that quorum is evaluated from the perspective of each node on its own. If a node is unable to contact enough other systems to achieve quorum, it voluntarily exits the cluster and places each clustered virtual machine that it owns in the state dictated by its Cluster-controlled offline action. This means that if there aren’t enough total votes for any node to maintain quorum, the entire cluster is offline and no clustered virtual machines will run without intervention.
By default, quorum is dynamically managed by the cluster and it’s recommended that you leave it in this configuration. You will need a non-node witness, which can be an iSCSI or Fibre Channel LUN or an SMB share (it can be SMB 2). In earlier versions of Hyper-V, this witness was only useful for clusters with an even number of nodes. Starting in 2012 R2, Dynamic Quorum works best when there is an independent witness regardless of node count. There are also various advanced quorum options available for those who want to explore further.
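A deliberately simplified Python sketch of the majority rule described above; it ignores dynamic quorum, vote weighting, and the other advanced options, and the node and witness counts are only illustrative:

```python
def has_quorum(reachable_votes: int, total_votes: int) -> bool:
    """A node keeps running clustered roles only if the votes it can reach
    (itself, reachable nodes, and any witness) form a strict majority."""
    return reachable_votes > total_votes // 2

# Example: 4 nodes plus 1 witness = 5 total votes.
# A node that can reach itself, two peers, and the witness sees 4 of 5 votes.
print(has_quorum(4, 5))  # True  -> stays in the cluster
# A node isolated with only one peer sees 2 of 5 votes.
print(has_quorum(2, 5))  # False -> exits the cluster; its guests follow the offline action
```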
Arguably, the greatest reason to cluster at all is clustering’s failover capability. Cluster nodes exchange information constantly in a mesh pattern. Every node communicates directly with every other node. When something changes on one node, all the others update their own local information. You can see much of this reflected in the registry under HKEY_LOCAL_MACHINE\Cluster. If a quorum disk is present, information is stored there as well.
Whereas quorum is primarily node-centric, the failover process is the opposite. When a node ceases to respond to communications, the remaining nodes that still have quorum “discuss” the situation and determine how its virtual machines will be distributed. The exact process is not documented, but the general effect is a fairly even distribution of the guests. If there are insufficient resources available across all remaining nodes, lower priority virtual machines are left off. If all virtual machines have the same priority, the selection criteria appears to be mostly random.
A cluster of Hyper-V nodes provides the ability to rapidly move virtual machines from one to another. The original migration method is known as Quick Migration. The virtual machine is placed in a saved state, ownership is transferred to another host, and the virtual machine’s operations are restored there. No data, CPU activities, or I/O operations are lost, but they are paused. Remotely connected systems will lose connectivity for the duration of the migration and any of their pending operations may time out. The time required for a Quick Migration depends on how long it takes to copy the active memory contents to disk on the source and return them to physical memory on the destination.
The more recent migration method is called Live Migration. This is a more complicated operation that does not interrupt the virtual machine. Contents of system memory are transferred to the target while operations continue at the source. Once all the latent memory is transferred, the machine is paused at the source, the remaining bits are transferred, and the machine is resumed at the destination. The pause is so brief that connected systems are rarely aware that anything occurred. Networking is one potential exception. The MAC address for the guest is either changed or transitioned to the destination host’s physical switch port. This requires a few moments to replicate through the network topology. The transition occurs well within the TCP timeout, but inbound UDP traffic is likely to be lost.
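The iterative transfer described above can be modeled as a loop: keep re-sending the pages the guest dirtied during the previous pass until the remainder is small enough to send during a very brief pause. A rough Python sketch with made-up page counts and dirty rates:

```python
def live_migration_passes(total_pages: int, dirty_fraction: float,
                          pause_threshold: int) -> list:
    """Model iterative pre-copy: each pass re-sends only the pages dirtied
    during the previous pass, until the remainder is small enough to send
    in one final transfer while the virtual machine is briefly paused."""
    passes = []
    remaining = total_pages
    while remaining > pause_threshold:
        passes.append(remaining)                     # sent while the VM keeps running
        remaining = int(remaining * dirty_fraction)  # pages dirtied in the meantime
    passes.append(remaining)                         # final pass during the short pause
    return passes

# 1,000,000 pages, 10% dirtied per pass, pause once no more than 1,000 pages remain
print(live_migration_passes(1_000_000, 0.10, 1_000))
# [1000000, 100000, 10000, 1000] -- only the last, small pass happens while paused
```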
Basic Clustering Guidance
Clustering may be new to you, but it is not an overly difficult concept to learn, implement, and maintain. It offers many solid benefits that make it worthwhile for many organizations. Take the time to discover its capabilities and measure their value against your organization’s needs. The information presented in this article encompasses the major points and should be all you need to know in order to make the best decision.
| <urn:uuid:6888f100-38b2-4a87-9d9f-fb9f549e351b> | CC-MAIN-2022-40 | https://www.altaro.com/hyper-v/how-high-availability-works-in-hyper-v/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00249.warc.gz | en | 0.929546 | 2,787 | 2.578125 | 3 |
Open source and Free Software are now synonymous with the software industry, which is still a relatively new area of computing, all things considered, writes Jan Wildeboer, EMEA open source evangelist, Red Hat. However, the earliest known “open source” initiative dates back to 1911 when Henry Ford launched a group that saw US car manufacturers sharing technology openly, without monetary benefit. Similarly, in the 1950s Volvo decided to keep the design patent open for its three-point seatbelt for other car manufacturers to use for free.
In universities, in big companies and public organisations, sharing software was the norm. Computers were very expensive, specialised, and the majority of software was developed more or less from scratch to solve specific issues. Over the years, computers became more ubiquitous and standardised, so software could be separated from the hardware. This gave way to pure software companies that decided they needed to protect the source code of their products. Proprietary software became the norm.
Proprietary software gave companies a competitive advantage but in shutting off access to source code, collaboration all but stopped. Free and open source software (FOSS) became a niche subject that very few participated in.
However, a few notable examples helped grow the spirit of free and open source software. One such project, GNU – a recursive acronym for “GNU is not UNIX” – is a FOSS operating system (OS) first developed in 1985. At the time, UNIX was a popular proprietary OS, so the idea was to create a UNIX-compatible OS that was freely available to anyone.
Of course, you can’t talk about the history of FOSS without mentioning Linus Torvalds and Linux. But there are many more open source innovations over the past 40 years that have helped to bring open source to the mainstream once more – the Apache Web Server, the Android Operating System, PHP, MySQL, OpenJDK (an open source version of the Java Platform), and Netscape (who can remember that?), to name a few.
Nowadays, many of the most innovative technologies come from open source communities – AI and ML, containers and Kubernetes. The licensing of open source even influenced the creation of the Creative Commons Licence, amongst other legal innovations.
For more than a century we’ve seen examples of how sharing, making ideas, products and projects available to modify, expand and rework has resulted in better technology. So it’s no surprise then that open source use in the enterprise is growing – in The State of Enterprise Open Source: A Red Hat Report, 95% say it is strategically important to their business, with 77% agreeing that enterprise open source will continue to grow.
Software Freedom Day 2020: Ideas become stronger with open source
There are many examples of FOSS, some we’re all familiar with such as Android, and some which only those in the tech industry may know – Kubernetes. But one of the most important aspects of open source software development has to be the ability for people and organisations to collaborate in the open to solve some of the world’s most pressing issues.
Ideas become stronger with open source. It’s a simple yet powerful belief that has helped to transform technology. Open data, for example, is helping makers, scholars and artisans protect habitats and preserve heritage in Chile. Open hardware is helping people make groundbreaking scientific discoveries, and enabling students to grow their own food in a classroom.
Open source is helping UNICEF map every single school in the world and show their connectivity in real-time. Open source is helping Greenpeace design an entirely new global engagement platform to help connect its millions of supporters to causes they care about.
These are just a few examples that demonstrate what people are doing with open source – the common denominator is the idea that collaboration and sharing are what makes these projects more successful.
What is software freedom?
- Free to use: The freedom to run the program as you wish, for any purpose
- Free to study: The freedom to study how the program works, and modify it
- Free to distribute: The freedom to redistribute copies so you can help your neighbour
- Free to modify: The freedom to distribute copies of your modified versions to others
- Free to access: The freedom to access the source code of the software
Without software freedom, where would we be? If we take any of the above examples – mapping all the schools across the globe – if it wasn’t for software freedoms this simply wouldn’t be possible. In a world where software is proprietary, where access to code comes at a price, then you will be excluding the vast majority of people who can benefit from it most.
As technologists, developers, sysadmins, IT directors, CTOs, CEOs and every conceivable role in between, it’s our responsibility to ensure that technology and software remains free and that anyone with an interest in it can access it. We can do this by contributing time, money, and resources to open source software projects and foundations. We can do this by supporting the GPL Initiative. And we can do this by being vocal supporters of free and open source software.
| <urn:uuid:b9802a3e-221b-4320-bf6a-04a99bf15435> | CC-MAIN-2022-40 | https://techmonitor.ai/technology/hardware/software-freedom-day-2020 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00249.warc.gz | en | 0.945009 | 1,075 | 3 | 3 |
Technology is the New Addiction.
Any sufficiently advanced technology is equivalent to magic. – Arthur C. Clarke
What is Technology? Is it a helping hand for us? Or is it a substitute meant to replace us in future?
Technology, according to us, is a very vast term: even a simple tool or machine like a hand fan counts as part of innovation, and a big tool like Artificial Intelligence is studied with equal relevance.
Technology, or should we say the ease of getting things done, has become a part of our DNA. Today, a newborn baby learns to operate an iPad or an iPhone within a few months. There used to be a time when young children would learn how to operate a normal kid’s toy and probably fail at it.
It seems as if today’s generation has some sort of coding embedded in their DNA which enables them to start operating any gadget at an early age.
With advancements in technology, today almost everything is possible. Most things have been automated, and with AI being developed so rigorously, in the future we may not need humans to do our work. Everything would be automated.
Cars have become self-driving (the Google self-driving car), anything you want can be printed in 3D with the help of 3D printers, and any color you like can be scanned and picked up to draw or write with. Nothing seems impossible in today’s world.
Despite using science and technology to better our lives, we are the real slaves to technology. We indulge in the need to always have something electronic in our hands – a tool that connects us to the Internet, our games or to our social networks. We’re bypassing the real world to get a digital quick-fix; our work, play and plans for stress release seem to depend on a broadband connection.
Now, fast forward this situation to a decade from now. You see adults sitting around a table in a Wi-Fi-enabled café. Chances are they are not going to be talking to each other, not in the real world at least. At home, fights and arguments will occur a lot more often between spouses due to a lack of communication, and it’s not going to get any better when this generation has kids of their own.
This is the whole point of technology. It creates an appetite for immortality on the one hand. It threatens universal extinction on the other. Technology is lust removed from nature. The real danger is not that computers will begin to think like men, but that men will begin to think like computers. Still, we feel that, end of the day human touch or interaction is very important. Future may have 100 different robots for 100 different things but the feeling of a human next to you can never be replaced.
Come to think of it, all of this is already happening right now.
Technology has slowly eased its way into our lives and formed glass walls between individuals who can communicate with each other but instead chose not to.
So, in the end we just hope that computer/robots/gadgets replace humans only for work and gives us more time to be with each other.
| <urn:uuid:e0ae2232-c930-41c2-94fe-e23cb10bf444> | CC-MAIN-2022-40 | https://www.impactqa.com/blog/technology-is-the-new-addiction/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00249.warc.gz | en | 0.959096 | 647 | 2.609375 | 3 |
Sited on The Wellcome Genome Campus, The Wellcome Sanger Institute is at the heart of a global hub of genomic research, education and engagement. It carries out scientific studies to understand the biology of humans and pathogens and leads partnerships across the globe to support ground-breaking research and transformative healthcare innovations. The institute’s study programmes are of a magnitude way beyond that of most biomedical research foundations. In short, its science is on an industrial scale.
In addition to a research site, the campus delivers educational and collaborative programmes and conferences, bringing together scientists from across the globe, with on-site accommodation set within a tranquil environment.
On a site of such significance, protecting critical equipment, servers and security systems is a given, and reliability, flexibility and the ongoing cost of maintenance are key in the choice of an uninterrupted power supply (UPS) solution. In addition, increasing numbers of organisations appreciate that keeping energy expenses in check goes hand-in-hand with reducing their carbon footprint. The Wellcome Genome Campus addressed both of these challenges.
The UPS solution
The relationship between The Campus and SolarEdge Critical Power (previously Gamatronic) began with the installation of the first UPS system over 10 years ago. Currently, the site includes over 30 UPS systems ranging in capacity up to 800kW (made up of 4 x 200kVA modular UPSs utilising 25kW power modules).
The flexible, modular approach was key in the choice of SolarEdge Critical Power as it allows the UPS systems to be upgraded as power demands grow and allows the customer the flexibility to select and distribute the 25kW modules where they are required.
In addition to critical UPS product delivery, the SolarEdge team provides annual, preventative maintenance services to the site’s UPS systems. As a result of the SolarEdge Critical Power team’s efforts, significant improvements in mean time before failure (MTBF) and mean time to repair (MTTR) have been made. Furthermore, due to the success of the SolarEdge UPS solutions, a replacement programme to swap out non-SolarEdge systems with SolarEdge Critical Power modular UPS equipment is now in place.
“We are very pleased with the SolarEdge UPS systems that have been installed across our facility for over a decade and they maintain stable and trouble-free operation. SolarEdge provides us consistent world-class service throughout the years for a growing portion of our UPS fleet,” says a Wellcome Institute spokesperson.
The solar PV system
Some years ago, two non-SolarEdge, rooftop solar PV systems were installed to generate long-term savings with green, renewable energy. However, the inverters had developed faults and stopped generating. With no ability to remotely monitor the systems and receive valuable, real-time data directly from the inverters, the failures resulted in valuable lost production.
Based on the reliability and serviceability of the SolarEdge UPS systems and, with energy storage and expertise in power electronics and manufacturing common to both divisions, when it came to overhauling the two PV systems, the site engineers turned once again to SolarEdge.
Future-proofing the asset
The selection of a SolarEdge PV solution gave the client the opportunity to optimise the system and take advantage of secure and comprehensive system monitoring. Repairing and upgrading the 260-panel solar PV system involved replacing the inverter and installing 30 Power Optimisers on an accommodation building, and replacing two inverters on a research building and installing a further 100 Power Optimisers. The installations were completed, with inverters generating green AC power in under seven days.
The bottom line
Today, the SolarEdge Critical Power solution at The Wellcome Genome Campus comprises two parts: a series of fully maintained UPS systems, and two solar PV systems, delivering energy savings alongside continuity and peace of mind.
The modular UPS systems provide continuous power support to a number of Campus research facilities and their site security systems, which will continue to grow in step with the developing power needs of the site.
The upgrade to DC-optimised PV systems now maximises the power output from the solar panels, to reduce energy bills even further by allowing the panels to work at their optimum points. Remote, module level monitoring enables the customer to stay informed about system performance 24/7, so faults can be detected immediately. In the future, it will also mean fewer site visits, increased system uptime, and lower operational and maintenance costs. The customer also benefits from a system that includes both SafeDC and arc-fault detection and interruption to protect staff and visitors to the site.
“The fact that SolarEdge has been providing us with advanced and reliable UPS solutions for years made the decision to choose SolarEdge for the PV system an easy one. By upgrading the solar PV system we’ve increased the energy yield, and now that we are able to monitor the system on a regular basis, we can see exactly what’s going on,” says a Wellcome Institute spokesperson.
PV systems at a glance
- Accommodation building
1 x SE27.6K three-phase inverter
60 x P650 Power Optimizers
- Research building
2 x SE25K three-phase inverters
100 x P600 Power Optimisers
UPS systems at a glance
- Wellcome Genome Campus
6 x 200kVA
2 x 100kVA
1 x 80kVA
2 x 60kVA
1 x 50kVA
1 x 40kVA
2 x 30kVA
17 x 10kVA
4 x 3kVA
| <urn:uuid:40f28b96-039c-4b89-938f-9c677c62dca7> | CC-MAIN-2022-40 | https://dcnnmagazine.com/news/case-study-wellcome-genome-campus-ups-and-pv-system/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00249.warc.gz | en | 0.921098 | 1,148 | 2.640625 | 3 |
The best ideas are usually the oldest ones, thought up by early geniuses and perfected incrementally by others, and this is no less true in computing. Show us a new idea in computing and we can usually find that it is a twist on an old idea, modernized for a particular set of hardware and software and perhaps aimed at a slightly different use case.
This publication is called The Next Platform because everyone is trying to build one, either to sell to someone else or to run in their own datacenters. But creating a single, integrated platform that has all of the pieces necessary to run a diverse set of applications is a very old idea, and one that many system makers based their developments upon. The necessity to support different customer sets and different operating systems – first the Unix open systems revolution, then the Linux open source revolution alongside the movement of Windows from the desktop to the datacenter – eroded these platforms, and the advent of so-called industry-standard X86 servers did, too. The mainframe is still kicking, and proprietary midrange systems are still available and both are as technically sophisticated as they ever were. But these vintage platforms have been largely replaced by a new kind of distributed platform, one that at large enterprises, governments, supercomputing centers, and hyperscale datacenter operators is increasingly being built on open source software and that is being used to manage all kinds of transaction processing, analytics, and simulation workloads.
Decades ago, proprietary and monolithic systems were the very first data processing platforms, with everything from compilers to databases and data stores to middleware (even if it wasn’t called that at the time) all woven into an operating system. The mainframe was arguably the first such platform in that it offered a complete software stack and, importantly, spanned a wide range of price points and performance capabilities across a variety of very different underlying systems.
No one would be so foolish as to suggest that we go all the way back to the proprietary days of the mainframe and minicomputer eras, with their relatively simple kinds of computing tasks. But we do mean that modern platforms are clearly taking some lessons from that earlier era. And increasingly, in our conversations with upstarts in hardware and software platforms, more than a few of them pay homage to the earlier computing generations. After a few decades of managing the integration of best-of-breed components to build systems, customers are looking for software stacks that are already integrated, but which also leave them room for choice, allowing them to pull out components and snap in other ones as they see fit. Organizations want the benefits of integration in The Next Platforms they are building or buying but they do not want to sacrifice the ability to innovate with those platforms. And while there is plenty of closed source software out there in the enterprise and some in the HPC and hyperscale worlds, open source software, provided it has strong community backing and technical support, is as trusted as any closed source products on the market and in many cases is the preferred way to build the software components of a platform.
There is a dizzying array of hardware and software technologies that organizations are considering as they build out their platforms, and The Next Platform is dedicated to tracking and analyzing these with a keen eye on what users are doing and why they are doing it.
It may be difficult some days to see the lines between servers, switches, and storage, but there are still distinctions that matter. So The Next Platform will track these technologies as elements of modern platforms under the Compute, Store, Connect, Control, Code, and Analyze sections. Compute is where you will find processors and coprocessors and the software that virtualizes computing infrastructure. Store is where all things relating to storage, from storage devices on up to file systems, will live. Connect is all about networking in the datacenter, and Control is focused on the management software to control systems, switches, clouds, clusters, and applications. Code is where we cover the tools for creating applications, from compilers all the way up to platform cloud frameworks. And Analyze is a section devoted explicitly to the technologies created to do data analytics in its myriad forms. For those who are looking for more of an industry breakdown, we will also sort content into HPC, Enterprise, Hyperscale, and Cloud sections based on their primary use case.
This publication is also dedicated to the very high end of the market, where the largest enterprises, government organizations, and hyperscale and cloud service providers are pushing the boundaries of performance and scale and, to put it bluntly, are out there on the cutting edge and taking the most risks.
Many of the technologies dreamed about in academia, in high performance computing, and among hyperscalers eventually get tested out there first and find broader applicability in the enterprise – Hadoop analytics and InfiniBand networking are but two examples that spring immediately to mind. Or, in some cases, ideas cross-pollinate between HPC and hyperscale and have yet to trickle down to enterprises. Facebook’s Open Compute server and storage designs and Rackspace’s OpenStack cloud controller are making headway into the HPC centers of the world and among financial services firms. GPU, FPGA, DSP, and other kinds of accelerators are being deployed by enterprises after many years of development and deployment in academia and segments of the HPC and financial services sectors. Whatever the technology and however it is moving around between these groups of customers, suffice it to say that if a technology might find its way into large enterprises, The Next Platform will tell you about it. Any technology that can be used to increase throughput or lower latency and that has potential appeal to HPC, hyperscale, cloud, or enterprise sectors will be covered as well.
A lot of boundaries that we are all used to are blurring. As an example, with converged systems in the enterprise, serving and networking are brought under the same metal skins, and with hyperconverged systems, the storage is merged with the serving and networks to create a seamless and generally virtualized set of infrastructure that is for all intents and purposes a complete platform unto itself for running a class of enterprise applications.
Other examples of this fuzziness abound. For instance, most modern infrastructure is based on clusters of X86 systems, although there is room in the market for the addition of ARM and Power systems in clusters and for scale-out NUMA machines that offer in-memory processing on a much larger scale than nodes in a cluster can do. Similarly, there are differences in storage and in interconnects that link the systems to each other and to their storage. But a system that was created primarily for modeling and simulation, for instance, can be tweaked here and there and then be effective as a platform for data analytics or in memory transaction processing. (This is precisely what SGI has just done to expand its UV line of shared memory systems.) The word convergence gets a lot of use these days, but no matter how you describe it, there is certainly an increasing amount of overlap as vendors of traditional HPC systems are chasing new markets.
Tracking this interplay and interchange of technology between these different parts of the high-end of the IT market is one of the core missions of The Next Platform. How risk-averse enterprises adopt these technologies for competitive advantage – and why and when they do it – is a theme we will return to again and again. This is what is most interesting about the IT sector, after all, and it is what drives the budgets that keep this whole industry moving forward—and here at The Next Platform, us too.
| <urn:uuid:b0d91b02-ca06-406b-b2e5-a02021cb1fc7> | CC-MAIN-2022-40 | https://www.nextplatform.com/2015/03/01/welcome-to-the-platform/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00249.warc.gz | en | 0.960266 | 1,562 | 2.734375 | 3 |
How Can a Blockchain be Used in Business?
What is a blockchain and how can it be used in business processes? This is a question being asked by companies worldwide as they seek to take advantage of these new technologies.
Read on to learn more about blockchain, how it works, and how it can help businesses.
What is a Blockchain in Business?
A blockchain is a secure method of storing transactional information electronically without needing a third party (for typical transactions, this would be a bank or the government). Blockchain is best known for its critical part in securing cryptocurrency transactions such as Bitcoin and other digital forms of currency or digital assets.
It is used as the main form of secure storage for cryptocurrencies because it can maintain a secure, decentralized record of transactions. As data comes in to be saved, it is entered into a new block; that block is then chained to the previous block, so records are saved in chronological order.
It works so well for crypto specifically because decentralized blockchains are immutable, meaning the data entered is irreversible and forms a permanent record of transactions.
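That chaining behavior, and why tampering is detectable, can be illustrated in a few lines of Python. This is a toy sketch of the data structure only, not a real consensus or cryptocurrency implementation:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's full contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a new block that records the hash of the block before it."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    """Every block must still reference the current hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
for record in ("Alice pays Bob 5", "Bob pays Carol 2"):
    add_block(chain, record)
print(is_valid(chain))                    # True
chain[0]["data"] = "Alice pays Bob 500"   # tamper with an earlier record
print(is_valid(chain))                    # False -- the chain no longer verifies
```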
Key Elements and Features of Blockchain
What makes the blockchain so effective? These key elements work together to make it a secure way to record and store digital data:
- Decentralization: A decentralized network means no governing body or third party looking over it.
- Immutable Records: Records stored via blockchain are immutable (incorruptible and unalterable). Blocks on the chain cannot be changed or updated and to add more, every node on the chain must check for validity.
- Greater Security: Blockchains utilize cryptography to protect data using a complex algorithm that acts as a defense against attacks.
- Faster Settlement: Blockchain tech makes transactional periods much quicker than traditional banking systems which can take days to process information. Blockchain speeds this up and makes transactions over long distances faster, easier, and more secure.
- Distributed Ledgers: Distributed ledger technology allows for simultaneous access, validation, and updating of records in an immutable manner across a network that is broken up across multiple entities. This is a core component of blockchain technology because it makes it possible to secure a decentralized digital transaction database. Having networks distributed removes the need for a third party to check for authenticity and manipulation.
How Do Businesses Use Blockchain?
Securely Share Records
Businesses can more securely store and transfer records using blockchain networks with strong, built-in encryption. This can sometimes be cheaper than renting space in a data center.
Supply Chain Management
Supply chains are complex and managing them takes hours upon hours of time from businesses and their teams, especially when different links in the chain are in different states or countries. A blockchain’s immutable record-keeping solves many issues associated with supply chain management by addressing the lack of transparency and the inefficiencies in payment processes.
Smart contracts can use blockchain technology to ease the headaches associated with managing contracts for businesses. A smart contract is an automated, self-fulfilling contract where payment is only released once it is confirmed that both parties have fulfilled their agreed-upon terms.
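The self-fulfilling idea reduces to a conditional release: funds are held until both parties’ terms are confirmed, and then payment happens automatically with no intermediary. The toy Python model below shows only that logic; production smart contracts run on a blockchain platform and are typically written in a contract language such as Solidity, so every name here is purely illustrative:

```python
class EscrowContract:
    """Toy model: payment is released only once both sides have
    confirmed that their agreed-upon terms are fulfilled."""

    def __init__(self, amount: float):
        self.amount = amount
        self.goods_delivered = False
        self.payment_deposited = False

    def confirm_delivery(self) -> str:
        self.goods_delivered = True
        return self._settle()

    def deposit_payment(self) -> str:
        self.payment_deposited = True
        return self._settle()

    def _settle(self) -> str:
        if self.goods_delivered and self.payment_deposited:
            return f"Released {self.amount} to the seller"
        return "Waiting on the other party"

contract = EscrowContract(100.0)
print(contract.deposit_payment())   # Waiting on the other party
print(contract.confirm_delivery())  # Released 100.0 to the seller
```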
A blockchain is an effective way for businesses to store their digital records and data but it can be a complicated topic to discuss, learn about, and implement. But, if you put the time in to learn these systems, it provides a revolutionary new way to perform secure, unique transactions online, especially with cryptocurrency and other digital assets.
Blockchain, crypto, cybersecurity, and other innovative business technologies were on full display at our business technology summit, Impact Optimize, in 2022. We assembled experts from across the country, including the founders of Orca Capital Austin Barnard and Jeff Sekinger, to speak more on blockchain and how businesses can use it every day along with an assortment of unique breakout sessions on a variety of technology topics. Learn more about Impact Optimize and what’s coming up in 2023.
| <urn:uuid:6a9ba989-a439-4bf6-8f81-7cc2ce3f5d9b> | CC-MAIN-2022-40 | https://www.impactmybiz.com/blog/how-can-blockchain-be-used-in-business/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00449.warc.gz | en | 0.922537 | 808 | 2.734375 | 3 |
Russia has been hotly criticised for its brutal and aggressive display of power in Ukraine. The methods hackers use to fight against Russia can be seen in the Russian government’s response to cyber attacks and leaks. Russia militarily invaded Ukraine in 2014, claiming that it had “to protect Russian-speaking citizens living in the Crimea region of Ukraine.” Ukraine has been the most successful country when it comes to resisting this aggression: its hackers have successfully breached Russian military and government computer systems, putting them on U.S. and European governments’ lists of cyber warriors working against Russian businesses and organisations.
The Ukraine invasion has provided a gold rush for hackers. Since the conflict began, there’s been a significant increase in cyber attacks in the region, particularly on the military, government, and financial hubs. While the hacks have been effective, many have also been costly for the intruders, who have themselves become victims of counter-attacks. The invasion has also enabled hackers to come together and share their techniques and tools – and so, cyber attacks have become more decentralised, with more people taking part. In this article, you’ll learn about some of the tactics and techniques that have become more commonly used since the Ukraine invasion and how you can protect yourself from them.
One of the most common methods used to target Russia is through the use of malware. It is a type of software designed to damage or disable computers. In 2016, a group of hackers known as “Fancy Bear” used malware to hack into the Democratic National Committee’s server to interfere with the U.S. presidential election. It can be used to delete files, steal information, or even take control of a computer. Hackers often use malware to target Russian businesses and government agencies.
To protect your website from malware, you should install a reliable security solution to detect and remove malware. You should also keep your software and operating system up to date to reduce the risk of being infected by malware.
Another common method used to target Russia is hacking. This involves gaining unauthorised access to a computer or network to steal data or cause damage. Hackers often target Russian websites and deface them with pro-Ukraine or anti-Russian messages. In some cases, hackers have also targeted Russian businesses and government agencies to disrupt their operations.
To protect your website from hacking, you should implement strong security measures like two-factor authentication and password encryption. You should also keep your software and operating system up to date to reduce the risk of being hacked.
IP spoofing is used to disguise a computer’s IP address. This can be done for malicious purposes, such as launching a denial-of-service attack or stealing data. Hackers often use IP spoofing to target Russian websites and businesses.
To protect your website from being the victim of IP spoofing, you should implement security measures such as firewalls and intrusion detection systems. You should also keep your software and operating system up to date to reduce the risk of being hacked.
DDoS (distributed denial of service) attacks are another common method used by hackers to target a country. In a DDoS attack, a group of computers (known as a “botnet”) is used to flood a website or server with requests, overwhelming the system and causing it to crash. In 2015, a group of hackers known as “Cyber Berkut” launched a series of DDoS attacks against the Ukrainian government to disrupt the country’s infrastructure.
IP Address and Domain Scanning
IP address and domain scanning is another common method to target Russia. During this process, the hackers find information regarding the Internet Protocol addresses and domains used by a website or business. They can then use this information to send harmful links to these servers. If a visitor clicks on one of these links, malicious software is launched that can steal valuable data such as login credentials, customer information, or financial data.
To protect your website from IP address and domain scanning, you should install security measures such as two-factor authentication, so you can access the site even if your computer is compromised.
Network exploitation allows hackers to access computers and devices on a network to steal data or cause damage. It often involves phishing, in which users are tricked into clicking on a malicious link or file. Hackers often target Russian businesses and government organisations with network exploitation. In 2016, Fancy Bear targeted the World Anti-Doping Agency (WADA) with network exploitation to release sensitive information.
To protect your website from network exploitation, you should implement security measures such as firewalls. You should also keep your software and operating system up to date to reduce the risk of being hacked.
| <urn:uuid:d53571f9-42d7-4a95-b5d8-2f84163d07b8> | CC-MAIN-2022-40 | https://mitigatecyber.com/the-methods-hackers-are-using-to-target-russia-and-fight-back-following-ukraine-invasion/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00449.warc.gz | en | 0.951545 | 1,001 | 2.953125 | 3 |
It’s been 76 years since renowned science fiction author Isaac Asimov penned his Laws of Robotics . At the time, they must have seemed future-proof. But just how well do those rules hold up in a world where AI has permeated society so deeply we don’t even see it anymore?
copyright by thenextweb.com
Originally published in the short story Runaround, Asimov’s laws are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
For nearly a century now, Asimov’s Laws have seemed like a good place to start when it comes to regulating robots — Will Smith even made a movie about it. But according to the experts, they simply don’t apply to today’s modern AI.
In fairness to Mr. Asimov, nobody saw Google and Facebook coming back in the 1940s. Everyone was thinking about robots with arms and lasers, not social media advertising and search engine algorithms.
Yet, here we are on the verge of normalizing artificial intelligence to the point of making it seem dull — at least until the singularity. And this means stopping robots from murdering us is probably the least of our worries.
In lieu of sentience, the next stop on the artificial intelligence hype-train is regulation-ville. Politicians around the world are calling upon the world’s leading experts to advise them on the impending automation takeover.
Regardless of the way in which rules are set and who imposes them, we think the following principles identified by various groups above are the important ones to capture in law and working practices:
- Responsibility: There needs to be a specific person responsible for the effects of an autonomous system’s behaviour. This is not just for legal redress but also for providing feedback, monitoring outcomes and implementing changes.
- Explainability: It needs to be possible to explain to people impacted (often laypeople) why the behaviour is what it is.
- Accuracy: Sources of error need to be identified, monitored, evaluated and if appropriate mitigated against or removed.
- Transparency: It needs to be possible to test, review (publicly or privately), criticise and challenge the outcomes produced by an autonomous system. The results of audits and evaluation should be available publicly and explained.
- Fairness: The way in which data is used should be reasonable and respect privacy. This will help remove biases and prevent other problematic behaviour becoming embedded.
You’ll notice there’s no mention of AI refraining from the willful destruction of humans. This is likely because, at the time of this writing, machines aren’t capable of making those decisions for themselves.
Common sense rules for the development of all AI needs to address real-world concerns. The chances of the algorithms powering Apple’s Face ID murdering you are slim, but an unethical programmer could certainly design AI that invades privacy using a smartphone camera. […]
| <urn:uuid:e976562e-7bba-4f84-b356-f4fa08efc6bc> | CC-MAIN-2022-40 | https://swisscognitive.ch/2018/03/05/are-asimovs-laws-of-robotics-still-good-enough-in-2018/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00449.warc.gz | en | 0.936129 | 661 | 3.171875 | 3 |
What Optical Character Recognition Can Do For Your Business
Optical Character Recognition
Businesses everywhere are looking for better ways to automate or speed up document processing. Most offices come equipped with a document scanner, and if you were paying attention, the term optical character recognition (OCR) may have crossed your path. However, what exactly is OCR, and what is it used for?
OCR is a widely used technology for recognizing text within images. The text could be in the form of a document or even text within a photo. OCR is the electronic or mechanical conversion of images of text into machine-encoded data. The text can be typed, handwritten, or printed, and can be found within a scanned document, an image, a photo of a document, a scene photo, or even subtitles placed over an image.
The technology is broadly used for data entry applications. It allows for quick processing of printed paper data records. Examples are passport documents, invoices, bank statements, computerized receipts, business cards, mail, printouts of static data, or any suitable printed text data. OCR is a standard way of digitizing printed text so that it can be searched or edited.
It also allows for more compact storage and use in online applications. The results can also be used in machine processes such as cognitive computing, machine translation, text-to-speech from the extracted text, key data extraction, and text mining. OCR continues to go through constant improvement and innovation and is used in research on pattern recognition, artificial intelligence, and computer vision.
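In practice, digitizing a scanned page can take only a few lines of code. Below is a minimal sketch using the open-source Tesseract engine through the pytesseract wrapper; it assumes Tesseract and the Python packages are installed, and 'invoice.png' is just a placeholder file name:

```python
from PIL import Image    # pip install pillow
import pytesseract       # pip install pytesseract (requires the Tesseract engine itself)

# Read: convert a scanned page into plain, machine-readable text
image = Image.open("invoice.png")            # placeholder file name
text = pytesseract.image_to_string(image)

# Once digitized, the content can be searched or edited like any other text
total_lines = [line for line in text.splitlines() if "Total" in line]
print(total_lines)
```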
Early versions of the technology were trained to recognize each character’s images through artificial intelligence and machine learning. At that time, OCR typically read in one font at a time. Today’s far more advanced systems are effective in producing recognition patterns with a high degree of accuracy. Most fonts, including script handwriting and print, are now able to be digitized quickly and efficiently.
These newer systems are also capable of reading a variety of digital image file format inputs. Some of the top OCR systems are advanced enough to reproduce formatted output with close approximations of the original scanned documents, including images, columns, and other non-textual components.
How It Works
There are a couple of different ways to approach image recognition in computing. However, when it comes to reading text, the problem is far more difficult. There are thousands of fonts used to represent text in word documents, and for handwritten text, whether script or print, every person has a different style. Given the multitude of different representations for the same letter, for example the letter “A,” what are the best image recognition methods?
- Pattern Recognition – Getting everyone to write identically would make things easier. In the 1960s, a special font called OCR-A was created, primarily for the banking and financial industries. Every letter was exactly the same width (a monospace font), and check printers and banks all used the same font type. The first OCR systems were designed to read this specific font. Once this was successful, the systems were taught to recognize a variety of fonts: an artificial intelligence perceptron learns to recognize specific letters through repeated exposure to example data, following basic machine learning principles.
- Feature Detection – This is a more sophisticated method of recognizing letters and words, also known as intelligent character recognition (ICR). Feature detection is often used hand in hand with more powerful systems that include neural networks, programs that automatically extract patterns in a way loosely similar to the human brain. Rather than using a perceptron and repetition to define the letter “A” in multiple fonts, this type of detection is rule-based. A rule might state that if the image contains two lines that meet at a point on top and are joined approximately halfway down by another line, then regardless of the font used, the character is the letter “A.” Letters are written out as rules that describe their component features, as the sketch below illustrates. Most modern OCR programs use feature detection.
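A toy Python illustration of the rule-based idea; the feature flags are supplied by hand rather than extracted from an image, so it shows only the shape of the rules, not real image processing:

```python
def looks_like_A(strokes: dict) -> bool:
    """Toy feature rules for the letter 'A': two lines meeting at a point on
    top, joined roughly halfway down by a crossbar. Real engines derive these
    features from the image; here they are simply passed in as flags."""
    return (strokes.get("two_diagonals_meet_at_top", False)
            and strokes.get("has_horizontal_crossbar", False)
            and 0.3 <= strokes.get("crossbar_height", 0.0) <= 0.7)

sans_serif_a = {"two_diagonals_meet_at_top": True,
                "has_horizontal_crossbar": True,
                "crossbar_height": 0.5}
letter_h = {"two_diagonals_meet_at_top": False,
            "has_horizontal_crossbar": True,
            "crossbar_height": 0.5}

print(looks_like_A(sans_serif_a))  # True  -- matches regardless of the font used
print(looks_like_A(letter_h))      # False -- the strokes never meet at the top
```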
Types of OCR
OCR is usually an offline application that uses analytics to read a static document. There are a variety of types of OCR applications available on the market.
- Optical character recognition (OCR) – targets and analyzes typewritten text, one glyph or character at a time.
- Optical word recognition – targets typewritten text, one word at a time (for languages that use a space as a word divider). (Usually just called “OCR.”)
- Intelligent character recognition (ICR) – This type of application targets handwritten print script or cursive text one glyph or character at a time, usually involving machine learning. This application method is especially useful for languages in which glyphs are not separated in cursive writing.
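To make the recognition step concrete, the open-source Tesseract engine exposes basic OCR from the command line and is an easy way to experiment with the approaches described above. The commands below are illustrative only – invoice.png is a placeholder file name, and they assume Tesseract and its English language data are installed:

# Recognize the text in a scanned image and write it to out.txt
tesseract invoice.png out -l eng

# Produce a searchable PDF (the original image with an invisible text layer) instead
tesseract invoice.png out -l eng pdf

The second form is a common way to make archived scans searchable without changing how they look.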
Improving Business Applications
Recently, major OCR technology providers began to tune their systems to deal more effectively with specific kinds of input. Better performance can also be achieved when the system takes account of business rules, regular expressions, or rich data such as color images. For business models that require a more customized digitization method, this strategy is called 'Application-Oriented OCR' or 'Customized OCR.' It is commonly applied to license plates, invoices, screenshots, ID cards, driver licenses, and documents used in automobile manufacturing.
Any tool that saves time and money improves an office process. One example of how OCR technology has improved data processing comes from The New York Times, which adapted OCR into a custom tool called Document Helper. This customized OCR application enables its offices to process as many as 5,400 pages per hour and prepare them for reporters to review.
Intelligent Automation and OCR
Many businesses today have well-established procedures around their OCR applications, with pre-established accuracy targets that are checked when the OCR tool-set is deployed or analyzes document data. Custom OCR applications often work alongside standard operating procedures involving sorters, scanners, verifiers, and data-entry operators. Incorporating robotic process automation and AI within the OCR software improves both efficiency and accuracy while reducing expenses. Using intelligent automation and OCR to process documents involves the following steps:
- Identification: Identify and verify the document’s type, image, machine-readable text form, handwritten document scanned in the system, etc.
- Classification: Classification is based on identification. Data files are organized into understandable formats like invoices, trade bills, and timesheets.
- Read: Use character recognition and analytics to digitize the document.
- Interpretation: Interpretation is derived from conclusions based on text recognized in the document.
- Assimilation: Training and policies determine the assimilation process in which the user performs actions based on the conclusions, like setting up reminders, sending notifications, and storing data into a structured format.
Redaction and OCR
For privacy and cybersecurity reasons, many businesses turn to redaction as a way to sanitize documents. OCR takes a scanned copy of a handwritten or printed file and digitizes it into a text format, which makes the data easy to search and manipulate. When redacting a document, that search capability becomes the primary asset: customer names and addresses and other privacy-related data can be located as data points, which the redaction software then sanitizes from the document.
Redaction describes removing specified personal data or other data points to protect an individual's privacy. Manual redaction of a single document can take several hours. CaseGuard uses intelligent automation, machine learning, and artificial intelligence in its automated redaction systems. By incorporating OCR within the software application, the user can digitize virtually any form of document, which speeds up data entry, digitization, and delivery of the final output. In terms of the OCR steps above, redaction would fall under the assimilation process.
With data processing speeds like those seen at The New York Times, the time spent taking data from initial capture to final storage drops considerably – saving businesses both time and money. For many businesses, the cost of training employees on a variety of software packages can be exorbitant. Choosing CaseGuard's automatic redaction software allows a company to save time and money while being more efficient, safe, and accurate.
August 3, 2020 | Written by: Levente Klein and Siyuan Lu
Categorized: COVID-19 | Science
In 1854, physician John Snow produced a famous map showing that cholera deaths were clustered around a pump on Broad Street in London. Snow's seminal geospatial analysis — conducted with little more than a pen, a map, and his own observations — led him to formulate the theory that germ-contaminated water was the source of cholera. His recommendation that the pump be taken out of service effectively ended the deadly outbreak.
With similar intent — but using far more sophisticated geospatial technology — we are shedding light on the environmental and societal impacts of the COVID-19 pandemic, which has kept much of the world in various states of quarantine over the past six months.
Geospatial Big Data
The IBM PAIRS Geoscope is a massive data store of aligned, pre-processed geospatial satellite images, weather predictions, socioeconomic data, news information, and more. Leveraging efficient indexing methods to spatially and temporally link data layers, PAIRS is well suited to perform complex analyses and rapid searches that help reveal key interconnections between data sets. A query of PAIRS, which has curated six petabytes of data to date, offers a glimpse of the impact of and response to COVID-19.
COVID-19 started to dominate news reports in the U.S. in early March 2020. By March 21, nearly half of all US news coverage was about COVID-19. High levels of public attention coincident with governmental actions like school closures and shelter-in-place mandates were followed by a drastic reduction of activity — so drastic as to be observable from space. Figure 1 shows a PAIRS query of New York City's light intensity as observed at night from the VIIRS instrument onboard the NASA Suomi NPP spacecraft. The light intensity dropped by around 20% starting March 15 and remained low from April onwards compared to late February and early March. The dimming of night light indicates reduced business activity and traffic and may offer an early market intelligence signal.
Figure 1. Change of night light intensity of New York City as observed from the NASA Suomi NPP spacecraft.
Greenhouse Gas Emissions
The reduction of activity is also observable from the query result comparing the average atmospheric nitrogen oxides (NOx) measured by ESA's Sentinel-5P satellite from January to May (Figure 2). In urban areas, road traffic is the dominant source of NOx emission. It is clear that emissions — and thus driving — in major population centers dropped drastically across the world. The step-like reduction in China occurred between January and February, while the reduction in the U.S. happened between February and March – reflecting the variation in timing of the COVID-19 outbreaks in different parts of the world.
The observation is supported by data from U.S. Department of Transportation as well, which tracks the frequency of trips of different lengths. Transportation CO2 emission can be derived from trip mileage and estimated average emission per vehicle per mile. We estimate that in the U.S. in April 2020 there was approximately 7.3 billion pounds of CO2 emission from car travel, which was 25% less than in January 2020 and 40% less than a year prior in April 2019.
Figure 2. Animation showing atmospheric NOx from January to May 2020 as observed from ESA’s Sentinel-5P satellite.
Conventionally, it would take a data scientist days to obtain the comparisons shown in Figures 1 and 2. Satellites like Sentinel-5P take around 50,000 images of the earth every month — each covering a small region — so a data scientist would have to download numerous photos and use software to stitch them together on a map. With PAIRS, it takes only a simple query and less than 40 lines of code to get the result.
Climate Impact Science
A major focus for PAIRS is climate science and mitigating humanity’s impact on the planet’s climate. Researchers use PAIRS as a foundational technology to accelerate the development of AI methods to drive regional downscaling of climate forecasting, climate impact modeling, greenhouse gas detection, and optimization of supply chain and cloud computing operations to minimize carbon footprint.
Anyone may try the PAIRS instance on the IBM Cloud; enter PAIRS by registering for a free IBMid. Enterprise clients may deploy PAIRS in a hybrid cloud environment in which the client's on-prem PAIRS instance containing private data is federated with the IBM instance. Red Hat OpenShift orchestrates many common PAIRS workloads, including data ingestion and analytics.
Malware encompasses all types of malicious software, including viruses, and cybercriminals use it for many reasons, such as:
- Tricking a victim into providing personal data for identity theft
- Stealing consumer credit card data or other financial data
- Assuming control of multiple computers to launch denial-of-service attacks against other networks
- Infecting computers and using them to mine bitcoin or other cryptocurrencies
Since its birth more than 30 years ago, malware has found several methods of attack. They include email attachments, malicious advertisements on popular sites (malvertising), fake software installations, infected USB drives, infected apps, phishing emails and even text messages.
Unfortunately, there is a lot of malware out there, but understanding the different types of malware is one way to help protect your data and devices:
A virus usually comes as an attachment in an email that holds a virus payload, or the part of the malware that performs the malicious action. Once the victim opens the file, the device is infected.
One of the most profitable, and therefore one of the most popular, types of malware amongst cybercriminals is ransomware. This malware installs itself onto a victim’s machine, encrypts their files, and then turns around and demands a ransom (usually in Bitcoin) to return that data to the user.
Cybercriminals scare us into thinking that our computers or smartphones have become infected to convince victims to purchase a fake application. In a typical scareware scam, you might see an alarming message while browsing the Web that says “Warning: Your computer is infected!” or “You have a virus!” Cybercriminals use these programs and unethical advertising practices to frighten users into purchasing rogue applications.
Worms have the ability to copy themselves from machine to machine, usually by exploiting some sort of security weakness in software or an operating system, and they don't require user interaction to function.
Spyware is a program installed on your computer, usually without your explicit knowledge, that captures and transmits personal information or Internet browsing habits and details to its user. Spyware enables its users to monitor all forms of communications on the targeted device. Spyware is often used by law enforcement, government agencies and information security organizations to test and monitor communications in a sensitive environment or in an investigation. But spyware is also available to consumers, allowing purchasers to spy on their spouse, children and employees.
Trojans masquerade as harmless applications, tricking users into downloading and using them. Once up and running, they then can steal personal data, crash a device, spy on activities or even launch an attack.
Adware programs push unwanted advertisements at users and typically display blinking advertisements or pop-up windows when you perform a certain action. Adware programs are often installed in exchange for another service, such as the right to use a program without paying for it.
Fileless malware is a type of malicious software that uses legitimate programs to infect a computer. Fileless malware registry attacks leave no malware files to scan and no malicious processes to detect. It does not rely on files and leaves no footprint, making it challenging to detect and remove.
The most common signs that your computer has been compromised by malware are:
- Slow computer performance
- Browser redirects, or when your web browser takes you to sites you did not intend to visit
- Infection warnings, frequently accompanied by solicitations to buy something to fix them
- Problems shutting down or starting up your computer
- Frequent pop-up ads
The more of these common symptoms you see, the higher the likelihood your computer has a malware infection. Browser redirects and large numbers of pop-up warnings claiming you have a virus are the strongest indicators that your computer has been compromised.
Even though there are a lot of types of malware out there, the good news is, there are just as many ways to protect yourself from malware. Check out these top tips:
Protect your devices
Keep your operating system and applications updated. Cybercriminals look for vulnerabilities in old or outdated software, so make sure you install updates as soon as they become available.
Never click on a link in a popup. Simply close the message by clicking on “X” in the upper corner and navigate away from the site that generated it.
Limit the number of apps on your devices. Only install apps you think you need and will use regularly. And if you no longer use an app, uninstall it.
Use a mobile security solution like McAfee® Security, available for Android and iOS. As malware and adware campaigns continue to infect mobile applications, make sure your mobile devices are prepared for any threat coming their way.
Don’t lend out your phone or leave your devices unattended for any reason, and be sure to check their settings and apps. If your default settings have changed, or a new app has mysteriously appeared, it might be a sign that spyware has been installed.
Be careful online
Avoid clicking on unknown links. Whether it comes via email, a social networking site or a text message, if a link seems unfamiliar, keep away from it.
Be selective about which sites you visit. Do your best to only use known and trusted sites, as well as using a safe search plug-in like McAfee® WebAdvisor, to avoid any sites that may be malicious without your knowing.
Beware of emails requesting personal information. If an email appears to come from your bank and instructs you to click a link and reset your password or access your account, don't click it. Go directly to your online banking site and log in there.
Avoid risky websites, such as those offering free screensavers.
Pay attention to downloads and other software purchases
Only purchase security software from a reputable company via their official website or in a retail store.
Stick to official app stores. While spyware can be found on official app stores, they thrive on obscure third-party stores promoting unofficial apps. By downloading apps for jailbroken or rooted devices, you bypass built-in security and essentially place your device’s data into the hands of a stranger.
When looking for your next favorite app, make sure you only download something that checks out. Read app reviews, utilize only official app stores, and if something comes off as remotely fishy, steer clear.
Do not open an email attachment unless you know what it is, even if it came from a friend or someone you know.
Perform regular checks
If you are concerned that your device may be infected, run a scan using the security software you have installed on your device.
Check your bank accounts and credit reports regularly.
With these tips and some reliable security software, you’ll be well on your way to protecting your data and devices from all kinds of malware.
If you followed the headlines last week, you saw multiple tragedies at schools and colleges around the globe. However, one high school in Missouri was able to prevent what could have been a devastating attack.
An 18-year-old student was arrested for conspiring to shoot other students at his school. Multiple warning signs were identified, and fortunately two brave students came forward with information prior to the attack.
The student revealed several red flags during his planning phase:
- Tried to recruit students to help him execute shooting
- Internet research on how to make guns and other weapons
- Note on computer with statement, “I hate everything and everyone, I wanted everyone to die.”
A second student was arrested the next day after posting a Facebook message, claiming “he wasn’t going to kill anybody because he’d told police about his plan and couldn’t pull it off now.”
It is critical for schools to look for new and innovative methods for identifying red flags and warning signs so they can prevent incidents like the tragedies above. If those students had not come forward, the attacker’s plan may have been executed.
The Department of Education and Secret Service School Safety report revealed that at least one other person had some type of knowledge of the attacker’s plan in 81% of school shooting incidents.
However, one of the most concerning findings is that students are not reporting suspicious comments, acts of violence, bullying, harassment, and similar incidents – whether out of fear of peer abuse or retaliation, because they don't feel their reports will be kept anonymous, or because they don't trust the administration to act on their reports.
It is critical for schools to implement safe, anonymous, and non-retaliatory reporting procedures, policies, plans, and training so that students and faculty can identify warning signs of violence. At the recent White House Conference on Bullying, experts agreed that listening to students is critical and that it is important to develop simple and effective incident reporting processes.
How are your schools working to improve prevention, encourage students to come forward with information, protect communities and save lives?
From targeted spear phishing strategies, to insidious malware campaigns, the tactics employed by cybercriminals continue to evolve in step with those who design defences for enterprise networks. In recent years it has become hard to read the news without being hit by a headline regarding ransomware. With demands for ever-increasing amounts of cryptocurrency, groups of threat actors, sometimes referred to as ransomware collectives, are targeting businesses both small to large with attacks designed to coerce payment using theft and threats.
Ransomware effectively steals company data by encrypting information and denying access to its owners. Under threats of releasing confidential material to the public or simply disrupting essential business processes, the hackers extort a ransom from their victims. One name in ransomware has cropped up recently time and time again, employing even bolder methods than its peers, and cutting a reputation for itself: Maze.
Who and what is Maze?
A cybercriminal group responsible for the ransomware named after it, Maze was first identified in May 2019 and shows no sign of slowing up its activities, as outlined by its recent attack on IT service giant Cognizant, costing the company over $70m (£55.5m).
Also known under the name ChaCha, Maze was initially observed to be an unexceptional example of ransomware that was related to online extortion campaigns. However, within six months, it had changed its shape, gaining a notoriety for more aggressive and public attacks. In November, taking steps previously unseen by almost every instance of malware to date, Maze started to publicly expose its targeted victims. If the companies refused to pay the requested ransom, Maze operators would post their names on an open-to-the-public site, often threatening them with samples of stolen data as proof of penetration.
Attack campaigns that utilise Maze ransomware commonly impersonate government agencies and other official bodies to encrypt and steal data files before attempting to extort payment from the information’s owner. Typically, Maze is employed as one element of a multi-pronged attack strategy. It has often been observed to appear in either the second or even the third step in such a campaign but is rarely involved in the initial method to obtain access.
How does Maze differ from other types of ransomware?
Ransomware has become more prevalent in the past five years and is now a prominent form of cyberattack. However, experts have noticed that for the most part, ransomware assaults have been typically one-dimensional up until now, with threat actors limiting attacks to encrypting data that is local to their mark’s targeted environment. Although this can still prove disruptive and problematic for its victims, particularly those without a dedicated IT security team, in many cases ransomware targets have successfully managed to decrypt their company information without conceding to cybercriminal demands.
In this respect, the functionality of Maze goes far beyond what is traditionally expected of ransomware tactics, employing a three-blow combination attack consisting of the “Three E’s” – Encrypt, Exfiltrate and Extort.
The profound difference between Maze and other kinds of ransomware being utilised lies in its capability to extract confidential encrypted data and to extort a payment from its victim. While other ransomware collectives are merely encrypting locally based victim data, actors using Maze can potentially apply far greater pressure to targets by intimidating them with threats of leaking sensitive material.
Threats from Maze should not be taken lightly. Cyber security specialists at Trend Micro have observed that ransomware groups employing Maze have not delivered empty threats. Instances have been confirmed where sensitive data belonging to victims was publicly leaked on dedicated sites online, established for this purpose. Originally rearing its ugly head in December 2019, this tactic comprises leaking and posting parts of confidential files or raw databases owned by victims that refuse to pay the ransom demanded.
The way Maze ransomware works
Varying types of malware will work in different ways, depending on the code they employ that instructs them what tasks to execute. Ultimately, ransomware only requires access to a system in order to work, which makes managing to obtain entry the largest part of its job.
While most other forms of ransomware commonly employ email spam campaigns or social engineering to obtain illegal access to the targeted system, Maze ransomware instead uses exploit kits in drive-by downloads. An exploit kit bundles code targeting a collection of known software vulnerabilities into a single, ready-made toolkit.
While there is nothing new about exploit kits themselves, their use as a ransomware delivery mechanism is rare and largely associated with Maze operators. One such kit employed by Maze is dubbed "Fallout" and comprises various exploits identified on GitHub, including CVE-2018-15982 in Flash Player. Rather than using the web browser to launch its payload, the relatively recent Fallout kit employs PowerShell to drop it instead.
After obtaining access to an organisation's systems, Maze will encrypt data, locking its owner out. It will then exfiltrate the encrypted data so it can threaten to leak it publicly, and it leaves a digital note behind so victims know how to make the requested payment.
Defending against Maze ransomware attacks
To safeguard themselves against attacks from Maze ransomware attacks, there are multiple steps organisations and institutions can take. Establishing offsite backups is essential so that if data is locked off, your firm can still function by restoring the required data if necessary. All your company computers should be employing the latest security solutions and implementing the most up-to-date patches available against any vulnerabilities newly discovered. Multifactor authentication protocols should be put in place and personnel should be well-trained on the tactics used by threat actors to penetrate companies so they can report suspicious activity.
One of the most effective defences against Maze ransomware attacks is encrypting your sensitive data. That’s why at Galaxkey, we have developed a secure platform for enterprises that is easy to use. With powerful end-to-end encryption whether your data is at rest or on the move, it will always be inaccessible to anyone without the required authorisation.
|
<urn:uuid:f198d8cd-b22e-403d-8271-551b8fb8c4b8>
|
CC-MAIN-2022-40
|
https://www.galaxkey.com/blog/how-does-the-maze-ransomware-work/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00649.warc.gz
|
en
| 0.954179 | 1,245 | 3.078125 | 3 |
In this Cisco CCNA training tutorial, you'll learn about ARP: what the Address Resolution Protocol is, how it works, and the commands used to work with it.
OSI Reference Model – Encapsulation
We’ll start by looking at how this fits into the OSI stack. So, we’ve got a sender on the left, it’s going to send some traffic to the receiver on the right. The sender will compose the packet, starting with the Layer 7 information, that’s the Application Layer.
It will then encapsulate that with Layer 6, the Presentation Layer header.
That will be encapsulated in the Session Layer header.
Then at Layer 4, with the Transport Layer header, it will include information, such as whether it’s TCP or UDP, and the port number.
That will then get encapsulated with the Layer 3 IP header, which includes the source and destination IP address.
Then it will be encapsulated in the Layer 2 Data-Link header, which includes the source and destination MAC address, and that will then get put onto the physical wire.
IP to MAC Address Resolution
The sender can either send directly to an IP address or it can send to an FQDN. If it sends to that Fully Qualified Domain Name, then that will need to be resolved into the IP address using DNS.
We’ll find the destination IP address, then when the packet gets down to Layer 2, the sender also needs to know the destination and MAC address. So when it composes a packet, it needs to know both the destination IP address and the MAC address as well.
Now, the IP address is a logical address, which is controlled by administrators. It makes sense that we can have that referenced in the application, either directly as the destination IP address or as the FQDN, which can be resolved by DNS.
The MAC address, on the other hand, is not a logical address. We just have that great, big, flat global address space, so it's not really possible either for the user to enter the destination MAC address themselves or for it to be configured in the application.
Because of that, we need a way for it to be derived automatically. We need a protocol that can figure out what the MAC address is on its own, and that's what ARP, the Address Resolution Protocol, does. ARP maps the destination IP address to the destination MAC address.
ARP Address Resolution Protocol
In the example here, we’ve got a sender on the left at 172.23.4.1, and its MAC address is 1111.2222.3333. It’s going to send some traffic to the receiver on the right with IP address 172.23.4.2 and the MAC address 2222.3333.4444.
In our example, the sender already knows that it wants to send traffic to IP address 172.23.4.2. It can compose the packet as far as the Layer 3 IP header, but it doesn’t know the receiver’s MAC address yet. It’s going to use ARP to find that out.
It will send out an ARP request, which is a Layer 2 broadcast. The ARP request says, “Hey, I’m looking for 172.23.4.2, what’s your MAC address?” That will come from the sender’s MAC address of 1111.2222.3333, and it goes to a destination MAC address of FFFF.FFFF.FFFF. That is the Layer 2 broadcast address.
Obviously, the sender has to send it out everywhere because it doesn’t know what the intended destination’s MAC address is yet. That will come into the switch, the switch will see that it is broadcast traffic, so it will flood it at all ports.
It will hit everything plugged into that switch, including, in our example, the receiver on the right, which will process that ARP request.
It will see that it’s looking for 172.23.4.2, and that is it’s own IP address, so it will respond to the ARP request. It will send an ARP reply back, saying, “I’m 172.23.4.2, and here’s my MAC address.” That comes from its source MAC of 2222.3333.4444, and the destination MAC address is the original sender’s unicast MAC address of 1111.2222.3333.
The receiver knows exactly what to send it back to because that original MAC address of 1111.2222.3333 was in the ARP request. The switch will then send that ARP reply just out of Port 1, down to the original sender. That was unicast traffic and it’s for a known MAC address which is already in its MAC address table.
That is how ARP works when both of the hosts are on the same IP subnet. ARP replies are saved in a host ARP cache so that it doesn’t need to send an ARP request every time it wants to communicate with somebody else.
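If you want to watch this request/reply exchange on a live network, a packet capture will show it clearly. As a rough illustration – assuming a Linux host whose network interface is named eth0 (adjust the interface name and target address for your own lab) – you could run:

# Capture only ARP traffic on interface eth0, without resolving names
sudo tcpdump -n -i eth0 arp

# In another terminal, trigger an ARP request by contacting a host
# that is not yet in the ARP cache
ping -c 1 172.23.4.2

You should see something like a "who-has 172.23.4.2 tell 172.23.4.1" broadcast followed by the unicast "is-at" reply carrying the MAC address.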
Host ARP Commands
To view the ARP cache, we can do that on Windows with the arp -a command. On a Linux host, we use the arp -n command.
To flush the cache on Windows, we use the command:
netsh interface ip delete arpcache
To flush the cache on Linux, we use the command:
ip -s -s neigh flush all
We wouldn’t normally do that, but if we were troubleshooting it or if the ARP cache had somehow got corrupted, that’s how we would clear it.
For example, say I have a Windows host at 172.23.4.1 and a Linux host at 172.23.4.2. Let's ping the Linux host to generate some traffic. On the Windows host, I'm going to ping 172.23.4.2. It knows what the destination IP address is, but it doesn't know what the matching destination MAC address is yet.
It’s going to do an ARP request to find that out. If I now do an arp -a, I should see an entry for 172.23.4.2, which is our Linux host, and I can see what its MAC address is.
If I jump onto that Linux host, and I do an arp -n there, then I’ll see an entry for 172.23.4.2. Because I had some traffic between those two hosts, it’s got an entry in its ARP cache for that host IP address and its MAC address as well. That’s how ARP works when both hosts are in the same subnet.
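To give you a feel for what to expect, the output looks something like the following – these lines are representative only, reusing the example addresses above; your interface numbers and MAC addresses will differ:

C:\> arp -a
Interface: 172.23.4.1 --- 0x4
  Internet Address      Physical Address      Type
  172.23.4.2            22-22-33-33-44-44     dynamic

$ arp -n
Address        HWtype  HWaddress           Flags Mask  Iface
172.23.4.1     ether   11:11:22:22:33:33   C           eth0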
Now, we’ll see how ARP works when the traffic has to go through a router. In the example, 172.23.4.1 wants to send a packet to 192.168.10.1, so that’s two different IP subnets.
You see the sender over there on the left, the receiver is on the right, and we’ve got a router that is going to route traffic between those two subnets.
We can’t have ARP working the way it did earlier when both hosts were on the same subnet. If the sender on the left sends out a normal ARP request for 192.168.10.1, that will go out as a Layer 2 broadcast, and it wouldn’t be forwarded by the router.
The ARP request would never get to the receiver. Also, we know that when we send traffic between two different IP subnets, it has to be sent via a router. So, the sender is not going to send an ARP request for 192.168.10.1.
It knows not to do that because it compares its own IP address and subnet mask with the destination IP address, and it can see that it’s on a different IP subnet. So, the sender knows that it has to send the traffic via a router.
It doesn’t send an ARP request for the final destination, instead, it sends an ARP request for its default gateway. The sender at 172.23.4.1 will send an ARP request for 172.23.4.254, which is its default gateway. That comes from a source MAC of 1111.2222.3333 of the sender. The destination MAC is, as usual, for an ARP request, FFFF.FFFF.FFFF, the Layer 2 broadcast address.
In the ARP request, it says it's a request for 172.23.4.254, asking it for its MAC address. That will hit everything in the 172.23.4.0 subnet, including the router. The router will see it's an ARP request for itself, so it will send an ARP reply.
That comes from its source MAC of the 172.23.4.254 IP address, which was 4444.5555.6666 and the destination MAC address is 1111.2222.3333. The router knows to send it there because that source MAC was in the original ARP request.
The sender will have been holding the IP packet that’s intended for the final destination while it sent out the original ARP request there. It now knows where to send it to for the destination MAC address, so it will now send that IP packet.
The IP packet comes from a source IP address of the sender, 172.23.4.1. The destination IP address is the IP address of the final destination, so that will be 192.168.10.1. The source MAC is 1111.2222.3333, and the destination MAC is for that sender’s default gateway, which was 4444.5555.6666.
That packet will hit the router, and the router sees that it needs to send it to 192.168.10.1. The router does not know the MAC address of 192.168.10.1, because it hasn’t communicated with it before in our example, so it’s not in the ARP cache.
The router will hold the IP packet from the sender on the left, and it will send an ARP request for 192.168.10.1. That will go out its interface on the right, which has got IP address 192.168.10.254, so it’s in the same IP subnet as the final destination.
It will say it’s an ARP request for 192.168.10.1, it comes from the source MAC of the router’s IP address, 192.168.10.254. That was MAC address 4444.5555.7777, and it’s an ARP request, so it goes to a destination MAC of the Layer 2 broadcast, FFFF.FFFF.FFFF.
That will hit everything in the 192.168.10 subnet, including the receiver over on the right. The receiver on the right will see it’s an ARP request for its IP address of 192.168.10.1, so it will send the ARP reply.
The ARP reply comes from its source MAC 2222.3333.4444, and it goes to the destination MAC of the router’s interface on the right-hand side there, which was 4444.5555.7777. The router now knows the MAC address of the final destination on the right, so it will send the IP packet.
The IP formation in the packet never changes. The source IP address is always the original sender, which is 172.23.4.1 in our example, and the destination IP is always the final destination address, which was 192.168.10.1.
The source MAC is the router’s interface on the right-hand side, which was 4444.5555.7777, and the destination MAC is 2222.3333.4444.
The source and the destination IP address never change end to end, but the MAC address source and destination will change physical hop by physical hop.
Router ARP Commands
If you want to view the ARP cache on a Cisco router, the command is:
show arp
To clear the ARP cache, the command is:
clear arp-cache
In the example below, we have R1 at 10.10.10.1, R2 at 10.10.10.2, and R3 at 10.10.20.1. If I jump onto R1 and do a show arp – having already generated some traffic with pings – I can see entries with their MAC addresses, reachable out of interface FastEthernet 0/0. The same is true on R2 and R3.
If I go onto R3 and do a show arp there, we'll see that R3 is in the 10.10.20 subnet, with entries for 10.10.20.1 and 10.10.20.2. The show arp command shows you the known IP address, the MAC address, and the interface it is reachable on.
ARP – The Address Resolution Protocol Configuration Example
This configuration example is taken from my free ‘Cisco CCNA Lab Guide’ which includes over 350 pages of lab exercises and full instructions to set up the lab for free on your laptop.
1. Do you expect to see an entry for R3 in the ARP cache of R1? Why or why not?
ARP requests use broadcast traffic so they are not forwarded by a router. R1 will have entries in its ARP cache for all hosts it has seen on its directly connected networks (10.10.10.0/24).
R1 is not directly connected to the 10.10.20.0/24 network so it will not have an entry in the ARP cache for R3 at 10.10.20.1.
R1 can reach R3 via R2’s IP address 10.10.10.2 – this IP address is included in the ARP cache.
The DNS server at 10.10.10.10 is also in the same IP subnet as R1 so will also appear in the ARP cache.
2. Verify the ARP cache on R1, R2, and R3.
Protocol Address Age (min) Hardware Addr Type Interface
Internet 10.10.10.1 – 0090.0CD7.0D01 ARPA FastEthernet0/0
Internet 10.10.10.2 4 0004.9A96.A9A5 ARPA FastEthernet0/0
Internet 10.10.10.10 2 0090.21C6.D284 ARPA FastEthernet0/0
Protocol Address Age (min) Hardware Addr Type Interface
Internet 10.10.10.1 4 0090.0CD7.0D01 ARPA FastEthernet0/0
Internet 10.10.10.2 – 0004.9A96.A9A5 ARPA FastEthernet0/0
Internet 10.10.10.10 1 0090.21C6.D284 ARPA FastEthernet0/0
Internet 10.10.20.1 4 0030.F2BA.30E7 ARPA FastEthernet1/0
Internet 10.10.20.2 – 0060.2FCA.ACA0 ARPA FastEthernet1/0
Protocol Address Age (min) Hardware Addr Type Interface
Internet 10.10.20.1 – 0030.F2BA.30E7 ARPA FastEthernet0/0
Internet 10.10.20.2 4 0060.2FCA.ACA0 ARPA FastEthernet0/0
R2 is directly connected to 10.10.10.0/24 and 10.10.20.0/24 so it has entries in its ARP cache for both networks.
R3 is directly connected to the 10.10.20.0/24 network so it has entries in its ARP cache for that network only. It does not have any entries for the 10.10.10.0/24 network.
Monitoring and Maintaining ARP Information: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipaddr_arp/configuration/15-s/arp-15-s-book/Monitoring-and-Maintaining-ARP-Information.html
Most companies, regardless of size, have a formal organizational structure. This structure tends to provide an anchor and reference point for managerial authority or other basic HR functions in the business. It also provides an outline for organizational workflow, makes it easier to add employees, and highlights what areas of the business are growing or need additional development.
A good organizational structure is also essential for employees to understand who they should report to in specific situations and who has responsibility for various projects or pieces of work. It can also provide a clear path for employees looking to progress in their careers. Without the right organizational structure, a business's direction may be compromised and its internal processes confused, which will ultimately have a negative effect on its overall productivity.
What are organizational charts?
Organizational charts (or org charts) are the visual representation of a business’s structure. They show the relationships between teams and the different positions within. Like a family tree, positions and their descriptions are usually written in boxes and connected to other related positions. Org charts are snapshots of the company, how it’s built, and who fits where.
The basics for org chart success:
- Its design should clearly depict the structure of the organization
- It should be built in collaboration with business leaders, managers, and HR
- It should be accurate and updated regularly
What are they used for?
Traditionally, org charts have been used by various people in an organization:
- HR departments use them to assist new hires as part of the onboarding process
- They can be used to provide a current view of what employees are in each position
- Business leaders use them for optimizing resources
- Finance departments use them to assist with cost allocations to parts of the business
- Decision makers use them to optimize their resources
- Employees use them to help develop a career path within the company
- New hires use them for getting to know who's who and establish new working relationships
- They can be used to find heads of departments and who to escalate questions or tasks to
Methods for creating org charts
There are various methods of creating org charts: from the rudimentary methods of Excel and PowerPoint to automated tools or org chart products.
Office 365, for example, provides different methods for creating org charts:
- Office 365 contact cards – include an organization tab that displays where an employee fits into an organization's structure.
- Microsoft Teams – lets you view where a colleague sits in the hierarchy of your organization, along with their peers.
- Microsoft Visio – a specific product from Microsoft that enables you to create flexible organization charts.
- Delve - has a very simple linear organization chart that comes out of the box. It shows the management chain but not peers.
- PowerApps - PowerApps has a template app for an employee search mobile app.
See our previous post on getting the most out of org charts in Office 365.
How each business creates their own will depend on factors like size, time, and available resources. But, for every business, the success of your org chart will come down to how accurate, easy it is to update and whether it’s readily accessible.
Challenges facing IT professionals and Office 365 admins
Despite the numerous options for creating and using org charts, there remains an underlying problem: they are only ever as good as the information in your Active Directory.
If that information is inconsistent, your org charts risk being inaccurate or inconsistent. While challenges of inconsistent information and inaccurate org charts might seem relatively minor if you are a company of fewer than 15 or 20 people, imagine a company with multiple departments and hundreds of people. Inaccurate org charts have the potential to cause disruptions across the business.
Let’s look at some of the challenges that admin staff must deal with when it comes to ensuring their organization’s org chart information is effect and up to date.
- Static data - A lack of integration with user profiles means data cannot be updated automatically.
- Time drain - When employee information must be manually updated, it can quickly become a tedious and time-intensive task, pulling workers away from more valuable work.
- Inconsistent - Manually updating information increases the risk of mistakes being made or best practices not being followed. This will mean information becomes inconsistent and dated over time.
- Departmental relationships aren’t displayed - This can lead to confusion over authority, the wrong information being sent to the wrong person, delays in project tasks, and an overall decrease in productivity.
- Hard to navigate - When information is inaccurate, it makes the process of navigating the org chart difficult and in some cases irrelevant.
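Before investing in dedicated tooling, administrators can at least measure the scale of the problem. The following is a rough sketch – it assumes the RSAT ActiveDirectory PowerShell module is installed and that your org chart is driven by the Manager and Department attributes – that lists enabled users whose profile data is incomplete:

# Find enabled users with no manager or no department set and export them for review
Get-ADUser -Filter 'Enabled -eq $true' -Properties Manager, Department |
    Where-Object { -not $_.Manager -or -not $_.Department } |
    Select-Object Name, SamAccountName, Department |
    Export-Csv -Path .\incomplete-profiles.csv -NoTypeInformation

The resulting CSV gives a quick view of how much cleanup an org chart built on this data would need.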
Hyperfish has got your back
IT managers and Office 365 administrators looking to reduce the manual effort and complexity that’s required in keeping profile information up to date should consider Hyperfish.
We are experts when it comes to modern organizational charts and the software that can solve the above challenges. Our solution uses AI to understand where employee profile information is incorrect or missing and notifies the user of problems. This information can be entered on mobile devices, making it easier for the end-user to update information on the go. We understand the issues Office 365 administrators face when it comes to making sure organizational charts are used effectively – that's why Hyperfish was built. Download Hyperfish Lite for an immediate taste of how you can achieve complete Office 365 profiles.
For more information on the capabilities of the full version of Hyperfish, get in touch with us today.
A team of scientists led by NASA's Jet Propulsion Laboratory has developed a satellite data analysis technique for identifying vulnerabilities in infrastructure such as bridges that may lead to catastrophic events. NASA said Tuesday that the group, comprising NASA, the University of Bath in England and the Italian Space Agency, used synthetic aperture radar data to measure relative displacement and structural changes in the Morandi Bridge in Italy from 2003 through its collapse in August 2018.
The team detected subtle changes in the bridge in 2015 as well as significant transformations in its structure in the lead-up to its collapse.
"We couldn't have forecasted this particular collapse because standard assessment techniques available at the time couldn't detect what we can see now. But going forward, this technique, combined with techniques already in use, has the potential to do a lot of good," said Pietro Milillo from NASA JPL.
NASA partnered with the Indian Space Research Organization to expand the NASA-ISRO Synthetic Aperture Radar's coverage by 2022 to include areas that lack systems supporting consistent SAR functionalities.
In this Event Log Forwarding Guide, we will have a look at what it means, how to do it manually using native commands and features, as well as four tools that can be used to automate the whole process.
What is an event?
Let us start by defining what an event is.
Most operating systems – both desktop and server versions – keep records of errors that occur as they are running. These errors are considered to be “events” that are registered to help administrators figure out what caused them.
Events are errors that occur on a computer or server and can be caused either by a user or a running process. Of course, the user might be an innocent end-user or a malicious hacker. Meanwhile, the process could be run by a misconfigured application or a rogue one that has been planted by a hacker looking to breach the server.
The information that is gathered by way of the events helps administrators figure out exactly what is going on.
What is an event log?
All the events are recorded in a log – an archive that is used to store, group, and present them easily. They are the logs that administrators look into when troubleshooting, tracking root causes or resolving issues.
An event log presents the data – that is gathered from various sources within the server it is hosted on – in a way that is easy to understand. In Windows, the Event Viewer is used to present the events in the logs. They are grouped depending on characteristics like severity, source, and time of occurrence.
An important point, that will be important as we talk about Event Log Forwarding, is that the administrator needs to connect to each server individually to work with its logs.
How does an event log help us?
The main reason we need event logs is to have a way of monitoring the health of our computers and servers. These logs give us insight into what has happened (in case of a crash), what is happening (to see what is causing the server to run slow), and even predict future performance issues (tracking disk space warnings that could cause issues soon).
It also gives us insights into applications that aren’t working well or if there is an attempt being made to breach the network.
An example could be attempts to gain unauthorized access to the network or the assets on it. A good indication of this would be the number of failed login attempts compared to the time that has passed between each attempt. Too many failed logins in a short amount of time should be investigated as a possible brute-force attack.
A look at the event logging in major operating systems
Most – if not all – operating systems have event logging capabilities. While some may have native log management and viewing tools, others rely on third-party solutions to allow users to make sense of the data.
Now, we will have a look at the native ways we can access and leverage log data of two major operating systems: Windows and Linux.
Windows has Event Viewer – a component of the operating system that is used to view event logs on a local or remote machine.
To start the Event Viewer:
- Click on Start.
- Type “Event Viewer” into the search box.
- Click on the resulting application shown at the top of the results.
That’s it. This opens the Event Viewer application and you’re ready to start investigating.
The next step is to scroll down the errors list and see what kind of events have been logged.
Administrators can now investigate an event by drilling down into the details – all they need do is double-click on it and find information on:
- Level The seriousness of the event, which we will see in the next section.
- Date and time The date and time the event was logged.
- User The username of the user logged onto the machine at the time of the event.
- Computer The name of the computer the event occurred on.
- Event ID A Windows identification number that specifies the event type.
- Source The software or hardware that triggered the event.
- Type The type of event, including information, warning, error, security success audit, or security failure audit.
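The same details can also be pulled from the command line, which is useful on Server Core installations or over a remote session. For example – assuming you have the necessary rights on the machine – either of the following shows the most recent System log entries:

# Built-in command-line query: the five newest System events, rendered as text
wevtutil qe System /c:5 /rd:true /f:text

# PowerShell equivalent
Get-WinEvent -LogName System -MaxEvents 5 | Format-List TimeCreated, Id, LevelDisplayName, Message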
What levels of Windows events do we have?
To answer this question we will need to look at events in a Windows operating system where there are six types or “Levels” of events:
- Critical These messages are an indication of a severe problem. This is the most serious type of problem and needs to be addressed immediately. It is usually triggered when something is broken somewhere, and the component that triggered the event has probably crashed. An example would be Event ID 41 – Kernel-Power, which indicates that a machine rebooted without a clean shutdown.
- Errors These events indicate a significant problem such as loss of data or loss of functionality. A failure of a service to load during startup would be an example that could trigger this event.
- Warnings Although, not necessarily significant, these events may indicate a possible future problem. A good example here would be when disk space is low. Also, if an application recovered from an event without loss of functionality or data, it would trigger a Warning event.
- Information These are events that describe the successful operation of an application, driver, or service. When a driver loads successfully, for example, it may be logged as an Information event.
- Success Audit These events record audited security access attempts that are successful. For example, users’ successful attempts to log on to the system are logged as Success Audit events.
- Failure Audit These events record audited security access attempts that have failed. For example, if users tried to access a network and failed, their attempts are logged as Failure Audit events.
Depending on the severity level of the logged event, the administrator can move on to taking the appropriate action.
Microsoft has a list of events that need to be monitored. Some examples include:
- Event ID 4649 A replay attack was detected. This event can be a sign of a Kerberos replay attack or simply, among other things, a network device configuration or routing problem.
- Event ID 104 Indicates that an event log was cleared; this is something a hacker would do to hide their tracks.
- Event ID 4663 Generated when a high number of files are being deleted, it could be that an administrator is clearing data or an attacker is erasing a hard disk.
- Event ID 530 A logon attempt failed – even though the account and password combination was correct – because it was made outside the allowed logon hours.
- Event ID 531 A failed logon attempt was made using a disabled account.
- Event ID 539 A failed logon attempt was made using an account that had been locked out. If there are a large number of these events logged, it could mean that a service account password is configured incorrectly or a program is trying to connect with a password that doesn’t match the password on the server. It could also mean that there is a brute force attack in progress.
- Event ID 644 An account was locked following several unsuccessful login attempts within a specified time limit. This would lead to triggering Event ID 539 above if the application, service, or user continued to try to log in using the locked account.
- Event ID 4724 A password reset has occurred. This could mean that a conscientious administrator is simply resetting passwords as a security measure or that a disgruntled administrator just locked everyone out.
- Event ID 4704 and 4717 Changes were made to user rights assignments; this is something a hacker would do as they attempted to elevate their account permissions to access secure resources.
- Event ID 4719 and 4739 Show that someone has tampered with the Audit and Account policies. Depending on the user, it could be harmless or a prelude to an internal hack.
- Event ID 1102 This event indicates that the security log has been cleared. Here too, it could be a sign of good housekeeping or a good hacker that is trying to cover their tracks.
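Rather than scrolling through Event Viewer looking for these, the interesting IDs can be queried directly. The example below is only a sketch – it must be run from an elevated PowerShell prompt to read the Security log, and the ID list should be trimmed to the events you actually care about:

# Pull recent occurrences of several of the security events listed above
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 1102, 4663, 4719, 4724 } -MaxEvents 50 |
    Select-Object TimeCreated, Id, Message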
Linux is the other popular operating system that we will be looking at. Its log files are stored in plain text and can be found in the /var/log directory and subdirectory. There are Linux logs for everything including the system, kernel, package managers, boot processes, Apache, MySQL, and more.
After navigating to the log directory – by typing cd /var/log from the root directory – we can access the files by following these steps:
- The command to display the content of the directory is ls – it will list the files.
One of the most important logs to view is the syslog because it logs everything except auth-related messages.
- Issue the command sudo cat syslog – while still in the /var/log directory – to view everything logged in syslog.
- To get to the last – and therefore latest – entry when viewing the file in a pager such as less, type Shift+G, which skips straight to the end.
- To stay on top of entries as they occur, you can use the "tail" command. For example, to keep track of events in the syslog file as they come in, the command would be tail /var/log/syslog (to list the last 10 events) or tail –f /var/log/syslog to "follow" the events as they are appended to the bottom of the file. More commands and their full syntaxes can be found in the tail man page (man tail).
- To stop following the output of tail –f, press Ctrl+C; if you opened the file in an editor such as nano instead, Ctrl+X exits.
Next, we will have a look at some logs that need to be monitored. These include:
- /var/log/messages Logs generic system activities and is used to store informational and non-critical system messages.
- /var/log/auth.log Llogs authentication-related events in Debian and Ubuntu servers.
- /var/log/boot.log Logs boot-up messages generated by /etc/init.d/bootmisc.sh – the system initialization script – as a device goes through the startup process.
- /var/log/dmesg Logs information related to hardware devices and drivers.
- /var/log/kern.log Logs critical kernel information.
- /var/log/faillog Logs important information about failed login attempts.
- /var/log/cron Logs information on cron jobs.
- /var/log/yum.log Logs information about new packages that have been installed using the “yum” command.
- /var/log/maillog or /var/log/mail.log Logs records concerning mail servers.
- /var/log/httpd/ This is the directory that holds logs recorded by an Apache server, stored in two files: error_log and access_log.
- /var/log/mysqld.log or /var/log/mysql.log Logs MySQL records regarding debug, failure, and success messages related to the [mysqld] and [mysqld_safe] daemons.
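As a quick illustration of how these files are used in practice, a couple of one-liners can surface recent authentication failures. The paths assume a Debian/Ubuntu layout, and the awk field position depends on the exact sshd message format, so treat this as a starting point rather than a recipe:

    # Show the last five failed SSH password attempts recorded in auth.log
    grep "Failed password" /var/log/auth.log | tail -n 5
    # Count failed attempts per source IP address
    grep "Failed password" /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn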
What is event log forwarding?
Ok; now that we have seen what an event and an event log are, it is time to talk about Event Log Forwarding.
Event log forwarding is the process of consolidating event logs from distributed servers on a network into one central repository. The purpose of forwarding event logs is to work with one all-inclusive archive instead of having to connect to, and monitor, each server individually.
Most major operating systems, including Windows and Linux, have log forwarding capabilities.
Windows, by default, allows event logs to be forwarded from servers to a central server. The forwarded events are stored in the Forwarded Events folder that is located under Windows Logs.
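Setting this up natively uses Windows Event Forwarding (WEF). In broad strokes, the collector runs the Windows Event Collector service and each source computer enables WinRM; the subscription file name below is just a placeholder for an XML file describing which computers and which events to collect:

    rem On the central collector: enable and configure the Windows Event Collector service
    wecutil qc
    rem On each source computer: enable the WinRM listener that WEF relies on
    winrm quickconfig
    rem Back on the collector: create a subscription from a configuration file
    wecutil cs subscription.xml

In larger environments the source computers are usually pointed at the collector through Group Policy rather than touched one by one.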
Meanwhile, Linux also offers rsyslog – a software solution that is included by default in the operating system – to send logs remotely. In case it isn’t installed, users can easily download it.
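With rsyslog, forwarding is typically a one-line change in the configuration. The server name and port below are placeholders; a single @ forwards over UDP, while @@ forwards over TCP:

    # /etc/rsyslog.d/forward.conf – send everything to a central collector over TCP port 514
    *.* @@logserver.example.com:514

After saving the file, restart the service (for example, systemctl restart rsyslog) for the change to take effect.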
Why do we need to forward logs?
Some of the main reasons to opt for Event Log Forwarding include:
- Centralized logging and auditing Administrators can create a warehouse of data that gives them a complete picture of the current status of all their devices.
- One query for all servers Administrators can run queries against a single data repository, instead of having to collect the same data on each server.
- As part of a disaster recovery plan Hackers tend to tamper with the event logs of the servers they attack in a bid to cover their tracks. A centralized repository will serve as a backup to help uncover what they have done and then restore breached systems.
- Saving disk space Data storage on mission-critical servers can be optimized by moving event logs to a dedicated repository.
- Restricted access Administrators can grant other stakeholders (auditors, security engineers, etc.) access to the central repository without allowing them to log into the actual production servers.
The Best Event Log Forwarding Tools
Next, let us have a brief look at the four best tools for event log forwarding for both Windows and Linux servers.
1. SolarWinds Kiwi Syslog Server
SolarWinds Kiwi Syslog Server is a tool for managing syslog messages and SNMP traps from network devices as well as operating systems like Windows, Linux, and UNIX.
Administrators can centrally manage forwarded events from one console. They can also receive real-time alerts of critical events indicating problems with servers or network devices. They can access their syslog data from anywhere using a browser.
The tool allows for automatic responses to syslog messages as well as easily inspecting log messages for faster troubleshooting.
You can try SolarWinds Kiwi Syslog Server through a FREE and fully functional 14-day trial.
2. ManageEngine EventLog Analyzer
ManageEngine EventLog Analyzer collects logs from multiple log sources – including Windows, Linux, and UNIX servers. But apart from servers it also accepts logs from applications, databases, firewalls, routers, switches, IDS/IPS, and other cloud infrastructures.
Administrators can collect, filter, parse, analyze, correlate, search, and archive event logs sent from all mission-critical devices on their network.
They can then create and extract custom reports – with new fields of their own if need be – to make their log data more informative. Interactive dashboards display intuitive graphs while reports can be used to monitor for suspicious activities, depending on requirements.
Try ManageEngine EventLog Analyzer for FREE.
3. Site24x7
Administrators can analyze forwarded event logs triggered by applications, security incidents, setups, and system faults or failures. They can also identify faults in server memory and track security events like failed logins, cleared audit logs, and more.
It can send out instant alerts about errors associated with event facilities like clock daemon, kernel, FTP, and performance degradation before end-users are affected.
Administrators can use this tool to analyze event logs and search for specific keywords, filter them by ID, and list them by occurrences.
Try Site24x7 FREE for 30 days.
4. LogDNA
LogDNA is a tool for ingesting, storing, processing, analyzing, and routing log data. It centralizes data from any source, processes it in real-time, and forwards it to multiple destinations for specific use cases.
The tool has screens, dashboards, and graphs that aggregate and visualize critical log events to help identify trends.
Administrators have access to tools like powerful exclusion rules to manage log volume by only storing critical events. Regardless of the source, the tool can handle input from the LogDNA Agent, syslog, code libraries, or APIs. Once captured, this tool can automatically parse major log line types while using its Custom Parsing Templates for other input formats.
Try LogDNA for FREE for 14 days.
Event Log Forwarding is part of infrastructure security
We hope that this Event Log Forwarding Guide has been of help to you. The main point to be taken from it is that this is a process that is critical for businesses that want to keep their data and networks secure. It can be used alongside infrastructure monitoring tools to protect mission-critical digital assets.
We would like to hear your thoughts – leave us a comment below.
|
<urn:uuid:9c1bcd60-04fc-4ac6-8931-88a8cd6f7997>
|
CC-MAIN-2022-40
|
https://www.itprc.com/event-log-forwarding/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00649.warc.gz
|
en
| 0.940069 | 3,400 | 2.828125 | 3 |
SQL Server supports two methods of data encryption:
- Column-level encryption
- Transparent Data Encryption
Column-level encryption allows the encryption of particular data columns. Several pairs of complementary functions are used to implement column-level encryption. I will not discuss this encryption method further because its implementation is a complex manual process that requires the modification of your application.
Transparent Data Encryption (TDE) introduces a new database option that encrypts the database files automatically, without needing to alter any applications. That way, you can prevent the database access of unauthorized persons, even if they obtain the database files or database backup files.
Encryption of the database file is performed at the page level. The pages in an encrypted database are encrypted before they are written to disk and decrypted when they are read into memory.
TDE, like most other encryption methods, is based on an encryption key. It uses a symmetric key, which secures the encrypted database.
For a particular database, TDE can be implemented in four steps:
- Create a database master key using the CREATE MASTER KEY statement. (Example 12.1 shows the use of the statement.)
- Create a certificate using the CREATE CERTIFICATE statement (see Example 12.1).
- Create an encryption key using the CREATE DATABASE ENCRYPTION KEY statement.
- Configure the database to use encryption. (This step can be implemented by setting the SET ENCRYPTION clause of the ALTER DATABASE statement to ON.)
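Put together, the four steps map onto T-SQL along these lines. The database name, certificate name, and password are placeholders, and AES_256 is just one of the supported algorithms:

    USE master;
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword123!>';
    CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';

    USE SampleDb;
    CREATE DATABASE ENCRYPTION KEY
        WITH ALGORITHM = AES_256
        ENCRYPTION BY SERVER CERTIFICATE TdeCert;

    ALTER DATABASE SampleDb SET ENCRYPTION ON;

In practice you would also back up the certificate and its private key immediately, because an encrypted database (or its backups) cannot be restored on another server without them.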
|
<urn:uuid:b10c89c4-1891-4b54-a6e6-0af4bb34c075>
|
CC-MAIN-2022-40
|
https://logicalread.com/sql-server-data-encryption-mc03/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00049.warc.gz
|
en
| 0.797338 | 321 | 3.078125 | 3 |
For example, can you answer this question?
Q. The CEO of a company recently received an email. The email indicates that her company is being sued and names her specifically as a defendant in the lawsuit. It includes an attachment and the email describes the attachment as a subpoena. Which of the following BEST describes the social engineering principle used by the sender in this scenario?
More, do you know why the correct answer is correct and the incorrect answers are incorrect? The answer and explanation is available at the end of this post.
Attackers have been increasingly using email to launch attacks. One of the reasons is because they’ve been so successful. Many people don’t understand how dangerous a simple email can be for the entire organization. Without understanding the danger, they often click a link within a malicious email, which gives attackers access to an entire network. Email attacks include spam, phishing, spear phishing, and whaling.
One Click Lets Them In
It’s worth stressing that it only takes one click by an uneducated user to give an attacker almost unlimited access to an organization’s network. Consider the figure. It outlines the process APTs have used to launch attacks.
Steps in an attack
Note that the attacker (located in the attacker space) can be located anywhere in the world, and only needs access to the Internet. The neutral space might be servers owned and operated by the attackers. They might be in the same country as attackers, or they might be in another country. In some cases, the attackers use servers owned by others, but controlled by the attackers, such as servers in a botnet. The victim space is the internal network of the target. Refer to figure as you read the following steps in an attack:
1. The attacker uses open-source intelligence to identify a target. Some typical sources are social media sites and news outlets. Other times, attackers use social engineering tactics via phone calls and emails to get information on the organization or individuals employed by the organization.
2. Next, the attacker crafts a spear phishing email with a malicious link. The email might include links to malware hosted on another site and encourage the user to click the link. In some cases, this link can activate a drive-by download that installs itself on the user’s computer without the user’s knowledge. Cozy Bear (APT 29) used this technique and at least one targeted individual clicked the link. Similarly, criminals commonly use this technique to download ransomware onto a user’s computer. In other cases, the email might indicate that the user’s password has expired and the user needs to change the password or all access will be suspended. Fancy Bear (APT 28) used a similar technique.
3. The attacker sends the spear phishing email to the recipient from a server in the neutral space. This email includes a malicious link and uses words designed to trick the user into clicking it.
4. If the user clicks on the link, it takes the user to a web site that looks legitimate. This web site might attempt a drive-by download, or it might mimic a legitimate web site and encourage the user to enter a username and password.
5. If the malicious link tricked the user into entering credentials, the web site sends the information back to the attacker. If the malicious link installed malware on the user’s system, such as a RAT, the attacker uses it to collect information on the user’s computer (including the user’s credentials, once discovered) and sends it back to the attacker.
6. The attacker uses the credentials to access targeted systems. In many cases, the attacker uses the infected computer to scan the network for vulnerabilities.
7. The attacker installs malware on the targeted systems.
8. This malware examines all the available data on these systems, such as emails and files on computers and servers.
9. The malware gathers all data of interest and typically divides it into encrypted chunks.
10. These encrypted chunks are exfiltrated out of the network and back to the attacker.
Privilege escalation occurs when a user or process accesses elevated rights and permissions. Combined, rights and permissions are privileges. When attackers first compromise a system, they often have minimal privileges. However, privilege escalation tactics allow them to get more and more privileges. The recipient shown in Figure 6.1 might have minimal privileges, but malware will use various privilege escalation techniques to gain more and more privileges on the user’s computer and within the user’s network.
If users are logged on with administrative privileges, it makes it much easier for the malware to gain control of the user’s system and within the network. This is one of the reasons organizations require administrators to have two accounts. Administrators use one account for regular use and one for administrative use. The only time they would log on with the administrator account is when they are performing administrative work. This reduces the time the administrative account is in use, and makes it more difficult for the malware to use privilege escalation techniques.
Do you know what many experts are referring to as the biggest cybersecurity threat?
This isn’t insiders intentionally stealing data.
Instead, it’s insiders making the same mistakes over and over such as clicking on links that install malware on their systems, or providing their passwords via fake pages that prompt them to change their password.
Q. The CEO of a company recently received an email. The email indicates that her company is being sued and names her specifically as a defendant in the lawsuit. It includes an attachment and the email describes the attachment as a subpoena. Which of the following BEST describes the social engineering principle used by the sender in this scenario?
Answer is D. The sender is using the social engineering principle of authority in this scenario. A chief executive officer (CEO) would respect legal authorities and might be more inclined to open an attachment from such an authority.
While the scenario describes whaling, a specific type of phishing attack, whaling and phishing are attacks, not social engineering principles.
The social engineering principle of consensus attempts to show that other people like a product, but this is unrelated to this scenario.
If you’re studying for the SY0-501 version of the exam, check out the CompTIA Security+ Get Certified Get Ahead: SY0-501 Study Guide.
|
<urn:uuid:3620ddb2-440d-4af2-bc65-4d709339cc72>
|
CC-MAIN-2022-40
|
https://blogs.getcertifiedgetahead.com/understanding-email-attacks/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00049.warc.gz
|
en
| 0.932199 | 1,355 | 3.328125 | 3 |
Receive the latest articles and
content straight to your mailbox
Network Cable Ratings and Jacket Type Comparison
Few people give much thought to the outer jacket on network cables. However, jacketing, also known as sheathing, is vitally important to fire safety and the proper functioning of the cable.
It is also more complex than one might think. Multiple types of jacket material are used for copper cables to meet various fire ratings, and the U.S. has different cable jacket material standards than the E.U. and much of the rest of the world. Fiber-optic cables follow different standards than copper, although the E.U. allows the use of the same type of jacket material for both.
What Is a Cable Jacket?
As the name implies, the cable jacket is the outer layer. In a copper cable, the jacket covers a shielding material, which covers a layer of insulation, which covers the copper wires. In a fiber-optic cable, the jacket covers a layer of strengthening fibers, which covers a primary coating, which covers glass cladding, which covers the optical fibers themselves. The cable jacket’s primary purpose is to protect the cable’s insulation and core from damage and deterioration.
What Is a Cable Sheath?
Cable sheath is another way to describe the cable jacket. It is the outermost layer of the cable protecting the cable’s insulation and core from damage and deterioration.
Different cable materials can be used for different applications. Cable jackets may be color-coded for identification. Cable jacket material selection all depends on your requirements. The different types of network cable ratings serve specific purposes. We’ll explore below.
U.S. Fire Ratings for Copper Cable Sheath Types
In the U.S., there are three common ethernet cable fire ratings: Communications Multipurpose (CM), Communications Multipurpose Cable, Riser (CMR), and Communications Multipurpose, Plenum (CMP). They are distinguished by the type of material used for the cable jacket.
“Risers” are the vertical shafts between the floors of a building. “Plenums” refer to dropped ceilings, raised floors, and other air spaces.
Communications Multipurpose Cable Fire Rating
Jackets for CM-rated cables are made from polyvinyl chloride (PVC) because it’s flexible and inexpensive. The jacketing must pass the flame test described in the UL 1865 standard. However, the PVC material can produce thick smoke and dangerous gases in a fire and may not be used in risers or plenums.
Communications Multipurpose, Riser Cable Fire Rating
Low-smoke PVC and fluorinated ethylene polymer (FEP) are used for CMR cable jackets. These materials are more flame-retardant than PVC and produce less smoke. CMR jacketing must pass the UL 1666 flame test, which is much stricter than the UL 1865 test used for CM cable jackets. The goal is to reduce the risk of the cable spreading fire from one floor to another.
Communications Multipurpose, Plenum Cable Fire Rating
CMP cable jackets are made from PVC, FEP, or other materials. Whatever material is used must pass the test defined in the NFPA 262 standard, which requires limited flame travel and low smoke.
E.U. Fire Ratings for Copper Cable Sheath Types
Low Smoke Zero Halogen (LSZH) is typically used for copper cable sheathing in the E.U. and many other parts of the world. It is also called Low Smoke Halogen Free (LSHF). The halogen in this instance is chlorine — LSZH cable jackets are made from polyethylene, which contains little to no chlorine. In a fire, polyethylene produces a limited amount of smoke and almost no hydrogen chloride, which is the most dangerous of the halogen gases.
Fiber-Optic Cable Jacket Types
For fiber-optic cables, the most common standards are Optical Fiber Nonconductive Riser (OFNR) and Optical Fiber Nonconductive Plenum (OFNP). The OFNR standard has nothing to do with the jacket materials — it refers to a cable made of non-conductive materials. The OFNP is similar to OFNR but has a jacket made of a material suitable for plenum spaces.
In the E.U., fiber-optic cables also use the LSZH material for sheathing. However, some E.U. customers prefer to follow the OFNR and OFNP standards.
Enconnex Can Meet Your Cabling Needs
Enconnex offers a plethora of network cables and accessories, including copper and fiber-optic cables to support a wide range of use cases. Our cabling experts are here to help you choose the right solutions for your requirements; just get in touch.
Posted by Mike Chen on August 5, 2022
Mike has 20+ years of senior program management, product management, and consulting experience in IT, consumer electronics, and communication products, both at finished goods and components levels. Mike is the Product Manager for Network Cabling at Enconnex.
|
<urn:uuid:94663b20-5ba5-4977-aa9b-7fc33d60b02d>
|
CC-MAIN-2022-40
|
https://blog.enconnex.com/network-cable-ratings-and-jacket-type-comparison
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00049.warc.gz
|
en
| 0.911549 | 1,081 | 3.015625 | 3 |
In my book on Configuration Management Database, I have discussed the hierarchy of DIKW (Data, Information, Knowledge and Wisdom), and further to that, in another book, I have expressed my apprehension about the capability of technology to produce wisdom. I have argued that although technology is extracting a lot of knowledge from data and information, it is not achieving wisdom. There is a difference between knowledge and wisdom. In old scriptures, knowledge and wisdom were used interchangeably because in old times knowledgeable persons were invariably wise, but in modern times that is no longer true. A person can be knowledgeable but may not be wise. Knowledge can be learned but wisdom ought to be earned.
With the advancement in the area of “Trusted AI” I am hoping to see the possibility of “Wise AI” in near future. Being trustworthy in my opinion is one of the key characteristics of being wise.
In order for AI systems to be trustworthy, we must ensure that they address all the dimensions of trust and thus inspire confidence. Although there are multiple dimensions of trusted AI, such as explainability, security and privacy, transparency, and accountability, I would like to discuss the dimension of fairness, as I consider it the most important and the most complex.
AI systems source the data from the world that carries biases and prejudices, and these biases and prejudices can easily enter AI systems through training data. The AI systems pick these biases and prejudices, encode them, and may even scale them up. This risk brings in the necessity of calibrating and instrumenting AI for fairness. Building fairness in AI will enable us to break the chain of human biases.
We all know that biases and prejudices can eclipse the wisdom of a person, but a person with strong thinking and reasoning can overcome these weaknesses and become wise. Trusted AI needs to be continuously learning from newer and newer data but has limitations when it comes to thinking natively. Will that time come when machine thinking can be matched with human capability?
Yet another characteristic of wisdom is righteousness. As such, there is a systematic decline of righteousness in the society with the rise of unreason and it is being pushed back by populism. So, will anyone be interested in teaching AI system to be righteous? The prevailing trend is to be popular rather than be righteous. At times you have opportunity to be righteous as well as popular, but these kinds of opportunities are diminishing in the business and society. So, I will leave the same question on the table- will that time come when it will be viable to build a “Wise AI”?
|
<urn:uuid:ae2625f3-6639-43fc-a019-64060604d312>
|
CC-MAIN-2022-40
|
https://www.dryice.ai/resource/blog/can-ai-system-be-wise-enough
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00049.warc.gz
|
en
| 0.955736 | 522 | 2.609375 | 3 |
Earlier this month, the existence of a critical vulnerability in Apache Log4j 2 was revealed and a PoC for it published. Dubbed Log4Shell, it’s an issue in a logging library for Java applications that is widely used across popular open-source projects and enterprise-grade back-end applications. Log4Shell introduced a critical security risk, scoring 10 out of 10 in severity.
Using open-source software has proven to be one of the most effective responses to this type of risk. While open-source software doesn’t guarantee a life free of vulnerabilities, it does guarantee fast response and remediation, which is crucial in the event of a large-scale security risk such as that brought on by Log4Shell.
Benefits of open source
Open-source software is defined as “software that is released under a license in which the copyright holder grants users the rights to use, study, change, and distribute the software and its source code to anyone and for any purpose.” Some of the benefits of this are lower hardware costs, higher-quality software, flexibility, security, and transparency.
Having access to the source code and permission to alter it means that anyone can submit to the creator or maintainer desired changes to be included in the upstream software. While it also means that malicious actors can explore the code and probe it for vulnerabilities, those people already do this with closed-source software using techniques like fuzzing.
Closed-source software is not inherently safer than open-source software.
When a software’s source code is not made available for inspection, only the vendor can implement fixes, and this creates disparity between attackers and defenders (i.e., more attackers, fewer defenders.) That gap is flipped for open-source software, with more defenders than attackers. Additionally, closed-source software is more likely to have exploitable (or actively exploited) vulnerabilities in the wild for a longer time and will have a lengthier mean time to repair (MTTR) for those vulnerabilities. Companies are also less likely to publicly report vulnerabilities in their closed-source software due to image and liability concerns, while open-source software is built on a model of transparency and openness.
In the case of the Log4j 2 vulnerability, its open-source status meant that once it was discovered, the entire community of developers could work on fixing it and also perform code reviews that uncovered two additional vulnerabilities (as of December 20, 2021).
The vulnerability makes it possible for malicious actors to execute arbitrary code on or retrieve confidential information from the system being attacked. There are two ways to mitigate a vulnerability like this: by patching or updating Log4j in all systems and applications where it’s deployed, or by blocking malicious requests as they enter the network, often through a reverse proxy or load balancer.
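For teams that could not patch immediately, the interim mitigations circulating at the time illustrate both approaches. The flag below disables message lookups in Log4j 2.10 through 2.14.1; it was later judged insufficient on its own, so upgrading the dependency (to 2.17.0 or later, per the guidance current at the time) remains the real fix. The jar name is a placeholder:

    # Stopgap: disable JNDI message lookups via a JVM system property...
    java -Dlog4j2.formatMsgNoLookups=true -jar app.jar
    # ...or via an environment variable that Log4j reads at startup
    export LOG4J_FORMAT_MSG_NO_LOOKUPS=true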
In the hours and days following the disclosure, many developers and companies published free detection and mitigation features, including container image scanning utilities and network plugins to detect and block the Log4Shell exploit before it reaches a vulnerable backend server. The influx of energy and defensive response showcases the true power of the open-source community.
|
<urn:uuid:1eaf4e9e-15be-4480-8d14-e943b8b63441>
|
CC-MAIN-2022-40
|
https://www.helpnetsecurity.com/2021/12/22/solving-log4shell-problems/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00049.warc.gz
|
en
| 0.946568 | 649 | 2.765625 | 3 |
What is the challenge?
Nowadays, with employees able to access work resources from different types of devices (e.g., desktop, tablet, mobile phone) and different locations (e.g., office, home, public), the challenge
for security and mobility teams is ensuring that only authorized devices are allowed onto the network.
It’s also about striking the right balance that allows users/employees to be productive from anywhere, at any time, while still protecting corporate resources. Conditional Access is here to ensure that only authorized devices are permitted to connect with the corporate infrastructure.
How is it usually dealt with?
Traditionally, usernames and passwords have sufficed as credentials to allow an employee to access a corporate resource or service from a device, either desktop or mobile; this means only one factor (the user) was used to decide whether to grant access or not. As we migrate to a Zero Trust architecture, an additional layer of access control should be included to ensure that not only the user but also the device itself is authorized to access those resources and services and to handle corporate data, ensuring said data will not be compromised in any of its states (in transit, at rest, and in use); a second factor (the device) is thus factored into the decision. Combined with the Zero Trust architecture, which consists of providing only the minimum access required on a per-resource basis, this delivers better protection. However, even this is no longer enough to address today’s needs in terms of cybersecurity.
What is Conditional Access?
Conditional Access enables corporations to strengthen and fine-tune their access control policies with additional factors like device, location, application, or real-time risk level to determine whether a user will be allowed to access a service, be blocked, or be allowed but only after validating additional checks.
How does it work?
Conditional Access uses multiple factors to make decisions and enforce organizational policies.
Common factors considered to make decisions include:
· User: Specific users or group of users can be targeted with different policies
· Location: Where is the user connecting from?
o Known, trusted location (e.g., office)
o Known, untrusted location (pre-defined area, region, country)
o Unknown, untrusted location (any other location)
· Device: Specific platform
· Application: Specific to an application or groups of applications (e.g., email)
· Real-time, calculated risk
o If the device is compliant with company security policy (e.g., managed by UEM solution)
o If the device is at a potential risk level (e.g., threat detected by MTD solution)
Unlike a default allow/block approach, a third one is included, which triggers complementary authentication and/or requests additional factors before allowing access:
· Block access (most restrictive)
· Allow full/limited access
· Allow access but first require one or multiple additional options
o Request Multi Factor Authentication (MFA)
o Device to be marked as compliant
Connection from unusual/unknown location:
An employee is travelling to a foreign country and connecting from a public place (e.g., airport) at arrival; the user is authenticated, the device managed and safe, however, the location is unusual, and so an additional confirmation is triggered (e.g., validate from another known, trusted device) to ensure that the request is legit and coming from the user, not a possible attacker trying to impersonate one of his devices.
Device falls out of compliance:
An employee managed to install supposedly safe, yet still unapproved, software on his device. That is detected by the UEM software and the device is marked as uncompliant. The device status is then sent to the organization’s back-end infrastructure (e.g., Microsoft 365) so access to certain/all services, including email, internal websites, etc., is prevented until the situation is fixed and the device is reported back as compliant by the UEM software.
The Conditional Access feature must be available directly, embedded into the solution or service for which we want to restrict access (e.g., mail, web), either on-premises or cloud-based.
While moving from a traditional security approach to a more advanced, optimal one for your organization, Conditional Access is an important tool of any security strategy, involving all five pillars of Zero Trust architecture (identity, device, network, application, and data).
It can be leveraged across the whole infrastructure, using factors not only gathered by the corresponding service provider (e.g., Microsoft 365) but also from other complementary third-party solutions. Such data can include device compliance status (compliant/uncompliant) and enrollment state (managed/unmanaged) provided by a Unified Endpoint Management (UEM) software, and/or device risk level (e.g., threat detected, sideloaded app, encryption disabled) provided by a Mobile Threat Detection (MTD) software. The more factors involved, the better the security.
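To make the decision flow concrete, here is a deliberately simplified Python sketch of how such factors might be combined. It is illustrative pseudologic only – not any vendor’s actual policy engine or API – and the factor names are invented for the example:

    # Illustrative only: combine user, location, device, and risk signals into one decision
    def evaluate_access(user_authenticated, location, device_compliant, risk_level):
        if not user_authenticated:
            return "block"
        if risk_level == "high" or not device_compliant:
            return "block"              # real-time risk or non-compliance wins over everything
        if location == "trusted":
            return "allow"
        if location == "unknown":
            return "allow_with_mfa"     # unusual location: allow, but require a second factor
        return "block"                  # known, untrusted locations are denied outright

    print(evaluate_access(True, "unknown", True, "low"))  # -> allow_with_mfa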
(C) Rémi Frédéric Keusseyan, Global Head of Training, ISEC7 Group
|
<urn:uuid:0f7a02c5-1181-42e7-85f7-77477dab0274>
|
CC-MAIN-2022-40
|
https://www.isec7.com/2022/05/03/demystifying-security-conditional-access/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00049.warc.gz
|
en
| 0.931495 | 1,079 | 2.765625 | 3 |
In an increasingly digital business world, data backup has become vital for the protection of an organization’s interest. Businesses of all sizes can get hacked or breached, and indiscriminate ransomware attacks cause countless orgs to lose their valuable data to cyberthieves every year. Even disgruntled employees or other insider threats can steal or delete their valuable digital assets.
Data backup is a practice that combines techniques and solutions to combat these threats and mitigate these risks. By copying data to one or more locations, at predetermined frequencies, it is easier to keep safe and restore when the need arises.
What this article will cover:
- What is data backup?
- Why data backups are important
- Types of data backup
- What is a disaster recovery plan?
- Data backup and recovery solutions
What is data backup?
As mentioned earlier, data backup is the process of making copies of valuable data and storing them in a safe location. This concept has come a long way over the years, but you may remember the old tape backup drives that allowed users to copy their hard drives to large data cassettes. The user was then supposed to keep those backup copies in a secure location like a fireproof safe or somewhere offsite.
Current Backup and Disaster Recovery (BDR) technology allows for more elegant solutions, but the idea remains the same. We’ll take a closer look at backup options later in the article.
The overall practice of data backup includes several important concepts:
- Backup solutions: these are the tools used to expedite and automate the backup and restore process.
- Backup administrator: the person within an organization or IT department that is responsible for backups. They are accountable for making sure the backup system is set up correctly, and for testing backups periodically for accuracy and integrity.
- Backup scope and schedule: system administrators must define their backup policy, specifying which files and systems are important enough to be backed up, and how frequently those backups should occur.
- Recovery Point Objective (RPO): RPO helps define the frequency of backups by determining the amount of data the organization is willing to lose after a disaster. If systems are backed up daily, the RPO is 24 hours. Lower RPOs mean less data lost, but also require more resources and storage capacity to execute.
- Recovery Time Objective (RTO): Much like the RPO specifies how much data an organization can afford to lose, the RTO determines how much time can be lost. The RTO is the time it takes to restore data or systems from backup and resume normal operations. Massive amounts of data can take time to restore leading to long RTOs unless new technology or methods are introduced.
Why are data backups important?
Data loss can be caused by tech failures, cyberattacks, or even just bad fortune.
Natural disasters can rob organizations of their data through floods, fires, or other means.
Every organization must contend with viruses, trojans, malware, ransomware, and all those malicious intruders sent forth by cyber criminals.
And the performance of computing hardware degrades as it grows older, sometimes leading to unexpected failures.
Data loss from any of these events will likely have far-reaching consequences. Businesses will usually lose valuable customer data, financial records, and production. Additionally, there will likely be added downtime and possible hits to their reputation.
Personal users are just as at risk, as their own devices hold family photos, music, financial data, and other important documents that cannot be replaced.
In any scenario where data disappears, backups are the answer. Having a backup and restore process in place will ensure you don’t lose your data forever and give you peace of mind knowing that all your files are safely stored in multiple locations.
Types and methods of data backup
Different backup methods are recommended depending on your specific situation. A full backup saves all files to a designated backup source, just like it sounds. An incremental backup saves files that have changed since the most recent backup run and can run multiple times between full backups. A differential backup saves all files that have changed since the last full backup.
Backup plans often use all three methods. For example, a backup may be configured over a three-day period as follows:
- Friday: Full backup
- Monday: Incremental backup
- Wednesday: Differential backup
When the incremental backup runs on Monday, it backs up all files that have changed on Friday, Saturday, and Sunday. The differential backup on Wednesday will copy all files changed since the last full backup, so it will go all the way back to the previous Friday. Then the full backup runs on Friday, completely backing up all files regardless of whether they have changed or not.
While incremental backups sound a lot like differential backups, they use less storage space. This means they’re well suited to organizations looking to save on costs and resources.On the other hand, a full backup saves all the files on the system. As you would expect, this creates a huge file. While full backups simplify file restoration, they take the longest and use the most system resources.
Because a differential backup is a cumulative backup, it takes far less time to run than a full backup. If the system needs to be restored, it will restore from the last full backup combined with the last differential backup. As far as speed goes, incremental backups are the quickest for backing up but slowest to restore. This is because restoring from incrementals requires restoring the last full backup file and all incremental backup files.
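As a rough illustration of the difference, the Python sketch below selects files for an incremental versus a differential run purely by comparing modification times against reference timestamps. Real backup tools track far more state (archive bits, change journals, block-level deltas), and the /data path and timestamps are placeholders, so treat this as a conceptual example only:

    import os, time

    def changed_since(root, since_epoch):
        """Return paths under root whose modification time is newer than since_epoch."""
        selected = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) > since_epoch:
                    selected.append(path)
        return selected

    last_full_backup = time.time() - 5 * 86400    # e.g., Friday's full backup
    last_any_backup = time.time() - 2 * 86400     # e.g., Monday's incremental

    differential_set = changed_since("/data", last_full_backup)   # everything since the full
    incremental_set = changed_since("/data", last_any_backup)     # only since the last backup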
Backup technologies and locations
Just as there are several types of backup to choose from, there are many methods of backup technology to consider. Below are several of the most common backup and data recovery technologies:
This is the simplest option that harkens back to those days of tape drives and fireproof safes. Modern removable media like USB drives or DVDs can be used to create hard copy backups which can then be stored in a safe location. While this method is simple, one must consider the costs of removable media and the workload involved with making, testing, transporting, and storing removable media backups. This method also tends to be slow and cumbersome when a system needs to be restored.
Redundancy is a type of backup that involves setting up an additional hard drive that is a replica of another drive at a specific point in time. Alternatively, entire redundant systems can be set up for complete failover purposes. Redundancy is complex to manage, requires frequent cloning of systems, and can only offer true disaster preparedness if the redundant system is located safely off site.
External hard drive
Another simple method of backup is to install a high-volume external hard drive on the local network. You can then use archive software to save local changes to that hard drive. This is both risky and inefficient due to the limitations of external hard drives and the need to install the drive locally -- where it will likely still be vulnerable to physical damage and cyberattack.
Rack-mounted backup appliances come with large storage capacity and pre-integrated backup software. Users must install backup agents on the systems they’re protecting, define their backup scheduling and policies, and data starts copying to the backup device as specified. Again, this is another method that doesn’t serve disaster preparedness needs very well unless it’s installed off site in a safe location.
Software-based backup solutions are among the most common tools used by modern organizations. While they can be challenging to deploy and configure versus “plug-and-play” hardware, they offer greater flexibility at a lower cost.
Cloud backup services
Backup as a Service (BaaS) solutions from cloud providers are becoming the new standard for BDR. These systems allow users to push local data to a public or private cloud and recover data back from the cloud with a few button presses. BaaS solutions are easy to use and have the strong advantage that data is innately stored in a remote location. These methods are often the most cost-effective as well.
What is a disaster recovery plan?
Remember that backups should be part of a larger disaster recovery plan. An IT disaster recovery plan (DRP) is created by an organization to document the policies and procedures they will use to prepare for and respond to a disaster. Such plans in this context are relevant to IT concerns, such as keeping the network online and protecting valuable data through backup policies. The DRP should be treated as a component of the Business Continuity Plan (BCP), and should be tested and updated regularly. Recovery plans that don’t work in the heat of a disaster are essentially useless.
DRPs minimize risk exposure, reduce business disruption, and ensure a codified and reliable response to unfortunate events. These plans can also reduce insurance premiums and potential liability, and are required if the organization needs to comply with certain regulatory requirements.
Backup tools and solutions
As you probably already know, there are numerous backup tools and solutions available to consumers and IT professionals alike. Apart from the variety of options outlined above, there are countless vendors offering their own versions of cloud backup, backup appliances, and everything else imaginable.
What’s important to consider is the balance between cost and efficacy. Take for example the built-in “backup solutions” available from services like OneDrive or Dropbox. While these might be tempting because an organization is already paying for the service, they lack many of the features that a true backup solution needs to perform properly.
When it comes to something as critical as disaster recovery and data backups, it’s best to opt for purpose-built tools that offer all of the required features for a robust backup plan. Files should be truly backed up, not synced, and you should be able to configure backup schedules and choose when and where to use incremental, differential, and full backups.
NinjaOne and data backup
Ninja Data Protection is a built-to-last and best-in-class BDR solution designed to meet all of your disaster readiness needs. Our robust NDP service is available with flexible solutions that meet your data protection, cost, and RTO objectives every time.
- Full image backup
- Document, file, and folder backup
- Cloud-only, local-only, and hybrid storage options
- Fast and easy file restore
- End-user self-service file restore
- Bare metal restore
- Built seamlessly into NinjaOne and fully managed via the RMM dashboard
Data loss is a serious threat to every organization. Loss of sensitive information from hackers, natural disasters, or human error could lead to financial damages, downtime, and hits to your reputation. Developing your own data backup and recovery plan as part of a full Disaster Recovery Plan is essential.
When you begin seeking out a data backup tool to execute that plan, it’s important to be educated both about the available options and what your organization specifically needs. There are a lot of options out there but tackling the challenge with the right information will ensure that you choose the solution that’s best for you.
|
<urn:uuid:0c54d554-fc80-4ade-a855-333ed8d5df97>
|
CC-MAIN-2022-40
|
https://www.ninjaone.com/blog/complete-guide-to-data-backup-recovery/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00049.warc.gz
|
en
| 0.937401 | 2,300 | 2.984375 | 3 |
Leadership Skill Sets
Leaders must have a number of skills to successfully achieve the goals when managing a project. Some of these include effective communicator, motivator, delegator, decision-maker and empathy.
Defining a project requires clear direction from the leader. An overall understanding of the entire scope, goals and timeline needs to be communicated to the project management team. Each member must understand their role and their contribution to the project.
Face-to-face meetings or phone/video conferencing are good methods for communication. Texts, e-mails and online collaboration tools can be utilized for keeping the project moving forward, however, these cannot be solely relied on for effective communication.
This topic of motivating a team usually arises when discussing projects and business with other leaders. How do you motivate each member of the project team? If you think it is only about the money you may find out this is not the case for most members.
Besides providing the overall goal and the importance of each member’s role, leaders focus on individual personalities. Build stronger relationships with each team member and get to know them better. You will find they require different motivations and it is okay to build your strategy with unique motivators.
Delegate with Authority
Provide clear direction on what the tasks entail and empower each member to execute the tasks. It is easy to micro-manage and want to be involved in every step. Let go and trust your team.
When you empower them to make decisions, they might make some mistakes or they might ask the leader to make a decision. Encourage them to make their own decisions, provide guidance as needed and help everyone learn from any mistakes made along the way.
As the leader, you will be relied on from time to time to make decisions. Do not wait until you feel you have 100% of all the information as this is not realistic. Remove your emotions and personal biases, ask questions, gather the facts and make a decision. The key element here is to make the decision.
Procrastinating does not help and may result in the project not being successful. As mentioned above, provide the tools and information to your team so they can make most of the decisions for a successful project.
Leaders demonstrate and teach others how to be empathetic. Businesses and projects are made up of people. Show them you care. Teach them to care. It is a simple concept and we all need to show we care and help one another.
Listen to your team, ask them questions and show them you support them. Caring about others helps with overall satisfaction, morale and success of projects.
What skill sets do you bring to your Project Management Team?
|
<urn:uuid:64c71553-ff05-44d7-853c-07d5124cbcab>
|
CC-MAIN-2022-40
|
https://www.alphakor.com/blogs/leadership/managing-projects/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00049.warc.gz
|
en
| 0.951854 | 568 | 2.84375 | 3 |
GitHub, an incredibly important code resource for major organisations around the world, fell victim to a colossal DDoS attack on Wednesday—the largest ever on record—helped along by something called Memcrashing (more on this later). 1.35 terabits per second of traffic hit GitHub all at once, causing intermittent outages. However, GitHub was prepared, and the attackers backed off quickly when they realized they had met their match.
GitHub? What's that?
Launched in 2008, GitHub is a launchpad for all things code—from a library of files with revisions to forks (branching off to make code alterations while leaving the originals untouched) to groups being able to alter code collaboratively. It's a crucial resource for programmers.
Anything able to interfere with the otherwise smooth operation of GitHub could quickly cause chaos. If one of your company's key pieces of DNA is rapid updates and alterations to code, then a GitHub crash could take you out of action until everything is up and working again. Of course, you should always have local copies of files in the event of an outage, but it's not quite the same as being able to do everything, with everyone, at all hours of the day.
What's Memcrashing?
For that, you need to know what Memcached is. Memcached is an open-source distributed caching system. Lots of sites and services make use of it to alleviate database load by caching things in RAM, and only rolling them out to places that need it, when they need it. Unfortunately, sysadmins are leaving Memcached exposed over the Internet (you're not supposed to do this), and then people with mischief in mind are using those exposed nodes to "amplify" already powerful DDoS attacks into the digital stratosphere. The technical name for this is a "UDP-based reflection attack vector," which is just a fancy way of saying, "We're going to bury your server under a thousand miles of data-driven concrete."
Despite the sheer size of the attack, GitHub had taken proactive steps to ensure any DDoS would have to jump through quite a few hoops to take the site offline. As it turns out, they had just ten minutes of intermittent downtime before anti-DDoS technology played the role of the cavalry, and just eight minutes later, the attack started to drain away to nothing (by comparison).
Previously, on the "GitHub attacked by DDoS channel"...You'd probably have to go back to 2015 and China's so-called "Great Cannon" to see a similarly massive attack. The cannon was used to launch a five-day assault on, you guessed it, GitHub, and the suspicion was that the attacks were political in nature. This most recent attack is still a developing story, and it'll be interesting to see where the blame potentially lies, though of course the main priority right now is that GitHub ensures they're doing everything they can to ward off any follow-up attacks.
If you're running Memcached and need to shore things up, there's a couple of things you can try. You really owe it to your fellow netizens to patch any exposed soft spots; few have the resources available to GitHub, and even the mightiest may struggle with an attack clocking in at 51,000 times their original strength. If you're just a regular organisation, with a regular website, and a standard off-the-shelf-hosting deal, you might have a bit more trouble. We're back to that whole server, thousand miles, data-driven concrete thing again—and unlike GitHub, you probably won't be able to claw your way back out until the attackers get bored and move on.
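Concretely, the exposure comes from Memcached answering UDP requests on port 11211 from the open Internet, so the usual hardening steps are to bind it to an internal address, disable UDP if you don't use it, and firewall the port. The paths and addresses below are typical Linux defaults and should be adjusted to your environment:

    # /etc/memcached.conf – listen only on localhost (or an internal-only address)
    -l 127.0.0.1
    # Disable the UDP listener entirely
    -U 0
    # And/or drop the port at the edge, e.g. with iptables
    iptables -A INPUT -p udp --dport 11211 -j DROP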
DDoS attacks have been around for a long time, and I remember when a 600MB+/second attack was the biggest thing around. Time and tech wait for no one, and the ability of scammers is now leagues beyond what was once available. The arms race between offence and defence where DDoS is concerned is never-ending, and it's up to all of us to do our bit and help to keep the possibility of attacks down to a minimum.
Avoiding dubious files will help keep you out of a botnet attack. Hiding services from the web that don't need to be there will prevent bad people from using them for nefarious purposes. Whether you're in charge of a multinational corporation or you're running your website and services from your home, there's no excuse not to get patching and avoid a fresh wave of DDoS.
|
<urn:uuid:06329220-c173-4f69-aa67-694d2a9fa483>
|
CC-MAIN-2022-40
|
https://www.malwarebytes.com/blog/news/2018/03/massive-ddos-attack-washes-over-github
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00049.warc.gz
|
en
| 0.969039 | 937 | 2.90625 | 3 |
One near constant that you have been seeing in the pages of The Next Platform is that the downside of having a slowing rate at which the speed of new processors is increasing is offset by the upside of having a lot more processing elements in a device. While the performance of programs running on individual processors might not be speeding up like we might like, we instead get huge capacity increases via today’s systems by having the tens through many thousands of processors.
You have also seen in these pages that our usual vanilla view of a processor or a system is nowadays being challenged in a lot of strange and ingenious ways.
The Next Platform brings these to you almost religiously. While most – but by no means all – of these processors share a common concept in that they are driven by software and access data and instructions from some form of memory, it is also true that something in the system is assigning work for them to do.
Practically every programmer knows how to write a program – representing a single unit of work – to run on an individual processor. Many are familiar with how to write web-based and/or database applications that are inherently running on multiple systems. Still, it seems like an accepted given that breaking your needs up into tens to thousands of units of work to use the capacity available in today’s – and certainly tomorrow’s – systems is considerably more difficult.
It has been like searching for a holy grail to find ways to do that automatically, so results have been mixed. More recently, though, we have seen ways for programmers to provide parallelism hints via the likes of
- Hadoop, to spread work over many distributed memory systems, each with sets of processors accessing their own memory.
- OpenMP, to allow for the parallel execution of loops within a process associated with a shared-memory multiprocessor.
These abstractions and productivity enhancements are intended to hide much of the detail of having a programmer managing the parallelization of the work. These are all great, all worthy efforts. Who would be against better performance with not much extra effort?
Also true, though, is that occasionally, perhaps for reasons of better performance, you want to be in control of what constitutes the “work” and when and where it is supposed to execute. For that matter, don’t you really want to have a better idea of just what it is that these productivity-based abstractions are doing on your behalf. And for that matter, is knowing and then using what these tools are doing really all that tough? In the next few articles, we are going take a look under the hood, hopefully showing you that there is not all that much to fear in making use of these ever-increasing number of processors.
So What The Heck Is A Processor?
For most of you folks reading this, the answer to that is easy enough, right? It’s an Intel, Power, or ARM processor, which are general-purpose processors with relatively primitive instructions, some supporting virtual addressing, driving small units of data through simple functional hardware units. The series of these instructions together – a “program” if you like – accomplish some higher-level purpose.
For reasons of parallelism, and since the chip real estate exists, there might be multiple of such processors on a single chip. Each and all access a shared memory from which both instructions and data are fed and ultimately reside. It’s an SMP, short for symmetric multiprocessor. Standard stuff.
And since the instructions can execute far faster than they can be fed with instructions and data, we know that there are various levels of faster intermediate – largely software transparent – memory called cache between the processors and shared memory. And, even with as much capacity as these on-chip processors can drive, shared-memory multiprocessors get built from multiples of these chips, typically all keeping the cached data coherent with the expected program state. Making all of that happen is pretty impressive stuff actually.
Each such processor does its work fast, and it has been doing so for decades (we just keep finding more types of work to use its improved speed):
- For a long time now, these processors are so fast that units of work have had to be interleaved even within individual available processor(s) while other units of work waited to get data out of slower I/O devices.
- These processors are so fast that it appears to humans, who move at much slower speeds, that our work is making progress even while the work is really just taking turns, sharing the available processor(s).
Still, the notion that all computing can be done by rapidly executing strings of primitive instructions, as screamingly fast as it had become over the decades, is not making the progress at improving as in the past. As much as can be done on today’s systems, even that seems to be not enough. We talk now of the need for accelerators to make the next jumps in performance. Even that, though, is not really all that new. For example, units of work driving encryption/decryption were early on offloaded to crypto coprocessors residing in IO space, outside of the traditional processor.
The rate of development of all sorts of strange and wonderful accelerators and associated architectures has really increased of late. This is certainly not a complete list, nor even approaching one, but these may have been built based on:
- Coprocessors sharing the same chip as the processors
- Graphics Processing Units (GPUs), previously encompassing systems in their own right
- Field Programmable Gate Arrays (FPGAs), approaching actual circuitry driving specialized high-level function
- Application Specific Integrated Circuit (ASIC), actual circuitry specialized for a complex operation
The latter two are means of implementing operations in hardware more complex than the relatively primitive operations driven by the instructions supported by the traditional processors.
We are not going to go deep into the organization and purpose of these accelerators, maybe that being for another day, since that is not really the point of this article. The point is that work can be asynchronously offloaded to any of these, hopefully resulting in faster execution than is possible by offloading to another of an SMP's processors. In either case, the results are processed some short time after completion of the offloaded function. Although they might not be processors as you have come to think of them for, well, like forever, we still define work for them, perhaps – in doing so – invoking some strange and wonderful form of a program to execute there. We don't care here whether the work is accomplished by millions of instructions or by just one; either way we need to tell these execution units that we have some work for them to do.
Further, and although not strictly an accelerator, for the last decade or so individual processor cores, previously executing one instruction stream at a time, now are capable of concurrently executing the instruction streams of multiple units of work. IBM calls it simultaneous multi-threading (SMT) and Intel calls it Hyper-Threading (HT). Their purpose is to squeeze considerably more compute capacity out of the processor cores both by allowing other instruction streams to execute during delays such as cache misses and also by allowing fuller use of the massive fine-grained parallelism available in a modern processor core. From the point of view of individual units of work, those single cores look and act like a multiplier of the number of processors.
It is worth additionally pointing out here that not only do we have access to a lot of processor-based compute parallelism, but each of these processors has become increasingly cheap. Because we know that there is a price for each, it would seem that we need to make good use of each and every one of them; yet it is also increasingly true that we can afford to leave some of them essentially unused. But if they would otherwise sit idle, perhaps we should instead have some processors doing work which most of the time we simply throw away. Instead of grinding our way through to just the right answer, it increasingly becomes reasonable to generate all possible answers in parallel, only later choosing the answer which is correct or seems to be the most probable.
Of Multi-Threading And Tasks
All of those, whether vanilla cores in a symmetric multiprocessor or accelerators of all sorts and stripes – that's just the hardware. It just sits there consuming power until we ask parts of it to do some work. And the work, generally speaking, executes when it executes; there is no direct relationship between any of the concurrently executing work assigned to these processors. The work is largely just sets of independently executing instruction streams. Every one of them started executing on some processor at some initial instruction, with each processor having to be loaded with an initial set of data parameters. From there, individual processing of the work just continues until complete or until some external resource is needed.
That’s the essence of a task. When you have some work to do, you point at where the processing is to begin, pass in a few parameters, and say GO. The work of the task, as defined by your program just takes off doing its processing from there.
On an SMP, you typically do not even say which of the processors will execute your task. You provide the task to the operating system and that system software takes it from there, making the decision of when and where your task will execute. You can concurrently provide hundreds or thousands of tasks and the task dispatcher in the operating system still decides when and where each of these tasks will be executing alongside other tasks. This is true no matter the number of processors.
Getting a bit more detailed, the operating system manages all of this by having any given task be in one of the following states:
- Executing: The task’s instruction stream is at this moment executing on a processor.
- Dispatchable: The task is in all ways available to be executing on a processor, but – typically – no processor is at that moment available to execute that task’s instruction stream. (It soon will be, if for no other reason than the OS task dispatcher wants to provide the illusion that all dispatchable tasks are making progress.)
- Waiting: The task is waiting for the availability of some resource and won’t enter a dispatchable state until it becomes available. Examples, among scads of such, include waiting for data
- To be read from disk.
- To be received over a communications link
- To be made available by another task.
When such data does become available, the OS transitions the waiting task to be dispatchable. Conversely, when the executing task finds it needs some unavailable data, the OS transitions the task to a waiting state.
You can see each of these states in this animation using MIT’s Scratch. Tasks on the wait queues are Waiting, in the task dispatch object, Dispatchable, and on a core, Executing.
The basic point is that all that is really needed to provide work to the system is to create tasks and make them dispatchable. From there, the OS will make the decision, among all of the currently dispatchable tasks, just when and where those tasks will be dispatched. Keep in mind, though, that the maximum parallelism possible – the maximum number of concurrently executing tasks – is the same as the number of processors. Realize also that, because of the SMT effect mentioned earlier, the number of "processors" might be a few multiples more than the number of cores.
Now go back and replay that animation. How much of the available processor capacity – the CPU utilization – is being consumed by these tasks? Whenever a task is not on a processor, an idle processor represents available capacity. Why are the processors idle? Rather than spending all their time on the processors, these tasks are waiting for something – reads from I/O-based storage like disk drives, for example. If that is what your application will be doing, more capacity is available for multi-task parallelism, so still more tasks – well over the processor count – could be created to use that capacity.
Of Creating Threads
Catch the name change there? For most programmers, the notion of a task is really that of a thread – potentially one of very many – within a process. When applications are first started, they appear first as a process with a single thread, with the thread representing all that a task represents. Such initial threads start their execution at the beginning of the program and things progress from there, including, perhaps, the creation of more threads. These additional threads are scoped to the same process and so are capable of easily sharing the same data; they are, after all, sharing the same address space represented by the process.
Again, with all due respect to OpenMP – which we will get to in a later article in this series – the creation of a new thread on your own is relatively straightforward. All a new thread really needs is the place to start and some initial data.
Let’s start with some examples from C++, C# and Java.
With C and C++, you can create and start a thread using the POSIX threads function pthread_create. At its most simple, you just provide it with the addresses of:
- The code where the new thread starts execution.
- A set of data parameters.
- A pthread_t structure used for subsequent control of that new thread.
I had intended to provide an example here, but a far better one than what I had in mind can be found here at TutorialsPoint. The key concepts to catch are that once pthread_create executes, both the new thread and the pre-existing one executing this call are running concurrently. The new thread starts where it is told and the current thread just continues its processing after the pthread_create call.
The parameter data structure created for the new thread is actually accessible to both threads, largely because both threads belong to the same process. With appropriate control over shared memory, which we’ll get into in the next article, either thread can change and read the contents of this structure, using that for communicating if they desire.
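To make the shape of the call concrete, here is a minimal sketch of that pattern; the worker function and the work_args structure are invented for illustration and are not from any particular tutorial:

```c
#include <pthread.h>
#include <stdio.h>

/* Parameter block shared by the creating thread and the new thread. */
struct work_args {
    int start;    /* where in the list the new thread should begin */
    int count;    /* how many items it should process              */
};

/* The new thread begins execution at this function. */
static void *worker(void *arg) {
    struct work_args *args = (struct work_args *)arg;
    printf("processing %d items starting at %d\n", args->count, args->start);
    return NULL;
}

int main(void) {
    struct work_args args = { 0, 100 };
    pthread_t tid;    /* used for subsequent control of the new thread */

    /* After this call returns, both threads are executing concurrently. */
    pthread_create(&tid, NULL, worker, &args);

    /* ... the original thread continues its own processing here ... */

    pthread_join(tid, NULL);    /* wait for the new thread to end (more on this below) */
    return 0;
}
```

Because args lives in the creating thread's scope and both threads belong to the same process, either one can read or modify it, which is exactly the shared-structure communication described above.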
C# builds off of its object-based orientation as in the following:
- Define the starting point – the starting method – of the new thread via construction of a ThreadStart object.
- Construct a Thread object, using the ThreadStart object as a parameter.
- Using that Thread object, execute its Start method.
Recalling that most C# methods are associated with an object – one addressed with the this reference – the object providing that starting method can hold the data parameters common to both threads. Both the original thread and the new thread can access this same object and are both capable of understanding the contents of – and of executing the methods associated with – such an object. Again, this object is available for communication between the threads. Since both threads could be concurrently executing, they might also be concurrently executing the same methods – the same code – and accessing the same objects. Pretty cool, but also quite straightforward once you get the basics out of the way. More can be found on Microsoft's Developer Network and TutorialsPoint.
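Here is a minimal sketch of those three steps; the Worker class, its fields, and its Run method are invented for illustration:

```csharp
using System;
using System.Threading;

class Worker
{
    public int Start;   // data parameters common to both threads
    public int Count;

    // The new thread begins execution at this method.
    public void Run()
    {
        Console.WriteLine($"processing {Count} items starting at {Start}");
    }
}

class Program
{
    static void Main()
    {
        var worker = new Worker { Start = 0, Count = 100 };

        ThreadStart entry = new ThreadStart(worker.Run);   // 1. define the starting method
        Thread thread = new Thread(entry);                 // 2. construct the Thread object
        thread.Start();                                    // 3. the new thread is now running

        // The original thread continues here, concurrently with worker.Run();
        // both threads can read and change the shared worker object.
    }
}
```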
Java, also being object-based, has two ways of creating and starting new threads. In one, you are doing the following:
- Construct an object whose class inherits from the Thread class; the code in that object's constructor executes under the scope of the original thread.
- Execute that object’s start() method, again within the scope of the original thread.
- Executing start() results in the new thread beginning its execution at the beginning of the object's run() method, here under the scope of the new thread.
Once again, the original thread invokes the object’s start() method, which automatically results in the JVM starting the new thread at the run() method. (Odd, perhaps, but that’s the architecture and it works fine once you get your head around it.)
The other means of starting a new Java thread is similar:
- Construct an object whose class implements the Runnable interface. Its constructor executes under the scope of the original thread.
- Construct a Thread object, passing that Runnable object as a parameter, and execute the Thread object's start() method, again within the scope of the original thread.
- The newly created thread begins execution at the Runnable object's run() method. (Both approaches are sketched below.)
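A minimal sketch of both approaches, with class names invented for illustration:

```java
// Approach 1: inherit from Thread and override run().
class ListWorker extends Thread {
    private final int start, count;          // initial data for the new thread

    ListWorker(int start, int count) {       // constructor runs under the original thread
        this.start = start;
        this.count = count;
    }

    @Override
    public void run() {                      // the new thread begins execution here
        System.out.println("processing " + count + " items starting at " + start);
    }
}

// Approach 2: implement Runnable and hand it to a Thread object.
class ListTask implements Runnable {
    @Override
    public void run() {
        System.out.println("running on " + Thread.currentThread().getName());
    }
}

public class Demo {
    public static void main(String[] args) {
        new ListWorker(0, 100).start();        // approach 1
        new Thread(new ListTask()).start();    // approach 2
        // main() continues concurrently with both new threads.
    }
}
```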
Thread creation is easy, right?
If you have glanced at the documentation at some of the provided links, you may have seen some examples with thread creation within a loop. With each loop iteration, a thread was created. So, suppose you have a list, maybe an array, and you want separate portions of it concurrently processed by separate threads. Making this happen is simple. All that is needed to get this started is to tell each thread where in the list it should begin, including a count of the number of items each should process. Then just tell the thread(s) to start.
Done and – almost – done. At least you have the list processing started. The actual difficult part is knowing for certain when the list processing was totally complete. You can easily arrange for each thread to kill itself off when it completes its part, but how do you know when any one or all of them really have completed, allowing the initial thread to continue?
One way, but we’ll get into more interesting ones shortly in subsequent articles, is through the notion of a JOIN. The purpose of a threading Join is to have the thread executing the Join wait until a particular other thread has ended. You can find here documentation of this for C#, Java, C/C++ . In all cases, the join function requires some notion of a handle to the thread being monitored; this is easily available to the thread which did the thread creation. In short, the creating thread can ask to wait on the completion of any particular thread it had created.
Other means of knowing the same thing are available via shared memory, a subject which we cover in the next article. As a bit of a preview, know again that every thread of a process shares the same address space; one thread can pass a given address to any other in the process and it means exactly the same thing. Additionally, all of these threads can use any of the processors of an SMP, wherein all of the processors can access the same physical memory; their shared data resides in this common memory. But if all or any of these threads are executing at the same time and at the same moment accessing and changing the same data, what keeps them from damaging their common data?
|
<urn:uuid:dd7cdb49-679d-49b0-9a2c-c65789965987>
|
CC-MAIN-2022-40
|
https://www.nextplatform.com/2017/01/05/essentials-multiprocessor-programming/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00250.warc.gz
|
en
| 0.959269 | 3,863 | 3.03125 | 3 |
The role of AI in cyber security and how it’s reinventing cyber security and cybercrime alike
Artificial intelligence poses both a blessing and a curse to businesses, customers, and cybercriminals alike.
AI technology is what provides us with speech recognition technology (think Siri), Google’s search engine, and Facebook’s facial recognition software. Some credit card companies are now using AI to help financial institutions prevent billions of dollars in fraud annually. But what about its applications in cyber security? Is artificial intelligence an advantage or a threat to your company’s digital security?
On one hand, artificial intelligence in cyber security is beneficial because it improves how security experts analyze, study, and understand cybercrime. It enhances the cyber security technologies that companies use to combat cybercriminals and help keep organizations and customers safe. On the other hand, artificial intelligence can be very resource intensive. It may not be practical in all applications. More importantly, it also can serve as a new weapon in the arsenal of cybercriminals who use the technology to hone and improve their cyberattacks.
The discussion about artificial intelligence in cyber security is nothing new. In fact, two years ago, we were writing about how artificial intelligence and machine learning would change the future of cyber security. After all, data is at the core of cyber security trends. And what better way to analyze data than to use computers that can think and do in nanoseconds tasks that would take people significantly more time?
Artificial intelligence is a growing area of interest and investment within the cyber security community. We’ll discuss advances in artificial intelligence security tools and how the technology impacts organizations, cybercriminals, and consumers alike.
Let’s hash it out.
How artificial intelligence cyber security measures improve digital security
Ideally, if you’re like many modern businesses, you have multiple levels of protection in place — perimeter, network, endpoint, application, and data security measures. For example, you may have hardware or software firewalls and network security solutions that track and determine which network connections are allowed and block others. If hackers make it past these defenses, then they’ll be up against your antivirus and anti-malware solutions. Then perhaps they may face your intrusion detection/intrusion prevention solutions (IDS/IPS), etc., etc.
But what happens when cybercriminals get past these protections? If your cyber security is dependent on the capabilities of human-based monitoring alone, you're in trouble. After all, cybercrime doesn't follow a set schedule — your cyber security response capabilities shouldn't either. You need to be able to detect, identify, and respond to the threats immediately — 24/7/365. Regardless of holidays, non-work hours, or when employees are otherwise unavailable, your digital security solutions need to be up to the task and able to respond immediately. Artificial intelligence-based cyber security solutions are designed to work around the clock to protect you. AI can respond in milliseconds to cyberattacks that would take humans minutes, hours, days, or even months to identify.
What cyber security executives think about AI
Capgemini Research Institute analyzed the role of AI in cyber security, and its report "Reinventing Cybersecurity with Artificial Intelligence" indicates that building up cyber security defenses with AI is imperative for organizations. This is, in part, because the survey's respondents (850 executives from cyber security, IT information security and IT operations across 10 countries) believe that AI-enabled response is necessary because hackers are already using the technology to perform cyberattacks.
Some of the report’s other key takeaways include:
- 75% of surveyed executives say that AI allows their organization to respond faster to breaches.
- 69% of organizations think AI is necessary to respond to cyberattacks.
- Three in five firms say that using AI improves the accuracy and efficiency of cyber analysts.
The use of artificial intelligence can help broaden the horizons of existing cyber security solutions and pave the way to create new ones. As networks become larger and more complex, artificial intelligence can be a huge boon to your organization’s cyber protections. Simply put, the growing complexity of networks is beyond what human beings are capable of handling on their own. And that’s okay to acknowledge — you don’t have to be prideful. But it does leave you with answering a critical question: What are you going to do to ensure your organization’s sensitive data and customer information are secure?
Artificial intelligence in cyber security: how you can add AI to your defense
Effectively integrating artificial intelligence technology into your existing cyber security systems isn’t something that can be done overnight. As you’d guess, it takes planning, training, and groundwork preparation to ensure your systems and employees can use it to its full advantage.
In an article for Forbes, Allerin CEO and founder Naveen Joshi shares that there are many ways that AI systems can integrate with existing cyber security functions.
Some of these functions include:
- Creating more accurate, biometric-based login techniques
- Detecting threats and malicious activities using predictive analytics
- Enhancing learning and analysis through natural language processing
- Securing conditional authentication and access
Once you’ve integrated AI into your cyber security solutions, your cyber security analysts and other IT security employees need to know how to effectively use it. This takes both time and training. Be sure to not neglect investing in your organization’s human element.
Companies that integrate artificial intelligence in their cyber security solutions
If you look around the industry, there are many heavy hitters that are already using AI as part of their solutions and services. Examples of businesses already integrating artificial intelligence cybersecurity tools include major industry players like:
- Check Point
- Palo Alto Networks
The downsides of artificial intelligence in cyber security: cost, resources, and training
Although there are many advantages to integrating artificial intelligence in cyber security, there are also disadvantages to be aware of. Among the chief challenges of implementing AI in cyber security is that it requires more resources and finances than traditional non-AI cyber security solutions.
In part, that’s because cyber security solutions that are built on AI frameworks — and those aren’t cheap. As such, they’ve historically been prohibitively expensive for many businesses — small to midsize businesses (SMBs) in particular. However, there are new security-as-a-service (SaaS) solutions that are making AI cyber security solutions more cost-effective for businesses. And, let’s just be realistic, it’s a lot cheaper to pay for effective cyber security solutions than it is to pay the fines, downtime, and other costs associated with successful cyberattacks.
Dealing with the vulnerabilities that artificial intelligence cyber security tools create
The use of artificial intelligence in cyber security creates new threats to digital security. Just as AI technology can be used to more accurately identify and stop cyberattacks, AI systems also can be used by cybercriminals to launch more sophisticated attacks. This is, in part, because access to advanced artificial intelligence solutions and machine learning tools is increasing as the costs of developing and adapting these technologies decrease. This means that more complex and adaptive malicious software can be created more easily and at lower cost to cybercriminals.
This combination of factors creates vulnerabilities for cybercriminals to exploit. Let’s consider the following example:
Imagine that one of your finance employees receives a phone call from “you.” In the call, “you” instruct them to transfer more than $2 million from the company’s account to a vendor or partner company. When they ask for verification, “you” assure them that it’s fine and for them to perform the transfer immediately so as to not hold up an important project.
However, the problem is that you haven’t called them — nor did you tell them to send millions of dollars to another account. In fact, as it turns out, a cybercriminal used a combination of social engineering and “vishing,” or a voice phishing call, to your employee while pretending to be you. However, they took their attack to the next level by using artificial intelligence-based software that “learns” to mimic and “speak” using your voice. This means that even if the victim knows what you sound like, they’re more likely to fall for the scam because it actually sounds like you making the call.
But how is this possible? XinhuaNet reports that there are AI software programs that, after just 20 minutes of listening to your voice, are capable of "speaking" any typed message in your voice.
The hidden danger of artificial intelligence in cyber security
One of the less-acknowledged risks of artificial intelligence in cyber security concerns the human element of complacency. If your organization adopts AI and machine learning as part of their cyber security strategy, there’s a risk that your employees may be more willing to lower their guard. We don’t need to re-state the dangers of complacent and unaware employees as we’ve already talked about the importance of cyber security awareness many times.
Adversarial AI: how hackers use your AI against you
Another risk of artificial intelligence in cyber security comes in the form of adversarial AI, a term used to refer to the development and use of AI for malicious purposes. Accenture identifies adversarial AI as something that "causes machine learning models to misinterpret inputs into the system and behave in a way that's favorable to the attacker." Essentially, this occurs when an AI system's neural networks are tricked into misidentifying or misclassifying objects due to intentionally modified inputs. Let's consider the example of a pair of sunglasses sitting on a table. A human eye would still see the sunglasses in the image; to a model fed an adversarially modified version of that image, the sunglasses effectively aren't there.
What’s the purpose of doing that? Let’s replace the table-and-sunglasses scenario with a self-driving vehicle. Imagine what would happen if a hacker decided to create adversarial images of stop signs or red lights. The AI would no longer see these traffic signals and would risk maiming or killing the vehicle’s occupant(s) along with other drivers, pedestrians, etc. Or, imagine that a cybercriminal creates an adversarial image that can bypass facial recognition software. For example, an iPhone X’s “FaceID” access feature uses neural networks to recognize faces, making it susceptible to adversarial AI attacks. This would allow hackers to simply bypass the security feature and continue their assault without drawing attention.
Without the right protections or defenses in place, the applications for cyber criminals could be virtually limitless. Thankfully, cyber security researchers recognize the risks associated with adversarial AI. They’re donning their white hats and are “building defenses and creating pre-emptive adversarial attack models to probe AI vulnerabilities,” according to an article in IBM’s Security Intelligence research blog. IBM’s Dublin labs is also involved in the effort and developed an adversarial AI library called the IBM Adversarial Robustness Toolbox (ART).
Even with the negative aspects of the increasing use of artificial intelligence in cyber security, we still think the good outweighs the bad. After all, a human being simply can’t process the amount of data — at the necessary speed —that’s needed to keep your network and data safe. AI can — and it can do it without needing to sleep, eat, or take a vacation.
Of course, all of this isn’t to say that people aren’t still needed in cyber security. The human element is still integral to cyber security. This is why more and more industry experts are arguing that AI should be integrated into the systems within each business’s cyber security operation center (CSOC). The main message we want to drive home is that it’s imperative to ensure you have the appropriate systems, training, and resources in place to effectively manage and use AI cyber security solutions. This will help you to reduce the risks associated with using artificial intelligence security tools.
This article was originally published on The SSL Store.
|
<urn:uuid:519d84cf-8e99-41cf-9595-b4f4b1180140>
|
CC-MAIN-2022-40
|
https://resources.experfy.com/ai-ml/artificial-intelligence-in-cyber-security-the-savior-or-enemy-of-your-business/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00250.warc.gz
|
en
| 0.93891 | 2,586 | 2.984375 | 3 |
In conventional Layer 3 forwarding mechanisms, as a packet traverses the network, each router extracts all the information relevant to forwarding the packet from the Layer 3 header. This information is then used as an index for a routing table lookup to determine the next hop for the packet.
In the most common case, the only relevant field in the header is the destination address field, but in some cases, other header fields might also be relevant. As a result, the header analysis must be done independently at each router through which the packet passes. In addition, a complicated table lookup must also be done at each router.
In label switching, the analysis of the Layer 3 header is done only once. The Layer 3 header is then mapped into a fixed-length, unstructured value called a label.
Many different headers can map to the same label, as long as those headers always result in the same choice of next hop. In effect, a label represents a forwarding equivalence class—that is, a set of packets which, however different they may be, are indistinguishable by the forwarding function.
The initial choice of a label need not be based exclusively on the contents of the Layer 3 packet header; for example, forwarding decisions at subsequent hops can also be based on routing policy.
Once a label is assigned, a short label header is added at the front of the Layer 3 packet. This header is carried across the network as part of the packet. At subsequent hops through each MPLS router in the network, labels are swapped and forwarding decisions are made by means of MPLS forwarding table lookup for the label carried in the packet header. Hence, the packet header does not need to be reevaluated during packet transit through the network. Because the label is of fixed length and unstructured, the MPLS forwarding table lookup process is both straightforward and fast.
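As a rough illustration of why that lookup is simple, and emphatically not Cisco code, label swapping amounts to an exact-match lookup in a small table, in contrast to the per-hop header analysis and routing table lookup of conventional Layer 3 forwarding. All labels and addresses below are invented:

```python
# Toy label forwarding table: incoming label -> (outgoing label, next hop).
LFIB = {
    17: (24, "10.0.0.2"),
    24: (31, "10.0.1.6"),
}

def forward(packet):
    """Swap the label and choose the next hop with a single exact-match lookup."""
    out_label, next_hop = LFIB[packet["label"]]   # fixed-length, unstructured key
    packet["label"] = out_label                   # label swap; the Layer 3 header is untouched
    return next_hop

packet = {"label": 17, "payload": b"..."}
print(forward(packet))    # -> 10.0.0.2, and the packet now carries label 24
```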
|
<urn:uuid:a3b9a46a-9db9-4196-896c-eb28d378a912>
|
CC-MAIN-2022-40
|
https://www.cisco.com/c/en/us/td/docs/routers/asr9000/software/asr9k_r4-1/mpls/configuration/guide/mpls_cg41asr9k_chapter3.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00250.warc.gz
|
en
| 0.932038 | 375 | 3.328125 | 3 |
Exploring if IPv6 is actually faster across the Internet than IPv4
In the first part of this blog, we reviewed some recent studies on the performance of IPv6 compared with IPv4. Geoff Huston’s research concluded that IPv4 and IPv6 can be equivalent in many cases. Facebook’s Paul Saab concluded that their customers experience faster performance over native IPv6. Similar to the measurements of the amounts of IPv6 traffic observed on the Internet, it depends on where you take your measurements. Now, let’s consider the IPv6 protocol and see if there are any fundamental characteristics of IPv6 that could make it faster in certain circumstances.
One of the key differences between IPv4 and IPv6 is that IPv4 relies heavily on Network Address Translation (NAT) and Port Address Translation (PAT) due to IPv4 address exhaustion. IPv6 uses global addresses and typically does not rely on NAT in any way. For IPv4, every time a packet crosses a NAT/PAT middle-box, the packet source address is re-written, and the TCP/UDP source port is changed. The packet is then forwarded on to the IPv4 destination address. The NAT/PAT system then has to maintain connection state, reversing the process on the response packets, so that the return traffic can reach the original packet source.
Many applications create many connections, and numerous users sit behind a single NAT/PAT. As a result, there is congestion, not for Internet bandwidth, but for TCP/UDP port space and limited IPv4 address pool resources. The packet manipulation performed by the NAT function can add latency and jitter to IPv4 packets. There are many benefits to avoiding the use of NATs. As ISPs explore the use of Carrier Grade NAT (CGN), they start to understand how it can negatively impact the end-user experience by degrading some applications. Lee Howard has given presentations on the TCO for CGN and has concluded that service providers should work to deploy as little CGN as they can get away with.
Refreshingly, IPv6 does not need or use any NAT or PAT functionality and restores the native end-to-end nature of the IP protocol. IPv6 performance is streamlined as a result of its native communication.
Checksums for Determining IP Packet Bit Errors
IPv4 was created at a time when WAN transmission links were prone to bit errors. Therefore, IPv4 implements a header checksum to determine if there is an error in the fields in the IPv4 header itself. With IPv4, as each packet is forwarded across a Layer-3 hop, the header Time To Live (TTL) value is decremented by one. This results in the router having to re-compute the header checksum each hop along the transmission path. There are methods to detect bit errors at Layer-1, there are checksums at Layer-2, a checksum in the header at Layer-3, and TCP implements options for retransmissions, of course, so applications can check if an error occurred in the payload. This all adds up to lots of checking to determine if a bit error has occurred, adding to the protocol overhead, and potentially decreasing performance.
In contrast, IPv6 was created when transmission link quality was improving. We now have higher quality copper connections and cables and far greater use of fiber optic communications with vastly better error rates than links from the 1980s. IPv6 does not have any type of header checksum. IPv6 still uses a Hop Limit value within its header and that is decremented every Layer-3 hop along the transmission path, but that does not result in any header checksum calculation. Another difference between IPv4 and IPv6 is that IPv4 headers are variable in length while the basic IPv6 header is a fixed 40 bytes. Even though the IPv6 header is larger, it can still be hardware accelerated like IPv4 and the performance difference is negligible.
Both IPv4 and IPv6 support the same transport layer protocols above them, and both use checksums for TCP and UDP packets. There are TCP checksums for IPv4 and IPv6 and UDP checksums for IPv4 and IPv6. These checksums are functionally equivalent, the computation is only slightly different for IPv4 and IPv6, and their performance is virtually identical. When an IPv4 packet traverses a NAT that modifies the TCP or UDP checksum, then those header checksums must also be recomputed, which adds a small amount of delay. Without NAT, IPv6 packets won't need their TCP or UDP header checksums recomputed along the transmission path.
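For reference, both the IPv4 header checksum and the TCP/UDP checksums are variations of the same 16-bit one's-complement sum. A minimal sketch of that calculation (in the style of RFC 1071, not production code) looks like this:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum used by the IPv4 header and by TCP/UDP."""
    if len(data) % 2:                # odd-length data is padded with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF
```

Because the IPv4 TTL field changes at every hop, every router along the path repeats this calculation over the header. IPv6 simply omits the header checksum, and with no NAT rewriting addresses or ports, the TCP/UDP checksum normally only has to be computed by the two end hosts.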
Testing IPv6 and IPv4 Performance
There are many different ways to test end-to-end IPv4 and IPv6 connectivity, speed, throughput, packet loss, and any other number of communication measurements. Testing end-to-end network performance requires tools that load a network with IPv6 traffic. This could be synthetic or real traffic. There are numerous network tools that you can use to test end-to-end connectivity and gain some round-trip-time (RTT) measurements or other performance statistics. Common tools include: ping, traceroute, Microsoft pathping, MTR, tcptraceroute6, Cisco IOS IP SLA, iperf, xjperf, pchar, among others. However, often it is easier to simply browse to one of many speedtest sites and click on the test button.
Speedtest.net has been a long time “go to” site for testing network performance. However, it only tests raw download and upload performance and ping latency to the closest test point. Unfortunately, it doesn’t compare IPv4 to IPv6 speeds.
There are several dual-protocol tests that measure the performance of your IPv6 Internet link. You just need to be sure that it is one that tests IPv6 connectivity. This speedtest site http://ipv6.speedtest.premieronline.net is operated by Premier Communications in the U.S. If you are in the UK, you can check out http://ipv6-speedtest.net and if you are in Japan, this site would work for you http://speedtest6.com. Many of these speedtest sites seem to be operated by Oookla. Fast.com also offers a free Internet Speed Test that works over IPv6.
Comcast has put a tremendous amount of effort into their IPv6 deployment over the past decade. The graphs from World IPv6 Launch site shows that a significant amount of Comcast subscribers are now using IPv6.
There have also been documented cases where using IPv6 with Comcast can be faster than using IPv4. Comcast has their own Speedtest system that compares the performance of IPv4 and IPv6. If you are a Comcast subscriber you can try this by browsing to http://speedtest.xfinity.com/. One subscriber, Shumon Huque, found that IPv6 ping latency was 32 ms compared to IPv4 ping latency of 63 ms, and his download and upload speeds were both significantly faster with IPv6.
I have been a long-time satisfied Comcast subscriber with solid dual-protocol connectivity to my house in Denver. Using this speedtest with the local testing location results in nearly equivalent performance for IPv4 and IPv6.
If you are an AT&T subscriber, you can use their speedtest site. Charter Communications has a speedtest site. Your service provider may have their own speedtest site, but you may want to try to verify yourself it is tests IPv6 and IPv4 performance and compare the results.
As they often say, "Your Mileage May Vary (YMMV)". Depending on the nature of your applications, operating system, network equipment, and Internet connectivity, IPv6 might actually be slower for you than IPv4. This could be the case if you are relying on 6to4 and Teredo tunnels. However, if you have native IPv6 connectivity and a robust ISP, you may actually be getting better performance over IPv6 transport. However, there is much work remaining to make IPv6 as ubiquitous as IPv4.
There are still many content providers and popular web sites that are not yet using IPv6. There is still work to be done to complete IPv6 and deploy it within corporate networks to end-users. Stated another way, many organizations are not benefiting from the same performance improvements that Facebook is experiencing by enabling IPv6. And when organizations deploy IPv6, they may experience a faster Internet.
|
<urn:uuid:7103616c-ed19-407c-a30e-348d67ea78df>
|
CC-MAIN-2022-40
|
https://blogs.infoblox.com/ipv6-coe/can-ipv6-really-be-faster-than-ipv4-part-2/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00250.warc.gz
|
en
| 0.923954 | 1,801 | 2.8125 | 3 |
In this Python OCR crash course, we will learn how easy it is to get started with OCR and Python, the world’s most popular programming language.
Python OCR: A Crash Course
OCR is short for optical character recognition, an AI technique designed to extract written characters from images.
This AI technique, used in conjunction with other natural language processing techniques, can be used to create innovative software and app features. It is commonly used for workplace digitization and automation tasks such as the digitization of business records, though OCR can be used to automate other business functions that require the reading of written text, such as:
- Digitizing IDs
- Digitizing receipts
- Digitizing invoices
- Digitizing books
Python is one of the best languages to use for NLP, for several reasons:
- Python is a very user-friendly language
- Its global support from a passionate community of developers makes it one of the best languages to study for AI applications such as OCR
- There are libraries and tools for OCR development, as well as a wide range of other resources for NLP, AI, machine learning, data science, and more
Below will look at a few of the resources and technologies available for those interested in using Python for OCR.
Python Tesseract: An Open-Source OCR Engine
Python Tesseract, as the title of this section suggests, is Python's open-source OCR package: a wrapper for Google's Tesseract-OCR engine. It is the best starting place for anyone interested in using Python for OCR.
With the right support, Python Tesseract can recognize over 100 languages. It can also be trained to recognize those that aren’t already supported.
Anyone familiar with Python will have no trouble getting started with this engine.
It is installed the same way other packages are, through commands such as “pip install” or “brew install.”
The primary function you will be using, image_to_data, includes easy-to-understand parameters that allow you to:
- Pass an image into the Tesseract engine
- Define the language to look for
- Customize the output for pandas
- Define the output type
Prerequisites are basic. You need Python 3.6 or above, the Python Imaging Library, and Google Tesseract OCR.
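A minimal sketch of a typical call, with the file name as a placeholder:

```python
from PIL import Image
import pytesseract

img = Image.open("invoice.png")    # placeholder input image

# Plain text extraction.
text = pytesseract.image_to_string(img, lang="eng")

# Word-level results (text, confidence, bounding boxes) as a dictionary;
# output_type=pytesseract.Output.DATAFRAME returns a pandas DataFrame instead.
data = pytesseract.image_to_data(img, lang="eng",
                                 output_type=pytesseract.Output.DICT)

print(text)
print(data["text"][:10], data["conf"][:10])
```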
Although Tesseract is the main engine used for OCR, it is important to use another library for preprocessing and other tasks.
Using Tesseract with OpenCV
Using OpenCV, you can perform tasks essential to generating accurate OCR results, such as image preprocessing.
As with Tesseract, it is easy to use – simply install it using "pip install" or "brew install," import it in your Python program, and begin using its functions.
Here are a few functions of this library:
- Read and write images
- Resize images
- Set a region of interest within an image
- Access and modify pixel values and image properties
- Rotate images
- Drawing basic geometric shapes within files
- Detecting features within images, such as edges and blobs (binary large objects)
- Perform a range of other manipulations, such as blurring and smoothing
In short, OpenCV can perform a number of tasks that can be used to preprocess images and enhance your ability to accurately extract characters from those images.
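A small preprocessing sketch is below; the file names are placeholders, and which steps actually help depends on your source images:

```python
import cv2

img = cv2.imread("scan.jpg")                          # load the original image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # OCR generally works best on grayscale
gray = cv2.resize(gray, None, fx=2.0, fy=2.0,         # upscale small text
                  interpolation=cv2.INTER_CUBIC)
blur = cv2.medianBlur(gray, 3)                        # remove speckle noise
_, binarized = cv2.threshold(blur, 0, 255,            # Otsu binarization
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("scan_clean.png", binarized)              # hand the cleaned image to Tesseract
```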
NumPy is one of the most widely used data science libraries in Python.
The more complex your OCR programs get, the more critical it will be to perform data-heavy manipulations on your images.
This means you will need to use a library designed for such tasks – in this case, NumPy.
One of the biggest reasons to use it is because it offers a number of features designed for multi-dimensional arrays.
Using it with the two packages mentioned above is straightforward: install NumPy as you would any other package and import it in your files.
Additional functionalities of NumPy include:
- Creating arrays
- Defining attributes for those arrays
- Arrange the elements within an array
- Performing mathematical calculations on those arrays
- Applying conditions to calculations on arrays
- Plotting arrays graphically
When working with OpenCV, NumPy can be used to perform tasks such as defining and applying different filters to images and performing more complex manipulations to images.
NumPy can also assist with other complex operations such as manipulating videos.
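Since an OpenCV image is simply a NumPy array, the two libraries interoperate directly. A short sketch, with the kernel values and file name chosen only for illustration:

```python
import cv2
import numpy as np

img = cv2.imread("scan.jpg", cv2.IMREAD_GRAYSCALE)    # the loaded image is a NumPy array

print(img.shape, img.dtype)                           # e.g. (1100, 850) uint8

# Define a 3x3 sharpening kernel as a NumPy array and apply it with OpenCV.
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]])
sharpened = cv2.filter2D(img, -1, kernel)

# Plain array operations work on pixels directly.
inverted = 255 - sharpened                            # invert light-on-dark scans
background = (sharpened > 200).mean()                 # fraction of near-white pixels
print("background fraction:", background)
```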
Python OCR is perhaps the best, easiest way to get started with OCR, for the reasons mentioned above. Not only is the language easy to learn, it contains libraries and features that are robust, widely supported, and professional-grade.
That being said, Python isn’t the only language that offers OCR, NLP, or AI support. In some cases, other languages may be more suitable since Python is not always the fastest.
Those interested in taking their career to the next level, for instance, may want to investigate other programming languages.
C++, for instance, is a fast programming language, which makes it useful to know for those interested in developing industrial-grade OCR applications.
Java, likewise, is a lower-level language than Python and it includes a number of native image recognition libraries, making it easy to develop OCR apps from scratch. When evaluating these options, start by assessing your own needs and the advantages offered by the various options, then choose the one that is most suitable for your use case. In some cases, Python will be the best choice, in other cases, a different language may be best, or, in others, you may find that no-code automation tools are best.
|
<urn:uuid:b285ac76-88cb-48e2-aca3-f1f156119826>
|
CC-MAIN-2022-40
|
https://www.digital-adoption.com/python-ocr/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00250.warc.gz
|
en
| 0.923496 | 1,217 | 3.328125 | 3 |
Co-authored with Dani Schrakamp
All aboard the digital express
Communities and countries of all sizes are in motion toward a digital future…and if not, they risk being left behind. This then begs the question, what does ‘digitize’ really mean? Certainly, there’s no instruction manual for the task. The roadmap features some identifiable landmarks—flagged by early pioneers—but there is still plenty of unchartered territory. In fact to navigate this rapidly changing landscape, we definitely have our work cut out for us, both in the developed and emerging parts of the world.
We frequently talk about all things becoming connected, but in reality, the majority of the world's inhabitants still have little to no Internet access, a disturbing fact when you consider the socio-economic benefits that technology affords. The digital divide is real. Despite the proliferation and rapid advancement of technology, many just are not receiving the benefits of the changes made in ICT.
However, an important tool in shedding light on the digital puzzle is the sharing of success stories and best practices. Sharing of experiences and expertise can open the discussion on how digital government can and should evolve. Using the power of the global community, the ever-increasing propagation of technology can begin to help digital countries develop faster and more efficiently through sharing and learning. And by bringing to light the stories of transformation, large and small, around the globe we hope to offer guidance and leadership to those embarked on the journey or planning a trip soon.
Where in the world is the digital citizen?
So how exactly do you separate fact from fiction and who is just presenting smoke and mirrors? Since the discussions concerning the digital shift began, there have been a number of myths and promises. With the growing numbers of examples to draw from, we are now in a much better position to assess the possible processes of digitization in a more realistic manner. And based on the experiences of the early-adopters, we can begin suggesting the steps that governments can take and/or avoid in planning their digital country strategy.
This week, our digital citizen is a jet setter. Think Carmen Sandiego circa 1990. First stop, the United Kingdom. The country is in its second phase of digitization planning, which includes efforts such as public sector development, accelerated cybersecurity innovation, and public-private initiatives like the British Innovation Gateway (BIG). Strategic investment to accelerate existing government goals for driving economic growth through high-tech innovation is helping the UK to becoming one of the top digitized countries in the world.
A quick trip over to the continent and our citizen is making the next stop in France. Drawing on a dynamic start-up culture, the reform-driven country plans to extract value from its efforts to enhance security, increase productivity, create jobs, and improve citizens’ lives through digitization. The Cisco Networking Academy program plans to open 1,500 additional academies and train upwards of 200,000 students in France, giving the French workforce the skills needed to accelerate the country’s digitization process. Not only is France expected to gain a GDP boost from 1-2 percent, this transformation will contribute to France’s overall global competitiveness by supporting job growth, education, cybersecurity, innovation and entrepreneurial initiatives.
We’re off again and on to India, where Smart City Bangalore is a prime example of a bottom-up digital country strategy, starting at the smart city level. Electronics City, in a newly developed area of Bangalore, is meant to be a model for smart cities, not just in India, but also around the world. Our citizen learns that for this, and for the 90+ other smart cities planned for India under the new government’s plan, its leaders are thinking about better ways to deliver citizen services and foster education initiatives to nurture the next-generation workforce. India is working toward a scalable blueprint on how to continue to be relevant in the rapidly evolving global environment.
And finally, we arrive in Singapore. While visiting, our citizen enjoys ubiquitous Internet connectivity—Singapore’s government has so far connected almost 99 percent of its residents to an ultra-high-speed network. Our citizen also can’t help but notice that Singapore is a bustling, world-class hub for modern business, enabled by the push for high technology adoption and by allowing innovation to flourish. In this year’s Global Information Technology Report, Singapore takes the top rank of the world’s most tech-savvy nations, recognizing the government’s successful promotion of innovative ICT and of providing online services to its citizens.
Well, we’re now approaching 2016, and while we might not have quite ended the traffic jam conundrum, the future of digital transformation in government is here and continues to build momentum. The answer is not a simple one, or a simple fix for technology alone. It is clear that digital transformation, at any level, will not happen overnight. However, it can be said that the future of digital success will rely on high collaboration and best practice sharing. Because amidst all the disruptive change that is due to come our way, governments must recognize they are not necessarily alone. Do’s and don’ts can and should be widely shared to point others on their digital journeys toward success.
Stay tuned for next Wednesday’s post to discover more information about cybersecurity and staying safe online in honor of #CyberAware month. And be sure to check back each week as we explore new themes, challenges and observations.
Additionally, you can click here and register now to get your questions answered on how to become the next digital community.
Finally, we invite you to be a part of the conversation by using the hashtag #WednesdayWalkabout and by following @CiscoGovt on Twitter. For more information and additional examples, visit our Smart+Connected Communities page and our Government page on Cisco.com. Enjoy the Wednesday walkabout!
|
<urn:uuid:b4f9d4d4-cee2-457b-9d43-ec4d3e7c7140>
|
CC-MAIN-2022-40
|
https://blogs.cisco.com/government/wednesdaywalkabout-series-digital-countries-stories-of-success
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00250.warc.gz
|
en
| 0.934557 | 1,219 | 2.59375 | 3 |
The Internet of Things, in the Real World
The Internet of things (IoT) is a term coined to describe the increasing number of devices embedded with electronics, software, sensors, and network connectivity that collect and exchange data. Everything from smart TVs, medical devices, home automation tools, and intelligent building products are getting connected to private and public networks. All these are considered part of the IoT ecosystem. Additionally, programmable logic controllers (PLCs) are increasingly being installed to allow the automation of devices that control traditional analog machinery such as factory assembly lines, making them digitally accessible for improved monitoring and control purposes. With all these connected devices, data and analytics are being collected in areas that were never possible before giving us unprecedented opportunities to improve outcomes, efficiencies, experiences, and offerings. IoT has been making its rounds throughout the tech community for the last few years and represents an innovative idea of the future. Is IoT ready for primetime, and what does it really mean for you and me in our day to day lives?
First world economies are seeing the fastest adoption of IoT technologies due to the increasing commoditization of sensors and the ubiquity of connectivity. Starting with the birth of the modern smart phone, sensors have become part of our everyday lives. The demand for these millions of sensors over the last decade has created a manufacturing infrastructure that has driven the cost and modularity to what I call the "Maker" level. This is creating an easy entry point for startups and hobby enthusiasts, giving them even more opportunities for innovation in this growing market. It is sobering to realize that GPS, light, proximity, touch, infrared, step, fingerprint, iris, and photo sensors are only a few of the many components that are part of devices that we carry around in our pockets every day.
Admittedly, there is plenty of attention around IoT and it is seemingly unavoidable not to get absorbed into the allure of the latest technologies associated with it. It is difficult to ignore its promise of sci-fi like home automation, intelligent digital assistants, proactive healthcare, and advanced equipment diagnostics. Unfortunately, as we pull back the layers and look past the sales and marketing hype, you’ll find that the current landscape is far from ideal.
The increased bandwidth and addressing needs coming from IoT devices, and the backend infrastructures built to support them are real bottlenecks to tackle in the near future
It is easy to consider all these technological advancements great, but we are at a point where issues with interoperability, increased data volume, connectivity demands, and security are all important pieces that we need to explore and consider for the successful future of IoT. To date, other than some connectivity guidelines (i.e. Bluetooth, Wi-Fi, ZigBee, etc.) there are no standards to guide development of either hardware or software. This leaves us with a situation where "Wild, Wild West" rules still apply.
As with a lot of new technologies, there are many players crowding the field. This has led us to a huge lack of integration and just like with the Beta vs. VHS and Blu-ray vs. HD-DVD standardization issues of the past, I can envision us needing to make similarly difficult "may the best one win" decisions in the future. If you aren't building your own devices or conforming to a single vendor solution, interoperability is going to be difficult. As soon as you start to try to integrate devices, you'll quickly realize that you may need to develop a custom solution. I've found that it is becoming increasingly improbable to escape vendor lock-in. You'd hope this is not by design, but some vendors tend to be so short sighted that they are only interested in the quick buck. They count on their reputation for an immediate market grab, leaving us to fend for ourselves when trying to decide on the best fit for our needs. Luckily, services like If This Then That (IFTTT), Zapier, and Microsoft Flow are at least thinking about the future of interoperability. There are still many limitations, but let's hope that this trend continues and more manufacturers join this movement to play well with others.
Remember IPv6? It is time for it to become a reality very soon. The increased bandwidth and addressing needs coming from IoT devices, and the backend infrastructures built to support them are real bottlenecks to tackle in the near future. IP addresses are numerical labels used by devices to connect and route them through the internet. The IPv4 systems we use today give us approximately 4.3 billion addresses, and even though that is a very large number, we are and have been running out for a while. Back in 1998, IPv6 was developed to address this IPv4 address exhaustion issue and as soon as we get around to converting, we will have approximately 3.4×10^38 addresses. At the continued rate of devices getting on the net, it seems nothing but inevitable, and I've already seen Amazon Web Services, Comcast, and others seriously stepping up their efforts to make this a reality. Additionally, several new technologies like 5G (for mobile) and seven-core glass fiber cables (for physical connections) make the future look promising for our increasing bandwidth needs.
The biggest issue I see with IoT is the illusion of security. The lack of standards, users' perceived need for simplicity and vendor complicity or complacency, has left us exposed with weak passwords, poor protocol compliance, and little to no update procedures. For even the most novice of hackers, there is an abundance of vulnerabilities to exploit. There isn't a month that goes by without another major site scandal regarding some sort of security breach, and that is just what is known and reported. By comparison, there are thousands of breaches that go undetected and we are just continuing to create a more target-rich environment. Fortunately, some simple solutions are within our reach and by adding encryption, proactive security patching, and some stronger password capabilities, we will start to go in the right direction. As with everything, security should not be an afterthought and should be one of the first things considered when developing a successful solution or product.
IoT is a great, growing technology and I am excited to see where it can take us. It is not quite where I’d like it to be, but the potential is there for success. I hope to see many new entrants in the space in the coming years, and I look forward to all the new innovation. As long as we are realistic and take a measured approach to adoption, I can see it becoming a seamless part of our day to day life.
|
<urn:uuid:c9a27e8d-2621-4b30-bde6-c9833c83768d>
|
CC-MAIN-2022-40
|
https://internet-of-things.cioreview.com/cioviewpoint/the-internet-of-things-in-the-real-world-nid-18414-cid-133.html?utm_source=google&utm_campaign=cioreview_topslider
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00250.warc.gz
|
en
| 0.959825 | 1,378 | 2.625 | 3 |
The courses presented to date have concentrated on simple sequential and partitioned data sets. In this course you will look at other types of data that can reside on a mainframe, in particular VSAM data sets and z/OS UNIX files and how they can be accessed. You will also see the benefits of creating generation data sets and the JCL code used to create and reference them. The last module concentrates on placing data onto tape, providing some best practices when dealing with this medium.
Operators and programmers who need to know how to code and submit JCL batch jobs.
Completion of the JCL Coding Basics – DD Statements Interskill course, or appropriate knowledge.
After completing this course, the student will be able to:
- Code JCL statements to create and reference generation data sets
- Identify JCL parameters used when dealing with VSAM data sets
- Access and reference z/OS UNIX files using JCL
- Describe how tape stores data and how it can be referenced using JCL
Accessing and Creating Generations of Data
Generation data set concepts
Accessing an existing Generation Data Set
Building a GDG base entry
Creating a new generation of a data set
Common GDG problems
Version 2 PDSE Generations
Dealing with VSAM Data Sets
Difference between VSAM and other data sets
Referencing an existing VSAM data set
DD statement parameters that can be used when creating VSAM data sets
Common problems when dealing with VSAM data sets
Working with z/OS UNIX Files
Why the need to access z/OS UNIX files?
Accessing an existing z/OS UNIX file
Creating a new z/OS UNIX file
Common z/OS UNIX problems
Working with Tape Data Sets
The need for tapes
Tape data set recording formats
DD statement parameters used when referencing tapes
Common tape handling problems
|
<urn:uuid:62bd313f-346f-4a72-8652-641e76ff406f>
|
CC-MAIN-2022-40
|
https://interskill.com/?catalogue_item=jcl-z-os-advanced-jcl-data-set-use&noredirect=en-US
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00250.warc.gz
|
en
| 0.721053 | 414 | 2.84375 | 3 |
Things are starting to look better for the U.S., as COVID-19 vaccines are steadily being distributed across the country. But just because we are making headway, it doesn’t mean that we should become complacent.
According to the CDC, we should still practice social distancing, wearing masks and washing our hands regularly. These safety precautions are important until enough people have received the vaccine and we’ve reached herd immunity. Tap or click here for insider tech tips for scheduling your COVID-19 vaccination.
It is nearly impossible to know for sure where the next outbreak will be. Fortunately, a new tool can help. It can predict possible hotspots a week in advance through algorithms and an advanced prediction model.
Here’s the backstory
Data Driven Health (DDH) has developed its own COVID-19 hotspot prediction map and claims to be 92% accurate. Using data from private and public sources, the FluDemic website can tell you where a hotspot could be in the next seven days.
Its use will go beyond COVID-19. The prediction model has been set up to track pandemics as well as other infectious diseases. DDH explains that it targets multiple populations and predicts which geographies are the most susceptible for future surges.
The tool uses clinical and non-clinical datasets fed into proprietary machine learning algorithms. It then analyzes the data and predicts potential outbreaks, socioeconomic risks and quantifies overall impact beyond viruses.
“The goal is to assist government, health system administrators, community leadership, and all people to be proactive in their decision-making through data. Our evolving real-time datasets include cases and testing, hospital preparedness and capacity, economic impacts, social risk factors, and behavioral health,” DDH explains.
How to use FluDemic
Being 92% accurate down to the neighborhood level is rather impressive, but the tool does have some shortcomings. In sparsely populated areas, the data isn't as accurate as it is in heavily populated areas, so there could be some discrepancies.
Also, the data that is made available to the public for free uses a certain set of data points, but a paid-for premium version pulls data from more sources. This gives governments and local officials an increased scope, predicting down to block-level.
On the main screen of FluDemic, you have several options to choose from, but the Dashboard will give you an overall view of your area.
It will display new daily cases, total cases, new daily deaths and average cases in a seven-day period. Clicking on Explore on the right-hand side of the page, you can see the dashboard figures’ historical charts and graphs.
The Interactive Map is where the prediction model comes in.
- Navigate the map to your city. (Note: The site asks to know your location. Select allow and it zooms into your area automatically.)
- On the right-hand side, make sure that your Country and State are correct.
- Select your County from the drop-down.
- Click on Apply Criteria.
- Below that, select which seven-day metric you want to see.
- Click Animate.
The website will take a few seconds to gather all the data and will then show you how (if at all) the virus is predicted to spread over the next seven days. Here is a screenshot for the Phoenix area over the next seven days:
Using the AI predictive model, the figures at the top of the page will also change to reflect the predictions. This will include total deaths, the increase in daily new cases and the average seven-day cases.
The information contained in this article is for educational and informational purposes only and is not intended as health or medical advice. Always consult a physician or other qualified health provider regarding any questions you may have regarding a medical condition, advice, or health objectives.
|
<urn:uuid:facf6ca0-963c-4b4a-976d-f2f54b3ff951>
|
CC-MAIN-2022-40
|
https://www.komando.com/coronavirus/fludemic-covid-prediction-map/781956/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00450.warc.gz
|
en
| 0.937862 | 804 | 2.9375 | 3 |
With over 200+ million repositories across the globe, GitHub is undoubtedly the most popular open-source code management platform on the planet. But like any open-source system, GitHub security needs to be a top priority.
Whether you’re a developer or an organization using GitHub, this extensive guide will explain everything you need to know about GitHub security and safety best practices.
Why is GitHub Security and Safety So Important?
GitHub is trusted by over 73 million developers. 84% of Fortune 100 companies rely on GitHub as well. It’s used by enterprise organizations and hobbyist developers alike.
Developers worldwide rely on GitHub’s open-source code for various software development projects. But the re-use of open source code creates vulnerabilities between dependencies and repositories.
GitHub hosts public and private repositories, which allows remote developers to upload source code and share it with other contributors. That’s why so many organizations rely on GitHub for open-source collaborative development projects.
Public repositories on GitHub are just copies of private folders stored on a developer’s device. So a developer can work independently, then share the final code once it’s done.
This structure means that some developers unknowingly share private files and sensitive company data to a public repository on GitHub. So a GitHub user could accidentally share SSH keys embedded in code to a public, searchable database.
The use and continuous re-use of public code make it easier for hackers and those with malicious intent to infiltrate various systems. These vulnerabilities increase if something like SSH keys are copied into a public database. SSH keys are just one of the many types of files, code, and sensitive data that have been published on GitHub’s public repositories.
GitHub has built-in tools and features for security and safety. But these aren’t bulletproof, especially when it comes to human error. So if your organization is using GitHub, you need to have additional security and safety measures in place to ensure your developers aren’t unknowingly sharing sensitive data or using code with vulnerabilities.
How GitHub Security and Safety Works
Before we dive deep into GitHub security best practices, let’s first take a closer look at GitHub’s security offered out-of-the-box.
GitHub strives to constantly improve its security, audit, and compliance solutions. They have security measures in place to remove spam and eliminate abuse. This is accomplished through practices like incident response investments and anti-abuse initiatives.
Even though the platform is open-source, GitHub has its own security lab with a world-class research and development team. These security experts do their best to stay ahead of the curve and identify threats before they become a problem. The platform empowers the GitHub community to achieve top-of-the-line security via collaborative efforts.
GitHub’s product security engineering team begins with extensive development training. The team uses practices like threat modeling, security reviews, and threat testing to detect and prevent vulnerabilities in the development lifecycle.
GitHub also relies on the “red team” cybersecurity strategy. These internal team members play the role of a hacker or user with malicious intent. It’s their goal to hack the system and give the “blue team” a chance to stop the threat.
Here are some other noteworthy standouts of GitHub’s security and safety features:
- Bug Bounty Program — Anyone in the development community can get rewarded for CodeQL queries that detect and prevent open source vulnerabilities at scale. Some users have been rewarded bounties of $30,000 or more for assistance with critical vulnerabilities. It’s paid out more than $1.5+ million in bounties since launching back in 2014. This program really gives developers an incentive to look for threats and solutions in GitHub.
- Advanced Security License — Extra security features are available to GitHub users on an advanced security license. These features include code scanning, secret scanning, and dependency reviews. This can help you search for security vulnerabilities and coding errors while showing you the complete impact of changes to dependencies. These advanced features are also available for public repositories on GitHub.
- Data Privacy — GitHub is GDPR compliant through action, not certifications. The platform also offers AICPA SOC 1 and SOC 2 compliance reports. These include IAASB International Standards on Assurance Engagement, ISAE 3402, and ISAE 2000. You can also view a SOC 3 report for GitHub’s Enterprise Cloud system. GitHub is a member of the Cloud Security Alliance and supports FedRAMP LI-SaaS Authorization to Operate (ATO).
But no system is 100% secure. I want to quickly cover some real instances of GitHub security incidents to showcase where the platform can be vulnerable for your organization or development project:
Example #1: Optimus Scanner Malware
The GitHub Security Labs detected the Optimus Scanner Malware on 26 repositories in March of 2020. This malware targets open-source code and becomes active when a developer downloads an infected project from a public GitHub repository.
Once downloaded, the scanner deploys a RAT (remote access trojan) that infects the source code and files for the project. This essentially allows the malicious user to take control of the machine where the scanner has been installed.
Example #2: Unauthorized Crypto Mining
Justin Perdok, a security engineer at GitHub, discovered that cybercriminals were using GitHub repositories for unauthorized cryptocurrency mining. These hackers added malicious code to GitHub repositories.
This led to nearly 100 unauthorized crypto mining applications being unwillingly deployed to owner’s repositories.
Example #3: Corporate Secrets Leaked
More than two million corporate secrets were leaked on GitHub in a single year, according to a recent report. 12% of these leaks came from public repositories, and 85% came from the personal repositories of different developers.
Examples of the secrets found include:
- Google keys
- Development tools
- Data storage
- Cloud providers
- Private keys
- Social networks
- Collaboration tools
- Version control platforms
- Messaging systems
- Payment systems
Things like usernames, passwords, API keys, and other sensitive corporate information were accidentally exposed to the public on GitHub.
How to Get Started With GitHub Security and Safety
As you can see from the examples listed above, GitHub’s built-in security features alone aren’t enough to protect your organization from threats, breaches, and users with malicious intent. There are additional steps you must take into consideration if you want to apply GitHub security best practices to your process.
If you’re using GitHub to save time with open-source code or you’re using it for collaborative development, make sure you follow the tips below:
Never Store Credentials or Sensitive Data in Your Code or GitHub Files
If you run a quick search on GitHub, you'll quickly see how big of a problem this is. So many developers and organizations have stored passwords in their repositories. I found more than 400,000 commits referencing removed passwords.
That query doesn’t cover the users who weren’t using such an obvious commit message. It also doesn’t include developers who tried to remove their history.
To prevent yourself or your developers from accidentally storing passwords or credentials in GitHub code, you can use tools like pre-commit Git hooks or git-secrets. These tools analyze your code and search for sensitive data and passwords before information is pushed into a GitHub repository. If the tool finds something that looks like a password or sensitive file, the commit is rejected.
This process may slow down your commit pushes, but it’s worth the extra step to ensure nothing sensitive is being pushed into GitHub.
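To make the idea concrete, here is a minimal, heavily simplified sketch of the kind of check those tools perform, written as a Python pre-commit hook. The patterns and messages are illustrative assumptions, not the actual rule sets shipped by git-secrets or similar tools:

```python
#!/usr/bin/env python3
"""Minimal pre-commit hook: block commits that appear to contain secrets.
Save as .git/hooks/pre-commit and make it executable. Illustrative only."""
import re
import subprocess
import sys

# Simplified patterns; real scanners use much larger, tuned rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                # AWS access key ID format
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),    # private key headers
    re.compile(r"(?i)(password|passwd|api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{6,}"),
]

def staged_diff() -> str:
    """Return the text of all changes currently staged for commit."""
    return subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

def main() -> int:
    findings = []
    for line in staged_diff().splitlines():
        if not line.startswith("+"):      # only inspect added lines
            continue
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append(line.strip())
    if findings:
        print("Possible secrets found in staged changes:", file=sys.stderr)
        for hit in findings:
            print(f"  {hit}", file=sys.stderr)
        print("Commit rejected. Remove the secrets before committing.", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```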
Let's say you search a repository and discover that you've exposed sensitive information. Now what?
First and foremost, assume that the information is already in the hands of a hacker or someone with malicious intent. So you need to quickly invalidate all of the passwords or tokens that were made public. Once you remove the sensitive data from your repositories, you also need to clear your history and changelogs.
Strengthen Access Controls
Having a proper access control policy is one of the best ways to beef up GitHub security.
For starters, you should implement a least privilege model. This means that each user is only granted access to files, data, and permissions that are strictly necessary for their position, job, or assignment. You need to consider access control policies for your in-house team, as well as any external contributors or collaborators you’re working with.
Here are some tips for you to consider as you’re going through this process:
- Make sure you require two-factor authentication for each contributor account.
- Don’t let anyone share a GitHub account or password.
- Secure access to any computers or devices with source code.
- Make sure repository admins are only giving contributors access to data that they need to do their job.
- Manually revoke access to all GitHub accounts that are no longer part of your project or team.
- Periodically review the access rights to all GitHub projects.
- Rotate SSH keys and personal access tokens.
By tightening up your access controls, you can benefit from enhanced security using GitHub by reducing your biggest vulnerability—human error. Lots of hackers or people with malicious intent will target weak users as a way to breach the system. But if each user has limited access, it ultimately limits your exposure.
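Parts of that periodic review can be scripted. The sketch below uses GitHub's REST API to list organization members who have not enabled two-factor authentication and to list a repository's collaborators. The organization name, repository name, and token handling are placeholder assumptions, and a real audit would add pagination and error handling:

```python
import os
import requests

GITHUB_API = "https://api.github.com"
ORG = "your-org-name"                      # assumption: replace with your organization
TOKEN = os.environ["GITHUB_TOKEN"]         # personal access token with org read scope
HEADERS = {
    "Authorization": f"token {TOKEN}",
    "Accept": "application/vnd.github+json",
}

def members_without_2fa(org: str) -> list[str]:
    """List org members who have not enabled two-factor authentication."""
    url = f"{GITHUB_API}/orgs/{org}/members"
    resp = requests.get(url, headers=HEADERS,
                        params={"filter": "2fa_disabled", "per_page": 100})
    resp.raise_for_status()
    return [member["login"] for member in resp.json()]

def repo_collaborators(org: str, repo: str) -> list[tuple[str, dict]]:
    """List collaborators on a repository together with their permission flags."""
    url = f"{GITHUB_API}/repos/{org}/{repo}/collaborators"
    resp = requests.get(url, headers=HEADERS, params={"per_page": 100})
    resp.raise_for_status()
    return [(c["login"], c["permissions"]) for c in resp.json()]

if __name__ == "__main__":
    print("Members without 2FA:", members_without_2fa(ORG))
    print("Collaborators on 'example-repo':", repo_collaborators(ORG, "example-repo"))
```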
Validate All GitHub Applications
If you browse through the GitHub marketplace, you’ll find hundreds of applications that come from third-party organizations and developers. Before you blindly add one of these apps to your GitHub repository, you need to conduct some due diligence to ensure it’s safe.
Here are some best practices to keep in mind when you're using a third-party application:
- Don’t allow applications to have more access rights than required.
- Investigate why an application requires the access levels it’s requesting.
- Assess the potential damage that might be caused by granting an application a certain level of access.
- Verify the legitimacy of the author or organization that created the application.
Remember, you’re only as secure as the weakest link in your fence. Similar to users and contributors, an application with weak security can add vulnerabilities to your sensitive data. This means that if an application is hacked or breached, a hacker can access the code that you’ve permitted the application to access.
Disable Forking and Visibility Changes
Forking is a common technique used on GitHub. The practice allows developers to copy repositories without affecting the original code. This is great for sandboxing and experiments, but it makes it difficult to track where your sensitive data and credentials are going.
For example, you might have a private repository whose contents can end up public if forking is enabled. This risk increases each time a fork occurs, creating an exponential chain of potential security breaches.
You disable forking by going to the Member Privileges menu of your organizational Settings. Then uncheck the option labeled Repository Forking.
It’s also in your best interest to disable visibility changes. A developer who doesn’t have security best practices in mind may accidentally change a repository’s visibility. This simple and seemingly innocent mistake can make your repositories public or grant access to users in your organization that don’t need the information.
If people have the ability to change the visibility settings of a repository, it increases your potential points of failure. This is obviously not good if your repositories have sensitive data.
You should change these settings to only allow admins or organizational owners to change visibility settings. The number of users who can complete this action should be extremely limited.
Limit GitHub Access to Allowed IPs
It’s difficult for larger organizations to keep track of everyone at all times. So an easy way to prevent unwanted access to your GitHub repositories is by creating access controls via IP addresses.
This only allows on-site users to access the organization’s source code and repositories. Users who have access to your company’s static remote IP will also have access here.
GitHub allows you to manage and restrict whitelisted IP addresses through your organization's security settings. Admins can configure a specific list of IP addresses or ranges that can access the system.
These best practices give you more control over the environment that contributors are using while working on your code. You won’t have to worry about them working from an unsecured device or network that’s not part of your whitelist IP settings.
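Whatever enforcement point you use, the underlying allow-list check is simple. Here is a minimal sketch using Python's standard ipaddress module, with placeholder documentation ranges standing in for your real office and VPN networks:

```python
from ipaddress import ip_address, ip_network

# Placeholder allow list: an office egress range and a VPN range.
ALLOWED_RANGES = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/25")]

def is_allowed(client_ip: str) -> bool:
    """Return True if the client IP falls inside any allowed range."""
    addr = ip_address(client_ip)
    return any(addr in net for net in ALLOWED_RANGES)

print(is_allowed("203.0.113.42"))   # True  - inside the office range
print(is_allowed("192.0.2.10"))     # False - not on the allow list
```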
|
<urn:uuid:dc605240-26af-4b40-bd87-25bd020f91d7>
|
CC-MAIN-2022-40
|
https://nira.com/github-security-and-safety/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00450.warc.gz
|
en
| 0.904181 | 2,630 | 2.8125 | 3 |
Cloud computing is no longer hype; it is the reality today for most organizations because of the numerous benefits that it brings. Cloud computing is not without its risks, however.
There are three main deployment models for cloud computing—private cloud, public cloud, and a hybrid mix of the two.
Private, Public, and Hybrid Cloud - How Are They Different?
The US National Institute of Standards and Technology (NIST) provides the following definitions for each of these deployment models:
- Private cloud—the cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers, such as business units. It may be owned, managed and operated by the organization, a third party, or a combination of the two. It may exist on or off the organization's premises.
- Public cloud—the cloud infrastructure is provisioned for use by the general public. It may be owned, managed and operated by a business, academic or government organization, or some combination of them. It exists on the premises of the cloud provider.
- Hybrid cloud—the cloud infrastructure is a composition of distinct cloud infrastructures, public and private, that remain unique entities, but that are bound together by standardized or proprietary technology that enables data and application portability, such as cloud bursting for load balancing between clouds.
Public clouds aim to serve as large a customer base as possible at an attractive price and to offer standard services to all customers. They are therefore highly structured and automated. They operate on standard models, including security and the conditions of the service-level agreements (SLAs) offered, with organizations only able to set the terms of the SLA in very few cases, and generally only if they are sufficiently large to have the clout to do so.
Organizations should therefore carefully scrutinize the security controls that are offered by public cloud providers to ensure that they are sufficient for their needs. Where there are any doubts or where data is considered to be too critical to the organization to place in the hands of a third party, many organizations will opt to keep that data in-house. In some cases, this will result in an organization using a hybrid mix of public and private cloud deployment models, or opting to keep on using traditional on-premise applications—especially smaller organizations that cannot afford to set up their own private clouds.
Risks and Disadvantages of Different Deployment Models
One of the most often cited concerns with the use of public cloud services is that some feel security is weaker, with objections raised including co-mingling of data with that of other organizations in a multi-tenant environment, and fears over the security of data both when transferred over the internet and when it is being processed or stored in the cloud. However, such concerns need not be a risk to the organization if the service provider has the necessary controls in place, such as privileged user management controls and strong encryption, with the keys held by the customer in their own possession. Organizations should look for service providers that offer a strong data assurance framework. They should also look for assurances that the service provider upholds strong security standards through regular audits and through adherence with frameworks such as ISO 27001. It is no longer considered sufficient that a service provider has a general-purpose certification such as SAS 70.
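One practical way to keep keys in the customer's own possession is to encrypt data on-premise before it ever reaches the provider, so the cloud only stores ciphertext. The sketch below is a minimal illustration using the Python cryptography package; key storage, rotation, and access control are deliberately left out, and the sample record is made up:

```python
from cryptography.fernet import Fernet

# Generate (and securely store) the key on-premise; never upload it with the data.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer-id=1842;iban=DE89370400440532013000"

# Encrypt before the data leaves the organization's boundary...
ciphertext = cipher.encrypt(record)
# ...upload `ciphertext` to the public cloud service here...

# ...and decrypt only after retrieving it back inside the boundary.
assert cipher.decrypt(ciphertext) == record
```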
Reliability and availability are also perceived to be issues—and there have been some well-publicized cases of outages among the major cloud service providers that have affected all of their customers simultaneously. Organizations should not only rely on guarantees provided by service providers over uptime, but should demand to see performance records that attest to those levels being met. Most SLAs provide not only guarantees of uptime, but also provide recompense should those guarantees not be met. However, in many cases, it is up to the customer to proactively demand that they be recompensed. Organizations should also ensure that backup and disaster recovery capabilities that are provided do not lead to their data being stored in a jurisdiction that would leave them out of compliance with the local data protection regulations that they face.
One other major risk is that the organization will lose control over its own data, which could ultimately lead to data loss. Organizations should therefore ensure that adequate controls are in place regarding ownership of the data and what should happen to that data once the contract has expired or should the service provider go out of business.
|
<urn:uuid:9288d83d-e273-4de6-b129-165462f666dc>
|
CC-MAIN-2022-40
|
https://www.dbta.com/Editorial/Trends-and-Applications/Cloud-Security-Risks-and-Recommendations-92522.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00450.warc.gz
|
en
| 0.967691 | 884 | 2.9375 | 3 |
Our data recovery lab received an interesting case recently. On the surface, it looked like just an ordinary USB flash drive that had had its plug snapped off—the single most common form of data loss in USB flash drives. But upon removing the casing, our engineers discovered something truly bizarre.
Although many USB flash drives use monolithic chips today, the go-to for cheaper models is still the old standby of a NAND memory chip and a controller chip soldered to a simple green circuit board. This familiar sight is what greeted our engineers as they dug into this case.
No, your eyes are not deceiving you. That is an entire microSD card soldered to the PCB where a NAND chip should be.
Or take this specimen our engineers received recently
This might not look quite as unusual until you realize that the NAND chip isn’t the kind that normally goes into flash drives, but rather the kind you’d find inside an iPhone. This flash drive had been cobbled together from the usual flash drive parts and a chip meant for an iPhone that hadn’t been good enough to use in an actual iPhone.
What Goes Into USB Flash Drives, Anyway?
As Forrest Gump’s momma used to say, sometimes life is like a box of chocolates. You never know what you’re going to get. USB flash drives are the same way.
The data in your typical USB flash drive lives on its NAND flash memory chip. The data leaves the chip, is assembled into something recognizable to your computer (and you) by a controller chip on the circuit board, and travels through the USB interface and into your computer. USB flash drives go by many names—jump drives, thumb drives, data pens, etc.—but no matter what you call them, they’re all more or less the same on the inside.
Or are they?
At Gillware, sometimes we encounter flash drives that are more than they appear, or less, or just plain weird. The reasons for these oddities fall into two categories:
1. Manufacturer Cost-Cutting
Flash memory manufacturers typically have shallow margins. As a result, almost every scrap they make needs to be used, including chips that don’t quite pass the muster. Sometimes even defective products that would normally end up in a landfill have uses of their own.
For both traditional and flash data storage devices, during the manufacturing process, a few sectors/columns are just born bad simply due to factory defects. The manufacturers know this, so data storage devices are calibrated right off the assembly line to record where these bad parts live and ignore them. For example, take the physical sectors 4, 5, and 6. If physical sector 5 is bad, the hard drive will know not to use sector 5 as a logical sector and will make 6 the new 5, 7 the new 6, and so on and so forth. Manufacturers also give flash devices a few more memory cells than they need. This practice is called "over-provisioning."
Flash memory manufacturers don’t like throwing away the things coming off of their assembly line unless the results are completely unusable. If a NAND chip with space for eight gigabytes can only use two (which is something we’ve seen before in our lab), it can still be packaged and sold as a two-gigabyte flash drive.
It might sound shady, but it isn’t. You are, after all, still getting what you paid for.
How the Sausage Gets Made
Here’s what likely went down with our Franken-flash drives:
For the first one, the manufacturers built a microSD card, ran it through QA, and found out that it had a faulty controller. Not wanting to let a good NAND chip soldered into that tiny little package go to waste, they connected it to a flash drive’s controller chip, soldered it all together, plopped it into a case, and sold it.
For the second one, manufacturers encountered an iPhone NAND chip that couldn’t hack it inside an iPhone, but could work just fine installed in a cheap flash drive… so they plopped it onto a USB drive’s circuit board.
These sorts of cobbled-together "Frankenflash" USB devices are made pretty much the same way hot dogs are: from the ground-up gristle, fat, and other leftovers of assorted animals after all the "good stuff" has been parceled out. Usually, these flash devices are the ones you buy in bulk for cheap online when you need to put in an order for 1,000 USB flash drives with your company logo printed on them.
Hot dogs taste great with some mustard and relish, and likewise, these flash devices tend to work as advertised. After all, this odd little flash drive worked fine at its advertised capacity (even if its insides were a little weird). It only needed data recovery because its USB plug broke out. And that’s the same way most flash drives fail, anyway—even ones that are made up to standard.
By the way, in both cases, we recovered 100% of the owner’s data.
When you buy cheap, you usually get what you pay for. Usually, that’s okay. You get your money’s worth—no harm, no foul.
But sometimes you get scammed.
Third-party vendors sometimes peddle deals that just seem too good to be true. A 128 GB flash drive for the price of a 4 GB drive? What a score!
Unfortunately, "too good to be true" tends to be just that more often than not. These drives typically have just as much capacity as the lower-gigabyte drives their prices match; they've only been altered so that your computer misreads their capacities. Once you fill them up, the drives either start overwriting the data you've already put on them or, at worst, stop mounting entirely until you reformat them.
Not too long ago we had a client who, once we’d recovered his data, decided to send in a USB flash drive he’d purchased on his own to use as a transfer drive. Sadly, the 256-gigabyte drive had less than half of the capacity it claimed to have. Only half of the customer’s data fit on it. Sadly, we had to break the news to the poor man that he’d been scammed. He agreed to send in another flash drive of his. It, too, ended up being a counterfeit. We ended up putting the client’s data onto one of our own hard drives instead and recommended he find a new supplier of flash drives.
Do You Have Counterfeit Flash Media?
In the world of USB flash drives, if it seems like you’re getting more than you paid for, you’re probably getting much, much less.
There are multiple tools and methods you can use to validate a flash device’s capacity, such as H2testw and FakeFlashTest.
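Those tools all rely on the same basic idea: write known data across the advertised capacity, read it back, and see whether it survived. The heavily simplified Python sketch below illustrates the approach by writing ordinary files to a mounted drive; the mount point, chunk size, and chunk count are placeholder assumptions, and real tools work much closer to the device and test the full capacity:

```python
import hashlib
import os

MOUNT_POINT = "E:/"            # assumption: the flash drive's mount point
CHUNK_MB = 64                  # write in 64 MB chunks
CHUNKS_TO_TEST = 16            # ~1 GB total; raise toward the advertised capacity

def chunk_bytes(index: int) -> bytes:
    """Deterministic per-chunk data so we can regenerate it later for comparison."""
    block = hashlib.sha256(index.to_bytes(8, "big")).digest()
    return block * ((CHUNK_MB * 1024 * 1024) // len(block))

# Phase 1: write the chunks.
for i in range(CHUNKS_TO_TEST):
    with open(os.path.join(MOUNT_POINT, f"capacity_test_{i:04d}.bin"), "wb") as f:
        f.write(chunk_bytes(i))

# Phase 2: read everything back and compare. On a counterfeit drive, later
# chunks typically overwrite earlier ones, so early files fail verification.
bad = []
for i in range(CHUNKS_TO_TEST):
    with open(os.path.join(MOUNT_POINT, f"capacity_test_{i:04d}.bin"), "rb") as f:
        if f.read() != chunk_bytes(i):
            bad.append(i)

print("All chunks verified" if not bad else f"Corrupted chunks: {bad}")
```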
From the right vendor, nothing beats a good ballpark frank, whether you prefer yours Chicago-style, New York-style, or Vietnam-style. You just need to be careful your hot dog didn’t come out of a vat full of tapeworms…
|
<urn:uuid:17960eb6-41c6-4f9d-9702-a5fcb52de1c7>
|
CC-MAIN-2022-40
|
https://www.gillware.com/flash-drive-data-recovery/usb-flash-drives-hiding-secret/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00450.warc.gz
|
en
| 0.949313 | 1,667 | 2.78125 | 3 |
Understanding Dashboard Components
The dashboard consists of layouts and widgets. Widgets are informative panels that present data in various specific ways and provide a way for users to interact with that data. There are many widget types for different purposes: Chart, Table, Links, etc. Most widgets are highly customizable and allow you to fine tune their look-and-feel, as well as narrow down the scope of displayed information.
Layouts are like wire frames for the dashboard—they only specify the arrangement of widgets as a collection of rows and columns. The dashboard has its own layout, which you can customize at any time. You can also create supplementary reusable layouts. These layouts can be applied to the dashboard, which makes the process of customizing the dashboard quick and easy. Another advantage of reusable layouts is that they can be applied to the dashboard not only by the administrator, but also by technicians working in the Desktop App.
NOTE: Currently, dashboard customization is not available in the Web App.
Here are the basic steps of setting up the dashboard:
- Create and configure dashboard widgets, if needed. For details, see Creating and Modifying Dashboard Widgets.
- Optionally, create and configure dashboard layouts. For details, see Creating and Modifying Dashboard Layouts.
Customize the dashboard using widgets and layouts created earlier. For details, see Customizing the Dashboard.
|
<urn:uuid:b872d196-b10c-4a5a-a517-154af5c32eb7>
|
CC-MAIN-2022-40
|
https://docs.alloysoftware.com/alloynavigatorexpress/docs/adminguide/adminguide/setting-up-dashboards/understanding-dashboard-components.htm
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00450.warc.gz
|
en
| 0.924123 | 283 | 2.671875 | 3 |
Facebook is currently the most popular platform among college students for discussing school related information. It’s the social networking site most often used to organize study groups, and it’s often the easiest way to get a friend to quickly update you on what you missed in class that day.
More recently, there has been an abundance of student-made groups and pages with a more specific goal in mind. There are groups where students can buy and sell used textbooks. There are groups where students from specific schools can discuss which classes are GPA boosters, and which ones to avoid. There are even groups where students can post funny conversations they overheard, or quirky comments made by professors and other students ("Overheard at ______").
Students have effectively utilized Facebook in order to allow quicker and easier communication among peers.
|
<urn:uuid:6eda7203-7b90-4dc6-b99d-319b947892de>
|
CC-MAIN-2022-40
|
https://www.faronics.com/news/blog/facebook-is-popular-with-college-students
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00450.warc.gz
|
en
| 0.958943 | 176 | 2.703125 | 3 |
HBO's wildly popular series "Game of Thrones" follows several factions' quest to rule the Seven Kingdoms. It's a numbers game. It's an exercise in one-upmanship. And it's likely to culminate in a brutal fight to the death.
As Daenerys Targaryen leads her army (and her dragons) and plots to battle Cersei Lannister to claim the Iron Throne as the one true queen, any leak of information could spell certain doom. Deceit is not a foreign concept in “Game of Thrones,” meaning there’s great potential for Daenerys’ or Cersei’s plans to fall into the wrong hands to be exploited.
In an example of life imitating art, hackers recently stole 1.5 terabytes of data from HBO, including full episodes of popular shows like “Ballers” and at least one script from an upcoming episode of “Game of Thrones.” The thieves also made off with internal company documents and HBO employee data. Some of the data has already been shared online.
HBO isn’t alone. The headlines are filled with companies that have had data and information stolen, whether it’s customer data, employee information, classified product data and more.
To understand how to prevent data theft, like the HBO hack, It’s important to first look at the different ways data can be stolen from an organization.
Data breaches can occur either physically or digitally “over-the-wire.” Physical data leakage can occur when someone transfers data from a user’s device to a USB drive and then walks it out the door, or transfers it via a rogue wireless network. However, that vector is typically used by employees with a motive.
An over-the-wire data breach can occur with varying degrees of complexity, duration, and effort. The exploits that give access to the stolen content might be as simple as taking advantage of improper security measures to bypass authentication for streaming services, or as involved as gaining command and control over a compromised host.
Other vectors used to steal data include spear phishing or deeper penetration into the corporate network or from a connected subsidiary or partner. If the main attack is through an intermediate and compromised system, there is a delicate balance that an intruder might consider in deciding at which rate to exfiltrate the data. The longer the intrusion, the higher the chance of being discovered or inadvertently losing access because of nightly patching or the power state of the compromised system. However, if the intruder sends large amounts of data too quickly, it might raise some eyebrows and generate alerts from security solutions.
So how can companies prevent data breaches like these from happening?
When it comes to preventing data breaches and leaks, analytics and visibility are critical and can help detect data exfiltration events.
Detailed telemetry solutions that have good analytics are key to monitoring traffic that is leaving the network, and can detect any traffic flows that are outside the norm. From there they provide insight into what’s happening and act to stop any malicious activity.
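As a toy illustration of what "outside the norm" can mean in practice (not a description of any particular vendor's analytics), the sketch below flags hosts whose latest outbound volume sits far above their historical baseline; the hosts and byte counts are hypothetical:

```python
from statistics import mean, stdev

# Hypothetical per-host history of outbound bytes per hour (e.g. from flow logs).
history = {
    "10.0.4.17": [120e6, 98e6, 110e6, 105e6, 130e6],
    "10.0.4.88": [45e6, 52e6, 40e6, 48e6, 50e6],
}

# Latest hour's observed outbound volume per host.
latest = {"10.0.4.17": 115e6, "10.0.4.88": 2.3e9}   # the second host looks suspicious

def flag_anomalies(history, latest, threshold=3.0):
    """Flag hosts whose latest outbound volume exceeds mean + threshold * stdev."""
    alerts = []
    for host, observed in latest.items():
        baseline = history[host]
        limit = mean(baseline) + threshold * stdev(baseline)
        if observed > limit:
            alerts.append((host, observed, limit))
    return alerts

for host, observed, limit in flag_anomalies(history, latest):
    print(f"ALERT {host}: sent {observed:.0f} bytes, baseline limit {limit:.0f}")
```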
In a case where data is exiting the network via fast exfiltration, IT management can use security solutions that create rules to lock down traffic in extreme circumstances, or even proactively set up policies that limit traffic. Additionally, Data Loss Prevention (DLP) systems that use the Internet Content Adaption Protocol (ICAP) to connect to the network can help prevent unauthorized data exfiltration.
If a data theft event uses SSL encrypted traffic to transfer stolen data, A10 Thunder SSLi can provide visibility and can often prevent such breaches. Thunder SSLi decrypts SSL encrypted traffic so it can be inspected by security tools, and then re-encrypts the inspected traffic and passes it to its intended destination. It can uncover “over-the-wire” data thefts.
Thunder SSLi can log the bytes being sent through SSL encrypted traffic, and if it’s determined to be outside the norm, SSLi will discover that and give insight into what’s happening.
Thunder SSLi decrypts across all ports and protocols and supports ICAP connectivity, meaning it passes traffic to a network’s existing DLP systems without the need for additional solutions. And because Thunder SSLi is built on a full-proxy architecture, ciphers can be re-negotiated to ciphers of similar strength to prepare for future ciphers or TLS versions.
SSLi also supports service chaining, to enable selective redirection of traffic, based on application type, to different service chains with fine-grained policies. This can reduce latency and potential bottlenecks with a decrypt once, inspect many times approach, consolidating SSL decryption and SSL encryption duties.
Thunder SSLi can protect your data from theft (but probably not from dragons).
To learn more about how A10 Thunder SSLi can help prevent data loss and theft, read our data sheet.
|
<urn:uuid:e867e047-710c-4d03-962a-80e2eb9e6806>
|
CC-MAIN-2022-40
|
https://www.a10networks.com/blog/data-theft-data-exfiltration-and-breaches-and-leaks/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00450.warc.gz
|
en
| 0.920818 | 1,032 | 2.53125 | 3 |
With the advent of cloud, IoT, and other next-gen technologies, the Federal government’s digital footprint is growing at an exponential rate. But as the amount of data continues to explode, so do the number of cyber adversaries and vulnerabilities in our government’s networks. And without the proper resources and capabilities to manually defend against this deluge of cyber threats, artificial intelligence (AI) could be the missing link in fully securing our government.
AI still maintains a futuristic connotation, but machine learning and cognitive solutions can have an immediate impact on Federal cyber defense. With the abilities to track human behavior and detect threats with greater speed and efficiency, AI has the potential to change the course of cyber security for government agencies and keep them a step ahead at all times.
Download “The Federal Cyber AI IQ Test” to discover:
- What are the security implications within the Federal government for the rise of AI?
- What role can AI play in incident response? Can it help prepare agencies for real-world cyber threat scenarios?
- How can agencies leverage behavioral analytics through AI to monitor human activity and deter insider threat?
- How can the Federal cyber workforce coexist with cognitive technologies? Could AI lead to more streamlining within agencies?
- What AI efforts are already underway in the Federal government? How are they performing? Where do IT professionals see it heading in the next 3-5 years?
|
<urn:uuid:ea5527b1-8fbb-424d-8c5a-6f25fc476ea5>
|
CC-MAIN-2022-40
|
https://origin.meritalk.com/study/federal-cyber-ai-iq-test/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00650.warc.gz
|
en
| 0.924648 | 288 | 2.6875 | 3 |
Child sexual exploitation and modern slavery are related aspects of human trafficking which result in the commercial exploitation of children and often being victims of Child Sexual Abuse Material creation as well.
Children who are victims of human trafficking and modern slavery are often subject to physical and emotional abuse and neglect. Traffickers exert control by inflicting emotional, physical, and even sexual abuse on children, who can be forced into illegal forms of labor and slavery-like conditions.
How do children become victims of the modern slavery industry?
While there are many ways this happens, the UK’s National Society for the Prevention of Cruelty to Children states, “Traffickers often groom children, families and communities to gain their trust. They may also threaten families with violence or threats. Traffickers often promise children and families that they'll have a better future elsewhere.”
Child trafficking is an economic crime
Poverty plays a factor when a parent believes they can profit from their child. They may not realize that their child is being exploited for forced labor (agriculture, domestic work, drug trade, etc). Likewise, traffickers and organized crime syndicates can force or trick families into allowing their children to be exploited, sometimes to pay off a debt owed to the traffickers.
Most parents don’t willingly give their kids away to be victims of trafficking or exploitation. Criminals make false claims to parents that this is all a means to an end and the child will be safe. The worst part about this is that in most situations, children look to their parents for advice and wisdom, and they expect them to protect them.
According to the National Center on Safe Supportive Learning Environments, child labor trafficking often “occurs in the context of domestic service, agricultural work, peddling, and hospitality industries (e.g., restaurants and hotels).” while Commercial Sexual Exploitation of Children (CSEC) refers to various activities and crimes “involving the sexual abuse or exploitation of a child for the financial benefit of any person” according to the US Office of Juvenile Justice and Delinquency Prevention.
Both Justice and Social Issues
These issues are not just criminal justice issues, they are social issues that require the work of families, communities and law enforcement internationally. Many countries exchange information through joint law enforcement efforts or via Interpol which manages an International Child Sexual Exploitation (ICSE) database to help investigators make connections between victims, abusers and locations.
In the United States, the Department of Justice funds a network of ICAC Task Forces that track down CyberTips from NCMEC and investigate child exploitation cases.
Supporting Field Investigations
Solving crimes against children is a daunting task, especially with the rise in cybercrime and the sophistication of criminals that continue to exploit children. Online enticement of children is a serious and growing issue, with reported attempts up over 230% in a four-year period. To help police and prosecutors, ADF has developed digital forensic image recognition and classification software for mobile and computer forensic investigations. Field forensic software such as Mobile Device Investigator empowers law enforcement to solve crimes against children fast, starting on-scene.
ADF is partnered with the Anti-Human Trafficking Intelligence Initiative to develop a series of Anti-Human Trafficking Search Profiles available to law enforcement within our digital forensic tools. Qualified law enforcement agencies can request access to these Search Profiles to help speed investigations.
|
<urn:uuid:cc430037-c994-4d18-b0e9-c4a3d8ab387d>
|
CC-MAIN-2022-40
|
https://www.adfsolutions.com/news/csam-modern-slavery-human-trafficking
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00650.warc.gz
|
en
| 0.940776 | 697 | 3.359375 | 3 |
Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence.
What is intelligence? Is it the capacity to solve complicated mathematical problems at very fast speeds? The power to beat world champions in chess and go? The ability to detect thousands of different objects in images? Predict the next word in a sentence?
Those are all manifestations of intelligence. And thanks to advances in artificial intelligence, we have been able to replicate them in computers, to different degrees of success. But AI scientists are still having a hard time reaching a consensus on the definition and measure of intelligence. And having a collection of problem-solving capabilities does not seem to get us closer to recreating the intelligence found in nature.
To Daeyeol Lee, professor of neuroscience at Johns Hopkins University, current AI systems are “surrogates of human intelligence” because they are designed to accomplish the goals of their human creators, not their own.
True intelligence, Lee argues in his book Birth of Intelligence: From RNA to Artificial Intelligence, is “the ability of life to solve complex problems in a variety of environments for its self-replication.” In other words, every living species that has passed the test of time and has been able to reproduce—from bacteria to trees, insects, fish, birds, mammals, and humans—is intelligent.
“[If] we want to evaluate the intelligence of various life forms, it would be reasonable to consider which life form can replicate itself successfully by solving more complex problems in a broader range of environments,” Lee writes.
Looking at intelligence through the lens of life and survival is crucial to understanding the current state of artificial intelligence, including its limits, its potential, and its future directions.
Life is a race against death. From birth, every organism faces dangers from its environment, whether they appear as scarcity of food, sudden changes in the weather, other organisms that prey on it or compete with it for resources, or the simple passage of time.
Organisms that live long enough—whether by being better equipped to survive in their environment or through sheer luck—get to reproduce and pass on their genes to their descendants. Their offspring do not inherit perfect copies of their genes. They have slight differences, also called mutations. Sometimes, these mutations enhance the capabilities that are crucial to survival and improve the chances of reproduction. Eventually, after millions of cycles of reproduction and mutation, the species evolves to enhance its capabilities and develop new organs that improve its response to the conditions imposed by the environments.
Otherwise put, its descendants become more intelligent because they are better survivors and self-replicators.
In single-cell organisms and plants, intelligence is derived from taxis and tropisms, static behavior directly encoded in the genes. Taxis and tropisms enable organisms to respond to different stimuli in their environment, such as turning to face light sources or moving toward locations where food sources are denser.
In these organisms, genes are in full control of behavior, and intelligence depends on genetic evolution.
More complex organisms, such as animals, have developed brains and nervous systems, which provide them with more diverse and complicated patterns of behavior.
The nervous system has reflexive behaviors, such as instinctive responses to pain and threatening noises. But its greatest advantage is the capacity to learn. Animals with brains learn by interacting with their environment, adjusting their behavior to favor actions that maximize their rewards. This is also called reinforcement learning.
Learning makes organisms more intelligent and enables them to change their behavior during their lifetime. Compared to single-cell organisms, animals are better at responding to changes in their environments, and they don’t need to wait for several generations of mutations before behavioral changes are baked into the genes of their descendants. They can develop very complicated and dynamic behaviors such as creating shelters for themselves, hunting, taking care of their young, and socializing.
Intelligence in animals with brains and nervous systems can be seen as two concentric loops. The outer loop is genetic evolution, the slow enhancement of the species’ body and limbs across generations. The inner loop is fast learning, the skills that each organism acquires throughout its lifetime.
There are synergies between the two kinds of intelligence. The brain serves the genes by improving the organism’s capability to survive and reproduce. In exchange, evolution favors genetic mutations that improve the brain’s innate and learning capacities for each species (this is why some animals are born with the ability to walk while others learn it weeks or months later).
At the same time, the brain comes with tradeoffs. Genes lose some of their control over the behavior of the organism when they relegate their duties to the brain. Sometimes, the brain can go chasing rewards that do not serve the self-replication of the genes (e.g., addiction, suicide). Also, the behavior learned by the brain does not pass on through genes (this is why you didn’t inherit your parents’ knowledge and had to learn language, math, and sports from scratch).
As Lee writes in Birth of Intelligence, “The fact that brain functions can be modified by experience implies that genes do not fully control the brain. However, this does not mean that the brain is completely free from genes, either. If the behaviors selected by the brain prevent the self-replication of its own genes, such brains would be eliminated during evolution. Thus, the brain interacts with the genes bidirectionally.”
Takeaways for artificial intelligence
The AI community usually turns to the brain to get inspiration for algorithms and new directions of research. Scientists try to replicate the cognitive functions of the brain and the nervous systems in computers.
But the evolutionary view of intelligence shows us that the brain, with all its wonders and unlocked secrets, is a product of the long-lived history of genetic evolution. It is an agent of the gene, albeit one that is very complicated and at times beyond the control of its principal.
“Present-day AI is still not truly intelligent, not because it is made of materials and building blocks that are different from those of the human brain, but because it is designed to solve the problems chosen by humans,” Lee writes. “If AI is truly intelligent, it must have its own goals and seek solutions to any problems for its own sake. AI is built to improve the well-being and prosperity of human beings rather than its own.”
Seen in this light, AI—at least in its present form—is an extension of human intelligence, just like brains are an extension of genetic intelligence. Our AI algorithms can do billions of computations in a second and learn to do things that would be beyond the capacity of the human brain. But they are still designed to solve known problems that human brains have discovered and formulated. And our brains are the agents of our genes. You can think of AI as a third loop in the intelligence graph. It evolves much faster than genetic evolution and organic learning, but is still bound by constraints set by its outer loops.
This does not mean AI will not harm people. There are already plenty of examples where AI systems are causing harm. But it is not the work of a runaway AI that is actively scheming to hurt humans. These mistakes are the result of faulty AI systems designed and misused by humans.
What about the narrow AI systems that are defeating StarCraft champions, matching humans in image classification, and performing real-time speech recognition? Are they threatening human existence?
No, Lee argues, because the goal of these narrow AI systems is to tackle problems that are impossible or hard to solve for humans. Otherwise, they would have no use.
“[Competition] between AI and human performance is not a threat to human society, but rather a necessary condition for AI,” he writes. “Brains evolved as sophisticated learning machines, and this was a solution, not a threat, to the principal-agent relationship between brains and genes. Similarly, advances in AI technology itself would not pose a threat to humans.”
Therefore, AI that does not develop its own goals and utilities will remain one of the many tools that humans have invented to increase the efficiency of their labor.
“As long as computers do not physically reproduce themselves, humans will remain the principal and control the behaviors of computers with AI, just like the brain is unable to replicate itself and, as result, continues to function as an agent for the genes,” Lee writes.
|
<urn:uuid:4a2fcde6-4d03-4994-b00c-5e05feef0763>
|
CC-MAIN-2022-40
|
https://bdtechtalks.com/2021/11/15/birth-of-intelligence-book-review/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00650.warc.gz
|
en
| 0.963107 | 1,743 | 3.78125 | 4 |
Superuser Privilege Management (SUPM)
What is Superuser Privilege Management?
On Unix systems, superusers are users who gain privileged access for a limited period of time. Unix allows certain users to elevate their privilege to superuser status for a specific task, and when they’ve completed their task they revert to being a standard user. Superuser privilege management controls when users are allowed to elevate to superuser status, and what commands they can run in superuser mode.
|
<urn:uuid:9c4e4836-6ef4-4c00-afc7-8a6c0291841f>
|
CC-MAIN-2022-40
|
https://delinea.com/what-is/superuser-privilege-management-supm
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00650.warc.gz
|
en
| 0.857491 | 99 | 2.734375 | 3 |
There are two critical characteristics to examine in a gage system. The two measurement methods are Precision and Accuracy.
Accuracy: Accuracy reflects how close a measurement is to the unbiased true value and is normally reported as the difference between the average of a number of measurements and the true value. Checking a micrometer with a gage block is an example of an accuracy check.
Precision: In gage terminology, “repeatability” is often substituted for precision.
Repeatability is the ability to repeat the same measurement by the same operator at or near the same time.
The calibration of measuring instruments is necessary to maintain accuracy, but does not necessarily increase precision. In order to improve the accuracy and precision of a measurement process, it must have a defined test method and must be statistically stable.
Measurement Methods – Gage Repeatability and Reproducibility
Measurement system errors can be due to:
Repeatability – Variation in measurement when a person measures the same unit repeatedly with the same measuring gage (or tool)
Reproducibility – Variation in measurement when two or more persons measure the same unit using the same measuring gage (or tool)
There are certain thumb rules for acceptable level of Measurement System Variation.
For continuous data, if the percentage tolerance is less than 8%, percentage contribution is less than 2% and number of distinct categories is greater than 10, then the Measurement System is Acceptable. Likewise, we would evaluate the risks of the measurement system if the percentage tolerance is between 8% and 30%, percentage contribution is between 2% and 7.7% and number of distinct categories is between 4 and 10.
Measurement System Analysis – Variable and Attribute MSA
The observed process variation can be due to:
1. Variation from the Measurement System (and/or)
2. Actual process variation
Variation from the measurement system is due to:
1. Variation from the appraiser (and/or)
2. Variation from the gage
Variation from the appraiser is the variation caused by the user not using the gage as it should be used, while variation from the gage is the variation caused by the gage itself due to inappropriate functionality.
Variation from the gage can be further broken down into:
1. Accuracy: The difference between the average of observed values and the standard.
2. Stability: Variation in measurement when the same person measures the same unit with the same measuring gage (or tool) over an extended period of time.
3. Reproducibility: Variation in measurement when two or more persons measure the same unit using the same measuring gage (or tool).
4. Repeatability: Variation in measurement when the same person measures the same unit repeatedly using the same measuring gage (or tool) at or near the same time.
5. Linearity: The consistency of the measurement across the entire range of the measuring gage.
|
<urn:uuid:498d4986-6a34-4da0-8a87-53e6b8fed9cb>
|
CC-MAIN-2022-40
|
https://www.greycampus.com/opencampus/lean-six-sigma-green-belt/measurement-methods
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00050.warc.gz
|
en
| 0.907445 | 600 | 4 | 4 |
In some cases, properties are calculated from other properties. In the case of a person, the property FullName is created from FirstName and LastName. This does not mean you cannot set the property, but that it is not necessary to do so.
Using the calculated properties concept, a person’s full name, for instance, is calculated by the concatenation of the first names and the last name. This is exposed to the Maltego client:
Some example Entities that use calculated properties are:
The current version of the Python library only supports string types, to match the current version of the XML protocol used between the client and the server. Subsequent releases might extend this to include all the other types of properties that Maltego supports. When dealing with Entities that have property types other than strings, make sure you format the string value properly so that the Maltego client can parse the correct type from the string value.
Static vs. dynamic properties
When you design an Entity, you can add properties to the Entity. These are called static properties. This means, that should a user drag an Entity from the palette to the graph, the properties will be available for editing. Refer to the example below showing the property view for Person, with three static properties:
Transforms can create new properties for the Entities they return. The Transform simply adds a property to the Entity when it’s returned. These are called dynamic properties. If a Transform returns a property that had been defined as static in the Entity definition it will store the value as a static property, if it’s not defined in the Entity definition it then becomes dynamic. The only real difference between static and dynamic properties is that users cannot edit dynamic properties on a fresh node from the palette…simply because the property does not exist yet - a Transform is needed to create it.
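A minimal sketch of what this looks like from the Transform side is shown below. It assumes the maltego-trx style Python library (the entity type string, property name, and date value are arbitrary examples, and the DNS lookup is a stand-in for whatever work your Transform actually does):

```python
import socket
from maltego_trx.transform import DiscoverableTransform


class DNSNameToIPAddress(DiscoverableTransform):
    """Return an IPAddress entity and attach a dynamic 'observed-date' property."""

    @classmethod
    def create_entities(cls, request, response):
        dns_name = request.Value
        ip = socket.gethostbyname(dns_name)   # simple A-record lookup

        entity = response.addEntity("maltego.IPv4Address", ip)

        # "observed-date" is not part of the IPv4Address entity definition,
        # so it arrives in the client as a dynamic property on the node.
        # Property values are passed as strings, per the current protocol.
        entity.addProperty(
            fieldName="observed-date",
            displayName="Observed Date",
            matchingRule="loose",
            value="2022-10-04",
        )
```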
The concept of inheritance means that the MXRecord, NSRecord, and Website Entities are in fact only specialized DNSNames. The benefit of this is that one Transform, e.g. DNSName 2 IPAddress, works on all of them and saves on Transform configuration time. The result is that should you specify on the TDS that a Transform will run on a DNSName, it will also run on all Entities down the ‘family tree’ – MXRecord, Website, and NSRecord.
At the top of the tree is the Base Entity, ‘maltego.Unknown’. This means that should you configure a Transform to run on this Base Entity type, it will be available when you right-click on any Entity.
Entity Design Summary
Below is a quick summary of some of the more advanced concepts to be considered when working with Entities.
During the Entity design phase, you can set the default value of an Entity to be calculated from other properties. For example, a person’s full name can be calculated by concatenating the first and last names by default. Such a property is referred to as a Calculated Property.
Static vs Dynamic Properties
When writing a Transform, a property can be added to any Entity, even if that property is not part of the design or specification of the Entity. For example, a ‘Date’ property reflecting the date that the Transform was run can be added to the IP Address Entity which comes from the DNS lookup Transform. Such a property is referred to as a Dynamic Property, and conversely, properties that form part of the specification are referred to as Static Properties.
During the Entity design phase, you can choose to extend from an existing Entity, inheriting all the base-Entity’s properties. You can then add additional properties to the new entity to add specific functionality to the extended Entity. Transforms designed to work with the new extended Entity type will only work on that Entity type. Conversely, Transforms designed to work for the Base Entity will also work for the extended Entity.
When a new Entity is added to a Maltego graph, Maltego automatically checks whether such an Entity already exists. If another Entity of the same type with the same value exists, they will be merged. In addition to these requirements, any property on the new Entity that was set by the Transform developer at runtime to use the ‘strict’-matching rule, will have to match as well before the Entities will merge.
Bookmarks and Notes
Maltego users can add bookmarks and notes to any Entity. Transform developers can also set the bookmark color and add notes to Entities that are returned. When merging Entities, only the new Entity’s bookmark will be used, and the notes will be concatenated.
|
<urn:uuid:c1359d33-c594-4820-897a-160f9779cf22>
|
CC-MAIN-2022-40
|
https://docs.maltego.com/support/solutions/articles/15000019246-advanced-entity-creation
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00050.warc.gz
|
en
| 0.895707 | 1,012 | 2.640625 | 3 |
Spear-phishing is one of the main tools used by attackers to compromise endpoints and gain a foothold in the enterprise network. The issue has been deemed serious enough for the FBI to deliver a warning about the worrying rise in spear-phishing, and multiple industry sectors have fallen foul of the attacks all over the world.
In spear-phishing scams, cyber-criminals utilise a specially crafted email message that entices users to perform an action that will result in a drive-by download, data theft, or both. Compromising systems and grabbing login credentials of employees enables the attacker to access corporate resources. This is why spear-phishing is often the first step used to enable Advance Persistent Threats (APTs) and targeted attacks.
You can’t blame the user
In the past, it was believed that proper user education would prevent phishing attacks. However, despite the significant time and resources invested in education programs, spear-phishing attacks continue to be successful. This is not only because enterprise users can be naïve and careless. This is mainly because attackers use information gained through social engineering to personalise the spear-phishing messages and convince targeted users that the message is legitimate.
As the FBI warning explains, “Often, the e-mails contain accurate information about victims obtained via a previous intrusion or from data posted on social networking sites, blogs, or other websites. This information adds a veneer of legitimacy to the message, increasing the chances the victims will open the e-mail and respond as directed.”
It is impossible to prevent enterprise users from opening email attachments or following links since it is a routine part of their everyday activity. As long as our lives are dependent on online information, spear-phishing will remain a threat.
Endpoints used as gateway to network
Spear-phishing emails often result in drive-by downloads: a silent malware download that takes place in the background, without the user’s awareness. Drive-by downloads are enabled by vulnerabilities in user applications like browsers (or browser plug-ins), Java applications, Adobe Acrobat and more. Exploiting unpatched, or unknown, zero-day vulnerabilities, attackers can download malware to the user’s machine while the user remains unaware of the download.
The attacker can then use a compromised device to gain access to the corporate network, steal intellectual property and compromise operational systems and/or financial assets. The FBI explains that, “In spear-phishing attacks, cyber criminals target victims because of their involvement in an industry or organisation they wish to compromise.”
Protect all login credentials
Attackers also use phishing sites - fake websites designed to look like the real application login site - in order to convince users to submit their credentials. If a user is fooled to submit corporate credentials on a phishing site, the attacker can exploit those to access corporate applications and other resources.
In order to effectively stop spear-phishing attacks, organisations must prevent drive-by downloads and protect enterprise credentials. What’s required are security solutions which minimise the exploitation of application vulnerabilities, block information theft, prevent attackers from gaining remote control over employee endpoints, and ensure that enterprise users do not submit and expose their credentials on phishing sites.
Dana Tamir is director of enterprise security at Trusteer.
Image: Flickr (infocux Technologies)
|
<urn:uuid:ef4b9764-b2f1-4e7b-bfca-598d7c51f4f6>
|
CC-MAIN-2022-40
|
https://www.itproportal.com/2013/07/25/why-fbi-warning-world-about-spear-phishing-attacks/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00050.warc.gz
|
en
| 0.915444 | 713 | 2.71875 | 3 |
Along a stretch of beach heavily marred by a crude oil spill, workers in hard hats and white protective suits use wire brushes and putty knives to scrape the black liquid off cobblestones and cliff faces.
The painstaking task at Refugio State Beach marks a new front in the cleanup after an underground pipeline leaked last month and released up to 101,000 gallons of oil, about 21,000 gallons of which flowed into a storm drain, sullied the beach and washed out to sea. Because the region is home to threatened shorebirds and cultural resources, a decision was made early on to clean oil-stained beaches the old-fashioned way by using hand tools instead of heavy equipment or chemicals.
The environmental toll from the largest coastal spill in California in 25 years is still being tallied. Progress has been made in corralling the slick in the ocean and removing flecks of oil on sandy beaches.
Scrubbing rocks by hand will take time, however. "It's a very labor-intensive process, but that's where we're at now," Carl Childs of the National Oceanic and Atmospheric Administration, one of several agencies involved in the cleanup, said recently.
There's no timetable for when the cleanup will end. The effort so far has cost at least $65 million, which is being paid for by Texas-based Plains All American Pipeline. A heavily corroded section of Plains' 10-mile pipeline that moves oil from offshore rigs to inland refineries ruptured on May 19, causing two state beaches to close and prompting a fishing ban. One of the beaches, El Capitan State Beach, is set to reopen next week.
The spill blackened a section of the Santa Barbara County coastline that was also fouled during the 1969 offshore oil-platform blowout that spewed an estimated three million gallons of crude, killing thousands of birds and other animals.
Cleanup techniques have evolved since the 1969 disaster that helped usher in a new era of conservation. Back then, crews used straw to soak up oily sand. That's no longer done because straw is hard to pick up and removing too much sand can harm a beach.
In the latest spill, workers shoveled tar balls and contaminated sand into plastic bags that were then carried away for disposal. They had to be careful not to disturb populations of western snowy plovers that were in the middle of their breeding season.
"We're more concerned about the impact of the cleanup doing more injury than the oil did originally," said Kim McCleneghan of the state Department of Fish and Wildlife, who responded to both spills.
About 91 percent of 97 miles of coastline — mostly sandy beaches — surveyed by teams of experts from various federal and state agencies has been given the all-clear.
Significant work remains to clean oil-covered cobblestones and boulders dotting beaches. Workers scrape rocks until an oil stain remains. That can't be scrubbed off and must wear away naturally with time, McCleneghan said.
Once the expert teams are satisfied with the cleanliness of a beach or stone, monitoring continues to make sure there's no setback.
Santa Barbara County is home to one of the largest naturally occurring oil seeps in the world where thousands of gallons of oil ooze from cracks in the seafloor every day.
Frequently, the sticky substance ends up on the soles of swimmers and surfers. The natural seeps generally flow at a low rate unlike significant accidental spills, which release large volumes of oil at once. Oil fingerprinting can distinguish natural seeps from spills.
Crews are removing oil no matter if it came from the pipeline or if it occurred naturally, said Wade Bryant, senior environmental scientist at CK Associates, an environmental consulting group.
The cause of the spill remains under investigation. Earlier this week, Plains completed flushing the idled pipeline, a process that should allow for a more precise calculation of how much oil escaped.
Compared with the 1969 spill, the environmental damage in the latest episode has been much less severe. Nearly 300 dead birds and marine mammals have been recovered so far. Dozens more were rescued and are rehabilitating.
Michael Ziccardi, who heads the Oiled Wildlife Care Network at the University of California, Davis, said he's puzzled by the seemingly heavy toll on marine mammals, especially sea lions. Necropsies should help scientists determine if they died from the spill or other causes, he said.
Scientists hired by the state have spent several weeks documenting the effects of the spill on the environment and ecosystem. A preliminary analysis isn't expected for several months.
"It doesn't look like it's going to be catastrophic. But that doesn't mean there's not going to be damage," said Pete Raimondi of the University of California, Santa Cruz, who is studying the potential effects on mussels, abalone and sea stars in tide pools.
|
<urn:uuid:c4a3452c-4516-476f-a457-34e1445a7e98>
|
CC-MAIN-2022-40
|
https://www.mbtmag.com/global/news/13214098/workers-clean-up-oil-spill-on-california-beaches-by-hand
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00050.warc.gz
|
en
| 0.962606 | 1,000 | 2.953125 | 3 |
Do you at least occasionally use public Wi-Fi networks? The ones available in the subway and airports, cafes and parks, and often totally free? It’s very convenient, but potentially dangerous.
Home (and especially corporate) wireless networks generally protect data by encrypting it. Suppose an attacker tries to intercept information that you receive (for example, emails) or send (for example, account passwords or bank card details). Thanks to encryption, all they will get is a useless unreadable set of symbols!
But public Wi-Fi often does away with encryption. So any cybercriminal can spy on what you send and receive simply by connecting to the same network.
Another attacker might set up a fake access point with a tempting name like Free_Wi_Fi, and quietly harvest all the data of users who connect to it.
The situation is made worse by the fact that smartphones and tablets most often connect to available networks automatically. This is handy, but means that the device can connect to an insecure fake network set up by cybercriminals to steal your data.
The dangers of free Wi-Fi in trains and other public places are well-known. But few people consider the security of free Internet access in a rented apartment or hotel. With that in mind, attackers sometimes target hotel networks in the hope of profiting from guests’ data, while the owner of a rented apartment could take an unhealthy interest in other people’s online secrets.
How to protect your data from the dangers that lurk around every corner? It’s not that hard to do. Either don’t use insecure networks, or be sure to have reliable encryption. Let’s analyze both options in more detail.
One convenient way to avoid getting mixed up with free networks is to always have your own access point to hand, which these days any modern smartphone can provide. Make sure that when you set up an access point, you leave encryption turned on and create a strong, secure password — then your hotspot can be considered safe. At least no attacker will be scooping up all your data inside your own hotspot — assuming you configured protection correctly. From a security perspective, the access point in your smartphone is no different from the one in your home.
There remains, however, the problem of expensive mobile data, especially when roaming abroad — a smartphone hotspot can be a pricey affair. In that case you can use a virtual private network (VPN) to protect your data. Data in a VPN is encrypted so that you can securely transfer confidential information like card numbers and account passwords even in a public network. To draw a motoring analogy, on the free data highway you will have your own lane fenced off by a crash barrier. Or your own personal tunnel under a public road, through which only you can go.
Using a VPN is as easy as pie: For example, Kaspersky Secure Connection not only lets you enable a VPN with one touch, but also independently checks the networks that you connect to, and reminds you to take security measures in those that might be dangerous. Moreover, the application can be configured to automatically activate the VPN when connected to known public networks or when using certain programs that transfer confidential data.
You’re not at home, you’ve run out of mobile data on your smartphone, and you urgently need to send a work email. What do you do?
|
<urn:uuid:54472232-815f-40e2-8c28-319b37eea4f4>
|
CC-MAIN-2022-40
|
https://education.kaspersky.com/en/lesson/16/page/79
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00250.warc.gz
|
en
| 0.931166 | 700 | 2.625 | 3 |
TCP – Transmission Control Protocol, abbreviated TCP, is one of the core protocols of the TCP/IP suite. Its function is to allow two hosts to first set up a connection over the internet and then exchange data in the form of a stream. TCP’s reliability comes from its guarantee of data delivery: it ensures that each packet arrives in the same order in which it was sent.
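As a minimal illustration of that connect-then-stream model, the sketch below opens a TCP connection and exchanges a few bytes; the host and port are arbitrary examples:

```python
import socket

# Establish a TCP connection to a web server (the three-way handshake happens here),
# then exchange data as an ordered, reliable byte stream.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = conn.recv(4096)          # TCP delivers the bytes in order
    print(reply.decode(errors="replace"))
```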
What is a LAN? Connect two PCs for the purpose of sharing data and attached peripherals and you have established a LAN (local area network). The number of computers connected within a LAN can vary, up to a limit of several hundred systems.
What is a network? In its simplest form, a computer network can consist of just two computers joined to share available resources – hardware and files – and to communicate with each other. In a broader sense, a network can encompass thousands of computers, because running a large business the traditional way is difficult without one, and a network provides an easy way to stay in touch and cooperate with employees. Computer networking, then, refers to the assemblage of various hardware components and the interconnection of computers over communications media. The key purpose of networking is to allow the sharing of an organization’s resources and information.
The term “network” by and large refers to a computer network that encompasses different types of hardware and computers. The key reason for such a setup is to ease the transport of data and communication. Nowadays, versatile technologies are used in these systems, generally including wired equipment, wireless links and more exotic technologies.
Whenever you think about the major threats that lurk around these networking systems, their security is the first thing that comes to mind. Commonly, two types of measures are significant in preventing intrusions into your system: a personal firewall and encryption. You can reduce your network’s exposure by putting in a firewall, while another well-known approach to securing your computers’ connections is to encrypt traffic near the top of the TCP/IP stack, known as packet-level encryption.
In short, networking provides manifold benefits: file sharing, hardware sharing, computer mobility, internet connection sharing and more. Take advantage of these facilities if you own an institute or organization.
When you open a web page, all sorts of things will need to be done in the background before you get your shiny website on your screen.
We will see now what is happening in the networking system to make that possible.
The TCP/IP protocol suite is what makes sending and receiving most of the data across the Internet possible. But how do data packets know how to find us, and how do we know how to find the IP addresses of the web servers where these pages are stored?
Data may not even take the same route in each direction. It can happen that when we send something — a request to the server — the packets flow along one route, while the server’s response toward our computer takes some other route.
The Internet is the biggest computer networking system. It knows at every moment how to find the best route to any device connected anywhere among all of its nodes.
But how is this data transferred across the wires, fibres and air?
Data is divided into small packets. Every time we send a request towards a server, our request must first be divided into packets, mostly of the same size. Each of those packets needs the IP address of the destination written on it so that it can be routed through the network.
To find out the destination IP address of the server — remember that we type a URL into the browser; usually we are not typing an IP address into it — your computer, before sending out all those packets, will contact a public DNS (domain name system) server, which holds the information about the IP address to which the packets must be forwarded in order to reach your URL-linked page.
Public DNS servers are set up into a hierarchical system that keeps the knowledge of IP addresses for all URLs (domain names) that are registered on the Internet. With this database, DNS is able to translate our request for the web page URL into the IP address of the server on which the web page is stored.
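A minimal sketch of that resolution step, using Python’s standard library (the hostname is an arbitrary example), looks like this:

```python
import socket

# Ask the operating system's resolver (which in turn queries DNS servers)
# to translate a domain name into the IP addresses packets should be sent to.
hostname = "howdoesinternetwork.com"
addresses = {info[4][0] for info in socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)}
print(hostname, "->", addresses)
```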
|
<urn:uuid:c0efaf63-de24-432d-8a57-7dd38f9c00bd>
|
CC-MAIN-2022-40
|
https://howdoesinternetwork.com/tag/internet/page/2
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00250.warc.gz
|
en
| 0.951971 | 924 | 3.125 | 3 |
Florence Nightingale, Pie Charts and the Birth of Business Intelligence
Did you know that the actual pioneer of Business Intelligence was a nurse? Florence Nightingale is best known for revolutionizing the field of nursing and instigating widespread hospital reforms, but her methodology looks a lot like the cutting-edge business strategies of today.
During the Crimean War, the death toll extended far beyond the battlefield. Mortality rates in Army hospitals were rapidly increasing even among soldiers admitted with minor wounds. The medical system desperately needed modernization and reform, but the doctors in Army hospitals were too busy treating the sick and wounded to step back and understand the core problems at hand.
In 1854, British Parliament sent Florence Nightingale along with 38 nurses to Crimean war hospitals to assist in wounded patients’ care. Nightingale came to believe that poor sanitation contributed to the high death rates in Army hospitals, but she needed to prove it.
The process Nightingale began in Army hospitals prefigured present-day Business Intelligence in the way that it accumulated, visualized, and communicated data to radically change outcomes.
1) Collecting and centralizing of data
Nightingale knew that even seemingly insignificant factors could affect the spread of disease among the wounded. To her, no piece of data was too insignificant to contribute to the larger issue of the increasing death rate. She kept meticulous records detailing the size of the rooms where soldiers were housed, the amount of ventilation and sunlight these rooms received, and even the distance between the kitchens where the soldiers’ food was prepared and the sick areas.
She brought the data together so that doctors and statisticians could analyze the problem as a whole and extrapolate actionable insights. Factors formerly ignored became part of problem solvers’ critical lens. Nightingale’s work demonstrated the value of data-informed, data-driven insights gathered at the ground level in making high-level, transformative change.
2) Data visualization reveals the core problem.
Nightingale had to go farther than gathering data to bring change to the hospitals. She had to envision data in a completely new way. Like many present-day businesses, she realized that change-makers may have been fighting the right battle all along, just on the wrong battlefield.
The data brought back from Army war hospitals in Crimea fueled years of study for Nightingale and the statisticians who made up her team. The next step in solving the larger problem required coming to a better understanding of the core issues. To do this, she rejected the long-standing one-dimensional method of conveying data for a deeper, multi-dimensional visualization: pie charts.
By visualizing the data in this way, Nightingale was able to quantify the impact of small changes and effectively answer questions that politicians never thought to ask.
Although Nightingale wasn’t the first person to use data visualization, she was the first to popularize it. She knew that in order to convince stakeholders there was a need to change, she would need to convey her complex findings in an innovative and easy-to-grasp way. She was right.
3) Data collection and data visualization leads to cultural transformation.
The real change required in Victorian England was a radical shift in perception. Nightingale’s data collection and visualization crystallized the problem and forced decision-makers out of their old ways of thinking.
The data revealed the larger issues and convinced policy-makers of the day—not known for quick progress—to make rapid, radical change in the Army hospital systems. Nightingale’s research convinced stakeholders that her theories were more than just theories and that change was critical.
Her recommendations for hospital reform were concrete and specific. Caregivers rallied behind the new standards and guidelines she suggested and the result was a complete transformation in military hospital care.
4) Results: Exceeded expectations.
After a decade of sanitary reforms, the mortality rate of British soldiers in Indian hospitals dropped 75%. Nightingale and her team’s work affected thousands of lives and changed the way that leadership approached seemingly unsolvable problems.
Like Nightingale’s work, Business Intelligence is more than a one-time solution. It’s a radical new way of thinking. Whether you’re running a business or seeking, like Nightingale, to find answers to medical mysteries hidden in data—Business Intelligence can identify the problem, collect data around the processes and systems that support the problem, and create visualizations that reveal the problem’s source. All of this translates into actionable insight that can be used to solve the problem, transform your organization’s culture, and lead to new opportunities and long-term growth.
Andrew Kurtz is the president of Kopis (formerly ProActive Technology), which has been serving Greenville, SC with custom software development solutions since 1999. Andy and his team are experts in business intelligence, custom software development, database administration, mobile app development, and much more.
|
<urn:uuid:6d40bc8b-5446-44cb-bd38-d388b62c78c5>
|
CC-MAIN-2022-40
|
https://kopisusa.com/florence-nightingale-pie-charts-birth-bi/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00250.warc.gz
|
en
| 0.933464 | 1,330 | 3.359375 | 3 |
Technologies have transformed many industries at a rapid pace, and the healthcare industry is no exception. A convergence of digital technologies and healthcare has given people an opportunity to provide quality care. Technology has not only changed the experience for patients, but has also had a significant impact on the medical processes and practices of healthcare practitioners. The latest trends that bring technological innovations to the sector are:
Artificial Intelligence (AI)
There is nothing quite as impressive as Artificial Intelligence (AI), with rapid growth and exciting opportunities surrounding it. AI systems can reduce and mitigate the risks of medical scenarios, adding an extra layer of checking that cuts down errors in the field of health care. The common ways AI will change healthcare include managing medical records and data, treatment design, digital consultation, virtual nursing assistance, medication management, drug creation, precision medicine, health monitoring, and healthcare system analysis.
Augmented Reality is beginning to make its mark in the healthcare sector. Early adopters are reaping the rewards of implementing AR across their operations, and the role of augmented reality in healthcare is forecast to keep increasing. Pharmaceuticals realize the significant benefits of AR implementation, first by educating the consumer through visual simulation: AR allows pharmaceuticals to improve patient education by visualizing complex medicinal products. Second, it aids the physician by providing data and information in front of them without detaching from critical tasks. AR is a technology capable of revolutionizing the efficiency and cost optimization aspects of surgery. It also provides parallel benefits of enhanced surgeon comfort, reduced effort, and potentially lower costs due to AR’s capability to replace operating room display systems.
Abundant health data is amassed from numerous sources, including electronic health records, medical imaging, wearables, and medical devices. Big data analytics, along with the Internet of Things, is revolutionizing the tracking of various user statistics and vitals. It can be a great way to reduce the costs of health organizations. With the capability of predictive analysis, it helps in predicting admission rates and staff allocation. Big data can also help to assist high-risk patients by identifying the patients approaching the hospital repeatedly and identifying their chronic issues, which will help to give them better care and provide an insight into corrective measures.
The blockchain era in healthcare is underway. Blockchain provides a tremendous opportunity to overcome challenges that exist in the healthcare industry, including interoperability, security, integrity, traceability, and universal access. It addresses the problem of syncing patient data while ensuring security and privacy by adopting a distributed framework for handling patient identity. Blockchain can also be used for EHRs to alleviate the data fragmentation issue.
|
<urn:uuid:b3c5ef20-b59a-4f27-bf9a-0e1122d15d06>
|
CC-MAIN-2022-40
|
https://www.cioadvisorapac.com/news/noteworthy-digital-health-technologies-nwid-1362.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00250.warc.gz
|
en
| 0.938442 | 524 | 2.90625 | 3 |
Understanding each Encryption Technique
When we dive deeper into wireless key management and encryption technologies, we find that network protocols play an imperative part in them. As we are dealing with the technologies that are effectively used for encryption on wireless networks, it is essential that they precisely cover trusted access, swift transmission across the points, confidentiality and privacy.
There are different types of protocols that are successfully used in the field of encryption technologies and wireless key management. But here, our emphasis is going to be on three specific protocols and what they can offer against one another. The comparison is going to be made between WEP, TKIP and CCMP, and the specializations that they carry when they play against each other on specific parameters.
WEP is short for Wired Equivalent Privacy and it is one of the protocols that are not commonly used these days.
TKIP is the abbreviation for Temporal Key Integrity Protocol.
The third in the list is CCMP. It is one of the most common technologies employed in WPA2 on a modern-day basis. It stands for Counter Mode with Cipher Block Chaining Message Authentication Code Protocol.
One of the key reasons behind the limited usage of WEP is that some of its cryptographic flaws are not conducive to long-term use from a wireless network standpoint. In contrast to WEP, TKIP has been specially built to rotate the keys around, so that users do not face the same encryption problems that they may run across in the obsolete WEP protocol.
Further, CCMP is used in conjunction with the Advanced Encryption Standard (AES) algorithm. In fact, from a usage perspective, the WEP protocol is mostly non-existent in modern wireless networks, unlike TKIP which is still commonly used. CCMP may be used as well; it is generally referred to when the user is setting up a wireless network in the form of WPA2, with TKIP sometimes shown in parentheses.
When talking about the cryptographic standard, one of the advantages of the TKIP standard over the WEP standard is that it mixes the keys together so that a fresh key is used for each packet. The key made in this protocol is more secure because it changes on a constant basis. CCMP carries a more advanced encryption standard in comparison with both of the other technologies discussed.
When the USP (unique selling point) of these encryption protocols is compared, the root key in the case of TKIP is mixed with the initialization vector in order to achieve a unique level of security. Its sequence counter is one of the most effective tactics to avoid replay attacks. CCMP’s value add is that it has a larger key size and uses a larger block size to conduct the encryption.
In terms of data integrity, WEP offers data confidentiality intended to be equivalent to a traditional wired network, but it can only safeguard your network against average users. TKIP implements a 64-bit message integrity check, which simply denotes that the information transferred cannot be changed somewhere in between the conversation. In the case of CCMP, only certain people are authorized to receive the information, and the information is only shared when the user on the other end of the network is really a genuine one.
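To make the CCMP building block concrete, the sketch below uses AES in CCM mode — the cipher mode CCMP is built on — via the third-party `cryptography` package. This illustrates only the combined confidentiality-plus-integrity idea, not the full 802.11i key hierarchy, and the key and nonce handling here is deliberately simplified:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)   # CCMP uses a 128-bit AES key
aesccm = AESCCM(key)

nonce = os.urandom(13)                      # CCM nonce; 13 bytes, as in 802.11 CCMP
header = b"frame-header"                    # authenticated but not encrypted
payload = b"wireless frame body"

ciphertext = aesccm.encrypt(nonce, payload, header)

# Decryption verifies the integrity tag; tampering with the ciphertext
# or the header raises an InvalidTag exception instead of returning data.
plaintext = aesccm.decrypt(nonce, ciphertext, header)
assert plaintext == payload
```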
Related – Wired vs Wireless
Comparing WEP vs TKIP vs CCMP: Encryption technique
We made the comparison above between WEP, TKIP and CCMP to reach the verdict that for older hardware, WEP and TKIP are the workable options. But if you can upgrade to newer hardware with more computing resources, and data safety is the utmost priority, then CCMP is the choice worth considering!
|
<urn:uuid:bf832f73-ded9-42f3-b037-2f13c0a2b61d>
|
CC-MAIN-2022-40
|
https://networkinterview.com/wep-vs-tkip-vs-ccmp-understanding-what-each-encryption-technique-offers/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00250.warc.gz
|
en
| 0.93449 | 747 | 3.046875 | 3 |
Sentiment analysis involves data mining large quantities of natural language text to determine the subjective viewpoints of masses of users about a given topic. In essence, sentiment analysis monitors huge amount of opinionated content to extract commercially valuable insight.
Sentiment analysis – part of the Big Data market – is often performed on streams of social media content, online reviews, survey responses, blogs, and new items. As such, the challenge of sentiment analysis is statistically quantifying material that typically contains many shades of nuanced human moods.
The Sentiment Analysis market (also known as the Text Analytics Market) is forecast to grow at a torrid CAGR of 24.2 percent from 2018 to 2025. The total market size is predicted to be 18.3 billion in 2025. SA software is a type of Big Data software. Indeed, many Big Data companies offer SA services.
Examples of the questions that users ask of sentiment analysis software include:
- Products: How are consumers responding to the new microwave?
- Movies: How is word of mouth forming the overall view of this film?
- Politics: Is this candidate resonating with the broader electorate?
This is done using sentiment analysis tools (see below), which employ artificial intelligence and deep learning techniques. These tools also employ technologies like computational linguistics and algorithmic text analysis.
At its most basic, sentiment analysis determines the positive-negative polarity of an instance of user-generated text. Does the text essentially approve or disapprove of the given topic? Or does the text offer a neutral assessment of a given topic?
Moving up in complexity, more advanced sentiment analysis offers a range of values across a spectrum of fondness to disdain.
Although the field of sentiment analysis is relatively new, its methods can be roughly grouped into two categories:
- Statistical Techniques: Statistical methods deploy tool sets from AI, deep learning and machine learning. These technologies dig into grammatical relationships, sometimes attempting to differentiate the subject of the opinion from the users offering those opinions.
- Knowledge Techniques: This methodology identifies “signpost” words that tend to be clear identifiers of emotion, ranging from “love” to “hate” to “adore” to “lame.” At its most advanced, knowledge techniques can evaluate words on a spectrum of like/dislike.
Many sentiment analysis tools use a combined, hybrid approach of these two techniques. The goal of this hybrid approach is to mix tools and so create a more nuanced sentiment analysis portrait of the given subject.
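As a toy illustration of the knowledge-technique side (and of where a hybrid system might start), the sketch below scores text against a tiny hand-built lexicon; real tools use far larger lexicons combined with statistical models, and the word weights here are arbitrary examples:

```python
# Minimal lexicon-based polarity scorer (illustrative only).
LEXICON = {
    "love": 2.0, "adore": 2.0, "great": 1.5, "good": 1.0,
    "lame": -1.5, "hate": -2.0, "terrible": -2.0, "disappointing": -1.5,
}

def polarity(text: str) -> str:
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(LEXICON.get(w, 0.0) for w in words)
    if score > 0.5:
        return "positive"
    if score < -0.5:
        return "negative"
    return "neutral"

print(polarity("I love the new microwave, the presets are great"))   # positive
print(polarity("The ending of that film was terrible and lame"))     # negative
```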
How to Choose a Sentiment Analysis Tool
Sentiment analysis tools are variously described as performing opinion extraction, subjectivity analysis, opinion mining or sentiment mining. In either case, choosing the best sentiment analysis tool for your company typically includes considering the following:
- Volume of material: Estimate the amount you want to analyze – if your company is truly hoping for a deep, across-the-market sentiment analysis, you will need a larger, more robust tool.
- Test the software: Perhaps this one goes without saying, but…does the software give an accurate view of the market’s opinion? That is, if you look at the text yourself, does the software agree with human understanding?
- Total features: Sentiment analysis is a complex process. Is this tool comfortable with unstructured data? You will want to know the “F1 score,” which is a measure of a statistical test’s accuracy (a short note on the F1 score follows this list).
- Pricing vs. assumed value: Perhaps this one is hard to fully compute. But look at the price tag on the sentiment analysis software and then ask yourself: Are we going to make many more times that amount (in profit) based on the insight from this app? Is the app that good?
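A short note on the F1 score mentioned above: it is the harmonic mean of a test’s precision and recall,

F1 = 2 · (precision · recall) / (precision + recall)

so a sentiment analysis tool only scores highly when it both catches the sentiment-bearing text it should (recall) and avoids labeling neutral text as opinionated (precision).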
Top Sentiment Analysis Tools
The following sentiment analysis tools take a variety of different approaches, but these are all top solutions.
- IBM Watson Tone Analyzer
- Clarabridge Speech Analytics
- OpenText Sentiment Analysis
- SAP HANA Sentiment Analysis
- SAS Sentiment Analysis Tool
- Basis Technology Rosette API
- Linguamatics I2E
- Expert System’s Cognitive Automation
- Sentiment Analysis Vendors: Comparison Chart
Backed by the major AI infrastructure of IBM Watson, the Tone Analyzer examines online emotions and “tones” in what users post, from Tweets to reviews to random social media.
Tone Analyzer has a particular focus on customer service and support. This may be its greatest strength. Tone Analyzer will report on support conversations to monitor whether phone agents are polite and helpful, and whether they truly answered the questions. Plus: what was the mood of the caller? Were they satisfied?
Tone Analyzer can be built into chatbots. A business can program it to create appropriate “dialogue strategies” so the conversation moves in productive patterns.
If the telephone is a major sales channel for you, then Clarabridge Speech Analytics may be the app for your business. The software captures and processes the speech from calls, processing the data to ascertain customer emotions and/or satisfaction.
The software automatically parses call records so key business insights are retained. To aid the software, Clarabridge analyzed massive amounts of recordings to create a portrait of typical patterns. Then its text and sentiment analytics engine plows through the material, creating metrics about the customers’ state of mind.
Clarabridge uses Natural Language Processing (NLP) and what it calls “Sentiment Scoring” to capture not only the voice, but the emotional context of the customer, from phrasing to voice tones.
The OpenText Sentiment Analysis module – geared for the enterprise – looks at text on a document and sentence level to get the proper context for its analysis of nuance and color. It then records whether a given comment is positive, negative, neutral or mixed. It can be deployed on-premise of (like most of the solutions in this list) via cloud computing.
The vendor also offer OpenText Semantic Navigation, a complete semantic search engine. This allows customers to get a full menu of visualization and analytic widgets, which enables a company to focus on highly specific, targeted use cases. Custom queries can be created, as can interactive visualization and alerts.
OpenText is a solution that handles social media across borders; it has full support for English, Spanish, French, German and Portuguese. As the US market grows increasingly cross-lingual (especially English-Spanish) this is an important characteristic.
SAP HANA is, of course, a large tool that handles many digital tasks. Just one of these is sentiment analysis. A strength of HANA in sentiment analysis is its ability to offer a visual view of the sentiment. The “tag cloud” offers a real time portrait of the many emotion-laden terms employed by users.
The software outputs three options: Analysis, Distinct Values and Raw Data.
The software can be set for a simple positive-negative analysis. Or it can analyze a full range of user sentiments, including Anger, Fear, Positive, Surprise, Anticipation, Joy, Sarcasm, and Negative.
The SAP HANA Sentiment Analysis tools displays a “word cloud” as part of its visual interface.
SAS Sentiment Analysis employs a sophisticated mix of linguistic and metric-based guidelines to determine if a body of text – unstructured data – is positive or negative or “unclassified.”
Users can define taxonomies at a variety of levels, with great flexibility. This is a key feature that can provide significant insight into a document/text/posting. The software can determine mood or affect from the entire document, a single idea container therein, or a given attribute or feature of a single idea. This flexibility offers great nuance into determining the flavor a text.
The software is available in English, or additional languages as application add-ons.
The SAS Sentiment Analysis Studio allows for a hybrid approach to sentiment analysis.
Basis in 2016 unveiled Rosette API, a solution that uses artificial intelligence to decipher natural language. Rosette was first developed to handle social media, complex compliance issues, and search challenges. The Rosette API takes these developments and – offered as a cloud-based tool – now enables document analysis and sentiment decoding.
Rosette includes morphological analysis, which identifies parts of speech, and features lemmatization, which groups inflected forms of a word so they can be data mined as a single concept.
Important for the global marketplace, Rosette includes name matching and name translation, which helps decode across languages and cultures. Its entity extraction capability helps contextualize organizations and peoples, relating them to idiomatic usage.
Among Linguamatics strong points: users can data mine text by asking questions using clear and simple language. Or, for more advanced projects, the system can handle queries requiring complex linguistic analysis.
The response to these questions – based on a deep sentiment analysis of the text – are structured clearly. Users can then place them in a variety of formats, from charts to conceptual maps to HTML tables.
A key differentiator is that I2E has a user interface that is accessible to both casual and professional uses. This feature saves significant staff time by allowing less skilled staff members produce results without help from advanced tech staff.
The software is available as on-premise or cloud-based.
Expert System’s Cognitive Automation is geared to understand in a style that reflects typical human understanding. It is automated, providing the speed and machine-like consistency of software with, to an extent, the nuance and color context of a human reader.
Using a process it calls RPA (Robotic Process Automation) helps with this data-heavy analytics work.
Cognitive Automation’s use of natural language processing helps with its data mining of unstructured text. The vendor touts the application as being particularly well suited to banking and insurance text environments. As such, it can be set up for uses like claims automation, which speeds up the typically human-intensive work of claims handling.
| Tool | Differentiator | Notable Feature |
|---|---|---|
| IBM Watson Tone Analyzer | Particular focus on customer service and support | Backed by IBM Watson |
| Clarabridge Speech Analytics | Uses Natural Language Processing | Processes the speech from phone calls |
| OpenText Sentiment Analysis | Handles social media across borders | Complete semantic search engine |
| SAP HANA Sentiment Analysis | Offers a visual view of the sentiment | Three options for output |
| SAS Sentiment Analysis Studio | Sophisticated mix of linguistics | Define taxonomies at many levels |
| Basis Technology Rosette API | Deploys lemmatization | Uses artificial intelligence |
| Linguamatics I2E | Query in simple or complex language | Accessible to both casual and expert users |
| Expert System’s Cognitive Automation | Robotic Process Automation | Suited to banking and insurance |
|
<urn:uuid:2d2352a6-2c42-4e7b-87c0-0e175546f5fa>
|
CC-MAIN-2022-40
|
https://www.datamation.com/big-data/top-sentiment-analysis-tools/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00250.warc.gz
|
en
| 0.886746 | 2,301 | 2.71875 | 3 |
Your data isn't as safe as you think.
Instances of people gaining access to private information — such as email accounts, business documents, or banking information — are steadily on the rise. Using malicious software (“malware”) administered via phishing, installing spyware apps, or hacking unprotected wifi signals are all ways to gain access. Though all of these avenues are open to exploitation, you’re most likely to have your account infiltrated by someone guessing, hacking, or managing to reset your password. For this reason, it is recommended that you employ two-step or two-factor authentication wherever it is available.
Passwords Can Be Flimsy
A complicated, complex password is a significant first step on the road to cybersecurity, but it’s not always enough to keep your data safe. Cybercriminals have ways of stealing or guessing password information that can be used to gain access to your account. Whether you left a locked database open at work over the weekend, left sensitive notes where the public can see them, or have a weak password, it’s possible for someone to gain access to your sensitive data. Some systems even allow a password to be reset using expired password information. Lucky for you, many systems are now including a second way to prove your identity — two-step or two-factor authentication.
What is Secondary Authentication?
Two-step/factor authentication relies on the idea that it is unlikely that a cyber threat would have access to more than one of your devices at a time. Secondary authentication gives you a chance to vouch for your own identity from your phone, email address, or another locked account that only you should be able to access. What this looks like in action would be an email account requiring a password as well as a unique code that the service texts your phone. Though a hacker may have figured out your email password, it’s improbable that they have access to your fully unlocked mobile device.
What is the Difference Between Two-Step Authentication vs. Two-Factor Authentication?
Many online queries, websites, forums, and articles will use the terms “two-step authentication” and “two-factor authentication” interchangeably, though there is a subtle difference. In the past, two-step authentication purely meant that authenticating details were requested through a second step, even if the details themselves were identical. More recently, the expression “two-factor” authentication is used because it implies that two different kinds of details are required for a person to gain access to a restricted area. An example of this would be the email account that requires both your password as well as a unique, disposable code sent to your mobile device. The password would be one factor, and the code would be the second factor in such an authentication process.
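The one-time codes used as a second factor are commonly generated with the time-based one-time password (TOTP) algorithm. The sketch below is a bare-bones TOTP generator in Python; the shared secret shown is a made-up example, and real services provision their own secrets when you enroll a device:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                 # 30-second time step
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Example with a hypothetical base32 shared secret:
print(totp("JBSWY3DPEHPK3PXP"))
```

Because both the server and your device derive the same code from the shared secret and the current time, the code proves possession of the enrolled device without ever sending the secret itself.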
Where Can I Use Two-Factor Authentication?
As cybersecurity for mobile devices has increased over the years, so too has the availability of two-factor authentication. Looking through the security settings of most social media accounts, email accounts, and databases will likely reveal the option for two-factor authentication.
Some Popular Online Destinations with Two-Step Authentication
This is by no means a complete list, but it does provide users with knowledge that the service exists.
Email Clients
- Facebook Messenger
Social Media Sites
File Storage Services
|
<urn:uuid:0e6973b7-b3d9-4bcd-b172-af3ec7f7defe>
|
CC-MAIN-2022-40
|
https://www.jdyoung.com/resource-center/posts/view/243/what-is-two-factortwo-step-authentication
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00450.warc.gz
|
en
| 0.927801 | 776 | 3.171875 | 3 |
What Is Unstructured Data?
Unstructured data is information that is not stored according to a predefined data model or schema, such as a relational database management system, or even non-relational databases such as NoSQL. The vast majority of data in the world is unstructured, encompassing text, rich media, video, images, audio, sensor data from Internet of Things (IoT) devices, and more. Unstructured data can be created by humans or machines, and is challenging to store or analyze using traditional data management strategies.
Why Is Unstructured Data Important?
Data is increasingly recognized as the most important asset that businesses possess. Yet few organizations have been able to reap full value from the immense volumes of unstructured data — estimated by analysts to be 80 percent of all data they generate or otherwise acquire during the course of doing business. Managing unstructured data at scale using conventional file services approaches with network attached storage (NAS) devices has proven difficult and costly because of data replication, physical limitations and governance challenges.
Organizations can extract tremendous value from unstructured data with the right tools. For example, businesses could mine social media posts for data that reflects satisfaction with their brands. Clinicians at hospitals could share a common — and massive — repository of genomic sequences for research purposes.
But how and where to store all this unstructured data, as files or objects, has continued to challenge businesses. Traditional NAS infrastructure helps with performance but it is costly and doesn’t scale out. Next-generation, scale-out NAS is available, but not yet widely deployed. Software-defined object storage is beginning to be deployed but most enterprise workloads weren’t designed to use object storage. Adoption has been slow and difficult. Enterprises need a more scalable and efficient way to manage unstructured data.
What Is an Example of Unstructured Data?
Examples of unstructured data include the following:
- An invoice that comes into your finance department for processing that is of a unique (non-standard) design
- A waiter’s handwritten customer orders that a restaurant chain is attempting to tally up for food inventory purposes
- A photo displayed on your webpage to show what an item for sale looks like
- A barcode that lets your cashier check out items for customers
- An x-ray that a doctor can analyze to treat a patient
- An email sent to you by a colleague
- An office memo written in a word-processing document
- A presentation deck containing both text and images
What Are Unstructured Data Sources?
Sources of unstructured data include the following:
Text files – Virtually every office file you’re used to handling is a source of unstructured data. This includes word processing documents, presentations, and PDFs — anything that doesn’t have a pre-defined format.
Rich media files – Audio and video files do not fit into a structured data model. Neither do digital photographs. Each of these file types can come in its own format, making it even more difficult to analyze it.
Email – Some aspects of email are considered semi-structured (the “to” and “from” and “subject” lines, for example) but mostly emails are the source of unstructured text.
Social media – Social media is also a source of unstructured data, although like email, some of it can be considered semi-structured.
IoT data – Device sensors generate an extraordinarily large volume of log files that are unstructured and difficult to analyze in conventional ways.
What Is Unstructured Data Used For?
Unstructured data is used within every function in business. Finance (invoices). Marketing (photos). IT (IoT data). Sales (emails with customers). Customer service (social media).
Although it’s changing rapidly, at this point, much of the unstructured data that is collected and stored is processed manually, if at all. For example, email is mostly processed by a human reading it, extracting what is important (sometimes by copying and pasting into another email, or into an application), and taking action based on its contents.
But with advancing AI technologies such as machine learning, machine vision, and natural language processing, more of this unstructured information can be harnessed and analyzed automatically, driving faster business insight.
What Is Structured vs Unstructured Data?
Structured data is data that is stored in a fixed place within a file or record. It’s typically stored in a relational database management system (RDBMS) but can also be found in NoSQL databases, for example. Structured data can be text, dates, or numbers.
Unstructured data has not been defined or stored in a predefined way. Although most commonly unstructured data consists of text, it can also feature numbers, images, and audio.
How do You Classify Unstructured Data?
Data classification is the process of analyzing data and categorizing it into buckets, typically based on metadata (data about data) such as type of file, contents, or date.
By classifying unstructured data by, for example, how sensitive it is, you can better perform unstructured data management that complies with your governance policies by deciding where the data should be stored, and who should have access to it.
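As a rough illustration of metadata-driven classification, the short Python sketch below sorts files into coarse categories and sensitivity buckets using nothing but file-system metadata and a filename keyword check. The extensions, keywords, and bucket names are made-up examples rather than part of any product mentioned here.

import os

# Hypothetical mapping from file extension to a coarse data category
CATEGORY_BY_EXTENSION = {
    ".docx": "document", ".pdf": "document",
    ".jpg": "image", ".png": "image",
    ".mp4": "video", ".log": "machine-data",
}

# Hypothetical keywords that flag a file as sensitive
SENSITIVE_KEYWORDS = ("payroll", "patient", "contract")

def classify(path):
    name = os.path.basename(path).lower()
    category = CATEGORY_BY_EXTENSION.get(os.path.splitext(name)[1], "other")
    stats = os.stat(path)  # file-system metadata: size and modification time
    sensitive = any(word in name for word in SENSITIVE_KEYWORDS)
    return {
        "file": name,
        "category": category,
        "bytes": stats.st_size,
        "modified": stats.st_mtime,
        "sensitivity": "restricted" if sensitive else "general",
    }

A production pipeline would also look inside the files (for example with the OCR and NLP techniques discussed later) and feed the results into storage-tiering and access-control decisions.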
Are Files Unstructured Data?
Files can be either structured or unstructured data. Common examples of structured data are spreadsheet or SQL database files. Other files, like word processing documents, presentations, and emails are unstructured. Some files — like invoice templates that display the exact same information in the exact same way every time the template is used — are called semi-structured, because there’s a way of getting the information out of them without AI or machine-learning models. So it’s not a question of whether the data is in a file or not; the question is whether within that file the data is stored in a predefined format.
What Are the Characteristics of Unstructured Data?
Unstructured data is information that either does not have a predefined data model or is not organized in a predefined manner. That means that it:
- Isn’t stored according to a data model
- Doesn’t have any discernible structure
- Doesn’t have a pattern to it
- Can’t be stored as rows and columns
How Much Data Is Unstructured?
Approximately 80% of all data is unstructured, and that percentage grows higher every year.
How Is Unstructured Data Processed?
There are a number of techniques that you can use to process unstructured data. Here are some of the most widely used:
Metadata analysis – This “data about data” is critical to analyzing unstructured data. For example, a blog post (unstructured text) has metadata consisting of title, author, URL, publishing date, any descriptive tags or keywords, and even perhaps a category name — there are no metadata standards, so each business defines its own.
Image analysis – Images contain unstructured data that can be very valuable to extract for business, financial, medical, and scientific reasons. New AI-based systems can analyze an unstructured image and match it to a known image with similar characteristics. For example, optical character recognition (OCR) technology extracts text from image files by matching shapes in the image to characters in a language.
Natural language processing (NLP) – This is a subset of AI/ML that aids in analyzing unstructured textual data. NLP uses a number of techniques to process and extract meaning and make sense of unstructured text, such as grammar and semantics.
Data visualization – When teams choose to visualize data, they present it in a graphical form to allow viewers to understand and analyze it simply by looking at it.
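To make the text-oriented techniques above a little more concrete, here is a minimal Python sketch that pulls a crude keyword profile out of a block of unstructured text using only the standard library. Real systems would use dedicated NLP libraries and trained models; the stopword list and sample sentence are invented for the example.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for", "as"}

def keyword_profile(text, top_n=5):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(top_n)

sample = "The invoice references the same contract as the earlier invoice."
print(keyword_profile(sample))  # [('invoice', 2), ('references', 1), ...]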
A Modern Approach to Managing Files and Objects
Cohesity’s software-defined, hyperscale platform simplifies data management by consolidating backups as well as unstructured data in the form of files and objects from multiple application workloads on a single platform. The platform is architected on Cohesity SpanFS, a unique globally distributed file system that supports a variety of protocols, including NFS, SMB, and S3 object storage.
With Cohesity, your organization can protect existing NAS investments — in fact optimize them — by only using that storage for higher performance data while offloading infrequently accessed unstructured data to Cohesity SmartFiles. A modern approach to files and objects management, SmartFiles eliminates legacy hardware forklift upgrades and costly and time-consuming manual infrastructure updates while guaranteeing all of your unstructured data is protected wherever it resides — in the data center, the cloud, or at the edge.
Cohesity SmartFiles also features:
- Unlimited scaling in a pay-as-you-grow model
- Global deduplication and compression
- Global actionable search on all file and object metadata
- User and file system quotas with audit logs
- Small file optimization
- Integration with Cohesity Marketplace apps for increased data visibility, cyberthreat resilience, and analytics
- Lower TCO for unstructured data management
|
<urn:uuid:fa657e19-f754-4e45-8438-12db27d91b0a>
|
CC-MAIN-2022-40
|
https://www.cohesity.com/glossary/unstructured-data/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00450.warc.gz
|
en
| 0.917301 | 1,939 | 3.34375 | 3 |
Cross-site request forgery
It is hard to tell when the first CSRF attacks occurred, but some can be traced back as far as 2001. Even so, CSRF exploits did not become well documented until around 2007. Some of the notable companies targeted with CSRF attacks include, but are in no way limited to, Korean eBay, a prominent Mexican bank, YouTube, Netflix, and the computer security company McAfee.
What can you do to avoid such exploits? Teach your developers how to prevent cross-site request forgery with the help of Avatao’s secure coding training platform!
What is Cross-site request forgery (CSRF)?
Cross-Site Request Forgery is a well-known vulnerability in the IT security world. In a nutshell, an attacker tricks the victim into performing actions on a website that the user never intended to perform. In other words, the attacker sends unauthorized commands to the targeted webpage in the name of the victim.
While “Cross-Site” may sound familiar from the Cross-Site Scripting vulnerability, we should not confuse the two. CSRF attacks exploit the trust that a webpage has in a victim’s browser, whereas XSS exploits the trust the victim has for the webpage.
What are the possible consequences of CSRF?
An experienced attacker can easily make the victim execute an unintended action. These actions can be things like changing passwords, e-mail addresses, and usernames, or even more serious actions such as deleting profiles or transferring large amounts of money. In the worst case scenario, the attacker can gain access to a victim’s account and use it like his own.
CSRF in practice
The attacker crafts a web request to a website that the victim has privileged access to. The request can be crafted to include URL parameters, cookies, and other data that appear normal to the web server processing it. Because the victim’s browser automatically attaches the session cookie, the server treats the forged request as legitimate.
The three main conditions needed for a CSRF attack are as follows:
- Utilizing cookie sessions: Nowadays, most applications rely solely on session cookies to identify the user who made a request. Because the browser attaches those cookies automatically, a forged request is validated just like a genuine one, which is exactly what a CSRF attack exploits.
- Simple request parameters: If the request contains parameters with complex, unpredictable values, the attack becomes highly infeasible. From the attacker’s point of view, the easiest target is a request with no parameters at all, or only predictable ones.
- Significant action: There are a variety of reasons to execute this type of attack. The main motivations are often transferring money or getting access to a user’s data. If the information has a high value and is worth the effort to obtain it, it is reasonable to be wary of CSRF attacks.
Preventing Cross-site request forgery
What can users do to prevent CSRF?
Signing out of accounts and avoiding the “Remember me” option can reduce the risk of CSRF attacks. Additionally, users should treat links in emails and any other insecure links with great care.
What can developers do to prevent CSRF?
- Use a specific CSRF token in every request from the user (see the sketch after this list)
- Request authentication data from the user
- Limit the Cookie Sessions lifetime
- Check the HTTP Referer and HTTP Origin headers
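As a minimal sketch of the token-based defense listed first above, the snippet below generates a per-session CSRF token and validates it on each state-changing request using only the Python standard library. Session storage and form rendering are omitted, and in practice most teams should lean on the CSRF protection built into their web framework rather than rolling their own.

import hmac
import secrets

def issue_csrf_token(session):
    # Create an unpredictable token and remember it server-side in the session
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token  # embed this value in a hidden form field or custom header

def is_valid_csrf_token(session, submitted_token):
    expected = session.get("csrf_token", "")
    # Constant-time comparison avoids leaking information through timing
    return bool(expected) and hmac.compare_digest(expected, submitted_token or "")

# Usage sketch: reject the request unless the token round-trips correctly
session = {}
form_token = issue_csrf_token(session)
assert is_valid_csrf_token(session, form_token)
assert not is_valid_csrf_token(session, "forged-value")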
Get started with secure coding training today
Reach out to our team and find out how we can help your company scale secure coding training efficiently.
Copyright © 2022 Avatao
|
<urn:uuid:59c73b05-1203-4d31-8afc-c06f1c2e08e7>
|
CC-MAIN-2022-40
|
https://avatao.com/cross-site-request-forgery/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00450.warc.gz
|
en
| 0.916479 | 714 | 3.09375 | 3 |
Cybersecurity has two great resources that work well together -- experienced security ops personnel and learning machines. Machines can work at the speed of electrons and process enormous quantities of data, but they are challenged when dealing with unforeseen scenarios. Human judgment and experience cannot be replicated by machines, but humans struggle to find patterns in massive data sets and they operate in minutes, not microseconds. For us to be truly effective as an industry, we need to deliver solutions that combine human and machine working together to fend off cyberattacks that can multiply and adapt in microseconds.
We are facing a significant labor market shortage in cybersecurity, both in numbers and experience. At the same time, there are traditional fears about automation and machine intelligence. One is that people will be replaced by machines, and another is that the machines will create enormous messes by compounding poor decisions. In this case, we are talking about using the machines to amplify the effectiveness of security operations and incident-response teams. Technology is not replacing people, but in the spirit of the best teams, each is working to its strengths.
One example of this is computers and chess players. In 1997, an IBM supercomputer beat a human chess grandmaster for the first time. Chess has a large quantity of data and a lot of patterns, which plays well into the strengths of the machines. However, in 2005 a couple of amateur chess players augmented with three PCs beat a whole range of supercomputers and grandmasters. The human/machine team was better than either alone.
In cybersecurity, we are gathering vast amounts of data, and there is an assumption that with increased visibility, enough data, and the right algorithms we will be able to predict threats. However, cyberattacks are not deterministic, as they contain at the core a human who can be innovative or random in his approach, and visibility does not give you insight into your adversary. Algorithms and analytics on their own cannot comprehend the strategic nature of the adversarial game that is being played against the cybersecurity bad actors.
So technology will not be replacing security professionals anytime soon, but it does bring tremendous advantages to the defense. Shared threat intelligence helps prevent attacks from being used over and over again, or from propagating rapidly throughout your network. You need a learning machine to detect and contain attacks at the speed of light, while humans work to mitigate the problem and develop long-term solutions.
With the increasing number of targeted attacks that are executed only once, threat intelligence might not help. The same is true of zero-day exploits or new attack types. The machines won’t have rules to deal with this, but they can help filter the alerts and correlate actions to raise the alarm to their human colleagues sooner than a human acting alone.
The machine revolution is coming, but not the way Hollywood movies portray it. Machines are coming to be the best teammate you could ask for.
|
<urn:uuid:0981faa1-1f95-4e49-9c21-927bcbdfaea6>
|
CC-MAIN-2022-40
|
https://www.darkreading.com/intel/the-machines-are-coming-the-machines-are-coming-
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00450.warc.gz
|
en
| 0.959294 | 585 | 2.75 | 3 |
A VLAN (virtual LAN) is a group of devices, possibly spread across more than one physical LAN segment, that is configured to communicate as if all the devices were attached to the same wire, even though they are actually located on different LAN segments. Because VLANs are based on logical rather than physical connections, they are highly flexible. A VLAN defines a broadcast domain for a Layer 2 network: the set of all devices that will receive broadcast frames sent by any device in that set.
Configuring VLAN in Database Mode
When the switch is in VTP server or transparent mode, you can configure VLANs in VLAN database mode. VLANs configured this way are stored in the vlan.dat file rather than in the running-config or startup-config files. To display the VLAN configuration, enter the show running-config vlan command. VLAN IDs range from 1 to 4094, but VLAN database mode supports only normal-range VLANs: IDs 1 to 1001 are configurable, IDs 1002 to 1005 are reserved, and extended-range VLANs (1006 and above) cannot be created there. To create a VLAN, enter the vlan command with an unused ID; to verify whether a specific ID is in use, enter show vlan id followed by the ID. To modify an existing VLAN, enter the vlan command with that VLAN’s existing ID.
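As an illustration, the following IOS commands create and verify a normal-range VLAN; the VLAN ID and name are arbitrary examples, and exact syntax can vary slightly between platforms and software versions.

Switch# configure terminal
Switch(config)# vlan 10
Switch(config-vlan)# name SALES
Switch(config-vlan)# end
Switch# show vlan id 10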
An access port carries traffic to and from only the particular VLAN assigned to it. Unlike a trunk port, an access port does not add identifying tags to frames, because the intended VLAN is already known. An access port normally has a single VLAN configured on the interface and carries traffic for that one VLAN only. If no VLAN is configured for an access port, the interface carries traffic for the default VLAN, which is usually VLAN 1. Ethernet interfaces can be configured as either access ports or trunk ports, but a port cannot operate as both at the same time. Access ports are the most common kind of link on a VLAN-capable switch: network hosts connect to access ports to gain access to the local network.
For these access ports to work as intended, they should be configured as host ports. When a port is configured as a host port, it is automatically set to access mode and channel grouping is disabled on it. Only end stations should be configured as host ports; attempting to configure other kinds of ports as host ports produces an error message. When an access port receives a packet whose 802.1Q tag does not match its access VLAN, it drops the packet without even learning its source MAC address. When an access port is assigned to a private VLAN, all access ports associated with that access VLAN also receive the broadcast traffic destined for the primary VLAN of the private VLAN.
You can change an access port’s VLAN membership by assigning it to a new VLAN. The VLAN must be created before it can be assigned to an access port. If the access VLAN on an access port is changed to a VLAN that has not yet been created, the system shuts down that access port.
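A typical access-port configuration looks like the following sketch; the interface and VLAN numbers are placeholders. On platforms that support it, the switchport host macro command achieves a similar result by setting access mode, enabling PortFast, and disabling channel grouping in one step.

Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
Switch(config-if)# exit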
Trunking is a point-to-point connection between two Ethernet switches, or between a switch and another network device such as a router. Fast Ethernet and Gigabit Ethernet trunks carry the traffic of multiple VLANs over a single link, allowing you to extend VLANs across the entire network. (A basic trunk configuration sketch follows the encapsulation list below.)
There are two kinds of trunking encapsulations on Ethernet interfaces:
- ISL – Inter-Switch Link (Cisco proprietary, now legacy)
- 802.1Q – the IEEE standard encapsulation
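A basic 802.1Q trunk configuration on an IOS switch might look like the sketch below; the interface, allowed VLAN list, and native VLAN are placeholders, and the encapsulation command is only needed on platforms that support both ISL and 802.1Q.

Switch(config)# interface GigabitEthernet0/1
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk allowed vlan 10,20,30
Switch(config-if)# switchport trunk native vlan 99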
VTP is designed to ensure that every switch in a VTP domain is aware of all the available VLANs. However, in some cases VTP creates unwanted traffic: unknown unicasts and broadcasts in a VLAN are flooded over the entire VLAN, so every switch in the network receives every broadcast, even switches with few or no users connected to that VLAN. VTP pruning was designed to eliminate this unwanted traffic.
Unrecognized Type-Length-Value (TLV) support – VTP version 2 forwards VTP advertisements that contain TLVs it cannot understand or parse instead of dropping them, and it stores them in vlan.dat when the switch is in VTP server mode. This feature is useful when not all devices in the domain are at the same release version.
VTPv1 and VTPv2 otherwise have mostly similar features. There is usually no reason to enable VTPv2 unless Token Ring VLANs exist in the campus network.
VTPv3 offers the following features over the previous VTP versions:
- Support for extended-range VLANs (IDs 1006 to 4094). However, once VTPv3 is configured with extended VLANs, you cannot revert to VTPv1 or VTPv2.
- A foundation for advertising more detailed VLAN configuration information.
- Better password security, with hidden and secret password options.
- Protection against unintended automatic database synchronization when new switches are introduced. In VTPv3, only a designated device, known as the primary server, is allowed to update the other switches.
- The ability to propagate databases other than the VLAN database, such as an MST mapping table.
The normal VLAN range is 1 to 1005. If the switch is in VTP server or VTP transparent mode, you can add, modify, or remove configurations for VLANs 2 to 1001 in the VLAN database. VLAN 1 and VLANs 1002 to 1005 are created automatically, are permanent, and cannot be removed.
In VTP versions 1 and 2, the switch must be in VTP transparent mode when you create extended-range VLANs (1006 to 4094). If the switch is not in transparent mode, the extended-range VLAN is not created.
You can configure the following parameters when you create a new normal-range VLAN or modify an existing VLAN in the VLAN database.
- VLAN name
- VLAN ID
- VLAN state (suspended or active)
- MTU for VLAN
- Security Association identifier
- Parent VLAN
- Ring number
- Token Ring
- TrCRF VLAN
- TrBRF VLAN
With VTP versions 1 and 2, the switch must be in transparent mode to create extended-range VLANs, which use IDs 1006 to 4094; with VTP version 3, extended-range VLANs are supported in both server and transparent mode. Extended-range VLANs allow service providers to extend their infrastructure to a greater number of customers. Extended-range VLAN IDs are permitted in any switchport command that accepts a VLAN ID.
With VTP versions 1 and 2, extended-range VLAN configuration is not saved in the VLAN database; because the switch is in VTP transparent mode, it is saved in the running configuration file, and you can copy it to the startup configuration with the copy running-config startup-config privileged EXEC command. With VTP version 3, extended-range VLANs are saved in the VLAN database.
Extended VLANs use VLAN IDs from 1006 to 4094. You can create or delete extended VLANs by using the CLI in config-vlan submode. Extended-range VLANs are created with the default VLAN type appropriate for the device. The configurable VLAN parameters are limited to items such as MTU size, RSPAN, and private VLAN settings; the other extended VLAN parameters use their default values.
To properly carry and distinguish trunk-port traffic belonging to multiple VLANs, the device uses the 802.1Q encapsulation method. Rather than re-encapsulating the entire frame, this method inserts a tag into the frame header. The tag identifies the particular VLAN that the frame belongs to. This tagging allows frames for multiple VLANs to pass through the same port while keeping the traffic of each VLAN separate, and it allows a VLAN to be carried end to end across the network over trunk links.
To add security to traffic passing through an 802.1Q trunk port, the vlan dot1q tag native global configuration command was introduced. This feature provides a way to ensure the safety of packets passing through an 802.1Q trunk port, and it prevents untagged packets from being accepted on the trunk.
Without this feature, tagged frames received on an 802.1Q trunk port are accepted as long as they fall within the allowed VLAN list and keep their tags, while untagged frames are tagged with the native VLAN ID before further processing. Only frames whose VLAN tags are within the allowed range for that dot1q port are received. If a frame’s VLAN tag matches the native VLAN of the port, the tag is stripped off and the frame is sent on untagged.
When a switch port is configured as a trunk, it tags frames with the appropriate VLAN number. By default, frames of VLAN 1 belong to the native VLAN and are passed across the trunk untagged. The IEEE committee that defined 802.1Q decided that, for backward compatibility, it was worth supporting a native VLAN. In short, the native VLAN is used for any untagged traffic received on an 802.1Q port. This is desirable because it allows 802.1Q ports to communicate directly with older 802.3 ports by sending and receiving untagged traffic. In other cases, however, it can be a disadvantage, because packets associated with the native VLAN lose their tags and, with them, their means of identification and classification. For this reason, reliance on the native VLAN should generally be avoided; there are only a few cases where it is really needed. The native VLAN can be changed to any VLAN other than VLAN 1.
Excessive, unwanted traffic within the network is one of the major issues associated with Layer 2 architectures. Manual pruning is implemented on switches to trim VLANs from trunks toward switches that have no hosts in those VLANs, so that they are not flooded unnecessarily. It is also important to know that although pruning prevents some unwanted traffic from circulating across the network, it does not simplify the spanning-tree topology. By default, a trunk port allows all VLANs to pass through the trunk.
VTP pruning is a global command that affects all switches in the VTP domain, and it only needs to be configured on a single switch. By default, all VLANs are pruning-eligible, meaning that any VLAN can be pruned. To exclude specific VLANs from the pruning mechanism, use the clear vtp pruneeligible command (or, on IOS trunks, adjust the list with switchport trunk pruning vlan). Manual pruning requires configuring the switch to filter specific VLANs on the trunk; with VTP pruning, the trunks dynamically prune and allow VLANs based on VTP join messages. Usually, manual pruning is configured on trunks that do not lead to any hosts in the filtered VLANs. Pruning also has an impact on the spanning-tree topology.
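In practice, enabling VTP pruning and manually restricting a trunk can look like the following sketch; the VLAN numbers and interface are examples only.

Switch(config)# vtp pruning
Switch(config)# interface GigabitEthernet0/1
Switch(config-if)# switchport trunk pruning vlan 10,20
Switch(config-if)# switchport trunk allowed vlan 10,20,99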
|
<urn:uuid:ccf974c6-ffd3-4d9a-ad52-cbea3347c584>
|
CC-MAIN-2022-40
|
https://www.examcollection.com/certification-training/ccnp-configure-and-verify-vlans-and-trunking.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00450.warc.gz
|
en
| 0.898225 | 2,457 | 3.171875 | 3 |
(This is the first installment of a two-part series on innovation, featuring Greg Satell, author of Mapping Innovation and Cascades: How to Create a Movement that Drives Transformational Change, which will be published by McGraw-Hill in April 2019. You can learn more about Greg on his website, GregSatell.com, and follow him on Twitter @DigitalTonto. Read part 2 here.)
Digital is doomed.
So says innovation adviser and best-selling author Greg Satell.
Satell says that we're entering a new technology revolution where it will be much harder to prototype and iterate innovation. In the past, our ability to cram more and more transistors onto a microchip, what's called Moore's Law, allowed us to make technology exponentially more powerful year after year.
But Satell argues that Moore's Law is showing signs of slowing down.
There's speculation that we're already bumping up against the limit on how many transistors we can cram onto a tiny silicon chip. And big tech companies like Amazon and Google are already designing custom chips, because they can't wait for the next generation of silicon chips to run their artificial intelligence algorithms.
So, is it time to ditch digital and prepare for a new era of neuromorphic processors?
Not in the short run.
But in the long run, the amazing efficiency of neuromorphic chips (potentially thousands of times or even millions of times more efficient than conventional silicon chips) could revolutionize edge computing with adaptable AI processing. There's also excitement around the astonishing computational capabilities of quantum computing, which can solve complex problems unbelievably faster than traditional computing ever could.
Satell says that commercial applications for these technologies may be ten to 15 years away. But here's the bottom line. Today, the innovation spotlight is on speed and agility. But in the future, we won't be able to rapidly prototype innovation such as a quantum computer, a cure for cancer, or an undiscovered material.
And slowing down to take stock of the ethical considerations of emerging technologies will be more important than ever.
"We've spent the last few decades learning how to move fast," says Satell. "Over the next few decades we're going to have to relearn how to go slow again."
"The digital age has been about agility and disruption. But it's time to think less about hackathons and more about tackling grand challenges."
In this thought-provoking, two-part interview, Satell debunks popular innovation myths, breaks down life after digital, and offers:
Hope you enjoy the conversation.
Appian: What motivated you to write "Mapping Innovation"?
Satell: You know, it was really just my personal experience. Well, my personal frustration running businesses. Because there's so much pressure to innovate, and so many ideas on how to do it.
When you look for guidance, you see really good ideas from people with really strong track records. But they often contradict each other.
So, I wasn't finding a good answer to the basic strategic question every leader must answer. Which is "What should I do." You'd see something like Design Thinking, which is a good example. Obviously super successful. Steve Jobs swore by it. Stanford University created an entire school behind it. You start with the customer. You figure out their needs. You work your way back. Rapidly prototype, and it makes a lot of sense.
Appian: Yes, but some innovation experts, like Clayton Christensen, argue that that approach is the wrong way to go about innovation.
Satell: That's right. You read Clayton Christensen's book: "Innovator's Dilemma". And he says that that's how you go out of business by listening too much to your customers, when the basis of competition changes. So, which is it? It doesn't seem like both of those things can be true.
And then you have the concepts of "open innovation," and "Lean Startups" and on and on and on. It can be incredibly confusing. So, I set out on this 10-year journey to figure it (innovation) out. And that's why I wrote "Mapping Innovation".
Appian: So, what are the biggest takeaways from the book, as you think about the innovation challenges faced by today's C-Suite execs?
The biggest takeaway is that there's no one true path to innovation. Innovation is basically about solving problems. So, the first thing you have to do is figure out what kind of problem you're trying to solve. Because different strategies work well with different problems.
And the way a lot of organizations get stuck is that they say: "This is how we innovate. This is our innovation DNA or whatever." And they do that, because that's what Steve Jobs did, or that's what Elon Musk does. Or that's what worked for us the last time."
That approach can work well for a while until you hit a problem that doesn't fit. And then you can end up just spinning your wheels. So, you really have to focus on classifying the problem first. Then, you identify the solution, rather than the other way around.
Appian: So, based on your years of research, what do the most successful companies get right about innovation that other companies get wrong?
Satell: That's one of the questions I had in my mind while I was writing my book: What do successful innovators (company leaders) do differently? I've been talking to and writing about great innovators for years. And I've studied hundreds of companies. And there never seemed to be a common denominator.
Some of them are very introverted. Some are extroverted. Some organizations are very fly-by-the-seat-of-your-pants. Others are very conservative. IBM is among the best innovators ever. They've been on the cutting edge of technology for over 100 years and they're a very conservative company too.
Nothing obvious stood out you know, that one thing that successful innovators do differently than anybody else? But, in the course of researching the book. I finally realized what it was. The one thing that they all do consistently and this is the thing that allows the best companies to innovate decade after decade, versus somebody who came up with one good idea, but couldn't follow up on it.
What makes the difference is that successful innovators focus on solving problems.
Appian: So, it's not some special brainstorming strategy or unique organizational culture that defines an innovative organization?
Satell: It's about how they (successful innovators) search for problems.
So, what great innovators do, is that they're always looking for new problems to solve. And what innovative organizations do, is that they have a systematic and disciplined process for identifying new problems.
Experian with DataLabs. They have a unit that just goes to customers and finds out what problems they're trying to solve, or what's giving them heartburn.
When they find one (a problem that needs to be solved) that has potential, they take it back to DataLabs, and they have a team of world class data scientist there. And they can usually come back with a prototype solution in 90 days. If the customer likes what they see, the prototype goes back to an operational unit in Experian where it's scaled up as a real business.
Over the last five years, Experian has launched more than a dozen new businesses that way. Extremely successful. Then, you look at IBM and they have a very similar idea. They call it Grand Challenges. Deep Blue was one. Then Blue Gene, then Watson. They didn't have any idea about how they were going to make money with Watson.
But they knew that it would open up numerous possibilities. They're also doing the same thing now with Project Debater (the first AI system designed to debate humans on complex topics).
Appian: So, that's a different approach than Experian. It's taking a longer view of innovation, in terms of when it pays off.
Satell: IBM isn't looking for problems that they can solve in 90 days. They're looking for problems that may take them years and sometimes decades to solve. For example, they've been working on quantum computing for 30 years. And Google takes a similar approach to innovation.
Appian: Is that what you mean when you talk about systematic exploration in the innovation journey?
The disruption of GE is the opposite of that (systematic exploration). I mean GE hasn't invented anything innovative since CT scanners in the 1970s. And you're talking about a very capable company that makes serious investments in R&D. But what they don't do is explore.
Appian: But how do you get a big, traditional, legacy type company to adopt that kind of mindset, a willingness to explore?
Look, it's a basic equation, right? If you don't explore you won't invent. And if you don't discover, you won't invent. And then you're eventually going to get disrupted.
Appian: What are some of the exploration strategies that get the best results?
Satell: Some companies get closer to groundbreaking innovation by sponsoring academic post-doctoral research at universities. One of the most successful innovation programs at Google is one that nobody ever talks about. In fact, I've never heard of this program outside of Google. I'm talking about their practice of bringing in top-flight academics to work at Google for a year.
That's how Google Brain started. They bring in about 50 people, including high-flyers like (top AI experts) Andrew Ng and Geoffrey Hinton. But the point is that Google brings in about 50 people a year working on all kinds of things. Think about that, a company the size of Google bringing in 50 extra salaries. That's not a significant investment.
So, it's not about spending lots of money on investment in innovation. It's about realizing that exploration is important and having the will to do it.
Appian: Which is a great segue to what I want to talk about next. Let's look at the flip side of what you were just talking about. Help us debunk some of the misconceptions about innovation. You've addressed the misconception that innovation takes lots of money. But what are some of the other myths about what it takes to be a successful innovator?
Satell: The biggest misconception, which is something that I've already touched on, is that innovation is about ideas. Innovation is not about ideas, it's about identifying problems. If you identify a meaningful problem, the ideas will come. So, that's the first thing. I've spoken to all kinds of innovative people and companies. And they were all focused on solving a problem, not on an idea. That's the first thing. The second misconception is that you need to have a leader like Steve Jobs.
Appian: So, what's the problem with the Steve Jobs approach?
You don't need someone who's spouting off ideas all the time, breaking all the china. That's actually the last person you want.
People like Steve Jobs are good at going off and starting their own companies. But you don't want them working in yours. When Steve Jobs worked for other people, he was a disaster.
Appian: So, what's a better alternative?
Satell: The alternative is an innovative culture that's a collaborative culture. So, you want people who are good listeners. People who can work effectively with other people. One of the most interesting things that I found through my research is that many of most innovative people that I've talked to, including world-class scientists, are among the nicest people I've ever met. But that's what great innovators are like.
Great innovators aren't necessarily smarter or more talented than anybody else. They don't know everything, but they know somebody who knows.
And because they build up these fantastic knowledge networks, they can be information brokers, which makes it possible for them to come across that random insight or piece of information they need to solve a difficult problem.
And the way you do that is by building a strong network of connections. And the way you do that is by being generous with your time and expertise. So generosity can be a competitive advantage.
Appian: Which gets at the myth of the lone genius or mad scientist working alone in the secret lab.
Satell: Most people think that great innovators are secretive and innovate over there in a corner by themselves. But that's not the case.
(Tune in next week for the final installment of our two-part series on the future of innovation.)
Appian is the unified platform for change. We accelerate customers’ businesses by discovering, designing, and automating their most important processes. The Appian Low-Code Platform combines the key capabilities needed to get work done faster, Process Mining + Workflow + Automation, in a unified low-code platform. Appian is open, enterprise-grade, and trusted by industry leaders.
|
<urn:uuid:af07d29e-711e-4347-b777-a5b0759bc75b>
|
CC-MAIN-2022-40
|
https://appian.com/blog/2019/beyond-digital-doom-exploring-the-future-of-innovation-part-1-.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00450.warc.gz
|
en
| 0.966185 | 2,770 | 2.578125 | 3 |
The Intelligence Advanced Research Projects Activity (IARPA) is looking to develop smart radio techniques for detecting anomalies and securing data transmissions.
In a request for information (RFI), IARPA said it was planning to hold a virtual Proposers’ Day on August 20 to introduce the Securing Compartmented Information with Smart Radio Systems, or SCISRS program, which is meant to “develop smart radio techniques to automatically detect and characterize RF [radio frequency] anomalies in complex RF environments.”
“Highly customized solutions based on specific hardware are not of interest,” the RFI said. “It is an objective of the SCISRS program that methods be adaptable to a wide variety of RF collection hardware and that they be capable of detecting and characterizing various kinds of Signals in environments cluttered with sources of noise and interference.”
According to the RFI, the SCISRS program will move forward in three phases:
- Focus on RF baseline characterization and detection of low probability of intercept anomalies;
- Focus on altered and mimicked signal anomalies; and
- Focus on unintended emissions.
IARPA detailed that the SCISRS program will hold a program kick-off workshop in the first month of the program and hold similar workshops annually.
|
<urn:uuid:bf6b7cba-865e-44e1-a4ec-d2cee244be8f>
|
CC-MAIN-2022-40
|
https://origin.meritalk.com/articles/iarpa-seeks-smart-radio-tech-to-detect-anomalies/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00650.warc.gz
|
en
| 0.905993 | 265 | 2.515625 | 3 |
The United States of America didn’t just appear overnight. There was a long road that led us to the great nation we are today.

One of the milestones on that road was the Constitution of the United States. This concise document outlines exactly how the nation is supposed to function and is the basis for all our laws.

It’s a sad fact that few American citizens have read or understand what is in the Constitution. Fortunately, it isn’t hard to correct that.

The National Archives has a complete transcript of the Constitution that you can read online. In addition to the transcript, you can view high-resolution images of the original document and learn more about its creation and ratification.

I encourage you to take a few minutes and read this important document. It isn’t just a historical artifact; it still very much applies to the decisions affecting our lives today.
|
<urn:uuid:5ad0cb93-94ae-4dec-b7c4-29c0d7bc0399>
|
CC-MAIN-2022-40
|
https://www.komando.com/money-review/constitution-of-the-united-states/7474/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00650.warc.gz
|
en
| 0.956127 | 227 | 2.734375 | 3 |
What is IPv4? It routes most of today’s internet traffic
The short answer to the question, “What is IPv4?”, is that it’s the fourth version of the internet protocol. IP, which stands for internet protocol, is the internet’s principal set of rules for communications.
In place for more than 35 years, the U.S. Department of Defense first deployed it on its ARPANET (Advanced Research Projects Agency Network) in 1983.
Internet protocol version 4, IPv4, is also at a crossroads: its global IP address supply is exhausted. The internet is undergoing a gradual transition to the next version, IPv6, but not without challenges.
In this glossary entry, we’ll explore the basic components of the internet and how they work together, examine the fourth internet protocol version and its modern-day shortcomings, and touch on its IPv6 successor.
Before IPv4, a little more on how the internet works
More details on IP
IP is part of an internet protocol suite, which also includes the transmission control protocol. Together, these two are known as TCP/IP. The internet protocol suite governs rules for packetizing, addressing, transmitting, routing, and receiving data over networks.
IP addressing is a logical means of assigning addresses to devices on a network. Each device connected to the internet requires a unique IP address.
Most networks that handle internet traffic are packet-switched. Small units of data, called packets, are routed through a network. A source host, like your computer, delivers these IP packets to a destination host, such as a server, based on IP addresses in packet headers. Packet-switching allows many users on a network to share the same data path.
An IP address has two parts: one part identifies the host, such as a computer or other device, and the other part identifies the network it belongs to. TCP/IP uses a subnet mask to separate them.
IP sits at Layer 3, the network layer, in the OSI model. The model divides communication across computer networks into seven abstract layers that each perform a distinct function in network communication. Layer 3 is where routing occurs between different networks.
How DNS fits in the picture
DNS, or domain name system, is the phone book of the internet. It translates domain names that we easily remember, like bluecatnetworks.com, into IP addresses like 126.96.36.199, which are the language of the internet.
DNS allows computers, servers, and other networked devices, each with their unique IP addresses, to talk to each other. And it gets users to the website they’re looking for.
Now, exactly what is IPv4?
IP (version 4) addresses are 32-bit integers that can be expressed in hexadecimal notation. The more common format, known as dotted quad or dotted decimal, is x.x.x.x, where each x can be any value between 0 and 255. For example, 192.0.2.146 is a valid IPv4 address.
IPv4 still routes most of today’s internet traffic. A 32-bit address space limits the number of unique hosts to 2^32, which is nearly 4.3 billion IPv4 addresses for the world to use (4,294,967,296, to be exact).
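To make the 32-bit structure concrete, this short sketch uses Python's standard ipaddress module to show a dotted-decimal address as an integer and to apply a subnet mask; the address and prefix length are arbitrary examples.

import ipaddress

addr = ipaddress.ip_address("192.0.2.146")
net = ipaddress.ip_network("192.0.2.0/24")

print(int(addr))          # the same address as a single 32-bit integer
print(addr in net)        # True: the address falls inside this network
print(net.netmask)        # 255.255.255.0, the subnet mask for a /24
print(net.num_addresses)  # 256 addresses in this network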
Today, we’ve run out
Think about it: How many connected devices are in your household?
The median American household has five devices, including smartphones, computers and laptops, tablets, and streaming media devices. That doesn’t even include the range of devices that fall under the internet of things (IoT) category, such as connected thermostats, smart speakers, and doorbell cameras.
So, in today’s world of ultra-connected computer networks, where every stationary and mobile device now has an IP address, it turns out that 4.3 billion of them isn’t nearly enough.
In 2011, the Internet Assigned Numbers Authority (IANA), the global coordinator of IP addressing, ran out of free IPv4 address space to allocate to regional registries. IANA then recovered additional unused IPv4 address blocks from the regional registries and created a recovered address pool. In 2014, IANA announced that it was redistributing the last addresses in the recovered address pool.
When it’s tapped, there will be no more IPv4 addresses left.
Besides running out of address space, the IPv4 addressing system has some additional downsides:
About 18 million addresses were set aside for private addressing, drawn from ranges defined in RFC 1918. Most organizations use private addresses on internal networks. However, devices on these local networks have no direct path to the public internet.
To access the public internet, devices with private addresses require a complex and resource-intensive workaround called network address translation (NAT).
Furthermore, North America got the lion’s share of IPv4 address allocations. As a result, entities in Asia-Pacific and elsewhere, where internet use has exploded, have purchased large chunks of IP space on the gray market. This has broken up contiguous ranges of IP addresses and made it more complicated to route internet traffic.
To replace IPv4, enter IPv6
To address this problem, the internet is undergoing a gradual transition to IPv6. The latest version of the internet protocol, IPv6 internet addressing, moves from 32 bits to a 128-bit address space, with both letters and numbers in identifiers (for example, 2002:db8::8a3f:362:7897). IPv6 has 2128 uniquely identifying addresses, which is about 340 undecillion or 340 billion billion billion.
This version of IP has some obvious advantages, the primary one being that it’s a lot more space. With IPv6, a single network can have more IPv6 addresses than the entire IPv4 address space.
It seems easy enough, but IPv4 and IPv6 are not directly interoperable. IPv6 is not the easiest protocol to walk into. Understanding IPv4 vs IPv6 is a big undertaking fraught with challenges. And when it comes to transitioning to IPv6 DNS, the BlueCat platform is at the ready to help.
|
<urn:uuid:7951ba2b-8ffb-4f56-a68f-fefd0c03a11e>
|
CC-MAIN-2022-40
|
https://bluecatnetworks.com/glossary/what-is-ipv4/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00650.warc.gz
|
en
| 0.916734 | 1,441 | 3.859375 | 4 |
The main purpose of the course is to give students the ability to add BI techniques to Excel data analysis. The course goes beyond the capabilities of tables and charts and uses Pivot Charts, the Excel Data Model, and Power BI.
Who should attend
This course is intended for students who are experienced with analyzing data with Excel and who wish to add BI techniques. The course will likely be attended by SQL Server report creators who are interested in alternative methods of presenting data.
Before attending this course, students must have:
- Basic knowledge of the Microsoft Windows operating system and its core functionality.
- Working knowledge of relational databases.
- Extensive knowledge of Excel spreadsheets including formulas, charts, filtering, sorting, and sub-totals.
- Basic knowledge of data analytics equivalent to having attended course .
After completing this course, students will be able to:
- Explore and extend a classic Excel dashboard.
- Explore and extend an Excel data model.
- Pre-format and import a .CSV file.
- Import data from a SQL Server database
- Import data from a report.
- Create measures using advanced DAX functions.
- Create data visualizations in Excel.
- Create a Power BI dashboard with Excel.
- Data Analysis in Excel
- The Excel Data Model
- Importing Data from Files
- Importing Data from Databases
- Importing Data from Excel Reports
- Creating and Formatting Measures
- Visualizing Data in Excel
- Using Excel with Power BI
|
<urn:uuid:2c599295-2257-4685-b00c-bc27123a541b>
|
CC-MAIN-2022-40
|
https://www.fastlanemea.com/course/microsoft-20779
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00650.warc.gz
|
en
| 0.794294 | 326 | 2.5625 | 3 |
With data growing so rapidly, becoming a critical component of decision-making, it’s unsurprising that organizations want faster access and better throughput to data stores -- even large datasets. They also want less latency and lower costs, and they don’t want to worry about issues like power interruptions causing delays or access problems.
Enter non-volatile random-access memory, or NVRAM.
What Is Non-volatile Random-access Memory?
Until recently, most related technologies were volatile, including dynamic random-access memory (DRAM). If power is lost, so is the data stored in registers, caches, and memory.
NVRAM, also called storage-class memory or persistent memory, doesn’t lose data when a system loses power. NVRAM is typically very fast (although slightly slower than DRAM) and has high capacities and significant scalability. It’s also less expensive than DRAM and has lower latency.
“Many storage systems require caching and checkpointing for acceleration, which has typically been done through flash,” explained Arthur Sainio, co-chair of the SNIA Persistent Memory and NVDIMM Special Interest Group and a director at SMART Modular Technologies. “By moving caching and checkpointing in persistent memory, you are speeding up the process, increasing throughput and reducing latency.”
Non-volatile random-access memory can also enable a powered-down system to come back up quickly because the system doesn’t have to move saved data from storage back to memory.
What Is NVRAM Used For?
The benefits of persistent memory translate into interesting use cases in the storage arena.
For example, storage devices are often too slow for applications that require data persistence. However, moving applications from a storage device to persistent memory can resolve many issues. Take the case of a time-series database. Typically, it involved using a solid-state drive (SSD) when memory requirements became too great. Persistent memory can reduce the latency, which in turn improves performance.
Another example is a data orchestration service with a tiered cache, providing the option to store cache data in DRAM and SSDs. With the option to cache in persistent memory, performance improves significantly.
In other cases, data structures or applications must reside in memory because they require fast access. But sometimes these applications can run out of memory or become too expensive to run. Using persistent memory can solve the problem by improving performance and lowering costs. For example, an in-memory database that uses persistent memory requires fewer nodes to store the data because each system has a larger pool of memory.
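Applications typically reach persistent memory through memory-mapped files (for example on a DAX-mounted file system), so ordinary loads and stores go straight to the media. The sketch below uses Python's standard mmap module against a regular file purely to illustrate that programming model; the path is hypothetical, and real persistent-memory code would normally use dedicated libraries and explicit flushing primitives.

import mmap
import os

PATH = "/mnt/pmem0/example.dat"  # hypothetical DAX-mounted location
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)           # size the backing file
with mmap.mmap(fd, SIZE) as buf:
    buf[0:5] = b"hello"          # ordinary in-memory writes...
    buf.flush()                  # ...made durable explicitly
os.close(fd)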
Persistent memory also is a key enabler of data portability, which has become more important in recent years, said Ed Fiore, vice president and chief architect for platforms at NetApp.
Persistent memory technologies enable file system metadata to be separated from other types of metadata. “Typically, all of the data and metadata in a file is written to a storage device, but going forward, people are seeing the value of breaking these up to take better advantage of file system metadata,” Fiore explained. “So, you’re not just using the cache as a way to store data quickly but as a way to actually hold metadata. That means you can start searching the data and adding value to it.”
Types of Persistent Memory
One of the most prominent types of persistent memory is non-volatile dual in-line memory module (NVDIMM). Essentially, NVDIMM combines non-volatile NAND flash memory with DRAM. Because it has very low latency, it is commonly used in storage servers for write acceleration and commit logs. And because data has been saved in the NVDIMM, it is very useful for data recovery operations.
An increasingly popular form of persistent memory is magnetic random-access memory (MRAM), which stores data using magnetic charges, unlike static random-access memory (SRAM) and DRAM, which use electrical charges. MRAM results in power savings and smaller chip size.
Over time, MRAM could replace embedded SRAM in NOR Flash because of its lower power requirements, lower cost, and higher density, said Tom Coughlin of Coughlin Associates, a digital storage consulting and analyst firm. MRAM has real potential for storage, especially for caches and buffers. IBM, for example, uses MRAM in some of its flash-based systems and as a cache in its SSDs.
Yet another up-and-coming option is phase change random-access memory (PCRAM), which stores data by changing the state of the material used from amorphous to crystalline and back again. PCRAM is considered less expensive and more scalable than regular flash memory, making it a viable option for replacing parts of the DRAM-based DIMM and some SSDs. PCRAM made a big splash in 2015, when Intel and Micron based their 3D XPoint technology, marketed as Optane, on it.
Two other persistent memory technologies could become more important over time:
- Resistive RAM (ReRAM) creates different levels of electrical resistance using charged atoms instead of electrons to store data. It is particularly useful for high-density applications.
- Ferroelectric RAM (sometimes called FeRAM, F-RAM, or FRAM) shares some similarities with DRAM, but it uses a different type of electric polarization to achieve non-volatility. It isn’t affected by power disruption or magnetic interference, making it particularly reliable. It has faster write performance and much higher maximum read/write endurance than flash. At the same time, it has much lower storage density and storage capacity limitations, and higher cost, than flash.
Conclusion: Non-volatile Memory Market Still Maturing
While non-volatile memory offers many benefits for storage applications, the field is far from mature. In fact, many of the early players like Intel and Micron, which first came to market with proprietary 3D XPoint/Optane technology, now look to be embracing the open CXL (Compute Express Link) standard. The CXL standard connects different kinds of devices, including processors, accelerators, smart NICs, and memory devices. Eventually, CXL may make it possible to have both volatile and non-volatile memory in the same environment, Coughlin said.
While it’s still relatively early for non-volatile random-access memory and storage, the goal is clear. “At the end of the day, what the industry needs in general is a non-volatile storage device as large as it can get and as close to the processor as possible, and we’re getting there,” Fiore said.
It’s not too early to dip a toe into the persistent memory waters, Coughlin added. “It makes sense to look at persistent storage now -- even though there are new architectures still being developed,” he said. “Even a little bit of persistence can be very useful.”
|
<urn:uuid:a580f827-8244-4ddf-802d-f8220826545c>
|
CC-MAIN-2022-40
|
https://www.itprotoday.com/high-speed-storage/promise-non-volatile-random-access-memory-storage?parent=138951&infscr=1
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00650.warc.gz
|
en
| 0.927298 | 1,467 | 3.171875 | 3 |
In this Cisco CCNA tutorial, you’ll learn about Layer 2 of the OSI reference model, which is the Data Link layer. Scroll down for the video and also the text tutorial.
Cisco Local Area Network Layer 2 – Ethernet Video Tutorial
Layer 2 – The Data Link Layer
Layer 2 frames are encoded and decoded into bits ready to put on to the Physical Layer, the physical wire. Error detection and correction for the Physical Layer can be provided here, depending on the protocol that we’re using.
Ethernet is the Layer 2 medium that is used on Local Area Networks. Ethernet is pretty much ubiquitous on the LAN, and that’s the Layer 2 media that we’re going to focus on.
Let’s have a look at some of the different Layer 2 protocols. I’ve got a link going to a page on Wikipedia where there’s a list of network protocols above. Open that up and at the top, it lists there some of the common Layer 1 protocols, like ISDN, DSL. There’s a lot of legacy protocols that are still listed here as well, like ISDN which isn’t used so much these days.
Then, you can see the list of Layer 2 protocols as well. Ethernet is in here, most commonly used on the LAN. There’s a legacy WAN protocol in here, Frame Relay. Other kinds of protocols we’ve got in here are FDDI and the Point-to-Point Protocol (PPP).
OSI Reference Model – Encapsulation
The next thing I wanted to do was just clear up any misunderstanding you may have about the terminology. When a packet is composed, obviously it’s composed by the sender, and it’s going to put it on the wire and send it to the receiver.
As we go down through the OSI model, the sender will start at the top layer, the Application Layer, and it will compose that part of the PDU. Then, Layer 7 will get encapsulated in the Layer 6 header. At that point, it's called the data.
We then encapsulate it in the Layer 5 header and then in the Layer 4 header. We put the Layer 4 header on there, and at that point, it’s called a segment.
Then the Layer 3 header goes on. At that point, it’s called a packet.
Finally, Layer 2, the Data Link Layer, will go on. At that point, it’s called a frame.
However, when the sender is sending traffic to the receiver, it’s not like it sends segments, packets, and frames separately. They’re all part of the same PDU. It’s just terminology that we’re talking about here.
We send that one PDU and when we’re looking at it from the point of view of Layer 4, we call it the segment. From the point of view of Layer 3, it’s a packet, and from the point of view of Layer 2, it’s a frame.
The Ethernet Header
Moving on, let’s look at the Layer 2 Ethernet header. It is Ethernet that we’re using as our Layer 2 protocol. At the start of the header, we’ve got the preamble. That’s used to help the sender and receiver to synchronize.
We then have the Layer 2 destination and source address, that’s the MAC address when we’re using Ethernet. We then have the ethertype, which is used to specify what is encapsulated inside the Ethernet header. That will typically be IPv4.
We then have the data and the FCS. The FCS is the Frame Check Sequence. That’s a cyclical redundancy check, which is used to check for the integrity of the frame to check that it has not been corrupted during transit.
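The FCS is a 32-bit cyclic redundancy check (CRC-32) computed over the frame contents. As a rough, illustrative sketch of the idea (real NICs handle the exact bit ordering and appending of the check value in hardware), the following Python snippet computes a CRC-32 over some example frame bytes and shows how a single flipped bit changes the check value, which is how corruption in transit gets detected:

```python
# Illustrative only: Ethernet's FCS is a CRC-32 over the frame contents.
# Real NICs handle bit ordering and the final steps in hardware, but
# zlib.crc32 is enough to show how corruption is detected.
import zlib

# Destination MAC + source MAC + EtherType (0x0800 = IPv4), then some payload.
frame = bytes.fromhex("001873748d56" "001122334455" "0800") + b"payload bytes"
fcs = zlib.crc32(frame)

corrupted = bytearray(frame)
corrupted[10] ^= 0x01                       # flip one bit "in transit"

print(hex(fcs))                             # check value the sender would append
print(hex(zlib.crc32(bytes(corrupted))))    # receiver computes a different value
```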
The Media Access Control (MAC) Address
Now, let’s look at the Layer 2 Ethernet address, which is the MAC address. MAC stands for Media Access Control. The MAC address is a 48-bit hexadecimal address. If we look back at the Ethernet header, we have 6 bytes, each byte is 8 bits. So, 6 times 8, that’s where we get the 48 bits from.
The MAC address is split into two different halves that make up the MAC address. The first half, the first 24 bits, is the OUI. Which stands for Organizationally Unique Identifier. The identifier is the manufacturer of the Ethernet port.
So if you’ve got a Cisco router or switch, and it’s got an Ethernet port on there, it will have a MAC address and the first half of the MAC address is Cisco’s identifier. If you had a network card, and that network card came from IBM, for example, the first half of the MAC address is going to be IBM’s identifier.
The second half of the address, the last 24 bits, is assigned by the manufacturer. The burned-in MAC address on every NIC port in the world is globally unique. The total number of potential addresses is 2 to the power of 48, which is roughly 281 trillion – an enormous number.
There are a lot of possible MAC addresses, which makes it possible for every Ethernet port in the world to have a unique MAC address.
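To make the OUI split concrete, here is a small illustrative sketch that separates a Cisco-style dotted MAC address (the one that appears later in this lesson) into its two 24-bit halves:

```python
# A 48-bit MAC address split into its two 24-bit halves.
# Cisco gear displays MACs in dotted-hex groups of four digits.
mac = "0018.7374.8d56"

hex_digits = mac.replace(".", "")    # "001873748d56"
oui = hex_digits[:6]                 # first 24 bits: manufacturer's OUI
device_id = hex_digits[6:]           # last 24 bits: assigned by the manufacturer

print(f"OUI (manufacturer): {oui}")          # the OUI of the Cisco interface shown later
print(f"Device-specific part: {device_id}")
```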
An important point that I want to make here is that there is no logical addressing with your MAC addresses. It’s just one big flat address space. We could have a PC with a network card from IBM, so it’s going to have an IBM MAC address on there. That could be in New York.
We could have another one also from IBM in Beijing, and another PC from IBM in London. They’re not grouped together. There’s no kind of a logical order with your MAC addresses, just one big flat address space.
With IP addresses, there is a logical order there. That’s how we, as administrators, are going to control our networking.
I also want you to know how to get information about the MAC address. To get the MAC address on a Windows machine, open up a command prompt and enter the command ipconfig /all.
In Linux, the command will be ifconfig (on newer distributions, ip link show also displays the MAC address).
Finally, let's have a look at a Cisco router or switch. I'm going to open up a Putty session, and I'm going to SSH onto my router. I need to enter enable to get to the enable prompt, and then the command is show interfaces.
This is going to give me a heap of output about all my different interfaces. I could also have just entered one interface if I wanted to, to more target the output. I can see on FastEthernet0/0 that the address is 0018.7374.8d56.
In brackets here, it tells me the BIA. That's the burned-in address, and in this case it's exactly the same value. The reason both values are shown is that it is possible in software to change the MAC address on an interface, but normally we won't do that. Normally, we'll just leave it at the burned-in address from the manufacturer.
Introduction to the OSI Model: https://networklessons.com/cisco/ccna-routing-switching-icnd1-100-105/introduction-to-the-osi-model
Cisco Networking: OSI Model Layer 2 – Data Link: https://www.dummies.com/programming/networking/cisco/cisco-networking-osi-model-layer-2-data-link/
|
<urn:uuid:958b3466-2569-46b6-859a-bb3ecb1892e3>
|
CC-MAIN-2022-40
|
https://www.flackbox.com/cisco-local-area-network-layer-2-ethernet
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00650.warc.gz
|
en
| 0.907 | 1,646 | 3.4375 | 3 |
With today’s technology, device safety is very important. People are resorting to online banking due to it becoming easy and quick. Personal information can be subject to theft. Unfortunately, it’s easier to gain access to your information remotely. That is why your devices should be protected and secure from online attacks. Overall, mobile devices are pretty safe. However, there are some cases of hackers getting through security measures and accessing your information. Since we are in 2020 now, let’s look over some ways to truly protect your devices.
Many people still keep their phones unlocked. Realistically, this isn’t a good idea. People lose their phones all the time, and with that comes the possibility of someone looking through your phone. Since your phone might have banking apps or popular shopping apps that have access to your banking information, you’ll want to make sure your phone is locked fully. You can use a number pin or a password, but even more secure are fingerprints or face scans. Those are the top levels of security you can apply to your device.
Use the Official App Store
Some people may overlook this, but it is important for your device. With the Android Play Store and Apple's App Store, you can be sure your device will be protected most of the time. In some rare cases, apps have been known to contain viruses and malware. That isn't so much the case anymore, since both app stores have clamped down on suspicious applications. There are ways of downloading apps from third parties, but doing so might put your device at risk since you're downloading from an unknown source. So, be careful about installing apps that are not on the official app store.
Use Security Software
Almost everyone has a smartphone in today’s world-and with smartphones-you can download security applications from the app store. Security applications work the same way your anti-virus software does for your computer. It protects your device from unwanted, suspicious activity and can be downloaded easily through the app store.
Updates and Patches
Security updates roll out on mobile devices regularly, adding new safety measures. You can see this happening a lot with both Android and Apple. Many people put off updating, but doing so is important to the overall security of the device. Since new security measures are implemented with each update, installing the latest security update promptly is important for protection.
Scammers and hackers have been known to send text messages that contain infected links. That is why you should be careful about opening random links from people you don't know. Along the same lines, avoid answering any email or text message that seems unusual in any way.
2020 and Beyond
We are in the era of technology. Thieves can access information more easily than one might think. Avoid downloading questionable applications from unknown sources and stick to the official app store. Devices are now remarkably secure with fingerprint and face scanners, so use those to your benefit. Lastly, be wary of suspicious texts or emails, and never click on links within them. If you're cautious while using your device, you shouldn't run into issues, and you'll be able to fully enjoy all the internet has to offer.
|
<urn:uuid:8528207c-764d-455f-a055-2bf90c16239f>
|
CC-MAIN-2022-40
|
https://mytekrescue.com/keeping-your-devices-secure-in-2020/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00050.warc.gz
|
en
| 0.945135 | 688 | 2.515625 | 3 |
Security concerns are rippling across the IT industry. The WannaCry ransomware attack which hit the NHS and data theft incidents like that experienced by Wonga, the payday loan company, leave a sense of unease that traditional security measures are failing to allay.
Data centers have particular challenges to face when it comes to security, because they not only need to construct cyber-defenses, but physical barriers too, to protect and safeguard equipment and sensitive data.
Now there is a new threat lurking that is set to put data center security in the spotlight, and that is the exponential rise in politically motivated, or state-sponsored cyber-attacks.
The rumors about leaked communications from hacked servers that circulated following the US Presidential election continue to surface in news stories, and more recently the new French President, Emmanuel Macron was targeted by a coordinated hacking attack which saw thousands of internal emails and other documents released in an attempt to destabilise the vote.
In this case, Russia was accused of masterminding the attack, but Russia is only one of several countries that are known to be a source of politically motivated cyber-hacks. Up there too are the US, China, Iran, North Korea and Israel.
Migration to cloud services and the increasing use of data centers and colocation facilities has become more and more popular with local and national government departments in recent years for the storage of sensitive data. While these strategies help governments to take advantage of enormous economic and workflow advantages, they also herald a relinquishing of control of their data and they are increasingly dependent on the cloud or data center operator to implement the highest levels of security.
This is now beginning to see a knock-on effect. Recently, the Australian Department of Defence said that it would be taking all of its data out of Global Switch data center facilities when its contract finishes, because of fears that classified data could be at risk following a massive investment in the company by a Chinese consortium.
Meanwhile, Estonia is taking things a step further and has announced that it will back up its most sensitive government data in a ‘data embassy’ in Luxembourg. This move - which heralds the arrival of friendly countries hosting servers in anonymous data centers to house critical data and applications on behalf of other countries – has been initiated because, according to a story in Wired magazine, Estonia has to fend off hundreds of thousands of cyberattacks every day, most of them coming from China and Russia.
But one of the most beleaguered nations of all from a cyber-crime perspective is Ukraine, which has suffered sustained attacks for many years targeted at undermining the entire stability of the country. Wired again reported on this, saying that Ukraine’s president, Petro Poroshenko, reported 6,500 cyberattacks on 36 Ukrainian targets in just two months with investigations pointing to the “direct or indirect involvement of secret services of Russia”. The attacks have been many and various, but two particularly large incidents took out the national grid, leaving Ukrainians without light or water in the middle of winter.
The report said that it appeared the hackers had spread through the power companies’ networks and eventually compromised a VPN used for remote access —including the highly specialized industrial control software that gives operators remote command over equipment like circuit breakers.
Extra vigilance needed
As nation states become more active in the cyber black market, data centers will need to be extra vigilant about their security measures. Hackers employed by nation states are well-funded and well-supported, and their goal is to extract information that can give their country an economic, political or military advantage. As well as exploiting vulnerabilities in networks, systems and servers, they are not beyond targeting the people who work in data centers. Whether unwittingly or for financial gain, employees can be a weak link in the chain, and their access permissions need to be high on the security agenda.
Part of the difficulty of dealing with cyber espionage is knowing who the enemy is. A lack of trust pervades which impedes progress and while we struggle to withstand the onslaught of attacks, our inability to identify the attacker, means that we are forced to take a ‘zero-trust’ approach.
The security strategies adopted by data centers today need to adapt to the current climate. The provision of support that enables companies to take advantage of cloud services or web applications is vital, but if holding and managing data (particularly government data) is the core function of a facility, it becomes imperative to apply the strictest security protocols to peripherals, servers and data center management software.
Solutions are available that provide granular access controls to assets based on trust. This has to be measured across devices, software, users and systems at all times. Connections should be permitted only on the basis of having a deep knowledge of where a connection initiates from and where it is going to, validation of relevant credentials and continuous monitoring to ensure access is restricted only to approved assets.
Shared infrastructure is a feature of the digital age and provides untold advantages to data centers, cloud providers and end users. But it can also expose our systems to a growing group of hackers who are being sponsored for political gain. We don’t know how effective cross-border treaties or agreements will be in rounding-up these cyber criminals; the likelihood is that they will live beyond the law indefinitely. The only approach we can take to keep our data centers and everything they store and manage protected, is by trusting no-one and questioning everything.
Paul Darby is regional manager for EMEA at Vidder, a company specializing in access control software
|
<urn:uuid:53ca8191-f2c4-4b3f-a803-2201f1179275>
|
CC-MAIN-2022-40
|
https://www.datacenterdynamics.com/en/opinions/trust-no-one-protecting-data-centers-against-cyber-espionage/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00050.warc.gz
|
en
| 0.954805 | 1,134 | 2.546875 | 3 |
Where's the Risk?

Voting machines that rely on direct input, using a touch screen, keyboard, etc., do not have a paper trail and are based on traditional embedded operating systems like Windows and Linux. They can be compromised using common hacking tools, and the only way to identify whether they have been breached is to use the modern threat detection tools found in commercial environments. Considering many of these systems are well over 10 years old and contain end-of-life technology, detection of a modern threat may elude many of these tools simply based on vendor maintenance. While large-scale hacking of these systems is unlikely, compromising just enough of them undetected in battleground states could lead to a similar scenario as the 2000 election. Let's take a look at two ways to address potential problems – replacing end-of-life systems and patching vulnerabilities.

1. Upgrade End of Life Backend Technology

Physical electronics and computer software are designed to last from six to 15 years; consumer electronics like cellular phones last even less. When technology is acquired mid-cycle, it loses support and maintenance sooner since half its life expectancy has already been realized. For example, voting systems purchased in 2006 that are based on Microsoft Windows XP technology have been end of life for a few years now and no longer receive ANY security updates. This makes them susceptible to many modern attack vectors if they are connected to the internet or not properly segmented from other systems to limit exposure.

2. Patch Vulnerabilities

All software has vulnerabilities. Their risk, however, determines the potential danger to the tasks they perform. Detecting these vulnerabilities can be done with vulnerability assessment scanners, web application scanners, source code review scanners, and a variety of other tools. Like any other piece of software, they are not perfect and can only detect what they have been coded to find. It is up to the states to assess their voting systems on a regular basis and fix any identified vulnerabilities in a timely manner so they do not become a liability. Considering many of these systems are end of life, there is no good solution to fixing them. In addition, zero-day vulnerabilities are software or configuration flaws that have not been detectable in the past, have no security patch available, and represent a risk until a remediation strategy is published. The tools security teams rely on cannot detect the flaw because they have no method to do so. This type of attack vector, although much rarer than exploiting a known vulnerability, is what movies are made of and espionage theories live on. They present the "what if I could" and in the past have been proven to be possible.

One saving grace for the entire system is that it is dissimilar by state. A national hack of the entire electronic voting system is highly unlikely because of all the different technology used. Detecting a single hack is reliant on each state, and it would take massive changes to the results or voter database to make a significant impact on an election. However, performing regular vulnerability assessments, limiting privileged access to the systems, and replacing outmoded technology is a start to avoid the chaos of a failed election.
Morey J. Haber, Chief Security Officer, BeyondTrust
Morey J. Haber is the Chief Security Officer at BeyondTrust. He has more than 25 years of IT industry experience and has authored three books: Privileged Attack Vectors, Asset Attack Vectors, and Identity Attack Vectors. He is a founding member of the industry group Transparency in Cyber, and in 2020 was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board. Morey currently oversees BeyondTrust security and governance for corporate and cloud based solutions and regularly consults for global periodicals and media. He originally joined BeyondTrust in 2012 as a part of the eEye Digital Security acquisition where he served as a Product Owner and Solutions Engineer since 2004. Prior to eEye, he was Beta Development Manager for Computer Associates, Inc. He began his career as Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook.
|
<urn:uuid:85af66a7-d508-437d-9559-66532ebf98b1>
|
CC-MAIN-2022-40
|
https://www.beyondtrust.com/blog/entry/reducing-electronic-voting-system-risks-two-steps-state-governments-should-take
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00251.warc.gz
|
en
| 0.960956 | 837 | 2.5625 | 3 |
The Difference Between Saving and Investing
The terms "Saving" and "Investing" often are used interchangeably, but there is a difference between both.
- Saving is setting aside money you don’t spend now for emergencies or for future use. It’s the money you want to be able to access quickly, with little or no risk, and with the least amount of taxes. Financial institutions (banks) offer a number of different savings options, such as savings accounts.
- Investing is buying assets such as stocks, bonds, mutual funds or real estate with the expectation that your investment will make money for you. Investments usually are selected to achieve long-term goals. Generally speaking, investments can be further categorized as income investments or growth investments.
Making a choice between either saving or investing will depend on your goal for the money and your risk tolerance.
|
<urn:uuid:9d11c264-68e0-461f-a1b1-00bf076c8ec7>
|
CC-MAIN-2022-40
|
https://www.knowledgepublisher.com/article/1386/the-difference-between-saving-and-investing.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00251.warc.gz
|
en
| 0.95428 | 178 | 2.8125 | 3 |
By: M. Ali
Nakamoto invented blockchain, a system that uses a chain of blocks to store an immutable record. The term "transactions" refers to new entries in the blockchain; each transaction is added to the ledger using a particular consensus algorithm. Each node in the blockchain network maintains a copy of the ledger, and each block includes the hash value of the previous block in the chain, making it practically impossible to change the blockchain's contents.
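As a minimal sketch of that hash-chaining idea (illustrative only; real blockchains add consensus, digital signatures, and Merkle trees on top of this), consider the following:

```python
# Each block stores the hash of the previous block, so altering any earlier
# block invalidates every block that comes after it.
import hashlib
import json

def block_hash(transactions, prev_hash):
    payload = json.dumps({"transactions": transactions, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(transactions, prev_hash):
    return {"transactions": transactions,
            "prev_hash": prev_hash,
            "hash": block_hash(transactions, prev_hash)}

genesis = make_block(["genesis"], prev_hash="0" * 64)
block_1 = make_block(["Alice pays Bob 5"], prev_hash=genesis["hash"])

# Tamper with the genesis block and re-check the chain.
genesis["transactions"] = ["Alice pays Mallory 500"]
recomputed = block_hash(genesis["transactions"], genesis["prev_hash"])
print(recomputed == block_1["prev_hash"])   # False: the chain no longer verifies
```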
Type of blockchain:
- Public blockchain: With this kind of blockchain, anyone may join the network and see the data contained therein.
- Private blockchain: This is a centralized approach to blockchain technology in which a central entity controls the nodes’ behavior.
- Consortium blockchain: Consortium blockchain is a hybrid of public and private blockchains. Only a limited number of organizations are allowed to utilize this blockchain.
Cite this post as:
M. Ali (2020), Blockchain Technology, Insights2Techinfo, pp.1
|
<urn:uuid:4b14e0fa-cf46-41d0-acfc-b6b4502db530>
|
CC-MAIN-2022-40
|
https://insights2techinfo.com/blockchain-technology/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00251.warc.gz
|
en
| 0.844031 | 205 | 3.328125 | 3 |
This section lists the settings found in the Network Properties tab, in the Network view task.
In the Properties tab, you can define the network characteristics and routing information.
- Data transmission capabilities for streaming live video on the network.
- Unicast TCP
- Unicast (one-to-one) communication using TCP protocol is the most common mode of communication. It is supported by all IP networks, but it is also the least efficient method for transmitting video.
- Unicast UDP
- Unicast (one-to-one) communication using UDP protocol. Because UDP is a connectionless protocol, it works better for live video transmission. When the network traffic is busy, UDP is much less likely to cause choppy video than TCP. A network that supports unicast UDP necessarily supports unicast TCP.
- Multicast is the most efficient transmission method for live video. It allows a video stream to be transmitted once over the network to be received by as many destinations as necessary. The gain could be very significant if there are many destinations. A network supporting multicast necessarily supports unicast UDP and unicast TCP.
NOTE: Multicast requires specialized routers and switches. Make sure you confirm this with your IT department before setting the capabilities to multicast.
- IPv4 address prefix
- IPv4 has two display modes. Click to select the preferred display mode.
- Subnet display
- This mode displays the IPv4 subnet mask as four bytes.
- CIDR block display
- The Classless Inter-Domain Routing (CIDR) mode displays the IPv4 subnet mask as a number of bits (see the short example after this list).
- IPv6 address prefix
- Version 6 IP address prefix for your network. Your network must support IPv6 and you must enable the option Use IPv6 on all your servers using Server Admin.
- Public servers
- You only need to specify the proxy server when Network Address Translation (NAT) is used between your configured networks. The proxy server must be a server known to your system and must have a public port and address configured on your firewall.
- Lists the routes between every two networks on your system, and the route capabilities.
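Returning to the two IPv4 display modes described above, the mapping between a dotted subnet mask and a CIDR prefix length can be illustrated with Python's standard ipaddress module. This is purely illustrative; Security Center simply toggles how the same prefix is displayed:

```python
# The same IPv4 prefix expressed both ways: dotted subnet mask vs. CIDR length.
import ipaddress

network = ipaddress.ip_network("10.20.0.0/24")

print(network.netmask)     # 255.255.255.0  (subnet display: four bytes)
print(network.prefixlen)   # 24             (CIDR block display: number of bits)
```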
|
<urn:uuid:b0519ec1-acb5-4da4-99cb-15bb8ee64c00>
|
CC-MAIN-2022-40
|
https://techdocs.genetec.com/r/en-US/Security-Center-Administrator-Guide-5.9/Network-Properties-tab
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00251.warc.gz
|
en
| 0.855095 | 465 | 2.71875 | 3 |
A key block is a cryptographic structure designed to protect cryptographic keys during transport over potentially insecure networks. A team around Mohammed M. Atalla invented the concept of the key block with the Atalla key block, which solved several issues created by the "key variants" that were previously used for transporting keys.
Shortly after its invention, the key block was normalized in ISO and ANSI standards, among others. Several formats appeared over time, all based on the same original logic. Most of these formats respect the TR-31 technical report, which defines how a key block should minimally behave. TR-31 defines the general principles of a key block and is generally regarded as "the reference" for key blocks. It is an interoperable format defined by the American National Standards Institute (ANSI) that supports the secure interchange of cryptographic keys through key attributes included in the exchanged data. In particular, the TR-31 format enables the secure interchange of symmetric keys.
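To make the idea of key attributes travelling with the key more concrete, here is a hedged sketch of how the 16-byte TR-31 key block header is commonly laid out. The header value and the field breakdown below are illustrative assumptions based on the publicly documented TR-31 header structure, not output from any particular HSM or vendor tool:

```python
# Hypothetical 16-byte TR-31 header, parsed by field position.
# The layout follows the publicly documented TR-31 header; the value itself
# is made up for illustration and does not come from a real key block.
header = "B0112P0TE00N0000"

fields = {
    "version":            header[0],      # 'A'/'B'/'C'/'D' key block protection method
    "key_block_length":   header[1:5],    # total length, ASCII digits
    "key_usage":          header[5:7],    # e.g. 'P0' = PIN encryption key
    "algorithm":          header[7],      # e.g. 'T' = TDEA
    "mode_of_use":        header[8],      # e.g. 'E' = encrypt only
    "key_version_number": header[9:11],
    "exportability":      header[11],     # e.g. 'N' = non-exportable
    "optional_blocks":    header[12:14],  # count of optional header blocks
    "reserved":           header[14:16],
}

for name, value in fields.items():
    print(f"{name:>18}: {value}")
```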
The ANSI X9.24 (1&2) norm also provides a background on how key transportation should be done in insecure environments.
Some key block formats, such as the Thales key block, respect the TR-31 norm but are totally proprietary; they will generally only work on Thales hardware. Others, such as the TR-34 key block format, expand and refine the original idea of the key block, adding many more components and security.
Here is a list of the most common key block formats.
TR-31 Key Block Compatible
Atalla Key Block
This is by far the most ancient and the ‘father’ of all the other key blocks. The Atalla key block format contains:
- 8-byte header containing the attributes of the key (header)
- 48-byte key field containing the Triple-DES cipher block chaining (CBC) mode ciphertext of the key (encrypted key field)
- 16-byte field containing the Triple-DES message authentication code (MAC) computed over the header and the encrypted key field
Thales Key Block
These are proprietary key blocks that are used with Thales HSMs and contain four distinct blocks:
- Header (16 bytes)
- Optional header
- Encrypted key data
- MAC (Message Authentication Code)
IBM (CCA) Key Block Format
IBM created this key block format for use with its Common Cryptographic Architecture.
This key format contains 3 to 5 blocks:
- Key encrypted with the CCA master key or the CCA KEK / transport key
- Control vector (public)
- Token validation value used to ensure the integrity of the whole key token
And, optionally and conditionally:
- Master key verification pattern
- RSA modulus and exponent
TR-31 Key Block Partially Compatible
TR-34 Key Block
This is a very sophisticated format, using the TR-31 design but adding more components. The TR-34 implementation is used mainly in retail banking. For instance, numerous POS terminals and ATMs use this method for remote key loading. It is also part of the PCI security norm.
It contains the following 11 blocks:
- Freshness token
- Optional header
- Encrypted KEK
- Key version (ciphered)
- Key ID (ciphered)
- Key (to be transported and encrypted)
- Header (again, ciphered)
- Optional header (again, ciphered)
Other Non-TR-31 Key Blocks
PKCS-8 Ciphered Private Key
This is a key block format that is totally outside the TR-31 design and belongs to the PKCS norm.
The PKCS-8 Ciphered Private Key’s blocks include the following:
- Key encryption algorithm
- Version (ciphered)
- Private key algorithm (ciphered)
- Private key (ciphered)
- Optional attributes (ciphered)
References and Further Reading
- ASC X9 TR 31-2018 - Interoperable Secure Key Exchange Key Block Specification (2018), by the American National Standards Institute (ANSI)
|
<urn:uuid:da842517-38b8-430e-a206-a21a5ba5d427>
|
CC-MAIN-2022-40
|
https://www.cryptomathic.com/news-events/blog/an-overview-of-the-different-key-block-formats
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00251.warc.gz
|
en
| 0.888371 | 860 | 3.28125 | 3 |
Big data architecture is the foundation for big data analytics. Think of big data architecture as an architectural blueprint of a large campus or office building. Architects begin by understanding the goals and objectives of the building project, and the advantages and limitations of different approaches. It’s not an easy task, but it’s perfectly doable with the right planning and tools.
System architects go through a similar process to plan big data architecture. They meet with stakeholders to understand company objectives for its big data, and plan the computing framework with appropriate hardware and software, data sources and formats, analytics tools, data storage decisions, and results consumption.
If you’re in the market for big data tools, see our list of the top big data companies.
Do I Need Big Data Architecture?
Not everyone needs to leverage big data architecture. Single computing tasks rarely top 100GB of data, which does not require a big data architecture. Unless you are analyzing terabytes or petabytes of data – and doing it consistently – look to a scalable server instead of a massively scale-out architecture like Hadoop. If you need analytics, then consider a scalable array that offers native analytics for stored data.
You probably do need big data architecture if any of the following applies to you:
- You want to extract information from extensive networking or web logs.
- You process massive datasets over 100GB in size. Some of these computing tasks run 8 hours or longer.
- You are willing to invest in a big data project, including third-party products to optimize your environment.
- You store large amounts of unstructured data that you need to summarize or transform into a structured format for better analytics.
- You have multiple large data sources to analyze, including structured and unstructured.
- You want to proactively analyze big data for business needs, such as analyzing store sales by season and advertising, applying sentiment analysis to social media posts, or investigating email for suspicious communication patterns – or all the above.
With use cases like these, chances are that your organization will benefit from a big data architecture expressly built for these challenging tasks. Plan for an environment that will capture, store, transform, and communicate this valuable intelligence.
Planning the Big Data Architecture
Big data architecture includes mechanisms for ingesting, protecting, processing, and transforming data into filesystems or database structures. Analytics tools and analyst queries run in the environment to mine intelligence from data, which outputs to a variety of different vehicles.
The architecture has multiple layers. Let’s start by discussing the Big Four logical layers that exist in any big data architecture.
- Big data sources layer: Data sources for big data architecture are all over the map. Data can come through from company servers and sensors, or from third-party data providers. The big data environment can ingest data in batch mode or real-time. A few data source examples include enterprise applications like ERP or CRM, MS Office docs, data warehouses and relational database management systems (RDBMS), databases, mobile devices, sensors, social media, and email.
- Data massaging and storage layer: This layer receives data from the sources. If necessary, it converts unstructured data to a format that analytic tools can understand and stores the data according to its format. The big data architecture might store structured data in a RDBMS, and unstructured data in a specialized file system like Hadoop Distributed File System (HDFS), or a NoSQL database.
- Analysis layer: The analytics layer interacts with stored data to extract business intelligence. Multiple analytics tools operate in the big data environment. Structured data supports mature technologies like sampling, while unstructured data needs more advanced (and newer) specialized analytics toolsets.
- Consumption layer: This layer receives analysis results and presents them to the appropriate output layer. Many types of outputs cover human viewers, applications, and business processes.
In addition to the logical layers, four major processes operate cross-layer in the big data environment: data source connection, governance, systems management, and quality of service (QoS).
- Connecting to data sources: Fast data ingress requires connectors and adapters that can efficiently connect to different storage systems, protocols, and networks; and data formats running the gamut from database records to social media content to sensors.
- Governing big data: Big data architecture includes governance provisions for privacy and security. Organizations can choose to use native compliance tools on analytics storage systems, invest in specialized compliance software for their Hadoop environment, or sign service level security agreements with their cloud Hadoop provider. Compliance policies must operate from the point of ingestion through processing, storage, analysis, and deletion or archive.
- Managing systems: Big data architecture is typically built on large-scale distributed clusters with highly scalable performance and capacity. IT must continually monitor and address system health via central management consoles. If your big data environment is in the cloud, you will still need to spend time and effort to establish and monitor strong service level agreements (SLAs) with your cloud provider.
- Protecting quality of service: QoS is the framework that supports defining data quality, compliance policies, ingestion frequency and sizes, and filtering data. For example, a public cloud provider experimented with QoS-based data storage scheduling in a cloud-based, distributed big data environment. The provider wanted to improve the data massage/storing layer’s availability and response time, so they automatically routed ingested data to predefined virtual clusters based on QoS service levels.
Big data architecture includes myriad different concerns into one all-encompassing plan to make the most of a company’s data mining efforts.
Let’s look at a big data architecture using Hadoop as a popular ecosystem. Hadoop is open source, and several vendors and large cloud providers offer Hadoop systems and support. There are also numerous open source and commercial products that expand Hadoop capabilities.
Hadoop architecture is cluster architecture. Hadoop runs on commodity servers, and recommends dual CPU servers with 4-8 cores each, and at least 48GB of RAM. (Using accelerated analytics technologies like Apache Spark will speed up the environment even more.) Storage must also be highly scalable.
Another option is cloud Hadoop environments where the cloud provider does the infrastructure for you. The cloud might add latency, you’ll be in a shared environment, and you don’t want to be locked-in. But the cloud is an excellent choice for a new Hadoop installation, or when you know that you don’t want to grow your data center racks or IT staff to support on-premise Hadoop.
Loading the Data
Loading data onto the clusters is an ongoing event. Hadoop supports both batched data such as loading in files or records at specific times of the day, and event-driven data such as loading transactional data as the transactions occur. Software tools for loading source data include Apache Sqoop for batch loading and Apache Flume for event-driven data loading.
Your big data environment will also stage the incoming data for processing, including converting data as needed and sending it to the correct storage in the right format. Additional activities include partitioning data and assigning access controls.
Processing the Data
Once the system has ingested, identified, and stored the data it will automatically process it. This is a 2-step process of transforming the data and analyzing it. Transforming the data simply means processing it into analytics-ready formats and/or compressing it.
In Hadoop, this is MapReduce territory. MapReduce is the core component of Hadoop that filters (maps) data among nodes, and aggregates (reduces) data returned in response to a query. MapReduce achieves high performance thanks to parallel operations across massive clusters, and fault-tolerance reassigns data from a failing node. MapReduce works on both structured and unstructured data.
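As a rough illustration of the map-and-reduce idea (independent of Hadoop's actual Java API), here is a tiny single-process word-count sketch:

```python
# Toy word count in the MapReduce style: map each record to (key, 1) pairs,
# group by key, then reduce each group by summing. Hadoop distributes these
# same phases across many nodes; here everything runs in one process.
from collections import defaultdict

records = ["big data needs big clusters", "clusters need big storage"]

# Map phase: emit a (word, 1) pair for every word in every record.
mapped = [(word, 1) for record in records for word in record.split()]

# Shuffle phase: group the pairs by key.
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce phase: sum the counts for each key.
word_counts = {word: sum(counts) for word, counts in grouped.items()}
print(word_counts)   # {'big': 3, 'data': 1, 'needs': 1, ...}
```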
Many analysts and vendors run MR with additional filters, like adding collaborative filtering to MR to identify user preferences in Twitter data. Other analytics products replace it, such as Google’s proprietary Cloud Dataflow.
Output and Querying
One of Hadoop’s shining features is that once data is processed and placed, different analytics tools can operate on the unchanging data set. There is no need to re-process it for different tools, or to copy it to different locations. The same copy of data serves for all queries.
Output covers a variety of destinations, including reports and dashboard visualization for users or next step triggers in business processes.
Micro- and macro-pipelines enable discrete processing steps. Micro-pipelines operate at a step-based level to create sub-processes on granular data. In a typical scenario, one source of data is customer transactional data from the company’s primary data center. The data enters Hadoop so company analysts can investigate customer churn. However, compliance is an issue because the data includes customer credit card numbers. A micro-pipeline adds a granular processing step that cleans credit card numbers from the analyst team’s reports.
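A micro-pipeline step like that credit card scrub can be as simple as a masking filter applied to each record before it reaches the analysts' dataset. The sketch below is a generic illustration; the pattern, field names, and sample record are assumptions rather than part of any specific Hadoop tool:

```python
# Hypothetical micro-pipeline step: mask anything that looks like a
# 13-16 digit card number before the record is written out for analysts.
import re

CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def scrub_card_numbers(record: str) -> str:
    return CARD_PATTERN.sub("[REDACTED]", record)

raw = "cust=1042 amount=59.99 card=4111 1111 1111 1111 store=NY-03"
print(scrub_card_numbers(raw))
# cust=1042 amount=59.99 card=[REDACTED] store=NY-03
```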
Macro-pipelines operate on a workflow level. They define 1) workflow control: what steps enable the workflow, and 2) action: what occurs at each stage to enable proper workflow.
Big Data Architecture: Crucial for Analytics Success
Big data architecture takes ongoing attention and investment. Before you run screaming for the hills, remember that a well-executed big data architecture will do much of this for you behind the scenes. You can offload even more planning and management tasks if you’re working with consultants and service providers.
Despite complexity and cost, big data architecture lets you extract vital business information from your otherwise opaque data for higher profit and lower risk. Done well, these results are more than worth the price of admission.
|
<urn:uuid:746add36-1228-4bc1-893c-b62e67c56abc>
|
CC-MAIN-2022-40
|
https://www.datamation.com/big-data/big-data-architecture/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00251.warc.gz
|
en
| 0.89373 | 2,060 | 2.6875 | 3 |
IP Based Access Control Systems
The term access control (AC) refers to the system of selective restriction of access to certain information or place. With the rapid advancement in technology, this type of security system has become an inextricable part of our quotidian life as they are extensively used in corporate sectors and various other professional areas.
Amongst several categories of access control system that includes mobile based access control, physical access control, etc, IP-based access control is gaining rapid popularity nowadays in the security industry because of its efficiency and other convenient features. This type of access control has also become fairly easy to implement in a certain workplace because of the ubiquity of internet connection and IT-based industries in every part of the world.
Here, a few aspects of this system along with its irrefutable benefits are going to be discussed below.
Introduction to IP Based Access Control Systems:
Basically, an IP based access control system involves an electronic access controller that is specifically designed to control entry to and exit from a restricted area as well as to identify users precisely. A controller of this kind is capable of supporting multiple (up to 4) basic access control readers. These readers are also known as non-intelligent readers, as they are only capable of reading an individual's card number or PIN and sending it to the main server via Ethernet. In most cases, the Wiegand protocol is used for transmitting such data.
Difference between IP Based and Traditional Access Control System:
In terms of building architecture and working procedure, these two access control systems exhibit conspicuous distinction between them. Some of these differences include:
- Unlike traditional access controllers, IP controllers are directly connected to the LAN/WAN and, as a result, they can keep track of all input and output data necessary for controlling entry and exit, along with monitoring door inputs and locks.
- A traditional access control system is incapable of functioning properly without a terminal server, whereas an IP based access control system has its own onboard network interface that allows it to function independently.
Types of IP Access:
IP Access control system can implement more than one type of IP access. They are:
- Embedded IP access: This type of IP access operates on a single site, and the door count is usually low. Embedded IP access stores the important data regarding credentials and PIN numbers on the control panel, which is directly connected to the browser. Embedded IP access control systems are used in myriad business sectors because they are relatively inexpensive and fairly easy to install and operate.
- Server based IP access: Server based IP access is capable of handling multiple sites simultaneously, and the door count is much higher than with embedded IP access. Server based IP access stores all the necessary data on a server connected to multiple control panels.
- Hosted IP access: The hosted IP access system is the most sophisticated and can operate thousands of sites. Hosted IP access consists of a highly secured data centre with redundant backups and audited information security. In this case, the browser links to a web application containing several control panels.
Applications of IP Based Access Control System:
Nowadays, this specific access control system has several applications in corporate sectors and security industry. Some of them are:
- Biometric access control system: Biometric access control is nowadays indispensable for the identification of employees in an office as well as the applicants of a competitive exam. In this system, usually fingerprint is used for identification in high security areas. In corporate sectors, biometric is used to keep track of the employees’ attendance and calculation of payroll whereas in case of applications, biometric is used to prevent malfeasances like forgery or identity theft.
- Proximity access control system: The proximity access control system is nowadays widely used in banks and corporate areas to ensure the safety of the surrounding environment. There are over 50 kinds of proximity access control systems available in the market that can be operated quite easily by implementing an IP based access controller.
- Door access control system: An IP based door opening and closing system is an extremely efficient as well as affordable security system. In this system, the door comes with an electromagnetic lock that can be controlled by an IP based server to allow certain individuals to enter and exit. Because of the simplicity of this system, the installation of door access control is also easier and relatively inexpensive. Nowadays, magnetic door locks with an uninterrupted power supply are also used in airports, homes, offices, and data centers.
Reasons to Use IP Based Access Control Systems:
This particular access system comes with numerous advantages over others. They are:
- Easy installation: One of the principal reasons behind using this system is its simplicity. In this system, the existing network is completely utilised by the IP controllers to ensure the security of a certain restricted area. As a result, installing additional communication lines becomes irrelevant making the installation of this system much simpler and easier. Also, special knowledge for termination, troubleshooting, grounding of RS-485 communication lines is not required for installing an IP based access control system.
- Flexibility: This access control system offers immense flexibility, as there is no limit on the total number of IP based access controllers. You can install multiple controllers within a single system and increase the security of necessary information in a single installation. This flexibility not only saves a significant amount of time but also a considerable amount of money in terms of installation cost.
- Accuracy and speed: An IP based access control system is capable of sending verifiable information to the main control unit and making decisions accordingly very quickly. Also, with all the necessary information stored in the database, this security system works reliably by not allowing any individual into the restricted area without a proper access code. The system is also capable of keeping records in an organised manner so that issues like attendance or proper identity can be easily resolved.
In this competitive world, security is an essential entity to be properly taken care of and with the implementation of new technologies the security industry is also upgrading. Under the circumstances, IP based access control system has become essential for both discretionary and mandatory access control systems with its efficacy and numerous beneficial features.
|
<urn:uuid:2723c802-17dc-4918-b817-c08a91c6f4fe>
|
CC-MAIN-2022-40
|
https://www.getkisi.com/guides/ip-access-control
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00251.warc.gz
|
en
| 0.940806 | 1,253 | 2.953125 | 3 |
Understanding the differences held between viruses and malware has not been of particular nor pressing concern for the average global citizen up until perhaps a few years ago.
However, with the rapid surge in digital and cyber threats becoming rampantly abundant, millions of computer users across the globe began self-educating themselves on the intricacies of at-home cybersecurity to protect their finances, personal information, and much more.
In today’s modern times, keeping abreast of the distinctions between the various concepts in cybersecurity are of paramount importance.
Due to the ever-evolving cyber threats becoming increasingly sophisticated and harder to fight, the heightened risk to the data we hold dear in our lives can become compromised without the proper knowledge and management involved with protecting yourself in today’s cyber world.
Malware is a condensed version of the term malicious software and is an umbrella term referring to several different classes of dangerous cyber threats, including, but not limited to, Trojans, viruses, worms, and bots, as well as adware and spyware.
A slightly more advanced form of malware, ransomware, is used by hackers to commit fraud, extortion, and theft against computer users.
Not only are there a multitude of malware variants in existence, but many of them have sophisticated ways of propagating themselves.
According to Cisco.com, malware is “specifically designed to damage, disrupt, steal, or in general inflict some other “bad” or illegitimate action on data, hosts, or networks.”
As mentioned above, there are several classes of malware. Each class of malware has a distinct method it employs to try to gain entry into your system or network. The types of malware most prevalent in 2018 include viruses, worms, Trojans, bots, ransomware, spyware, and adware.
The chart shown above from McAfee Labs is their annual "Threats Report" compilation, totaling the number of major cybersecurity threat occurrences each year across the globe.
In the span of a year, global malware threats shot up by an astounding two hundred million cases. The Threat Report for 2018 is truly eye-opening as well, showing an increase of well over one hundred million major malware threat attempts.
However, the importance of this graph is clear: malware isn't going anywhere. In fact, it has been climbing at a rate of over one hundred million cases per year and is anticipated by industry analysts to grow to near insurmountable levels in the 2020s.
Much more than just a trend, malware is a force of terrorism, putting digital users of all ages, backgrounds, and cultures at risk every moment of every day.
To put the matter into perspective even further, consider Accenture’s recent statement that malware attacks are among the most costly cyber attacks in existence today, with companies spending upwards of 2.4 million dollars in protective measures.
Computer users that aren’t well seasoned in the cyber security sphere and its associated lexicon often confuse malware as a form of defective software containing errors.
Holding this oversimplified view is immensely detrimental, as unknowing users can become the recipients of successful malware attacks that have now evolved to the extent that they can damage the software, data, and even the physical hardware components of a user’s system.
Successful malware attacks made on businesses pose a much greater spectrum of consequences, including, but not limited to, substantial losses in data, an array of financially-related issues, an operation that has been thoroughly disrupted, and worst of all, the potential for a company’s site to go into “downtime” mode and become aired by major news networks.
Malware attacks may number into the hundreds of millions, but instances of actual breaching are far fewer. According to Varonis.com, between January 1, 2005, and April 18, 2018, there have been 8,854 recorded breaches.
Prior to delving into the subject of viruses, their distinctions, and their relationship to malware, we'd like you to go through a quick review lesson that will help you better incorporate and digest the next part of this article, while making your own comparisons and contrasts in your mind as you read on to the article's conclusion.
Lesson #1: MALWARE is an umbrella term for software designed for malicious intent, including things like VIRUSES, worms, bots, ransomware, backdoors, spyware, adware and more
Lesson #2: VIRUSES are a type of MALWARE, just like ransomware and worms are types of malware.
Lesson #3: Always remember the following: VIRUSES are a type of MALWARE
Viruses, as demonstrated above in the review lesson, are a type of malware. A virus has an infectious component and spreads easily from system to system at lightning speed under the right conditions.
In particular, when virus-infected software is widely shared with others, albeit unintentionally, viruses can begin spanning entire nations or even the globe.
Often compared to parasites, viruses join the other "gnarly insects" in the malware class, right alongside Trojan horses, spyware, adware, and worms – all employing different tactics to gain entry into computer users' systems, cause harm, and steal information.
What are viruses exactly? Viruses can wreak havoc and destruction on computers, a range of digital devices, and even an entire network.
A virus that has successfully infected its target can potentially result in the loss of data, a reduction in computer and business performance, and the compromising of sensitive data pertaining to the company and its clients.
From there, a virus can propagate itself, moving from victim to victim via email attachments, files, documents, and downloads; traveling through the internet and the world at large.
More specifically defined, a computer virus is a form of malicious software that is designed and executed by hackers at “opportune” times (timing can be based on gaining the most resources and data or perhaps even the most international visibility).
Upon execution, a computer virus begins to self-replicate, modifying computer or device programming and inserting its own malicious, self-serving code.
Once replication succeeds and the targeted areas of a system are infected, the computer is officially diagnosed as "infected."
What can viruses result in? Computer viruses today can cost businesses billions of dollars in damages, due in large part to the large assortment of costs involved.
How many viruses are out there? In the year 2000, there were some 40,000 computer viruses in existence. In 2003, that number jumped to just over 103,000.
In 2008, Symantec reported that the number of viruses in existence exceeded a million. Important to note, however, is that only a very small percentage of those viruses are still in circulation and worth reading up on.
What kinds of viruses are there, and which viruses should I know about? You should be aware of some of the more common viruses in existence right now (listed right below) in addition to the infamous viruses that caused great financial harm to countless people around the world. These viruses are listed in the next section’s chart.
| Virus Name | Key Points |
| --- | --- |
| MDMA | Transferred from one MS Word file to another, but only if both files are stored in memory |
| Melissa | Distributed as an email attachment; takes on different functions when MS Outlook is present; disables safeguards |
| Ripper | Shreds and corrupts data from the hard disk, rendering it irretrievable |
| SirCam | Sent via email attachment; capable of deleting files at will, degrading performance, and sending files on its own |
| Concept | Transferred via email attachment; saves files in a template directory instead of their original, intended location |
| Nimda | Multifaceted virus that damages computers substantially, as well as altering many of their settings |
| One_Half | Gradually encrypts the hard disk so that the data can only be read while the virus itself is present |
| CodeRed | Impacts Microsoft IIS servers; enables remote access after hacking; was used in an attack on the White House |
| Virus name | Interesting facts | Related figures |
| --- | --- | --- |
| I Love You | Guinness World Record's entry as the most 'virulent' virus of all time | $15 billion in damages |
| MyDoom | Regarded as the most damaging virus ever released | $38 billion in damages |
| WannaCry | Impacted 100,000 groups in 150 countries and over 400 million devices | $4 billion in damages |
| Slammer | Crashed the internet for 15 minutes, brought down Bank of America's ATM services, disrupted 911 emergency services, and even caused lights to go out in several cities | $1 billion in damages |
Types of Viruses You Don’t Need To Worry About As a Novice or At-Home User
- Multipartite Virus
- File Infector Virus
- Resident Virus
- Direct Action Virus
- Boot Sector Virus
- Polymorphic Virus
What's going on with viruses today? Quite a bit. As the infographics below show, viruses have been going after many sectors, including the banking arena, and hackers are using a variety of attacks to gain entry into sites.
Cybercriminals are becoming increasingly skilled at their trade, using tactics that display astounding levels of sophistication and complexity.
As a result, it has never been more abundantly clear that we need to protect ourselves in every possible from the looming threats of numerous and varying cyber dangers.
Protecting our computers, devices, and networks have now become of paramount importance in these technological times for all the types of people, from businesses, government agencies, and perhaps most importantly, at-home users who have the most to lose, as well as the most incentive to learn the very best protection measures readily available today.
|
<urn:uuid:e39c1dde-38bf-4edd-8eea-7ebaac55fa6d>
|
CC-MAIN-2022-40
|
https://antivirusrankings.com/understanding-malware-vs-virus-for-cyber-security-in-2018
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00451.warc.gz
|
en
| 0.944886 | 2,109 | 3.265625 | 3 |
A Kubernetes (K8s) cluster is a grouping of nodes that run containerized apps in an efficient, automated, distributed, and scalable manner. K8s clusters allow engineers to orchestrate and monitor containers across multiple physical, virtual, and cloud servers. This decouples the containers from the underlying hardware layer and enables agile and robust deployments.
Even after impressive growth and a surge in popularity the past few years, Kubernetes continues to be one of the most popular topics in the world of application delivery. In fact, RedHat's 2021 State of Open Source report found that 85% of IT leaders surveyed indicated "Kubernetes is key" to cloud-native application strategies. Let's take a closer look at Kubernetes clusters, how they work, and how the right tools can help you secure them.
Keeping up with all the terminology in the world of containers can be difficult. Before we go any further, let’s take a minute to answer the “what is a Kubernetes cluster?” question in a bit more detail by reviewing its key components.
At a high level, a cluster consists of a control plane (the API server, etcd key-value store, scheduler, and controller manager) and one or more worker nodes, each running a kubelet, kube-proxy, and a container runtime to host your pods. Combined, these components make up a Kubernetes cluster.
Now that we understand the components of a Kubernetes cluster, we can look at how they work. While the specifics of Kubernetes under the hood can get complex, the basics are easy to conceptualize.
Additionally, a Kubernetes cluster can automatically deploy rolling updates and be configured to scale as needed.
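For example, with kubectl, scaling and rolling updates can be driven from the command line; the deployment name "web" and image tag below are placeholders rather than part of any specific setup:

kubectl scale deployment/web --replicas=5          # scale out to five pod replicas
kubectl set image deployment/web web=web:2.0       # change the image to trigger a rolling update
kubectl rollout status deployment/web              # watch the update progress
kubectl rollout undo deployment/web                # roll back if the new version misbehaves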
When you’re new to K8s, it can be hard to know where to get started. Fortunately, there are several ways to create Kubernetes clusters depending on your desired deployment environment. For example, Azure offers a simple wizard based K8s cluster creation and the AWS platform offers Amazon Elastic Kubernetes Service (EKS) to abstract away the complexity of deployment.
However, if you're looking to learn and tinker with K8s, one of the best ways to get started is with minikube. After installing minikube and kubectl, a simple minikube start from your system's terminal can have you up, running, and ready to begin your K8s journey. minikube is also great for developers and engineers looking to test on their local machines.
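As a rough sketch of that first session (assuming minikube and kubectl are already installed and a container driver such as Docker is available):

minikube start            # create and start a single-node local cluster
kubectl get nodes         # confirm the minikube node reports a Ready status
kubectl get pods -A       # list the system pods running in the new cluster
minikube delete           # tear the sandbox down when you are done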
At this point, the benefits of Kubernetes clusters should start to become clear. At a high level, the benefit is that K8s clusters abstract away the complexity of container orchestration and resource management. Specific benefits of Kubernetes clusters include automated scaling, self-healing workloads, zero-downtime rolling updates, and more efficient use of the underlying infrastructure.
Taken together, these benefits lead to more reliable and scalable production applications.
Of course, when dealing with production applications, you can never overlook security. With Kubernetes, that starts with following container security best practices and configuring the appropriate pod security policies and pod security contexts for your use cases as well as using Kubernetes secrets to store sensitive information.
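As an illustrative (not exhaustive) sketch, a pod spec can lock down its container with a security context and pull sensitive values from a Kubernetes secret; the names and image here are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
    - name: app
      image: example.com/app:1.0        # placeholder image
      securityContext:
        runAsNonRoot: true              # refuse to start if the image runs as root
        allowPrivilegeEscalation: false # block setuid-style privilege escalation
        readOnlyRootFilesystem: true    # make the container filesystem immutable
      envFrom:
        - secretRef:
            name: app-credentials       # sensitive values stored as a Kubernetes Secret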
Additionally, solutions that can improve cluster visibility and enable real-time vulnerability scanning in cloud-native Kubernetes environments can go a long way in protecting your workloads. Check Point CloudGuard was purpose-built to enable full lifecycle security and compliance for container-based workloads.
Specific benefits of CloudGuard include:
To learn how CloudGuard can secure K8s workloads in multicloud environments, sign up for a demo today. Alternatively, for a technical deep dive on cloud-native security, you’re welcome to check out our free guide to containers and K8s security.
|
<urn:uuid:3b18d8d3-47f6-4b4c-8acf-6cd53d201566>
|
CC-MAIN-2022-40
|
https://www.checkpoint.com/cyber-hub/cloud-security/what-is-kubernetes/what-is-a-kubernetes-cluster/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00451.warc.gz
|
en
| 0.899199 | 748 | 2.75 | 3 |
Federal governments and major technology firms are arguing for or against encryption, respectively. But why?
Due to recent political turmoil and devastating events overseas, the topic of end-to-end encryption has reentered public discussion. At the center of the debate, you have federal governments and major technology firms, each arguing for or against encryption.
The New York Times reported that despite a lack of evidence pointing directly to encryption as a means of holding non-surveilled, crime-related conversations, the U.S. government - specifically the National Security Agency - advocates against encryption technologies under the assertion that malevolent forces will obscure their dialogues and therefore evade law enforcement officials. As a resolution to the matter, the NSA wants so-called "backdoors" into encryption. In other words, the federal government wants inside access to encrypted data. Read my blog "Government backdoor: The basics of the plan to bypass encryption"
Down with encryption?
The argument against encryption is simple: Encryption obfuscates information - most data-centric solutions, such as CloudMask, encrypt data in all its forms - making it unidentifiable and otherwise worthless for those without an associated encryption key. For law enforcement officials and government agencies, this means they cannot access information without getting the key from the data owner, and therefore, appropriate encrypted communications are off limits for organizations such as the NSA. In essence, businesses and consumers can use encryption to prevent government surveillance, making any attempts to uncover email messages or intellectual property fruitless.
"U.S. citizens and major corporations trust encryption to keep their data private from all entities."
The Feds' stance
For obvious reasons, U.S. citizens and major corporations trust encryption to keep their data private from all entities. If encryption was made illegal, data protection as we know it would be over, as right now, encryption really is data's last defense in the event of a data breach. So, as a resolution to the encryption conundrum, federal governments have requested backdoors.
Major tech firms are staunchly against encryption backdoors. According to Re/code, Apple CEO Tim Cook asserted that blanket access to consumers' private information and personal devices would result in "dire consequences."
"If you leave a backdoor in the software, there is no such thing as a backdoor for good guys only," Cook said, according to Reuters. "If there is a backdoor, anyone can come in the backdoor. ... We believe that the safest approach for the world is to encrypt end-to-end with no backdoor. We think that protects the most people."
Encryption works based on strong key management, which is why CloudMask is such a powerful and useful solution in response to government surveillance. No one else should have encryption keys - cloud providers and Google included - since one small hole, a single little vulnerability, opens the door to cybercriminals and the like. According to Wired, a National Security Council report stated that encryption's "benefits to privacy, civil liberties and cybersecurity" trump any pros to creating backdoors into encryption.
The privacy solution
If consumers and businesses get their way, encryption stays. This is the best-case scenario. Data-centric protection solutions that leverage cryptographic engines - such as CloudMask - are the only way to combat cybercrime, and we shouldn't give up the best defensive mechanism we have available due to potential outcomes. Strong encryption isn't just a cybersecurity solution; it is a requirement to prevent surveillance, regardless of who is conducting it.
With CloudMask, only your authorized parties can decrypt and see your data. Not hackers with your valid password, Not Cloud Providers, Not Government Agencies, and Not even CloudMask can see your protected data. Twenty-six government cybersecurity agencies around the world back these claims.
Watch our video and demo at www.vimeo.com/cloudmask
|
<urn:uuid:4485f543-6064-482c-9ba6-4a2d33e08794>
|
CC-MAIN-2022-40
|
https://www.cloudmask.com/blog/why-the-government-isnt-a-fan-of-commercial-encryption
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00451.warc.gz
|
en
| 0.941467 | 794 | 2.578125 | 3 |
The Best way to Protect your Account
More and more frequently, people are receiving emails from a friend about "how I have to check out this new product they just used…" We have all seen these emails, sent by some anonymous person who happened to get our friend's email password and started sending out spam mail to everyone on their contact list. It is annoying at best, and sometimes these messages are how computer viruses spread. It is important to keep our passwords secure and up-to-date. But there are other ways to stop someone from gaining access to your email and other information. In the case of Apple's iCloud, you can set up two-step authorization.
Two-step authorization works by putting another level of security on top of your password. It functions on the basis of having two things to get into an account: something you know (your password) and something you have (an Apple device). The way you use it: you go to use iCloud online, and when you sign in with your password, you are asked by iCloud for another randomly generated code that is either sent via text or a Find My iPhone notification to one of your registered Apple devices; once that code is entered, you have access to iCloud. So someone having your password is not enough to be able to access your iCloud data.
The way that you setup two-step authorization is to start at appleid.apple.com and click “Manage your Apple ID” and then login to your iCloud account. From there you click on “Password and Security” and click “Turn on Two-Step Authorization”.
Follow and read the instructions that are displayed to turn on the service. You will be asked to enter in your cell phone number as the first device that is a “trusted device”. You will be sent a 4 digit code that is used to verify that you have the phone in your possession. From here, you can add more phone numbers to be trusted devices or have any iOS device that is signed into Find my iPhone become a trusted device.
If you want to have an iOS device be registered as a trusted device, you need to click verify on the screen and a notification with a 4 digit code will be sent to your device. Enter that code and your iOS device is registered. Once you have all of the devices you want trusted, click the next step.
Here you are shown your recovery key. This is the key that is used in the case that you do not have access to a trusted device. Apple recommends printing the key and saving it somewhere in your house (or some place safe). Once you have that saved, click next again. Here apple will verify you have a copy of the key by having you enter the key again into the computer. The final step is to agree to the conditions for two-step authorization.
At this point, we have turned on two-step authorization and our iCloud information is secured. But there is one more thing I want to go over concerning this new security system, and that is app specific passwords.
So far two-step authorization has protected us from letting someone who knows our password have direct access to our data, but it has not addressed the issue of programs that are always connected to our iCloud account, for example the mail app on your iDevice. If you have noticed, you do not have to put your iCloud password in every time you access mail on your device, it just remembers your password the one time you entered it. To address this issue with two-step authorization, apple created app-specific passwords for these accounts that are randomly generated for each app that access iCloud this way (think mail on your computer, iPhone, iPad, or access your contacts on any of these devices). So you end up creating a unique, randomly generated password for your always connected apps.
To create an app-specific password, we have to navigate to appleid.apple.com on any web browser:
> Log in to your Apple ID > click Password and Security on the left hand side > click Generate App-Specific Password > Type in the name of the app using the password > Click generate
You will now be shown the unique password for the app that wants constant access to iCloud. Enter the password you see into the password field for your app and continue signing in. You will not be shown this password again, so make sure you are logged in before closing the app-specific password window. So you will have to generate a new app-specific password for every program or app that wants constant access to iCloud.
Now that we have setup our app-specific passwords, you need to learn how to manage them. From the password and security section of appleid.apple.com there is an option to view history under app-specific password. Here is the place where you can revoke a password if you are no longer using an app or device (remove access for mail, calendar, or some app that is accessing this data).
Finally, in the password and security section, you can replace your recovery key (the key to get into iCloud if you do not have a trusted device). Also, you can manage your trusted devices. These are the devices which can have a code sent to them to get into your iCloud account, step two of the two-step authorization. With this system in place, if someone got ahold of your password, they would still not be able to access any of your data in iCloud.
|
<urn:uuid:857cc063-a5ef-454b-a24a-9db9cb383662>
|
CC-MAIN-2022-40
|
https://www.bfatechnologies.com/blog/icloud-2-step-authorization
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00451.warc.gz
|
en
| 0.947503 | 1,121 | 2.5625 | 3 |
Your teen is probably thinking about what they want to do when they go to college. They might be considering a traditional career path, like becoming a doctor or a lawyer. But what about a career in technology?
Technology is a fast-growing industry globally, and there are many exciting and rewarding careers available for those with the right skillset. Here are four tech careers your teen should be thinking about:
As a software developer, your teen will be responsible for creating and maintaining software applications. This can include everything from developing new apps and software like Prometheus Monitoring to fixing bugs in existing ones. They’ll need strong problem-solving and programming skills to succeed in this role.
Of course, those are just a couple of reasons why a career in software development can be gratifying. Not only is it a field that is constantly evolving and growing, but there are also always new challenges to keep you intellectually engaged. Additionally, software developers typically enjoy good job security and opportunities for advancement. If your teen is already sweating the student loans process, you may want to remind them that software developers often earn excellent salaries and benefits. If your teen is looking for a challenging and rewarding career, software development may be the perfect fit for them.
Web developers create and maintain websites. They’ll need to have a good understanding of web design principles and strong coding skills. If your teen is interested in web development as a career, you will want to help them research the field thoroughly.
A vast amount of opportunity lies under the scope of web development. This means looking into the different aspects of web development, such as design, coding, and marketing. You will also want to consider what type of work your teen would like to do as a web developer. For example, they may want to specialize in front-end development or back-end development. Additionally, you will need to make sure that they have the necessary skills and qualifications for the job. Once you and your teen have done the research, they will be able to decide if web development is the right career.
Although the field of cybersecurity is relatively new, the demand for qualified specialists is growing rapidly. As businesses become more reliant on technology, they are increasingly vulnerable to cyber-attacks. As a result, they need experts who can help them protect their systems and data.
As a cybersecurity specialist, your teen will help protect businesses from cyberattacks. They’ll need strong analytical and technical skills to be successful in this role. Cybersecurity specialists typically have a background in computer science or engineering. They must be able to understand complex concepts and have strong analytical and problem-solving skills. In addition, they must be able to communicate effectively with non-technical staff and explain technical concepts in plain language.
Data analyst jobs are becoming increasingly popular as businesses look for ways to make better use of their data. As a result, there is a growing demand for qualified data analysts who can help organizations effectively analyze and interpret data. Data analysts collect and analyze data to help businesses make better decisions. Your teen will need to be good with numbers and have strong analytical skills if they are considering this career.
Data analysts typically have strong mathematical and analytical skills and experience working with statistical software programs. They must be able to communicate their discoveries to decision-makers within an organization in order to help inform strategic decisions. It will also be important for your teen to stay up-to-date on the latest trends and developments in the field.
These are just a few of the many exciting careers your teen can pursue in the tech industry. So if they’re interested in technology, encourage them to explore all the different options available to them.
Publish Date: June 7, 2022 11:13 PM
|
<urn:uuid:838445ba-c069-49b6-bf00-c6b906d236e1>
|
CC-MAIN-2022-40
|
https://www.contactcenterworld.com/blog/save-money/?id=e9056451-0013-4885-865d-6cf2a65dd503
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00451.warc.gz
|
en
| 0.956992 | 757 | 3.015625 | 3 |
Microsoft Technology Literacy for Educators 62-193 is one of the most important exams that Microsoft has come out with in a long time. With this Microsoft Certification, you will learn how to get your computer to do just about anything that you would want it to do. The MCE Technology Literacy for Educators certification confirms that educators have the universal educator technology literacy competencies required to give a rich, innovative learning experience for students. MCE certification is best suited for educators-in-training, faculty of teacher training colleges, and in-service educators.
Microsoft 62-193 Exam Information
Microsoft Certified Educator: Technology Literacy for Educators is an intermediate-level certification exam. It covers the complete official exam syllabus, both in the depth of its technical questions and in its delivery methodology. Practicing with Microsoft 62-193 exam questions will give you the most realistic experience of the real-world exam.
Microsoft 62-193 Exam Details:
|
<urn:uuid:1c932456-fac0-4ace-9739-8ff848e52921>
|
CC-MAIN-2022-40
|
https://www.edusum.com/blog?page=4
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00451.warc.gz
|
en
| 0.914154 | 193 | 2.546875 | 3 |
Cybersecurity is required to be a dynamic industry because cybercriminals don’t take days off. Cybersecurity professionals must be innovative, creative, and attentive to keep gaining the upper hand on cybercriminals. Unfortunately, there are millions of unfilled cybersecurity job openings around the globe.
The gender divide
The problem of not enough cybersecurity professionals is exacerbated by a lack of diversity in the sector. There is a disproportionately low ratio of women to men within the entire technology industry. In the science, technology, engineering and math (STEM) industries, women make up only 24% of the workforce, and while this has increased from just 11% in 2017, there is clearly still a sizeable disparity.
The cybersecurity industry is performing only marginally better than STEM, with women making up roughly 24% of cybersecurity jobs globally, according to (ISC)².
There is also a parallel trend here: women have superior qualifications in cybersecurity than their male counterparts. Over half of women – 52% – have postgraduate degrees, compared to just 44% of men. More importantly, 28% of women have cybersecurity-related qualifications, while only 20% of men do. This raises one important point, which is that women feel that they must be more qualified than men to compete for and hold the same cybersecurity roles. The industry is, therefore, losing a significant pool of talent because of this perception. Untapped talent means less innovation and dynamism in the products and services businesses offer.
Unfortunately, the challenges for women do not appear to stop once they enter the cybersecurity workforce. Pay disparity continues to blight the industry. Women reported being on smaller salaries at a higher proportion than men. 17% of women reported earning between $50,000 and $99,000 compared to 29% of men. However, there are signs that this disparity in pay is closing. For those in cybersecurity who earned over $100,000, the difference in percentage between men and women was much closer. This is encouraging and shows that once women are in the industry, they can enjoy as much success as men.
Nevertheless, reaching these higher levels of the cybersecurity industry is far from straightforward for women at present. It is an unavoidable fact that women still struggle to progress as easily compared to male counterparts. A key reason for this is cultural: women are disinclined to shout about their achievements, as such they regularly go unnoticed when promotions and other opportunities come round.
The cybersecurity industry is starting to embrace diversity in the workforce, but there is a long way to go before women are as valued in cybersecurity as men. With the current skills deficit hampering the growth of cybersecurity providers, this is a perfect opportunity for the industry and individual providers to break the bias and turn to women to speed up innovation and improve defense against cybercriminals.
Why women are essential for success
As well as the moral argument for including and empowering women in the workplace, there is a concrete business case for pursuing such a policy.
Having individuals with diverse backgrounds, experiences and talents translate into a far broader range of unique perspectives, ideas and perceptions. Problem-solving, decision making, and creativity are just some areas where unique perspectives can make an immediate difference.
Cybersecurity as an industry is missing out on new and fresh ideas on tackling cybercriminals and protecting the businesses they serve, simply by underutilizing half of the population.
Recent research by the House of Commons Library discovered that businesses with more women in senior leadership positions are far more successful than those without. Fresh perspectives are crucial to continual innovation, growth, and long-term business success. This shouldn’t shock anyone who has had a successful career in business. The easy step is to acknowledge the truth that embracing and including women generates immense value for an organization. The hardest part for businesses appears to be how to do it.
What can businesses do to aid and develop women in the cybersecurity sphere?
Awareness is a key first step. There needs to be more concerted effort from our sector to show underrepresented groups what a career in cybersecurity has to offer. This means we need to get into schools early and offer mentoring schemes to talented and interested children.
We need to run programs in universities to allow young women to visualize a career in cybersecurity. Parents and careers advisors can play a massive part in this by being educated and informed on the benefits of working in cybersecurity. It is also important to remember that not all jobs in this industry require deep technical knowledge. For roles in compliance, for example, other skills are needed.
One of the measures that we have implemented at Defense.com is recognizing the need for a flexible working environment. Women are more likely to be in part-time employment in the UK, so flexibility is essential. It allows women to balance work and care responsibilities and other priorities, creating a more inclusive culture that is more attractive and supportive for all employees.
Mentoring schemes (even with a mentor from outside the company) are a powerful additional tool here to support the long-term career development of women in the business.
Businesses can make a conscious decision to hire more women - a step taken by about a third of cybersecurity firms. One step to achieve this is to introduce non-gendered language in recruitment adverts. Reports have found that job adverts with masculine language discourage many women and make the role appear unappealing.
A job advert’s wording profoundly impacts who applies and who doesn’t. This applies beyond exclusively gendered language as well. We need to move away from outdated frameworks like required skills lists in job adverts. Women are significantly less likely than men to apply if they do not match all the required skills.
Offering competitive salaries, training and development, and internal opportunities for career progression are well-established principles for a successful and thriving business. However, many cybersecurity companies are failing in this regard; as shown above, women’s salaries are not at the same level as those of men. This fundamental obstacle must be addressed if the perception of cybersecurity as a boy’s club can be overcome. Women don’t want special treatment, just the same career development and progression opportunities as men.
By building a culture where everyone can thrive, organizations can be on their way to construct an equitable workplace where female colleagues can flourish and prosper, as well as the men. We are improving as an industry, however, there is still a long way to go.
|
<urn:uuid:f30e21c5-4393-4ed1-b1b9-a8faef3040e0>
|
CC-MAIN-2022-40
|
https://www.helpnetsecurity.com/2022/06/02/support-women-in-cybersecurity/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00451.warc.gz
|
en
| 0.963072 | 1,310 | 2.609375 | 3 |
Climate experts are concerned about the frequent occurrence of extreme weather events. In the last five weeks alone, the U.S. has experienced five once-in-1,000-years rainfall events, raising concern over the state of the climate. Parts of the U.S. have undergone devastating droughts, yet many parts are grappling with excess rainfall.
On Monday, one such event occurred in parts of the Dallas-Fort Worth area, where residents woke up to torrential rainfall. In a matter of minutes, they experienced up to 16 inches of rainfall, leading to widespread flooding. At least one person was confirmed dead, according to a report by the Washington Post. This event marks a flooding episode that has a 0.1% probability of occurring in any given year. With such a massive event occurring just days after similar occurrences across the states, there is cause for alarm.
In the recent past, such 1,000-year rain events have occurred elsewhere across the U.S., including Kentucky, St. Louis, Eastern Illinois and Death Valley. All of these areas had been experiencing abnormally dry conditions by the end of July. Suddenly, torrential floods struck, causing massive damage and deaths.
There are many potential factors behind the extreme flooding occurrences. For instance, the states where extreme flooding has occurred started by experiencing drought. When drought occurs, it leaves the land bare, reducing the ability of the soil to absorb and retain water. Droughts also harden the topsoil, leading to increased surface runoff. When heavy rains fall, these conditions trigger widespread flooding.
While no one can point a finger directly at the exact cause of the floods, all indications show that there is a human hand in the matter. Human activities can influence the climate both ways. The effects of global warming can trigger droughts but also cause heavy downpours.
Climate experts worry that if such extreme occurrences continue, it will be impossible to predict climate and weather patterns. Previously, meteorologists used data from the past to estimate the probability of certain extreme events occurring. However, climate change is making such predictions unreliable.
Michael Mann, a climate scientist at Pennsylvania State University, explained that such probabilities may no longer apply. For example, a 2017 paper found that the return period for a 7.4-foot storm flood in New York City had decreased from 500 years to just 25 years.
Via Washington Post
Lead image via Pexels
|
<urn:uuid:17505b2b-96af-47c0-8fd2-3a042e13d24a>
|
CC-MAIN-2022-40
|
https://blingeach.com/considerations-over-excessive-1000-year-rain-occasions-within-the-us/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00651.warc.gz
|
en
| 0.94267 | 534 | 2.8125 | 3 |
Bot attacks are a significant security issue for online businesses. Especially with the rise of consumers interacting with businesses online, protecting customer accounts against bad actors is more critical for long-term retention and customer loyalty than ever before.
Research shows that in 2020, bad bot traffic accounted for over 25% of all website traffic, which is a 6.2% increase from 2019. What's more, over 28% of bad bots self-reported as mobile users, which represents a 12.9% increase from the previous year.
Learn more about how passwordless authentication can help you prevent bot attacks.
What are bot attacks?
Bots are small software programs that run automated scripts. These scripts can be used to crawl websites for search engine indexing, monitor site outages, or perform customer support services. They can also be used for malicious purposes including sending spam, harvesting customer data, executing brute force attacks, and overwhelming services with distributed denial-of-service attacks.
Bot attacks are attacks that leverage automated scripts to defraud or manipulate applications, users, or devices. Botnet attacks are a subset of bot attacks that leverage a network of computers to carry out malicious activity for purposes for fraud, service disruption, and data breaches.
While bot attacks can take many forms, account takeover is a particularly lucrative use for bots making it one of the most common attacks companies need to defend against.
How does going passwordless mitigate bot attacks?
During an account takeover attack, the malicious party tries to authenticate as a legitimate user via credential stuffing using stolen credentials, brute force guessing, rainbow table attacks, or reverse brute force attacks.
The common denominator across all these attack methods is the password. Therefore, eliminating the password removes the primary vector for bot attacks. When your login page does not have a password field, there’s nothing for a bot to execute its attack script against.
However, eliminating the password does not mean that the password is simply hidden for the customer behind a FaceID, one-time password, push notification, or magic link. To deliver full protection against brute force attacks, credential stuffing, and account takeover fraud, passwordless solutions must eliminate the password from the customer experience and the database so that it is never used for authentication nor recovery.
Take bot prevention to the next level with passwordless risk-based authentication
In addition to the automated attacks leveraged against your application, bots can be insidious in that they can infect your customers’ devices without their knowledge. In a mobile context, rooted or jailbroken devices are particularly vulnerable since there are no security parameters around what can or cannot be installed on the device.
Jailbroken or rooted devices leave customers open to the risk of unwittingly providing malicious bots access to their accounts. Mitigating malware vulnerabilities associated with rooted devices requires some level of visibility into the security posture of the endpoint device prior to login and the ability to make risk-based decisions in response.
Beyond Identity allows you to capture real-time user and device risk signals including the jailbroken or rooted status, patch level, and more. Additionally, you can utilize these risk signals to implement adaptive risk-based authentication. For instance, if your application contains sensitive customer data such as financial or health-related information, you can choose to deny authentication on rooted devices completely. Alternatively, you can prompt a biometric step-up authentication when jailbroken devices are detected to increase assurance of the login attempt.
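As a simplified illustration of that kind of policy (the signal names and thresholds are hypothetical, not a vendor's actual API), the decision logic might look like this in Python:

def authentication_decision(device_signals):
    # Highest risk: jailbroken/rooted devices can hide malware, so deny outright
    if device_signals.get("jailbroken_or_rooted"):
        return "deny"
    # Elevated risk: an out-of-date patch level triggers a biometric step-up
    if device_signals.get("patch_level_outdated"):
        return "step_up_biometric"
    # Otherwise allow the passwordless login to proceed
    return "allow"

print(authentication_decision({"jailbroken_or_rooted": False, "patch_level_outdated": True}))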
Ready to get started with building a world-class passwordless authentication experience in your products? Contact our customer authentication specialists today.
|
<urn:uuid:c1b527b5-3f0e-4951-b36e-be66f9432fc8>
|
CC-MAIN-2022-40
|
https://www.beyondidentity.com/blog/stop-bot-executed-credential-attacks-passwordless-authentication
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00651.warc.gz
|
en
| 0.924591 | 721 | 2.734375 | 3 |
Q. The focus is off when the camera shifts from night time to day time.
It is a phenomenon that occurs commonly on box type cameras with separate lenses.
– The focus is out at night if it is set during the day, and vice versa.
– It is caused by a shift in the focal point location due to the wavelength difference between visible light and IR light.
– It is recommended to use an IR-corrected lens, which reduces the focal shift caused by the wavelength difference between visible light and IR light.
– It is recommended to use a camera featuring a simple focus function, which readjusts the focus for day and night.
The two measures above are the most effective methods to reduce the focus difference that occurs when shifting between day and night.
Image 1) visible light
Image 2) IR light
|
<urn:uuid:5bc16a42-ea58-452e-a79e-e13195637d25>
|
CC-MAIN-2022-40
|
https://www.hanwha-security.com/en/support/faq/20572/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00651.warc.gz
|
en
| 0.939843 | 172 | 2.78125 | 3 |
This course provides the learner with a basic understanding of IBM mainframe networks with z/OS. It introduces traditional SNA subarea, SNA APPN networks and mainframe TCP/IP networks, including some of the network equipment associated with each. Current topics such as the future of SNA, and concepts of using TCPIP to transport SNA traffic are also covered. Finally, the learner is introduced to basic VTAM and TCP/IP commands needed to use, control and investigate mainframe networks.
This course is oriented to Systems Programmers and Operators. However anyone needing to understand IBM Mainframe networks will benefit from this course.
Completion of the “IBM Mainframe Communications Concepts” course and basic knowledge of TCP/IP and the IBM operating system.
After completing this course, the student will be able to:
- Issue standard TCP/IP commands such as ping and netstat from the mainframe.
- Start and Stop TCP/IP and its related processes.
- Use TCP/IP client applications from the mainframe.
- Manage TCP/IP server applications from the mainframe.
Introducing Mainframe TCP/IP Commands
How TCP/IP runs on the mainframe.
The TCP/IP Daemon
TCP/IP Support Daemons
TCP/IP Daemon Commands
TSO/E and USS Commands
Starting and Stopping TCP/IP
TCP/IP Console Commands
TCP/IP Client Application Commands
Sending and Receiving Files
Accessing Remote Computers
Sending and Receiving emails.
TCP/IP Server Application Commands
FTP and Remote Execution Servers
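By way of illustration, the kinds of commands covered in the objectives and outline above look roughly like this (the started task name TCPIP and the IP address are examples and vary by installation):

PING 192.0.2.10             TSO/E: test reachability of a remote host
NETSTAT CONN                TSO/E: list the stack's active connections
D TCPIP,,NETSTAT,CONN       z/OS console: display connections for the TCP/IP stack
S TCPIP                     z/OS console: start the TCP/IP started task
P TCPIP                     z/OS console: stop the TCP/IP started task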
|
<urn:uuid:aa9e08df-3717-470f-8a1c-5ce435a92a74>
|
CC-MAIN-2022-40
|
https://interskill.com/?catalogue_item=mainframe-tcp-ip-commands&noredirect=en-US
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00651.warc.gz
|
en
| 0.852174 | 373 | 2.859375 | 3 |
A “Firewall” is one of the most popular computer and network security devices that professionals use to protect their enterprise IT assets and networks.
Just like a fire-resistant door in buildings which protects rooms from a possible fire and stops the spreading of flames within the building, the security firewall has a similar function to prevent malicious packets and traffic from entering and harming your protected computer assets.
In this website I have written hundreds of articles around Cisco ASA firewalls, which represent a classic example of hardware network-based firewalls.
However, there are also other types of firewalls such as host-based firewalls which we’ll also discuss and compare in this post.
What is a Network based Firewall
As the name implies, this type of firewall is mainly used to protect whole computer networks from attacks and also for controlling network traffic so that only allowed packets are able to reach your servers and IT assets.
The picture above shows several Cisco ASA network-based firewalls. As you can see, these are hardware devices containing several Ethernet ports for connecting to a network (ranging from SMB to large Enterprise Networks).
These ports are usually 1 Gbps ports with electrical RJ45 connectors, but you can also find optical ports (e.g. 1 Gbps, 10 Gbps speed ports etc) for connecting to fiber optic cables for longer distances and larger bandwidth.
The physical ports of a network firewall are connected to network switches in order to implement the firewall within the LAN network.
In its simplest form, you can connect several physical ports of the firewall to different network switches in order to provide controlled access between different network segments.
The most popular use case of a network-based firewall is when it is used as Internet border device to protect a company’s LAN from the Internet as shown in the diagram below:
The network above is one of the most popular use-cases of network firewalls as used in enterprise networks. As you can see, one port of the firewall is connected to the network Switch-1 which accommodates all of the LAN client devices of the Enterprise Network.
Another physical port of the firewall is connected to a different network Switch-2 which connects to a Web Server.
Therefore, in the scenario above we have managed to separate the publicly accessible Web Server from the rest of the Enterprise Network via the security appliance. This is a good security practice because if the Web Server gets compromised by a hacker, there will not be any access to the protected internal secure network.
The firewall must be configured to allow traffic from the Internet towards the Web Server at a specific TCP port (port 443 for HTTPS or port 80 for HTTP).
All other traffic will be blocked by the firewall (i.e no direct traffic from the Internet towards the internal Enterprise Network will be allowed).
Moreover, the firewall will also allow outgoing traffic from the Enterprise Network towards the Internet in order to provide Internet access to internal LAN hosts.
As described in the above use-case, the network firewall is responsible for allowing or denying network packets between networks (using OSI Layer 3 and Layer 4 access rules). This is the most basic functionality of the firewall.
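On a Cisco ASA, for example, the rules for the use-case above might look roughly like the following (the ACL name, interface name and the web server address 203.0.113.10 are placeholders):

! Allow only web traffic from the Internet to the Web Server
access-list OUTSIDE_IN extended permit tcp any host 203.0.113.10 eq 443
access-list OUTSIDE_IN extended permit tcp any host 203.0.113.10 eq 80
! Apply the ACL inbound on the outside interface; everything else is dropped by the implicit deny
access-group OUTSIDE_IN in interface outside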
The newest generations of firewalls (called "Next Generation Firewalls") also inspect traffic at the application layer to identify malicious traffic and attacks at the application level (e.g. viruses, intrusion attempts, SQL injection attacks, etc.).
What is a Host based Firewall
As the name implies, a host-based firewall is a software application installed on host computers or servers to protect them from attacks.
Although the network depicted above is not recommended in real scenarios, it illustrates how a host-based firewall is used.
Note that the best option would be to combine both a network firewall and host-firewalls for better protection using a layered-approach.
The Host-based firewall is directly installed as software on the host and controls incoming and outgoing traffic to and from the specific host.
A classic example of host firewall is the Windows Firewall which comes by default in all Windows Operating Systems.
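For instance, rules can be added to the Windows Firewall from PowerShell; the rule names and ports below are only examples:

# Allow inbound HTTPS to a service hosted on this machine
New-NetFirewallRule -DisplayName "Allow inbound HTTPS" -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow
# Explicitly block inbound Telnet, which should never reach this host
New-NetFirewallRule -DisplayName "Block inbound Telnet" -Direction Inbound -Protocol TCP -LocalPort 23 -Action Block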
Because this type of protection is bound to the host itself, it means that it provides protection to the host no matter which network is connected to.
For example, if you are connected to your enterprise network, you are already protected by your network firewall. Now, if you take your laptop with a host-based firewall and connect it to an external WiFi network, the firewall still provides protection to the computer.
Moreover, some host-based firewalls provide also protection against application attacks. Since this type of protection is installed on the actual host, it has visibility on the applications and processes running on the host, therefore the firewall can protect against malware that try to infect the system.
One weak point of such a firewall is that it can be turned-off by an attacker who managed to gain admin level access to the host.
If the host is compromised by a hacker with administrator privileges, the host-based firewall can be turned-off, something that is impossible to do on network-based firewalls.
Comparison between Network Based and Host Based Firewalls
|Characteristics||Network Firewall||Host Firewall|
|Placement||Inside the network (either at the border/perimeter or inside the LAN)||On each host|
|Hardware/Software||Hardware Device||Software Application|
|Performance||High performance (bandwidth, concurrent connections etc)||Lower performance (since it is software based)|
|Level of protection||Network protection plus Application level protection (if using Next Generation Firewall)||Network protection plus Application protection (on some models)|
|Use-cases||Mostly in Enterprise Networks||Both in personal computers in home networks and also in Enterprise networks as additional protection on hosts.|
|Network Segmentation||Great segmentation and control at the VLAN / Layer 3 level but can’t restrict traffic between hosts in the same VLAN||Great micro-segmentation at the host level even if the hosts belong in the same VLAN.|
|Mobility||Once firewall is implemented inside the network, it is very hard to remove or change.||High mobility since it is bound to each host.|
|Management||Can be managed from a central firewall management server or directly on the appliance itself||Hard to manage when hundreds of hosts exist in the network.|
|How easy to Bypass||Network firewalls can’t be bypassed by attackers.||Easier to bypass. If the attacker compromises the host via an exploit, the firewall can be turned-off by the hacker.|
- How to Scan an IP Network Range with NMAP (and Zenmap)
- What is Cisco Identity Services Engine (ISE)? Use Cases, How it is Used etc
- What is Cisco Umbrella Security Service? Discussion – Use Cases – Features
- 7 Types of Firewalls Technologies (Software/Hardware) Explained
- 10 Best Hardware Firewalls for Home and Small Business Networks
|
<urn:uuid:6107c793-d22c-43f9-badd-419dfdb173ff>
|
CC-MAIN-2022-40
|
https://www.networkstraining.com/network-based-firewall-vs-host-based-firewall-discussion-and-comparison/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00651.warc.gz
|
en
| 0.908429 | 1,455 | 3.3125 | 3 |
The Ultimate Guide to Firewall
Like its real-world namesake, a firewall in computer technology is a structure preventing unwanted incursion. The real-world one slows or stops fire; the virtual one slows or stops hackers and malware, acting as a barrier between the internet and your private network. Firewalls can be physical devices monitoring and filtering traffic going into or out of the network, or they can be software products or services. A Next Gen firewall performs essentially the same task, but with added intelligence – including application awareness and control, integrated intrusion prevention, and threat intelligence.
|
<urn:uuid:3ec80419-11f1-4b17-ab5e-84e4ab0ac2f5>
|
CC-MAIN-2022-40
|
https://itbrief.asia/tag/firewall
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00651.warc.gz
|
en
| 0.916837 | 114 | 2.84375 | 3 |
Malicious scripts in compromised websites - and how to protect yourself
When talking about the attacks and threats users must face every day, people often highlight those that are more or less predictable, such as malicious archives sent as email attachments. Even though these threats are still very prevalent (e.g. in the different ransomware variants), cybercriminals also use many other attack vectors. Some of the most dangerous are those that involve scripts, as they are quite difficult for the average user to detect.
How does a malicious script work?
Malicious scripts are code fragments that, among other places, can be hidden in otherwise legitimate websites, whose security has been compromised. They are perfect bait for victims, who tend not to be suspicious because they are visiting a trusted site. Therefore, cybercriminals can execute malicious code on the users' systems by exploiting some of the multiple vulnerabilities in the browsers, in the operative system, in third-party applications or in the website itself that allows them to place the exploits in the first place.
If we take a look at recent examples, we will see that cybercriminals have been using well-known exploit kits for years to automate these infection processes. Their operation is relatively simple – they compromise the security of a legitimate website (or else create a malicious website and then redirect the users to it from other locations), and install any of the existing exploit kits. From then on, detection and exploitation of vulnerabilities in the systems of users visiting that website can be automated.
This can be seen in malvertising campaigns, where ads displayed on compromised websites have malicious code embedded in them. If accessed, they would allow cybercriminals to gain control of a device and launch attacks unless protected by a quality computer security product.
The reason why the execution of such code is accomplished automatically and without user intervention has much to do with the permissions that are granted during system configuration. Even today, the number of user accounts with administrator rights on Windows systems is still overwhelming, and this is totally unnecessary in most situations of everyday life.
This, together with the poor configuration of any of the security measures integrated into the Windows system itself, such as the UAC, enables the vast majority of these malicious scripts to operate unimpeded on hundreds of thousands of computers every day.
If only the users would set this security feature at a medium/high security level, many of these attacks could be avoided, provided that users are aware of the importance of reading the alert windows displayed by the system and the security suite instead of making the mistake of closing them or, worse yet, clicking on the “OK” button.
How to protect yourself from malicious scripts
We know that malicious scripts have been used by cybercriminals for years to spread all kinds of threats like Trojans, ransomware, and bots. However, at present there are adequate security measures available at least to mitigate the impact of these attacks. The only thing you need to do is set up the security measures that can protect you against these types of attacks and think before you click.
|
<urn:uuid:94b7312a-3c49-429c-8223-a48350cd44f6>
|
CC-MAIN-2022-40
|
https://securitybrief.asia/story/malicious-scripts-compromised-websites-and-how-protect-yourself
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00651.warc.gz
|
en
| 0.946866 | 825 | 2.59375 | 3 |
According to data from January 2018, over 42% of desktop and laptop computers are still running Windows 7, compared with just 35% running the latest version, Windows 10. You may not be aware that this is a problem, but remember that Windows 7 is now a nine-year-old operating system, with a codebase reaching all the way back to Windows NT in the early 2000’s. There are students starting university this year who were not even born when the foundations of Windows 7 were written. In computing terms, that’s not just old; it’s positively prehistoric!
Why does it matter?
Using an out of date operating system means that your business’s computers are not protected by the best available security measures, which are only available on modern, up-to-date systems. It does not matter how effective your anti-virus software is, and even the best AV software can surprisingly make your computer less secure, rather than more. An out of date system can never be a safe system.
Threats of an out-of-date operating system
Leaving aside that an operating system as out-of-date as Windows 7 is unlikely to be compatible with most new or updated software, it is also at risk from severe security flaws, including:
Recent ransomware attacks have focused on exploiting security vulnerabilities in older, out-of-date, operating systems. Ransomware attacks are especially dangerous for businesses as they can stop you accessing – or even destroy altogether – your most valuable asset: your data. Last year’s WannaCry attack on the UK National Health Service is one example of such an issue which was directly attributable to corporate users continuing to use computers with out-of-date operating systems.
Computer viruses as they used to be – trojans, worms, etc. – are less of an issue now that criminals are utilising more advanced forms of malware (malicious software) to achieve their aims. Some particularly insidious kinds of malware can sit silently on a computer, reading all of a user’s keystrokes, and then send their usernames, passwords, and the content of any documents they are writing, back to the criminals to use as they wish. Malware like this is a significant source of information for organised crime gangs to commit financial fraud and identity theft, and it is vital for your organisation to prevent it.
Possibly most topically, any form of compromise to your business’s computing systems could result in the loss of customer data or the theft of that data by a third party. With the advent of the General Data Protection Regulation on 25th May, this could cost your business €20,000,000 or more for a single data breach.
How can cloud services help?
Virtual desktops such as Office365, or a more comprehensive solution which allows access to a full range of applications such as Citrix XenDesktop, can support all of your staff to work on a secure, modern, up-to-date operating system. Additionally, because these solutions can be used on lower-specification hardware than a full install of Windows 10, your business could save significant capital costs.
If you’re looking for an excellent IT support service in Glasgow, look no further. From traditional repair and management of support to cloud services, we can provide everything you need to ensure your business is kept up and running. Contact us today to find out more about what we can do for your business.
|
<urn:uuid:ae8d26b3-9246-4561-9bf4-e4d11958db64>
|
CC-MAIN-2022-40
|
https://www.consilium-uk.com/is-it-safe-to-use-windows-7-and-how-the-cloud-can-help-you-replace-it/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00651.warc.gz
|
en
| 0.946384 | 724 | 2.625 | 3 |
The term "digital currency" and the phrase "an alternative to central bank-controlled fiat money" are both used to describe Bitcoin (BTCUSD). The latter is valuable, nevertheless, as it is produced by a monetary authority and is widely accepted in an economy. Bitcoin, by contrast, relies on a decentralized network, and the digital currency sees only limited use in everyday commerce.
The value of Bitcoin can be compared to that of precious metals, according to some. Both have specific uses and are in limited supply. Gold and other precious metals are utilized in industrial applications, but the blockchain, the technology that underpins Bitcoin, has some uses in the financial services sector. Due to its digital heritage, Bitcoin may someday be used as a medium for retail transactions.
The Worth of Cryptocurrencies
Any discussion on the worth of Bitcoin must take currency into consideration. Due to its physical characteristics, gold was a valuable currency, but it was also heavy. Although paper money was an advance, it still needs to be manufactured, stored, and is not as portable as digital currencies. Money has evolved digitally, moving away from physical qualities and toward more functional traits.
Here is one instance. Ben Bernanke, who was the Federal Reserve’s governor at the time and spoke on CBS’s 60 Minutes, recounted how the organization “rescued” insurance giant American International Group (AIG) and other financial companies from bankruptcy by providing money to them during the financial crisis. The interviewer was perplexed and inquired as to whether the Fed had created billions of dollars. That wasn’t really the situation.
So, in order to lend to a bank, we merely mark up the size of the account that they have with the Fed using a computer, according to Bernanke. In other words, by making entries in its ledger, the Fed “created” US dollars.
The capacity to “mark up” an account is an illustration of the characteristics of digital currencies. Because it streamlines and simplifies transactions involving currencies, it has ramifications for the velocity and use of such currencies.
Why Is Bitcoin Valuable?
Both a system of intermediary banks and the support of governmental agencies are absent from the Bitcoin ecosystem. The Bitcoin network’s consensus-based transactions are approved by a decentralized network made up of different nodes. No government or other fiat authority exists to act as a counterparty to risk and make lenders whole, so to speak, in the event that a transaction fails.
The cryptocurrency does, however, exhibit some characteristics of a fiat currency system. It is rare, unreplicable, and difficult to obtain. The only way to produce a fake bitcoin is to carry out what is known as a double-spend. This occurs when a user "spends" or transfers the same bitcoin in two or more different settings, essentially making a duplicate record.
If you are interested in more articles like this, here’s one about where you can buy superbid cryptocurrency.
|
<urn:uuid:9be2a5fa-a63b-433a-855c-d9f4c8b9dde2>
|
CC-MAIN-2022-40
|
https://www.akibia.com/what-is-bitcoin-based-on/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00051.warc.gz
|
en
| 0.956859 | 618 | 2.90625 | 3 |
In preparation of our CCNA exam, we want to make sure we cover the various concepts that we could see on our Cisco CCNA exam. So to assist you, below we will discuss PPP.
PPP: Point-to-Point Protocol Overview
The Point-to-Point Protocol (PPP) provides a standard method for transporting multiprotocol
datagrams over point-to-point links. PPP originally emerged as an
encapsulation protocol for transporting IP traffic between two peers. It is a data link layer
protocol (layer 2 in the OSI model) in the TCP/IP protocol suite used over synchronous
modem links, as a replacement for the non-standard layer 2 protocol SLIP. However,
other protocols other than IP can also be carried over PPP, including DECnet and
Novell's Internetwork Packet Exchange (IPX).
PPP is comprised of the following main components:
• Encapsulation: A method for encapsulating multi-protocol datagrams. The PPP
encapsulation provides for multiplexing of different network-layer protocols
simultaneously over the same link. The PPP encapsulation has been carefully
designed to retain compatibility with most commonly used supporting hardware.
• Link Control Protocol: The LCP provided by PPP is versatile and portable to a
wide variety of environments. The LCP is used to automatically agree upon the
encapsulation format options, handle varying limits on sizes of packets, detect a
looped-back link and other common misconfiguration errors, and terminate the
link. Other optional facilities provided are authentication of the identity of its peer
on the link, and determination when a link is functioning properly and when it is
• Network Control Protocol: An extensible Link Control Protocol (LCP) for
establishing, configuring, and testing and managing the data-link connections.
• Configuration: Simple and self-configuring mechanisms using the Link Control
Protocol. The same mechanism is also used by other control protocols such as the
Network Control Protocols (NCPs).
In order to establish communications over a point-to-point link, each end of the PPP link
must first send LCP packets to configure and test the data link. After the link has been
established and optional facilities have been negotiated as needed by the LCP, PPP must
send NCP packets to choose and configure one or more network-layer protocols. Once
each of the chosen network-layer protocols has been configured, datagrams from each
network-layer protocol can be sent over the link.
The link will remain configured for communications until explicit LCP or NCP packets
close the link down, or until some external event occurs (an inactivity timer expires or
a network administrator intervenes).
Protocol Structure – PPP (Point to Point Protocol) Frame
Flag (8 bits) | Address (8 bits) | Control (8 bits) | Protocol (16 bits) | Information (variable) | FCS (16 or 32 bits)
• Flag – indicates the beginning or end of a frame; consists of the binary sequence 01111110.
• Address – contains the binary sequence 11111111, the standard broadcast address.
(Note: PPP does not assign individual station addresses.)
• Control – contains the binary sequence 00000011, which calls for transmission of
user data in an unsequenced frame.
• Protocol – identifies the protocol encapsulated in the information field of the frame.
• Information – zero or more octets; contains the datagram for the protocol
specified in the protocol field.
• FCS – Frame Check Sequence (FCS) field, normally 16 bits. By prior agreement,
consenting PPP implementations can use a 32-bit FCS for improved error detection.
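To make the frame layout concrete, here is a minimal Python sketch (not part of the original article, and not something the CCNA exam requires) that unpacks the fixed header fields of an HDLC-framed PPP frame. The sample bytes are hypothetical, byte-stuffing and FCS verification are omitted, and a 16-bit FCS is assumed.

```python
import struct

def parse_ppp_frame(frame: bytes) -> dict:
    """Unpack the fixed fields of an HDLC-framed PPP frame (assumes a 16-bit FCS)."""
    flag, address, control = frame[0], frame[1], frame[2]
    if flag != 0x7E:                                  # 01111110, the opening flag
        raise ValueError("missing opening flag")
    if address != 0xFF or control != 0x03:            # broadcast address, unsequenced frame
        raise ValueError("unexpected address/control field")
    (protocol,) = struct.unpack("!H", frame[3:5])     # e.g. 0x0021 = IPv4
    information = frame[5:-3]                         # the encapsulated datagram
    (fcs,) = struct.unpack("!H", frame[-3:-1])        # frame check sequence
    return {"protocol": hex(protocol), "information": information, "fcs": hex(fcs)}

# Hypothetical frame: flag, address, control, protocol 0x0021 (IP), four payload
# bytes, a placeholder FCS, and the closing flag.
sample = bytes([0x7E, 0xFF, 0x03, 0x00, 0x21, 0xDE, 0xAD, 0xBE, 0xEF, 0x12, 0x34, 0x7E])
print(parse_ppp_frame(sample))
```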
I hope you found this article to be of use and it helps you prepare for your Cisco CCNA certification. Achieving your CCNA certification is much more than just memorizing Cisco exam material. It is having the real world knowledge to configure your Cisco equipment and be able to methodically troubleshoot Cisco issues. So I encourage you to continue in your studies for your CCNA exam certification.
|
<urn:uuid:f24c3156-081c-4626-b379-39422bb773bd>
|
CC-MAIN-2022-40
|
https://www.certificationkits.com/ppp-ccna/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00051.warc.gz
|
en
| 0.849976 | 865 | 3.546875 | 4 |
Traffic safety and mobility systems can collect and analyze data that helps enforce speed limits, spot vehicles flagged by police and alleviate congestion.
Every day, cities are growing smarter, more automated and more connected. They are also becoming more densely populated. As government transportation and IT officials make the case for enabling smart cities of the future, they will need both data and funds to make these meccas happen.
Transportation safety and mobility solutions can help cities highlight the benefits of smart city infrastructure:
- Traffic safety solutions can detect and collect the evidence needed to enforce speed limits and traffic laws, thereby decreasing dangerous driving behavior and saving lives.
- Public safety solutions can spot flagged vehicles and help law enforcement catch criminals at large.
- Smart mobility solutions provide traffic flow assistance along with data points that include vehicle counts, speed, classification and lane usage.
Implementing technology for improved safety and mobility requires a few components, including cameras and a video-based solution to capture license plates on moving vehicles, also known as automated license plate recognition. When these low-latency video camera solutions are implemented on roadsides to assist with public safety, they capture the license plate data and then in seconds transmit it to a back-office computer system where algorithms can quickly match plates against national and regional watch lists. These lists might be Amber Alerts or Silver Alerts for missing persons, or reports of stolen cars and vehicles belonging to people of interest and sexual predators, for example. Depending on the type of monitoring protocols in place, the system can immediately send an alert of a flagged vehicle to 911, police dispatch or other public entity, saving police departments time and money.
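As a rough illustration of the back-office matching step described above (this sketch is not drawn from any specific vendor's system), recognized plate text can be normalized and checked against watch lists; the plate values and list names below are made up.

```python
# Hypothetical watch lists; real systems pull these from state and national databases.
WATCH_LISTS = {
    "amber_alert": {"ABC1234"},
    "stolen_vehicle": {"XYZ9876", "JKL4321"},
}

def normalize(plate: str) -> str:
    """Strip spaces and dashes and upper-case the text so OCR output matches list entries."""
    return "".join(ch for ch in plate.upper() if ch.isalnum())

def check_plate(plate: str) -> list:
    """Return the name of every watch list that contains this plate."""
    p = normalize(plate)
    return [name for name, plates in WATCH_LISTS.items() if p in plates]

print(check_plate("abc-1234"))   # ['amber_alert'] -> notify 911 or police dispatch
```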
The same type of video camera solutions can be placed at intersections to capture red-light violations, in school zones to capture speeding violations and on school buses to capture vehicles not yielding to school bus stop signs. By enforcing traffic laws, cities, counties and school districts can make their communities safer. Reducing the number of collisions reduces the time and money spent by municipalities, emergency services, courts and insurance agencies to rectify and prosecute traffic collisions. Proceeds from road safety camera programs can also contribute to the funding needed for smart city enablement and infrastructure improvements.
Lastly, video camera technology can be leveraged to improve mobility in densely populated cities dealing with traffic congestion. Cameras placed at intersections or in bus lanes capture data on the movement of people and vehicles on the roadways that traffic engineers can then use to make informed decisions on optimizing traffic flow and improving safety. Data that can identify rush hour traffic patterns, as well as the number and kinds of vehicles that move throughout the city, can inform long-term transportation strategies, such as the installation of roundabouts, designated bus lanes and bike lanes and limits on commercial vehicles on specified roadways at certain times of the day.
The future of smart transportation
As cities become increasingly smarter, transportation technology must keep pace. Cities will need more actionable data to use in faster decision-making and transportation infrastructure changes. Open architectures will enable that data to be quickly and securely shared and fed into a network of government systems for use in public safety, traffic safety and improved mobility.
Soon, the months and years currently needed to efficiently time traffic signals will be done in real time and adjust dynamically with the flow of traffic. And in the near future, open application programming interfaces will be built and shared with traffic monitoring centers for a plenitude of real-time applications, such as automated alerts to first responders, signal timing or information sharing with autonomous vehicles.
|
<urn:uuid:7409a3bf-52d6-4b73-8fd4-fe8008f6eae2>
|
CC-MAIN-2022-40
|
https://gcn.com/state-local/2019/07/building-smart-cities-starts-with-safety-and-mobility/298120/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00051.warc.gz
|
en
| 0.935376 | 715 | 2.6875 | 3 |
By Bryn Farnsworth, iMotions
That’s been the question that has been asked by psychologists and market researchers for decades, and the answer has reliably come from eye tracking technology. Now, as more and more of the devices that we use—from our phone to our car—seek to understand how we interact with the world, eye tracking is being used more than ever.
There are a lot of things happening right now as you look at this screen: Your eyes are tracking the words, and maybe they’re searching for the header, or furtively glancing at the sidebar. This kind of information is critical to market researchers and businesses, who strive to understand how users make decisions (and which of those decisions leads to a purchase).
Psychologists have been using even finer points of measurement, to break down the cognitive and psychophysiological processes that underlie our attention. Knowing what you’re looking at has been critical for helping understand humans, a journey that is continually in progress.
But recently, eye tracking has begun to be applied and implemented in a myriad of new ways, due to the increased portability, and reduced costs of the tracking units. Eye tracking devices can now be almost seamlessly integrated into glasses, offering a way to monitor eye movements with as little distraction as possible. For example, they can be worn when in the supermarket, for market research, or discreetly placed into our phone to help how we use it. All in all, this opens up new ways for technology to work with human attention.
It’s of course no surprise that eye tracking, as an increasingly utilized piece of tech, would be introduced to the technology du jour: VR. Eye tracking has been integrated into VR headsets designed by Fove, a kickstarter-backed startup that uses the attention of users to impact the virtual environment they are placed in. By focussing on the user’s focus, the scenes can be shaped and changed in response to the eyes. This is just one example of these two technologies merging, and is something we will see more and more of as VR develops.
Human-computer interaction based on eye tracking is being explored and used within applications from assistive technologies, to improving car safety, to e-learning, and more. There is a wide, and increasing, range of opportunities for human attention to be paid attention to.
We’re also seeing eye tracking being used not only in new technologies, but also new fields of study. Urban design, neuroarchitecture, and even studies of art have also recently been utilizing this tool to understand how people examine their surroundings. Knowing how people pay attention has enabled researchers to better understand what is appealing about the environment, what doesn’t work, or what is confusing.
Such studies have also given researchers better insight into what users will do next, and to anticipate their future actions.
For example, Perkins + Will, a research-based architecture firm, uses eye tracking (along with measurements of galvanic skin response (GSR), and other measures) to understand how someone will connect with the buildings they have designed. By placing them in a virtual environment, the data provides a new layer of insight into how an individual experiences a building, before it's really even left the drawing board.
As for medical uses, there is an increasing interest in using eye tracking to help diagnose, and potentially treat, neurological disorders. For example, infants usually like to look at images with people’s faces—scenes that have a social element. Research from UCSD has shown that infants that go on to develop autism are much more likely to have a preference for images that feature geometric shapes, suggesting that the analysis of eye movements may help guide early diagnosis.
Research platforms, in response to the growing demand and use by various companies and research groups, have increasingly focused on assisting such discoveries through eye tracking. The company iMotions has focused on increasing the simplicity of setting up an eye tracking study, while also enabling integration with other psychophysiological measurements.
The use of additional sources of physiological information about humans being integrated with eye tracking has brought increased attention for the possibilities it offers. While eye tracking clearly provides many opportunities for increased knowledge about humans, the capabilities can be multiplied when used in tandem with other measurements.
For example, research has used both eye tracking and GSR (the latter of which can be measured through wearable technology) to measure decision making with human-computer interfaces, providing more knowledge about how these should be designed. EEG recordings have also been combined with eye tracking, and GSR, for emotion recognition.
The possibilities opened up by the breadth of data that these devices offer when used in combination are huge, offering great potential for increasing the ease and speed of human-computer interaction. It won't be too long in the future before we won't have to use a computer mouse anymore (although we might still want to).
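One simple way gaze can stand in for a mouse click is dwell-based selection: if the gaze stays within a small radius of a target for long enough, it counts as a "click". The sketch below illustrates the idea; the timing, radius, and sample-rate values are arbitrary assumptions, not figures from any product mentioned here.

```python
import math

DWELL_SECONDS = 0.8   # how long the gaze must rest on a target (assumed value)
RADIUS_PX = 40        # how close the gaze must stay to the target centre (assumed value)

def dwell_select(samples, target, sample_rate_hz=60):
    """samples: list of (x, y) gaze points; target: (x, y) centre of a button.
    Returns True once the gaze has dwelt on the target long enough."""
    needed = int(DWELL_SECONDS * sample_rate_hz)
    run = 0
    for x, y in samples:
        if math.hypot(x - target[0], y - target[1]) <= RADIUS_PX:
            run += 1
            if run >= needed:
                return True
        else:
            run = 0    # the gaze wandered off; start counting again
    return False
```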
As technology improves, and hardware costs continue to decline, we will likely see ever further integration of eye tracking technology in various aspects of our lives, helping our actions throughout the world be simpler and more effective than ever before. The use of tracking hand gestures, voice, and emotions (through facial expressions) all offer implementable routes into improving our experience with technology.
A growing body of work and research has been seen within the automotive industry, that uses both eye tracking and emotion recognition to improve the safety and intelligence of the cars that we drive (and, in the future, to improve AI-driven cars, too). The combined technologies can examine both our attentional focus, and how we are feeling – if we’re tired and not paying attention, perhaps the car can tell us to pull over and take a rest.
However the future shapes up to be, it will surely feature eye tracking as a way to enhance our daily lives and to make the machines around us all the more intelligent as they work with us. It’s something we’ll keep looking at, and keep looking forward to.
Bryn Farnsworth is a neuroscientist and psychologist, and the science editor at iMotions. He has a PhD in neuroscience and developmental biology, alongside a bachelor’s degree in psychology, and a master’s degree in cognitive and computational neuroscience. A big fan of the brain and mind, Bryn believes in the power of well-captured data to provide answers about who we are, what we think, and why we behave in the way that we do.
|
<urn:uuid:61152ead-3e06-4794-be9b-af9796946f09>
|
CC-MAIN-2022-40
|
https://bdtechtalks.com/2017/01/20/what-are-you-looking-at/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00051.warc.gz
|
en
| 0.952269 | 1,348 | 2.625 | 3 |
(NOTE: The term "temperature monitor" can have different meanings in a Telecom and IT Remote Alarm Monitoring and Control context. It can be a simple hardware temperature sensor or it can be a Remote Telemetry Unit (RTU) that collects data from many temperature sensors. Some concepts in this article will apply to both, while others may be more relevant to one or the other.)
A temperature monitor is not overly complex, but it is one of the most important investments you can make toward the reliability and uptime of your telecom or IT system.
Unfortunately, a temperature monitor is not generally respected by the technicians responsible for keeping the network online. They are generally more interested in technical system alarms than this simple environmental monitoring device. Heat is obviously a problem whenever you're dealing with computer equipment, but the very low temperatures of harsh winters can also cause remote site equipment to fail.
Telecom hardware and devices, because they involve large amounts of electricity, create great quantities of thermal energy (heat). Therefore, high temperature represents a continuous threat that must be monitored with a temperature monitor device. Otherwise, a heat spike will stop your revenue-generating network in its tracks.
Don't forget, however, that cold temperatures can cause havoc in non-equatorial climates as well. Although computer equipment tends to appreciate a cool climate, 50 degrees below zero stops pretty much anything in its tracks.
If you can't protect your servers from high or low temperature, neither your boss nor your clients will be very happy about it. After all, why wouldn't you spend the relatively modest amount necessary to install a good temperature monitor within your server room, server closet, data center, or telecom site?
To properly monitor your site temperature, you need a temperature monitor that measures temperature to within about a single degree and compares it against multiple threshold levels. At least two are necessary to detect the "too low" and "too high" levels, but additional thresholds can make it easier to understand what's really happening at your sites. For example, it can be useful to have a "yellow alert" level so that you know the temperature is starting to rise. It's even more important, however, to have a "red alert" level to tell you the temperature has reached an absolutely critical point that requires immediate intervention to prevent a thermal shutdown and expensive equipment damage.
But knowing the interior temperature at your site doesn't tell you everything. From the heat of the summer to the chill of the winter, it's also an excellent idea to monitor the exterior temperature. This way, you're able to assess the true danger of an HVAC or heater failure. While any climate control failure is an urgent scenario, there is a big difference between an air conditioner failure that occurs when the temperature outside is 50 degrees Fahrenheit and one that occurs when the temperature outside is 110 degrees Fahrenheit.
As simple as temperature monitor technology is, it's not as if there's only one kind of sensor. In fact, there are two major types of temperature sensors: analog and discrete. What's the difference? Imagine that you need to monitor the temperature in your home (and you're not physically there to feel the temperature). With a thermometer, you would know the exact temperature (within a degree or so, depending on the accuracy of the thermometer you choose).
Imagine now that you can only listen to whether or not your heater or air conditioning is running, and that you can tell the difference between the two devices. In this case, you would know that the temperature in your home fell into one of three ranges: absolute zero to the point where the thermostat turns your heater off, the temperature range you specified as ideal, and somewhere between your AC activation temperature and the surface of the sun. As you can see, a discrete temperature monitor gives you much less detail than an analog one does. Even so, discrete is far better than nothing and can be significantly cheaper than an analog temperature monitor.
Now, my little scenario in the last paragraph hinted at a potential advantage of a discrete sensor over an analog one: your thermostat is programmed with automatic reaction temperatures at which the heating or cooling is activated. In telecom network or IT terms, this translates into an automatic alarm from your temperature monitor, perhaps coupled with an automatic response in climate control.
In truth, however, a good temperature monitoring device will be capable of setting a "virtual threshold" for when your analog temperature reading crosses the levels you designate. For example, you might set levels that indicate when the temperature is "way too low", "a bit too low", "a bit too high", or "way too high".
Analogs are also able to track and measure the change in temperature over time. This allows you to know that the temperature is "dropping slowly" versus "plummeting like a rock".
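As a rough sketch of how multiple virtual thresholds and rate-of-change tracking might be combined in software (the specific temperatures below are illustrative, not DPS defaults):

```python
# Illustrative thresholds in degrees Fahrenheit; tune these for your own site.
THRESHOLDS = [
    (40, "way too low"),
    (55, "a bit too low"),
    (85, "a bit too high"),
    (95, "way too high"),
]

def classify(temp_f: float) -> str:
    """Map an analog reading to one of the virtual threshold levels."""
    if temp_f <= THRESHOLDS[0][0]:
        return THRESHOLDS[0][1]
    if temp_f <= THRESHOLDS[1][0]:
        return THRESHOLDS[1][1]
    if temp_f >= THRESHOLDS[3][0]:
        return THRESHOLDS[3][1]
    if temp_f >= THRESHOLDS[2][0]:
        return THRESHOLDS[2][1]
    return "normal"

def rate_of_change(prev_f: float, curr_f: float, minutes: float) -> float:
    """Degrees per minute; a large value means 'plummeting' or 'spiking' rather than drifting."""
    return (curr_f - prev_f) / minutes

print(classify(92), rate_of_change(75, 92, 10))   # 'a bit too high', 1.7 degrees per minute
```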
As you shop for a temperature monitor, keep in mind that you may want to monitor the temperature in many locations throughout a site. This is especially true for larger sites, where many different pieces of equipment will create "thermal zones" that should be monitored independently by dedicated temperature monitors.
While your costs will increase somewhat as you deploy multiple temperature monitors at each site, this investment can pay off big time. Many companies have been known to lose collective millions of dollars in equipment because they failed to monitor temperature adequately. In this game of temperature monitoring, more intelligence yields better financial results.
Also, keep in mind that intelligence alone can't protect your equipment. If you know everything but can do nothing, you are getting nowhere fast. Intelligence combined with the power to react is what yields success. As an example, imagine that your revenue-generating equipment has a backup power supply but your air conditioning does not. If the power fails, you'll be helpless to keep service online because, although your servers have electricity, they will quickly reach the point of thermal shutdown without proper air conditioning. Although your temperature monitors will, in this case, faithfully report the rising temperature at your sites, you will be powerless to stop it without shutting down service.
DPS Telecom clients have frequently found that installing temperature monitors at remote sites is an effective way to protect the equipment there.
One key piece of equipment that benefits greatly from temperature monitoring is your site's backup batteries. Chemical-based batteries require an ideal temperature range to run best. Deviations from this ideal range can cause drastically reduced run-time and multiply the number of times you'll need to replace batteries completely.
Data closets are also prime targets for installing temperature monitors. Servers generate a lot of heat, so it's important to monitor how well your cooling efforts are working 24 hours a day. In fact, one client sent an e-mail to DPS with the following message:
"I need to know how to install a temperature monitor in my server closet. The AC has not been functioning properly in the last several months. I can't explain why it doesn't work, but we continue to have problems. I had another problem just three days ago. I need a temperature monitor that will send a message to my cell phone when the temperature rises too high."
DPS also has users in northern Canada that use NetGuardian remotes with built-in temperature monitors and external temperature monitor probes to make sure that the temperature at the site doesn't drop too low. As I described previously, the extreme cold of northern winters can cause big problems with equipment, just like heat can.
Temperature monitors have also been used by DPS clients to support environmental efforts. Turning off AC units when the site temperature is at the upper end of acceptable levels minimizes power consumption, both protecting the environment and reducing energy costs for that client's company.
DPS has worked with a telco in Canada to send temperature alarm messages from remote sites via a T1. Each RTU uses only a single DS0, so a sizeable fleet of RTUs can report back to the central master using just that single T1.
In a much smaller application, another DPS client on the East Coast used a dial-out RTU as a stand-alone temperature monitoring tool to send alert messages to technicians' cell phones. Originally, this client deployed a very small RTU supporting just 2 phone numbers. Later, he upgraded to a medium-size RTU supporting up to 8 phone numbers.
DPS also has a railway client that uses RTUs as remote monitoring tools within their signaling division. While a normally built RTU was acceptable on the telco side of this company, a wide-temperature-range remote (built with higher-tolerance parts to sustain hotter and colder temperatures) was required to withstand the temperature swing in non-hardened signaling enclosures alongside railroad tracks. Whenever small remote cabinets and huts are used to contain telecom gear (including tools for remote monitoring), all of that gear must be able to handle the hottest heat and/or the coldest cold that the local climate generates throughout the year.
|
<urn:uuid:91007ceb-bd81-4d08-ab26-d394b2af103b>
|
CC-MAIN-2022-40
|
https://www.dpstele.com/network-monitoring/temperature/monitor.php
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00051.warc.gz
|
en
| 0.944873 | 1,946 | 2.703125 | 3 |
Researchers develop facial recognition method that works in the dark
Researchers at the Institute of Anthropomatics & Robotics at the Karlsruhe Institute of Technology have developed a new facial recognition method that uses thermal signatures to recognize faces in the dark, according to a report by Gizmodo.
In the new study, the researchers use a new method that matches a regular visible light image of a face with an infrared image counterpart of the face.
The researchers used a “deep neural network” — computer programs that are designed to emulate the thinking patterns of a human brain — to discover that there is little one-to-one correlation between the two types of images.
When a deep neural network is trained with a significantly large data set, it is able to create connections based on a complex series of factors, just as a human would.
In this case, there are only a few existing datasets of visible light images with corresponding infrared images.
The researchers used a University of Notre Dame dataset that included groups of images of individuals with different facial expressions and in various lighting conditions, along with many images of the same person taken over a period of time.
By using the neural network to find correlations, researchers were able to significantly improve the accuracy of face matching.
However, the researchers admit that they are still a long way from developing a truly reliable system.
In cases where the system had many visible light images to compare to the thermal image, the network made the correct match 80 percent of the time. When the system had only one visible image to work from, the matching rate dropped to 55 percent.
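The matching step can be pictured as comparing learned feature vectors. Below is a hedged NumPy sketch (not the researchers' actual code) that picks the closest visible-light embedding to a thermal embedding by cosine similarity; the embedding size and gallery contents are made up.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(thermal_vec: np.ndarray, gallery: dict):
    """gallery maps person_id -> visible-light embedding of the same dimensionality.
    Returns (person_id, similarity) for the closest match."""
    scores = {pid: cosine(thermal_vec, vec) for pid, vec in gallery.items()}
    pid = max(scores, key=scores.get)
    return pid, scores[pid]

# Toy example with random 128-dimensional embeddings standing in for network output.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(5)}
probe = gallery["person_3"] + 0.1 * rng.normal(size=128)   # a noisy "thermal" embedding
print(best_match(probe, gallery))
```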
The researchers said the facial recognition method is mostly useful for covert surveillance.
When paired with existing visible image databases containing mugshots and driver’s license photos, the new technology should allow government agencies to identify individuals even when their face is not visible, such as in low light settings or even at night.
|
<urn:uuid:c91ab01a-d9eb-4de3-8079-10afe3859764>
|
CC-MAIN-2022-40
|
https://www.biometricupdate.com/201507/researchers-develop-facial-recognition-method-that-works-in-the-dark?replytocom=223770
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00051.warc.gz
|
en
| 0.940744 | 389 | 3.453125 | 3 |
While scanning WiFi networks from its Street View cars, Google may have inadvertently captured data payloads containing private information (URLs, fragments of e-mails, and so on).
Although some people are suspicious of their explanation, Google is almost certainly telling the truth when it claims it was an accident. The technology for WiFi scanning means it's easy to inadvertently capture too much information, and be unaware of it.
This article discusses technically how such scanning works.
There have been many controversies surrounding Street View. The first is about the images the cars take. They often contain private information, such as the license plates of cars parked on the side of the road. Google keeps improving algorithms to fix this, such as automatically covering license plates within images.
Street View cars also record nearby WiFi access-points. The purpose of this is to provide an alternate to GPS. A computer without GPS can scan for nearby access-points, look up their location in Google's database, and figure out it's own location. This is similar to the service provided by the company Skyhook Wireless, or by the collaborative effort WIGLE. Even though this is a useful tool, it is still a bit controversial, because it's yet one more piece of data (the location of everyone's access-points) that Google knows about us.
The current controversy is that while scanning for access-points, Google may have captured private data.
A good example of this kind of scanning tool is "NetStumbler", a popular program on Windows that makes it easy to both find access-points and record their GPS locations.
The WiFi radio in your laptop receives all packets on the current channel, including packets sent by other people's laptops near you. However, the WiFi device checks the incoming packets to see if they have the proper "MAC address" (the unique serial number assigned to your WiFi device). If they have the wrong MAC address, then the WiFi device will drop the packets. Only packets destined to your MAC address will be continue to be processed by your computer.
The way a packet-sniffer works is to turn off the MAC address check. All packets received by the WiFi radio are kept in the system, then saved to disk.
There are two parts to a packet: the header information (the envelope) and the payload/contents. Google is only interested in the headers.
THE BEACON PACKET
There are many types of packets. The most interesting packet is the "Beacon". The average access-point sends out this Beacon many times per second, advertising its existence, its name (the "SSID"), and a list of features (like whether a password is required).
When your laptop gives you a list of nearby access-point, it's simply listing the access-points from which it has received a Beacon. A program like NetStumbler builds its list from the same information.
The following picture shows a typical Beacon packet. I'm sitting in a Panera café with free WiFi; this is the Beacon from the local access-point.
The raw packet is shown in a "hex dump" at the bottom, with the decode explanation at the top. I've selected the "SSID" field to show how the decoded information corresponds to the selected hex data.
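For readers who want to reproduce a NetStumbler-style survey themselves, here is a rough Scapy sketch (this is not Google's code). It assumes a Linux system, root privileges, and a WiFi adapter already placed in monitor mode under the name wlan0mon; only Beacon headers are examined.

```python
from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

def on_packet(pkt):
    if pkt.haslayer(Dot11Beacon):
        bssid = pkt[Dot11].addr2                            # the access-point's MAC address
        ssid = pkt[Dot11Elt].info.decode(errors="replace")  # the advertised network name (SSID)
        signal = getattr(pkt, "dBm_AntSignal", None)        # signal strength, if RadioTap data is present
        print(f"{bssid}  {ssid!r}  {signal} dBm")

# The interface name is an assumption; adjust it for your own system.
sniff(iface="wlan0mon", prn=on_packet, store=False)
```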
THE DATA PACKET
While the Beacon packet is the most useful packet, other packets can be useful too.
Let's say that there is an access-point within a building, but the Beacon packets are blocked or too weak to reach the street. The access-point will exist, but Street View won't be able to see it.
However, somebody could be using a laptop halfway between the access-point and the Street View car. The laptop's packets can reach both the car and the access-point. Thus, even though Street View cannot see the access-point itself, it can still infer its existence by looking at the DATA packets.
A data packet example is shown below (in typical decode+hexdump format). This packet was sent by my laptop to the local access-point. I've highlighted the "BSS ID" field, which is the MAC address of the access-point (the same one shown in the Beacon above).
In addition, you'll notice the signal strength in the decode. Google can use this to triangulate the location of the device that sent the packet. Street View knows the precise GPS location of the car as it rolls down the street. If it can get three beacons (or other data packets) from the access-point, it can triangulate the position of the access-point. Moreover, if it stores the raw packets from one day as the car takes one route, it can correlate the packets with another day's packets on a different route.
Triangulation is a lot harder than you'd think. This is because many things will block or reflect the signal. Therefore, as the car drives buy, it wants to get every single packet transmitted by the access-point in order to figure out its location. Curiously, with all that data, Google can probably also figure out the structure of the building, by finding things like support columns that obstruct the signal.
What's important about this packet is that Google only cares about the MAC addresses found in the header, and the signal strength, but doesn't care about the payload. If you look further down in the payload, you'll notice that it's inadvertently captured a URL.
Take a look again. Even though the access-point MAC address is highlighted, there's extra data in the packet. These extra data will include URLs, fragments of data returned from websites (like images), the occasional password, cookies, fragments of e-mails, and so on. However, the quantity of this information will be low compared to the total number of packets sniffed by Google.
That's the core of this problem. Google sniffed packets, only caring about MAC addresses and SSIDs, but when somebody did an audit, they found that the captured packets occasionally contained more data, such as URLs and e-mail fragments.
Google captured very little as it drove through neighborhoods. The primary reason is that most people encrypt their connections (by putting a password on their access-point). The second reason is that the car is only near an access-point for a few seconds - during which time it's unlikely that any data is being transferred.
You can verify this yourself (assuming it's legal in your area). You can download a version of Linux full of security tools like "BackTrack 4". You don't have to install it on your laptop; instead, you can put it on a USB flash drive and boot from the flash drive. You can then run a tool like "ferret" that will sniff the wifi and show you interesting private information, like URLs. (Only about half of wifi devices support such raw sniffing; you may have to buy a separate USB wifi stick for $10.)
If you drive down the street running 'ferret', you'll see that it almost never shows you any information other than wifi control traffic (like Beacons).
The real reason Google might have data payloads isn't from neighborhoods at all, but from cyber cafes and hotspots. If a Google Street View car came down the street near this Panera, it would be flooded with data packets from people within the café. This is the slowest period of the day, but I count 7 people using their laptops and one person using their iPhone to surf the web. Moreover, whereas people may encrypt their traffic at home, the hot spot here at Panera is unencrypted.
PROTECT YOURSELF, DON'T PUNISH GOOGLE
It's really easy to protect your data: simply turn on WPA. This completely stops Google (or anybody else) from spying on your private data (assuming you haven't done something stupid like chosen an easily guessed password, or chosen WEP instead of WPA). If you don't encrypt your traffic, then by implication, you don't care if people eavesdrop on it.
Laws against this won't stop the bad guys (hackers). They will only unfairly punish good guys (like Google) whenever they make a mistake.
HOW TO FIX IT
Google can easily get rid of the payloads by "slicing" data packets to the first 24 bytes. This preserves the MAC address and signal strength information Google wants, but gets rid of any private information inadvertently gathered from people who do not encrypt their connection.
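A minimal sketch of that fix, assuming captured 802.11 frames are handled as raw byte strings: keep only the first 24 bytes (the header carrying the frame control, duration, and MAC address fields) and discard the rest before anything is written to disk.

```python
HEADER_LEN = 24   # 802.11 header: frame control, duration/ID, and the three address fields

def strip_payload(frame: bytes) -> bytes:
    """Retain only the header so no user data (URLs, e-mail fragments) is ever stored."""
    return frame[:HEADER_LEN]

def sanitize_capture(frames):
    """Apply the truncation to every captured frame before saving the capture."""
    return [strip_payload(f) for f in frames]
```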
EDITORIAL: THE NEED FOR TRANSPARENCY
This situation was only found because somebody audited Google's data. Just because they have no evil intentions doesn't mean they haven't made an evil mistake. The more Google becomes our overlord, the more we should demand that they be open and transparent about what data they are keeping about people.
What I've focused on here is the question of Google's collection of "data" payloads, not the other privacy issues. Some people have accused Google of lying, and of having some nefarious purpose for gathering these packets. However, anybody who has experience in WiFi mapping would believe Google. Data packets help Google find more access-points and triangulate them, yet the payloads of the packets do nothing useful for Google because they are only fragments.
|
<urn:uuid:60705412-4a00-4a81-a6c3-1ddc31145950>
|
CC-MAIN-2022-40
|
https://blog.erratasec.com/2010/05/technical-details-of-street-view-wifi.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00051.warc.gz
|
en
| 0.959591 | 1,926 | 2.59375 | 3 |
What is a green data centre?
The Australian Government can strengthen its environmental credentials through utilising green data centres, but what are green data centres, and how can they reduce government carbon emissions?
Data centres and cloud computing reduce carbon emissions.
The tectonic shift of computing from private data centres to large-scale cloud and Canberra data centre providers has greatly contributed to reducing carbon dioxide emissions over the last ten years. The key factor in this reduction is the greater efficiency of aggregating computing power within data centre and cloud providers.
Research suggests that if a large organisation switches even one of its major enterprise applications to the cloud, it could save an average of 30,000 tons of carbon dioxide within five years, which is the equivalent of taking almost 6,000 cars off the road.
A recent forecast by IDC shows that the continued shift of computing to cloud and data centre providers could prevent the release of one billion metric tonnes of carbon dioxide into the atmosphere between 2021 to 2024.
What is a green data centre?
Green data centres go further to reduce their environmental footprint. A green data centre is an enterprise-class computing facility that is designed and operated on the green computing principles below that are applied throughout the lifecycle of the data centre:
- Energy efficient – minimise power consumption, both for primary computing infrastructure and supporting electronic resources, cooling, backup and lighting.
- Green design – ICT infrastructure and construction materials are selected on their low energy consumption, water utilisation and environmental impact. Reuse, recycle equipment with minimal e-waste.
- Green energy – electricity generation from wind and solar rather than from fossil fuels.
Green data centre certifications and metrics.
There are a number of standards and certifications for building sustainability (whole building approach) and energy efficiency that are employed in the design and operation of green data centres. Global examples include ISO 14001 and LEED, and locally we have NABERS.
ISO 14001 is an international standard that specifies requirements for an environmental management system (EMS). The standard provides a framework for organisations to follow rather than setting performance requirements.
LEED (Leadership in Energy and Environmental Design) is a whole-building certification programme developed by the US Green Business Council that covers energy efficiency and source, internal air quality, water usage and building materials selection and disposal.
Closer to home NABERS (National Australian Built Environment Rating System) is a national government program that provides a rating system to measure and accredit the environmental performance of buildings and tenancies. The system uses a 6-star scale rating system to measure power and water consumption, waste efficiency as well as indoor environmental quality. Buildings are independently assessed and the program is audited by the government.
A number of metrics have been developed to assess power efficiency in data centres, but the most frequently used metric is the Power Usage Effectiveness ratio which was developed by The Green Grid, a global consortium focused on improving energy efficiency within data centres.
Power Usage Effectiveness
PUE = Total Facility Power / IT Equipment Power
This ratio measures how much total power a data centre draws for each unit of power delivered to its IT equipment. The ideal 1:1 ratio describes a data centre that doesn't use any power beyond what runs the IT equipment itself. When the ratio was first introduced in 2007 the industry average was between 2.5 and 3.0; the average more recently is around 1.7, with state-of-the-art facilities achieving 1.59.
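A quick sketch of the calculation (the kilowatt figures are made up for illustration):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Example: a site drawing 1,370 kW in total to run 1,000 kW of IT load.
print(round(pue(1370.0, 1000.0), 2))   # 1.37, comparable to the IC5 figure quoted later
```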
About Macquarie Telecom
Macquarie Telecom recognises the environmental and financial benefits of energy-efficient IT operations, and we continue to revise and implement our Green IT strategy. We are focusing on green technology and IT energy consumption in our Intellicentre data centres in Canberra to establish a low carbon footprint hosting environment.
We continue to investigate ways to minimise our energy use and environmental impact, such as reducing the number of idle processors at night and focusing on vendor hardware that uses less power when processors are actually idle. From our efforts we have achieved Power Usage Effective ratios well below the industry standard, with our IC5 Canberra data centre achieving a 1.37 PUE, and data centre IC3 achieving 1.28 – 19% lower than other state-of-the-art data centres.
|
<urn:uuid:2d31abb1-1a2d-43d8-a079-e03b49b36749>
|
CC-MAIN-2022-40
|
https://macquariegovernment.com/blog/what-is-a-green-data-centre/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00051.warc.gz
|
en
| 0.923853 | 875 | 3.421875 | 3 |