text
stringlengths 234
589k
| id
stringlengths 47
47
| dump
stringclasses 62
values | url
stringlengths 16
734
| date
stringlengths 20
20
⌀ | file_path
stringlengths 109
155
| language
stringclasses 1
value | language_score
float64 0.65
1
| token_count
int64 57
124k
| score
float64 2.52
4.91
| int_score
int64 3
5
|
---|---|---|---|---|---|---|---|---|---|---|
Digital media assets such as documents, images, videos, software are valuable content whose loss or unauthorized use could cause significant loss to the owners and copyright holders.
Therefore, they require protection from people who may want to exploit them for their gain. This is often a challenge due to the digital nature of storage and delivery of these assets.
This is where digital rights management or DRM comes in.
Digital Rights Management is the technology that controls access to content on digital devices. It helps protect the rights of the copyright holder or owner of digital assets and even the users of the assets. This technology defines how end-users interact with the digital assets.
With the continued digitization of all spheres of our life, including work and entertainment, Digital Rights Management has become a critical part of businesses and individuals who want to protect their digital assets and ensure only legitimate and licensed users access them.
How Digital Rights Management Works
Digital Rights Management platform deploys code that controls how end-users can use the digital files. Typically, it can be used to prevent sharing, modification, copying, storing, or limiting the devices that can access it. Digital Rights Management technology can either be hardware or software, with both options effective in introducing access control and permission management.
To achieve access control, the Digital Rights Management platform can:
- Deploy watermarks on documents and images to show ownership
- Capturing and monitoring copyright and license information through metadata
- Require encryption and decryption keys to access data
- Set expiry dates on digital assets, thus preventing unlimited access.
- Restrict users from copying, editing, printing, sharing, or storing the media on their devices. It can also restrict taking screengrabs or screenshots of the content.
- Restrict access to specific devices, IP addresses, or users in particular locations
- Use of embed codes to control how digital assets are published online
Digital Rights Management Use Cases
We can broadly categorize Digital Rights Management use cases into four distinct areas based on what we currently see in the IT landscape.
Entertainment/ media industry
Content creators and media companies in the entertainment industry spend a lot of resources to develop digital content with the hope of earning from it. Pirated copies of their works have always been a thorn in the flesh. Digital Rights Management software gives them more control of the work and safeguards their income.
Like in the entertainment industry, tech companies have borne the pain of pirated copies of their software. In addition to offering software as a service, most software companies have also deployed Digital Rights Management tools to entrench their ownership rights and income.
Enterprise digital rights management helps organizations, companies, and government departments protect their confidential data, share files, comply with relevant laws, collaborate with other organizations, among many other uses.
Benefits of Digital Rights Management Technology
There are many benefits that businesses/organizations derive from Digital Rights Management. It is not possible to discuss all of them in this article, but let’s look at the most critical:
Protecting confidential data
Confidential data such as proprietary information, customer data, strategic plans, employee data, health records, etc., can bring down an organization if the wrong people get hold of it. Digital Rights Management helps protect this data by tracking and restricting access.
Prevent unauthorized use
When users purchase a license to digital assets, they usually agree to abide by specific fair use policies. Digital Rights Management enforces these policies by applying the restrictions.
Protecting revenues by monetizing content effectively
By preventing unscrupulous people from copying other people’s work, individuals and businesses owing digital media assets stand a chance to get a return on their investment.
Compliance with relevant laws
Different regions have laws that govern how specific data and content should be distributed. Some sensitive government files may only be accessed in specified locations.
Other regulations like HIPPA protect confidential healthcare records. Digital Rights Management software comes in handy in ensuring data security.
Digital Rights Management helps restrict content to the right audiences to comply with relevant regulations. Age restriction and regional restriction are easily achieved with Digital Rights Management software.
Enterprise file security
File sharing is a common feature of the modern workplace. With it comes the challenges of data loss. Digital Rights Management software help mitigates this threat by controlling access. It can go further and help investigators understand how data leakage occurred.
Criticisms of Digital Rights Management Platforms
With all the benefits Digital Rights Management offers to digital assets owners, it isn’t devoid of criticism. Perhaps the biggest one is whether it limits legitimate users who have purchased rights to an asset from using it optimally. Some Digital Rights Management platforms lean heavily on protecting owner’s rights and hurt the end-user in the process.
Since Digital Rights Management technologies define and control how users can interact with the content, the question of legitimate license holders facing difficulties consuming the content they paid for arises.
Digital Rights Management software can sometimes hinder the smooth flow of work by introducing stringent restrictions even on less sensitive files in the workplace. CISOs should be open to feedback from users on how Digital Rights Management software may be affecting their work.
A robust Digital Rights Management platform balances the rights of content owners with the legitimate expectations of users who have obtained a license to such content. The user experience should be seamless and unobtrusive to the greatest extent possible.
With the number of digital assets being produced daily across different sectors, digital rights management platforms are very important.
It’s one way to protect your organization from the cybersecurity challenges posed by threat actors out to profit from your digital assets.
In addition, this technology will help protect your customer data and help with compliance by ensuring your content is distributed legally and complies by all data privacy laws.
If you’d like to see how the Lepide Data Security Platform can help you detect and react to security threats within your infrastructure, schedule a demo with one of our engineers or start your free trial today.
|
<urn:uuid:43e7e326-b060-4de9-83ce-50aa24d62b6d>
|
CC-MAIN-2022-40
|
https://www.lepide.com/blog/what-is-digital-rights-management/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00240.warc.gz
|
en
| 0.90218 | 1,225 | 2.875 | 3 |
First introduced in March of 2019, the IoT Cybersecurity Improvement Act of 2019 has now cleared its first hurdle in the House of Representatives and moves on to the Senate for a floor vote. Meant to create a security standard for the government purchase and use of all Internet of Things (IoT) devices, the bill would task the National Institute of Standards and Technology (NIST) and the Office of Management and Budget (OMB) with creating more stringent contractor screening and government use requirements if it ultimately passes.
Addressing the gaping IoT cybersecurity hole
IoT cybersecurity is a significant and ongoing problem everywhere, but particularly in the US government due to an almost complete lack of security standards for both government devices and those used by contractors. If passed, the bill would force government agencies to only purchase devices that can meet these newly established internet connectivity standards.
IoT devices are a particular risk as security standards tend to be much lower than they are for phones, tablets and computers. The IoT cybersecurity problem has persisted since these devices were first introduced years ago, but improvements are only sporadic at best while these devices steadily become more ubiquitous and collect a wider range of personal information. Some devices have no security whatsoever or come with a default password that cannot be changed; many have no means of receiving security updates when a vulnerability is discovered. Manufacturers often scrimp on IoT device security because it’s too difficult to establish an architecture for these unique devices or simply because it’s too expensive.
This problem is compounded by federal government procurement policies that tend to require agencies to go with the lowest bidder that is “technically acceptable,” something that becomes an obvious problem with no formal IoT cybersecurity standards in place. The bill would not create new market standards for IoT device security, but would force the government to only purchase from manufacturers that implement adequate measures. However, there is hope that this would create market pressure that in turn would create a general improvement in IoT cybersecurity standards.
The bill must now survive a Senate floor vote and then go to the president’s desk to be signed into law. NIST would be the agency taking point on developing IoT cybersecurity standards, and would review whatever policy it creates every five years. In addition to setting new national security standards, all of the government’s IoT device vendors would have to create a vulnerability disclosure policy.
In favor of an industry-independent initiative, Ellen Boehm, senior director of IoT product management at Keyfactor commented, “Any time there is an initiative around improving cybersecurity for IoT devices, independent of industry, it helps the collective market challenge the current state and think deeper about best practices around encryption and authentication for this growing population of connected things. We frequently hear about hackers who take advantage of weaknesses in IoT security, maliciously taking control of smart home devices for DDoS attacks or changing functionality of medical devices. The only way to improve our security posture is to design a robust security architecture around our entire IoT systems. Guidelines provided by NIST or other standards groups can really make an impact in how we design security into IoT devices from inception and provide a method to manage authentication and encryption around the IoT device data and functionality over time.”
Waivers may cause concerns
However, the IoT cybersecurity bill going before the Senate includes provisions for waivers that have some security professionals concerned. One particular waiver allows for exceptions when “appropriate to the function of the covered device,” broad wording that could be interpreted as a blanket loophole to make use of any sort of internet connected devices regardless of the security standards. The House bill did not include these broad waiver terms, and the bill may be amended to be closer to that form before it is voted on by the Senate.
Some precedent for this bill can be seen in California’s SB 327, the country’s first IoT cybersecurity law. That law is more expansive in that it puts security requirements on all manufacturers of IoT devices located in California, but it also suffers from some wording that is proving to not be adequately forward-thinking. The California bill’s issue is that it defines a secure device solely as one that is password-protected and allows the user to change the password; it does not require devices to allow for patching to address future vulnerabilities that may develop.
More regulations to address increasing IoT cybersecurity risks
A recent research report issued by Congress estimates that there are about 10 billion IoT devices in use today, with this number expected to swell to about 21.5 billion in the next five years. The US government spends tens of billions on these devices and their connectivity solutions each year. Some are in highly sensitive applications: utility grid sensors, patient care equipment at military hospitals, and a wide range of military field applications among other categories that are targets of high interest to hackers. The theft of sensitive information is far from the only risk that poor IoT cybersecurity presents; they can also provide an initial foothold from which attackers can gain greater access to networks.
In addition to the IoT Cybersecurity Improvement Act, lawmakers are attempting to address this considerable security risk with a companion piece of legislation called the “Developing and Growing the IoT Act.” This bill establishes a federal-level working group that would consult with technology industry leaders to assist in establishing the federal IoT security standards.
|
<urn:uuid:a4671d13-1ce3-4fbb-afdf-cf661cc26e0e>
|
CC-MAIN-2022-40
|
https://www.cpomagazine.com/cyber-security/iot-cybersecurity-improvement-act-of-2019-passes-house-of-representatives-would-demand-cybersecurity-standards-for-devices-and-contractor-requirements/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00240.warc.gz
|
en
| 0.957208 | 1,073 | 2.625 | 3 |
MySQL can use indexes to support a range of different functions. Indexes are not just for optimizing MySQL performance when reading data. These functions include the following:
• Maintaining data integrity
• Optimizing data access
• Improving table joins
• Sorting results
• Aggregating data
MySQL uses both primary and unique keys to enforce a level of uniqueness of your storage data per table. The differences between primary and unique keys are as follows:
• Only one primary key may exist per table.
• A primary key cannot contain a NULL value.
• A primary key provides a means of retrieving any specific row in the table
• If an AUTO INCREMENT column is defined, it must be part of the primary key.
• More than one unique key per table is supported.
• A unique key can contain a NULL value where each NULL value is itself unique (that is, NULL != NULL).
The million_words table contains a primary key on the id column. This constraint ensures no duplicate values. Here is an example:
mysql> INSERT INTO million_words(id, word) VALUES(1, ‘xxxxxxxxx’); ERROR 1062 (23000): Duplicate entry ‘1’ for key ‘PRIMARY’
Likewise, the million_words table contains a unique key on the word column. This constraint ensures a duplicate word cannot be added.
mysql> INSERT INTO million_words(word) VALUES(‘oracle’); ERROR 1062 (23000): Duplicate entry ‘oracle’ for key ‘word’
In addition, some MySQL storage engines support foreign keys for data integrity. These are not actually an index; they are referred to as a constraint. However, a common prerequisite for certain implementations of foreign keys is that an index exists in both the source and parent tables to enable the management of foreign keys. Currently only the InnoDB storage engine of the default MySQL storage engines supports foreign key constraints and there is no requirement for a corresponding index; however, this is highly recommended for performance.
CAUTION Although MyISAM does not support foreign key constraints, the CREATE TABLE (…) ENGINE=MyISAM syntax allows for the definition of foreign keys via the REFERENCES syntax.
Optimizing Data Access
Indexes allow the optimizer to eliminate the need to examine all data from your table during query execution. By restricting the number of rows accessed, query speeds can be significantly improved. This is the most common use for an index.
For example, in our example table of one million words, if the word column is not indexed, each SELECT would need to scan all one million rows sequentially in the random order in which they were added to find zero or more matching rows every time. Even if the data were originally loaded in sequential order, SQL does not know this and must process every row to find a possible match.
For example, we will create a table without an index:
mysql> CREATE TABLE no_index_words LIKE million_words; mysql> ALTER TABLE no_index_words DROP INDEX word; mysql> INSERT INTO no_index_words SELECT * FROM million_words; mysql> SELECT * FROM no_index_words WHERE word='oracle';
1 row in set (0.25 sec)
When the table has an index on the word column, each SELECT would first scan the index that is ordered and is well optimized for searches to identify a reference to the zero or more rows that contain the matching information. When the index is defined as unique, the SELECT would know that the results contained at most one matching row. Here is another example, using our million_words table:
mysql> SELECT * FROM million_words WHERE word='oracle';
1 row in set (0.00 sec)
The indexed column example retrieves a row in less than 10 milliseconds via this MySQL client output. When not indexed, the row(s) retrieved take 250 milliseconds.
Adding an index is not an automatic improvement in performance for all types of SQL queries. Depending on the number of rows required, it might be more efficient to perform a full table scan of all data. This is a difference between random I/O operations of retrieving individual rows from index lookups and a sequential I/O operation to read all data.
Throughout the remainder of the book, we will be providing more detailed examples of how indexes are used for query restriction.
In addition to restricting data on a given table, the other primary purpose for an index is to join relational tables conveniently and efficiently. The use of an index on a join column provides the immediate performance benefit as described in the previous section when now matching a value in a different table. The mastering of creating correct indexes to perform efficient table joins is fundamental for SQL performance in all relational databases.
MySQL indexes store data in a sorted form. This makes the use of the index very applicable when you would like the result of a SELECT statement in a given order. It is possible to sort data for any SELECT query via the ORDER BY operator. Without an index on the ordered-by column, MySQL will typically perform an internal filesort of the retrieved table rows. The use of a predefined index can have a significant performance improvement on a high concurrency system that is required to sort hundreds or thousands of individual queries per second, since the results are naturally ordered in the index. Simply having an index that matches the order you want for your results does not automatically mean that MySQL will choose to use this index.
Indexes can be used as a means of calculating aggregated results more easily. For example, the sum of the total of all invoices for a given period might be more efficiently performed with an appropriate index on the date and invoice amount.
|
<urn:uuid:5051763c-ea02-4275-9407-1e3e8f694a69>
|
CC-MAIN-2022-40
|
https://logicalread.com/mysql-index-usages-mc12/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00240.warc.gz
|
en
| 0.851466 | 1,234 | 3.015625 | 3 |
Remote sites typically run off sustainable and uninterpretable energy sources. Primarily commercial electricity, with some wind turbine and/or solar source to supplement. Some gear, however, requires liquid fuel that requires regular refueling for operation. Now how do you track fuel levels for remote sites, especially for a lot of locations?
When it comes to monitoring how much fuel is in your tanks, many options exist. Some are basic sight-only solutions. Either open the cap and look or maybe you get lucky and have a viewport or a gauge to look at. This requires techs to be in the field constantly to track levels manually.
This not only increases costs through commuting and labor expense but also has room for error. If a mistake results in a fuel delivery delay, you could lose a site and have some unhappy customers.
Fuel level sensors (aka fuel gauges) are devices that allow technicians to monitor fuel consumption so they know when their tanks need to be refilled.
Some solutions may work for home-owner projects. When it comes to industrial applications though, there are more robust solutions. There are basically two kinds of level monitoring: switches and sensors (also known as senders). Both allow for automatic response to level events but the events are significantly different.
Fuel Level Switches is a float type fuel level sensor that operates on a simple principle. Utilizing a dry-reed switch and a float, it provides a signal path when the liquid is below a specified level. When the liquid is above the specified level, the signal path is broken. The signal is then monitored by a Remote Telemetry Unit (RTU) using a digital input.
This data can be processed locally for on/off status and possibly local alert. The data can also be sent to a Network Operating Center (NOC) for Remote Alarm Monitoring and Control if you have a Supervisory Control and Data Acquisition (SCADA) system in place.
The signal is clearly a digital signal. A digital fuel level sensor does not provide any hint of what the liquid level is or how far above or below the switch point it may be at any point in time. The signal simply switches between Off and On when the float moves and crosses the switch point.
Fuel Level Sensors (or sending units) use a different approach. Using a variable resistance component and a similar float, they provide a signal that varies between two stop values. Fuel Level Sensor typically provide a 4-20mAmp or 0-6Vdc output.
The signal can be monitored by an RTU using analog input. This data can be processed locally against thresholds (ideally user-defined) for out-of-range conditions and local alerts. This data can also be sent to a NOC for Remote Alarm Monitoring and Control by a SCADA system.
Notably, this data is analog data and provides basically a real-time indication of the liquid level. Alarm events are generated by the RTU or by the Remote Monitoring and Control system by comparing the monitored level against one or more thresholds.
Both Fuel Level Monitoring solutions allow for better management of your fuel levels. A more controlled response due to continuous monitoring. More efficient management by getting rid of site visits to check level or to fill tanks that are still well above the refill level.
Both improvements can be part of a Control solution for Fuel Level Monitoring. You will save time, reduce costs, and prevent gear downtime. A win/win for you and the customers that depend on your service.
You need to see DPS gear in action. Get a live demo with our engineers.
Download our free Monitoring Fundamentals Tutorial.
An introduction to Monitoring Fundamentals strictly from the perspective of telecom network alarm management.
Have a specific question? Ask our team of expert engineers and get a specific answer!
Sign up for the next DPS Factory Training!
Whether you're new to our equipment or you've used it for years, DPS factory training is the best way to get more from your monitoring.Reserve Your Seat Today
|
<urn:uuid:02cfc93e-4137-4324-afb4-526f5b869765>
|
CC-MAIN-2022-40
|
https://www.dpstele.com/network-monitoring/fuel/level-sensor.php
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00240.warc.gz
|
en
| 0.912572 | 833 | 2.875 | 3 |
Students who take this course will receive a comprehensive study material with 100% coverage of the exam objectives. This eBook includes videos and practice questions and will be valid for 12 months. Students will also have access to a remote lab environment that enables hands on practice in actual software applications. Gradable, hands-on assessments provide an accurate picture of learners’ ability to perform job tasks correctly and efficiently. Lab scenarios are aligned with CompTIA exam objectives. Lab access will be valid for 12 months.
This course can benefit you in two ways. If you intend to pass the CompTIA Network+ (Exam N10-008) certification examination, this course can be a significant part of your preparation. But certification is not the only key to professional success in the field of network support. Today’s job market demands individuals have demonstrable skills, and the information and activities in this course can help you build your network administration skill set so that you can confidently perform your duties in any entry-level network support technician role.
Lesson 1: Comparing OSI Model Network Functions
- Topic 1A: Compare and Contrast OSI Model Layers
- Topic 1B: Configure SOHO Networks
Lesson 2: Deploying Ethernet Cabling
- Topic 2A: Summarize Ethernet Standards
- Topic 2B: Summarize Copper Cabling Types
- Topic 2C: Summarize Fiber Optic Cabling Types
- Topic 2D: Deploy Ethernet Cabling
Lesson 3: Deploying Ethernet Switching
- Topic 3A: Deploy Networking Devices
- Topic 3B: Explain Network Interfaces
- Topic 3C: Deploy Common Ethernet Switching Features
Lesson 4: Troubleshooting Ethernet Networks
- Topic 4A: Explain Network Troubleshooting Methodology
- Topic 4B: Troubleshoot Common Cable Connectivity Issues
Lesson 5: Explaining IPv4 Addressing
- Topic 5A: Explain IPv4 Addressing Schemes
- Topic 5B: Explain IPv4 Forwarding
- Topic 5C: Configure IP Networks and Subnets
Lesson 6: Supporting IPv4 and IPv6 Networks
- Topic 6A: Use Appropriate Tools to Test IP Configuration
- Topic 6B: Troubleshoot IP Networks
- Topic 6C: Explain IPv6 Addressing Schemes
Lesson 7: Configuring and Troubleshooting Routers
- Topic 7A: Compare and Contrast Routing Concepts
- Topic 7B: Compare and Contrast Dynamic Routing Concepts
- Topic 7C: Install and Troubleshoot Routers
Lesson 8: Explaining Network Topologies and Types
- Topic 8A: Explain Network Types and Characteristics
- Topic 8B: Explain Tiered Switching Architecture
- Topic 8C: Explain Virtual LANs
Lesson 9: Explaining Transport Layer Protocols
- Topic 9A: Compare and Contrast Transport Protocols
- Topic 9B: Use Appropriate Tools to Scan Network Ports
Lesson 10: Explaining Network Services
- Topic 10A: Explain the Use of Network Addressing Services
- Topic 10B: Explain the Use of Name Resolution Services
- Topic 10C: Configure DNS Services
Lesson 11: Explaining Network Applications
- Topic 11A: Explain the Use of Web, File/Print, and Database Services
- Topic 11B: Explain the Use of Email and Voice Services
Lesson 12: Ensuring Network Availability
- Topic 12A: Explain the Use of Network Management Services
- Topic 12B: Use Event Management to Ensure Network Availability
- Topic 12C: Use Performance Metrics to Ensure Network Availability
Lesson 13: Explaining Common Security Concepts
- Topic 13A: Explain Common Security Concepts
- Topic 13B: Explain Authentication Methods
Lesson 14: Supporting and Troubleshooting Secure Networks
- Topic 14A: Compare and Contrast Security Appliances
- Topic 14B: Troubleshoot Service and Security Issues
Lesson 15: Deploying and Troubleshooting Wireless Networks
- Topic 15A: Summarize Wireless Standards
- Topic 15B: Install Wireless Networks
- Topic 15C: Troubleshoot Wireless Networks
- Topic 15D: Configure and Troubleshoot Wireless Security
Lesson 16: Comparing WAN Links and Remote Access Methods
- Topic 16A: Explain WAN Provider Links
- Topic 16B: Compare and Contrast Remote Access Methods
Lesson 17: Explaining Organizational and Physical Security Concepts
- Topic 17A: Explain Organizational Documentation and Policies
- Topic 17B: Explain Physical Security Methods
- Topic 17C: Compare and Contrast Internet of Things Devices
Lesson 18: Explaining Disaster Recovery and High Availability Concepts
- Topic 18A: Explain Disaster Recovery Concepts
- Topic 18B: Explain High Availability Concepts
Lesson 19: Applying Network Hardening Techniques
- Topic 19A: Compare and Contrast Types of Attacks
- Topic 19B: Apply Network Hardening Techniques
Lesson 20: Summarizing Cloud and Datacenter Architecture
- Topic 20A: Summarize Cloud Concepts
- Topic 20B: Explain Virtualization and Storage Area Network Technologies
- Topic 20C: Explain Datacenter Network Architecture
The Official CompTIA Network+ Guide (Exam N10-008) is the primary course you will need to take if your job responsibilities include network administration, installation, and security within your organization. You can take this course to prepare for the CompTIA Network+ (Exam N10-008) certification examination.
To ensure your success in this course, you should have basic IT skills comprising nine to twelve months’ experience. CompTIA A+ certification, or the equivalent knowledge, is strongly recommended.
- Deploy and troubleshoot Ethernet networks.
- Support IPv4 and IPv6 networks.
- Configure and troubleshooting routers.
- Support network services and applications.
- Ensure network security and availability.
- Deploy and troubleshooting wireless networks.
- Support WAN links and remote access methods.
- Support organizational procedures and site security controls.
- Summarize cloud and datacenter architecture.
|
<urn:uuid:3f16789f-2b15-4e89-8db5-fa84dec19d6e>
|
CC-MAIN-2022-40
|
https://www.interfacett.com/training/comptia-network-plus-n10-008/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00240.warc.gz
|
en
| 0.752383 | 1,281 | 2.59375 | 3 |
We’ve talked before about changes that come with new ways of thinking and waves of new technology. We also know that some of these are changes that completely rework how industries, economies, even whole societies are transformed. So, what can we predict about the Internet of Things (IoT), particularly the Industrial IoT? It’s becoming clearer just how big this shift will be. So, how do we need to think about or do things differently? Standing at the start of the industrial revolution it was probably pretty difficult to predict just how far it would go, how much it would change everything. Does IoT have the same potential for change?
One of my favorite quotes about predicting the future comes from Sir William Preece, Chief Engineer of the British Post Office in 1876. Upon hearing about the invention of the telephone, Sir William said: “The Americans have need of the telephone, but we do not. We have plenty of messenger boys.” Clearly, Sir William was stuck in his own present context and couldn’t envision the new one. As we learn more about the IoT and its potential, we can take a lesson from Sir William and not be too complacent about just what may happen next, or how accurately we can foresee it. But there are lots of plausible opinions.
Daniel Wellers is Strategy and Research Lead for SAP Portfolio Marketing. He imagines the future of the Internet of Things in a piece published on the World Economic Forum website. He writes: “We’ll do truly different things, instead of just doing things differently. Today’s processes and problems are only a small subset of the many, many scenarios possible when practically everything is instrumented, interconnected, and intelligent.”
A McKinsey report projects the economic value of IoT at $11 trillion by 2025. By then they expect it will touch every part of our lives, from smart cities to cars, farms, hospitals, and factories. That’s in line with comments and cautions from Robert Smith at the January Forum meeting “I prefer to call it the Internet of Everything,” he said, and, “We need not end up where technology takes us, but make sure it takes us where we want to go.” That session was titled “The Internet of Things Is Here”.
Well, it may be here, but not everywhere. There has been a good deal of consumer centered IoT activity ranging from smart watches to internet-connected homes. There are also some cautionary tales. Google, for example, was out in front with its purchase of Nest a “smart appliance company”. However, recently the founder of Nest resigned. The company had technical difficulties that generated public customer complaints. And Nest is not alone in that regard. What early adopters are discovering is the reality of a fractured marketplace and as yet unproven products. Smart homes do have promise. They just aren’t quite ready yet.
The industrial IoT is proving to be different. Several industry sectors have been early IoT adopters, Agriculture and Energy for example. In most locations, agriculture accounts for as much as 90% of water consumption, so, that’s a natural place to look for ways to achieve improved water management and sustainability. We know that water is a precious resource and that increasing water scarcity is a real threat to economic and social stability. Think California’s Central Valley where IoT technology is combining with new policies and infrastructures to help. For example, IoT-enabled irrigation systems keep track of soil conditions to automate watering and minimize waste. Farmers now have sensors available in everything from smart barns to tractors to cows. This is “precision farming” where resources are more efficiently managed and crop yields increase.
In the energy and utility sector IoT is being used to address the challenges of energy efficiency, aging infrastructure, and increasing demand. Utilities also must contend with government regulations, particularly around pollution controls. IoT is helping with data collection from meters, equipment, and the transmission grid. Sensors report on faulty transformers or trees interfering with power lines. They can monitor hard-to-access infrastructure and use predictive analytics to make decisions around maintenance and equipment upgrade.
There are examples of IoT adoption across major industries, including healthcare, manufacturing, transportation, and retail. A recent piece in the Boston Globe discussed the General Electric headquarters move to Boston, and “the potential to galvanize our Internet of Things industry cluster of businesses, to make us the leader in a still-emerging field that promises to connect machines big and small – in our homes, on our streets, in our factories – and transform them with the help of vast reams of data, that can be crunched instantly.” I don’t know if Boston will be the Silicon Valley of IoT, but I do know excitement is building. IoT is here.
Sign up for blog updates via email.Subscribe
|
<urn:uuid:a4eca269-a0e5-48e4-8cb4-2ae3b56fba74>
|
CC-MAIN-2022-40
|
https://www.actifio.com/company/blog/post/industrial-internet-things-will-change-way-work/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00240.warc.gz
|
en
| 0.950297 | 1,029 | 2.578125 | 3 |
May 13, 2021
Types of Network Topology Explained
When building a computer network, you need to define which network topology you want to use. There are multiple types of network topologies used nowadays, each with its pros and cons. The topology you choose determines the optimal performance of your network, scalability options, ease of maintenance, and the costs of building the network. That’s why it is important to select the right network topology type.
This blog post covers types of network topology, their advantages, and their disadvantages. It also provides recommendations on which network topology to use in different scenarios. Practical examples of using a specific type of network topologies can help you understand when each topology can be applied. After reading this blog post, you should be able to select the needed topology and equipment to build your own network.
Table of Contents
- What is network topology?
- Point-to-point topology
- Bus topology
- Token Bus
- Ring topology
- Token ring
- Dual ring
- Star topology
- Hub vs. switch: What’s the difference?
- Star topology in real life
- Frame format
- Optical connection
- Advantages of the star topology
- Wi-Fi connection
- Tree topology
- An example of network configuration
- What is a router?
- Wi-Fi connection
- Mesh topology
- Wi-Fi connection
- Hybrid topology
- Types of cables
- Coaxial cable
- Twisted pair
- Optical fiber cable
What is Network Topology?
The network topology or network configuration defines the structure of the network and how network components are connected. Network topology types are usually represented with network topology diagrams for convenience and clarity. There are two types of network topology: physical and logical.
Physical topology describes how network devices (called computers, stations, or nodes) are physically connected in a computer network. The geometric scheme, connections, interconnections, device location, the number of used network adapters, types of network adapters, the type of cable, cable connectors, and other network equipment are the aspects of the physical network topology.
Logical topology represents the data flow from one station to another, how the data is transmitted and received, the path of data in the network, and which protocols are used. The logical network topology explains how data is transferred over a physical topology. Cloud and virtual network resources are part of the logical topology.
Point-to-Point Network Topology
Point-to-point network topology is the simplest network topology used when only two computers or other network devices are connected to each other. A single piece of cable is used in this case. The most common example of the point-to-point network topology is connecting two computers (that have Ethernet network adapters with RJ-45 ports) with a twisted pair cable (UTP Cat 5e, FTP Cat 5e, STP Cat 5e, etc.). The point-to-point type of topologies is also called the P2P topology.
Refer to the last section of the blog post to learn about the different Types of Cables.
Ethernet crossover cable of category 5e is a cable that has four twisted pairs of wires. The cable has RJ-45 connectors on both ends, with T568A wiring on one end of the cable and T568B on the other end. The crossover cable is used to connect network devices of the same type, such as two Ethernet cards of different computers. Modern network cards can work with a patch cable without a crossover cable when connecting two computers using the point-to-point network topology. The connection is possible thanks to Ethernet Auto MDI-X support (medium dependent interface crossover).
Patch cords or patch cables are used to connect a network card of a computer to a switch and to connect switches to each other. Both ends of a patch cord are crimped by using the T568B standard (T568A also can be used for both ends of the patch cord, but this practice is not common).
Bus Network Topology
In a bus topology, the main cable is called a common cable or a backbone cable. The stations are connected to this main cable by using other cables that are called drop lines. The tap device is used to connect drop lines to the main cable. An RG-58 coaxial cable with an impedance of around 50-52 Ohms is usually used to build the network in a bus topology. BNC (Bayonet Neill-Concelman) connectors are used to connect parts of the network and connect a cable to the network card. Terminators are devices installed on each end of the backbone cable to adsorb signals and avoid reflecting the signals back to the bus (reflecting signals back causes serious issues in the network).
The installation difficulty of the bus topology is medium. The topology requires fewer cables than other types of network topology and costs less. This network topology is used for small networks. Scalability is low because the length of the backbone cable is limited and so is the number of stations that can be connected to the backbone cable. Every network device is connected to a single cable.
A bus topology makes detecting network failures difficult. If the main cable is corrupted, the network goes down. Every additional node slows down the speed of data transmission in the network. Data can be sent only in one direction and is half-duplex. When one station sends a packet to a target station, the packet is sent to all stations (broadcast communication). However, only the target station receives the packet (after verifying the destination MAC address in the data frame). This working principle causes network overload and is not rational. The network of the bus network topology type works in half-duplex mode.
The half-duplex mode does not allow stations in the network to transmit and receive data at the same time. The whole channel bandwidth is used when data is transferred in either direction. When one station is sending data, other stations can only receive data.
In the full-duplex mode, both stations can transmit and receive data simultaneously. The link capacity is shared between signals going in one direction and signals going in another direction. The link must have two separate physical paths to send and receive data. As an alternative, the entire capacity can be divided between signals going in both directions.
10BASE2 is part of the IEEE 802.3 specifications used for Ethernet networks with coaxial cable. The maximum cable length ranges between 185 and 200 meters. The maximum length of thick coaxial cable for the 10BASE5 standard is 200 meters.
CSMA/CD (Carrier Sense Multiple Access/Collision Detection) is the technology used to prevent collisions (when two or more devices transmit data at the same time and this leads to corruption of the transmitted data) in the network. This protocol decides which station in what moment can transmit data. IEEE 802.3 is the standard that defines LAN (local area network) access methods using the CSMA/CD protocol.
IEEE 802.4 is the Token Bus standard that is used to create a logical token ring in networks built using the bus topology. A token is passed from one station to another in a defined sequence that represents the logical ring in the clockwise or counter-clockwise direction. On the following image, for Station 3, the neighbors are Station 1 and Station 5, and one of them is selected to transmit data depending on the direction. Only the token holder (the station having the token) can transmit frames in the network. IEEE 802.4 is more complex than the IEEE 802.3 protocol.
The Token Bus frame format. The total frame size is 8202 bytes, and the frame consists of 8 fields.
- Preamble (1 byte) is used for synchronization.
- Start delimiter (1 byte) is the field used to mark the beginning of the frame.
- Frame control (1 byte) verifies whether this frame is a control frame or a data frame.
- Destination address (2-6 bytes) specifies the address of the destination station.
- Source address (2-6 bytes) specifies the address of the source station.
- Payload (0-8182 bytes) is a field of the variable length to carry the useful data from the network layer. 8182 bytes is the maximum value if the 2-byte address is used. If the address length is 6 bytes, then the maximum size of the payload field is 8174 bytes, accordingly.
- Checksum (4 bytes) is used for error detection.
- End delimiter (1 byte) marks the end of the frame.
The bus network topology is not recommended for networks when transferring a large amount of traffic. Taking into account that the bus network topology with coaxial cables was used in the 1990s, and the maximum speed is 10 Mbit/s, you should not use this topology to build your network nowadays.
Ring Network Topology
The ring network topology is a modification of the bus topology. In the ring network topology, each station is connected to two other stations on either side. The two other stations are neighbors of this station. Data travels sequentially in one direction, hence, the network works in half-duplex mode. There are no terminators, and the last station is connected to the first station in the ring. The ring topology is faster than the bus topology. The coaxial cable and connectors used to install a network of the ring topology are the same as those used for the bus network topology.
If you build a large network using the ring topology, use repeaters to prevent data loss when transferring data over the network between stations on long cable fragments. Generally, each station works as a repeater and amplifies the signal. After data is transmitted, the data travels along the ring and passes intermediate nodes until this data is received by the destination device.
You can have higher latency if the number of stations connected to the network is high. For example, if there are 100 computers in the network, and the first computer sends a packet to the 100th computer in the ring, the packet has to pass through 99 stations to reach the target computer. Remember that data is transferred sequentially. All nodes must remain active to transmit data, and for this reason, the ring topology is classified as an active network topology. The risk of packet collisions is reduced because only one node in the network can send packets at a time. This approach allows equal bandwidth for each node in the network.
The token ring network is the implementation of the IEEE 802.5 standard. This topology works by using the token-based system. Token ring is the technology introduced in 1984 by IBM. The token is the marker that travels over the loop in one direction. Only the node that has the token can transmit data.
The first station that starts to work in the network becomes the monitoring station or the active monitor, controls the network state, and removes floating frames from the ring. Otherwise, continuously floating frames circulate in the ring for an unlimited time. The active monitor is also used to avoid lost tokens (by generating a new token) and to clock errors.
The IEEE 802.5 frame format for a token ring network is displayed on the diagram below.
- Start delimiter (1 byte) is used for synchronization and for notification of a station that a token is arriving.
- Access control (1 byte) is the field that contains the token bit, monitor bit, and priority bits.
- Frame control (1 byte)
- Destination address (6 bytes) – defines a MAC address of the destination device.
- Source address (6 bytes) – defines a MAC address of the sender.
- Payload (0 bytes or more) is the useful data (IP packet) that is transferred in a frame, and the size of the payload can vary from 0 to maximum token holding time.
- Checksum (4 bytes), which is also called frame check sequence or CRC (cyclic redundancy check), is used to check errors in the frame. Corrupted frames are discarded.
- The end delimiter (1 byte) marks the end of the frame.
- Frame status (1 byte) is a field used to terminate a data frame and serves as the ACK. This field can be set by a receiver and indicates whether the MAC address was recognized and the frame was copied.
The difficulty of the ring topology installation is medium. If you want to add or remove a network device, you need to change only two links. The ring topology is not expensive to install. But the list of advantages ends here.
Now let’s highlight the disadvantages of the ring network topology. Each fragment of the network can be a point of failure. A failure can be caused by a broken cable, damaged network adapter of a computer, cable disconnect, etc. In the case of link failure, the entire network fails because a signal cannot travel forward and pass the point of failure. Failure of one station causes failure of the entire network. All the data travels around the ring by passing all the nodes until reaching the destination node. Troubleshooting is difficult.
All nodes in the network of the ring topology share bandwidth. As a result, when adding more nodes into the ring, communication delays and network performance degradation occur. To reconfigure the network or to add/remove nodes, the network must be disconnected and stay offline. Network downtime is not convenient and cost-effective for an organization. Thus the ring network topology is not the best choice to build a scalable and reliable network.
The ring network topology in local area networks was popular in the 1990s until the beginning of the mass usage of the Ethernet standard with twisted-pair cables and more progressive star topology. Nowadays, the ring topology is not used and is not recommended for home and office usage due to the low network speed of 4 or 16 Mbit/s and the other disadvantages mentioned above.
The dual ring
The dual ring is a modified version of the ring topology. Adding a second connection between nodes in a ring allows the transfer of data in both directions and makes the network work in a full-duplex mode. Data is sent in clockwise and counter-clockwise directions in the network. If a link in the first ring fails, the second ring can be used as a link backup to continue network operation until the issue in the first ring is fixed.
The optical ring in modern networks uses the ring network topology. This network topology is primarily used by internet service providers (ISP) and managed service providers (MSP) to create connections in wide area networks.
Technologies and standards used to create an optical fiber ring:
- Resilient Packet Ring (RPR) that is known as IEEE 802.17
- STP (Spanning Tree Protocol) to avoid loops in the network
- Multiple section shared protection ring (MS-SPRing/4, MS-SPRing/2, etc.)
- Subnetwork connection protection (SNCP)
- Four-fiber bidirectional line-switched rings (BLSR/4), BLSR/2, etc.
- Synchronous transport module (STM-4, STM-16, STM-64, etc.)
- Synchronous Optical Networking (SONET) and Synchronous Digital Hierarchy (SDH)
Professional network equipment, such as switches, that support the appropriate standards are used to create a fiber ring. The price for this hardware is high. The optical ring with high availability is used to connect nodes in different districts of a city or in different cities to the high available and high-speed circle.
Star Network Topology
The star topology is the most common network topology used nowadays for the many advantages it provides. This topology requires a centralized unit, which is called a switch, and all other network devices are connected to this switch with own network cable. A switch has multiple ports (usually 4, 5, 8, 16, 24, 48, etc.), and all needed stations are connected to the switch to interact with each other in the network. There are no direct physical connections between the two stations in this case. If two stations interact with each other in the network, a frame leaves the network adapter of the sender and is sent to the switch, and then a switch retranslates the frame to the network card of the destination station.
The star network topology is easy to scale. If there are no free ports in a switch, change the switch to one with more ports or connect a second switch to the existing switch with a patch cord to expand the network of the star topology. Note that when the network is highly loaded, this connection between switches is a bottleneck because the data transfer rate between stations connected to different switches can be less than the data transfer rate between stations connected to ports of the same switch when. If you need to add a station to the network, take a patch cord, insert one end into the network adapter of the endpoint device and another end into the switch.
If any of the stations connected to a switch fails, the network continues to work without interruption. If a switch goes offline, the network cannot operate. Full-duplex and half-duplex modes are supported in the star network topology. This topology is easy in terms of maintenance.
Avoid loops when connecting network devices. If more than two connections are present between two network devices working on the second layer, a loop is created. For example, if you connect two switches by using two patch cords or insert a patch cord into two ports of one switch, you get a loop. The loop leads to communication disruptions within the network and broadcast storms that continue until you eject the unneeded network cable and power off the switch. If you want to create redundant connections, use devices with multiple network adapters that support NIC teaming or link aggregation.
Hub vs switch: What is the difference?
Both hub and switch are used to connect multiple devices in a local area network (LAN) that uses the star topology. When a signal that encodes a frame arrives at one port of a hub (a sender station that is connected to this port with a cable), the signal is sent to all ports of the hub and, thereby, to all devices connected to the hub. Only the station whose network card has the MAC address that is defined as the destination MAC address in the frame can receive the frame. All other network devices connected to the hub that are not destination devices and whose network adapters have other MAC addresses detect the sent signals and reject this frame. The disadvantage of the hub is that the network is overloaded. Instead of sending a frame from the hub to a destination network card, the frame is sent to all devices connected to ports of the hub. The network flooding reduces the bandwidth of the network. A hub operates on the first layer of the OSI model (a physical layer).
A switch is a more intelligent device. A switch remembers MAC addresses of connected devices and adds MAC addresses of devices connected to each port of the switch to the MAC address table. When a sender sends a frame to a target device, the frame is sent to the switch. The switch reads the MAC address of the network card of a destination station and checks the internal MAC address table to identify to which port of the switch the destination device is connected. Then, the switch sends the frame only to the port associated with the MAC address of the target device. There is no flooding and network overload. This approach ensures high network performance. There are no collisions when using a switch in a star network topology. A switch operates on the second layer of the OSI model (the data link layer). See the table below to see all OSI layers.
The Open System Interconnection Model (OSI)
|Layer number||Layer name||Protocol Data Unit (PDU)||Examples of protocols and standards|
|7||Application||Data received or transmitted by an application||HTTP, FTP, POP3, SMTP|
|6||Presentation||Data formatted for presentation||SSL, TLS|
|5||Session||Data passed to the network connection||NetBIOS, SAP|
|4||Transport||TCP segments, UDP datagrams||TCP, UDP|
|2||Data Link||Frames||Ethernet, PPP, STP, Token Ring|
|1||Physical||Bits||100BaseTX, RS232, ISDN|
A switch is more secure than a hub. Since 2011, using hubs to connect network elements is deprecated by IEEE 802.3, the set of standards and protocols for Ethernet networks.
Note: Switches, hubs, routers, modems, and Wi-Fi access points belong to active network equipment. Active equipment has electronic circuits and needs electric power for operation. Cables, connectors, transceivers, patch panels, rack mounts, and Wi-Fi antennas are passive network equipment that don’t need electricity. Passive network equipment are used to connect active network equipment.
Star topology in the real life
Let’s look in detail at how traditional Ethernet networks use the star network topology and how the IEEE 802.3 standard works. Twisted pair cables (4x2 wires) are the most common. They are usually used for these networks, and ends of cables are crimped with RJ-45 connectors (that are also known as 8P8C - 8 Position 8 Contact). Both ends of the cable are crimped by using the EIA/TIA 568B standard. You can also crimp both ends of a cable by using EIA/TIA 568A because the working principle remains the same, but this practice is not common. Find more information about cables in the Types of Cables section at the end of this blog post.
10BASE-T is the first implementation of Ethernet and uses a twisted pair cable (T in the name means the Twisted pair, BASE means baseband signaling). The maximum speed of the network is 10 Mbit/s. The required cable is UTP Cat.3 or higher (only orange and green pairs are used).
100BASE-TX, which is known as Fast Ethernet, was implemented in 1995 (IEEE 802.3u). This standard provides the speed of 100 Mbit/s in the network and requires the UTP Cat 5 cable.
1000BASE-T is known as Gigabit Ethernet (GbE or 1 GigE) and was described in the IEEE 802.3ab standard (that was ratified in 1999). The maximum data transfer rate is 1000 Mbit/s (1 Gbit/s). The required cable is UTP Cat 5e.
2.5GBASE-T is the standard referred to as IEEE 802.3bz, and the maximum data transfer speed is 2.5 Gbit/s. The IEEE 802.3bz standard was approved in 2016. The UTP Cat 5e cable is required.
5GBASE-T is similar to 2.5GBASE-T but provides a data transfer rate of 5 Gbit/s and requires the higher class of a cable – UTP Cat 6.
10GBASE-T is the fastest Ethernet standard that uses cables with copper wires with a maximum speed of 10 Gbit/s. The required cable is UTP Cat 6A. The IEEE 802.3an standard contains specifications for using a twisted pair for 10 Gbit/s connections.
RJ-45 connectors are used for cables in the previous Ethernet standards.
The maximum length of cable between the ports of two network devices is 100 meters for each standard mentioned above if twisted-pair cable requirements are met. If you need to connect two network devices that are 200 meters apart, use two 100-m segments of cable and connect them to a switch installed in the middle at 100 m from each device.
To achieve the highest speed for each standard, you must meet the minimum requirements: use the cable of the appropriate category, a switch that supports the needed mode, and network cards of devices connected to the switch. For example, if you want your devices in the network to operate at 1 Gbit/s speed, you must install 1-Gbit network cards on these devices, connect them to a 1-Gbit switch, and use the UTP Cat 5e cable that is crimped with RJ-45 connectors as a patch cord using the EIA/TIA 568B standard. When all connected devices work at a speed of 1 Gb/s, they work only in full-duplex mode.
Auto-negotiation is a feature used to determine the optimal network speed and data transferring mode (full-duplex or half-duplex) for a port linked to the port of another connected device. Auto-negotiation automatically determines the configuration of a port that is connected to the other end of the cable and sets the data transfer rate based on the lower value. If you connect a 100-Mbit network card to a 1-Gbit switch with a patch cord (Cat 5e), then the speed of the network connection is 100 Mbit/s. The backward compatibility with previous lower-speed Ethernet standards is a useful feature.
The length of a standard Ethernet IEEE 802.3 frame is 1518 bytes, and the standard MTU (maximum transmission unit) is 1500 bytes. If you need stations in the network to exchange large amounts of data, configure them to use jumbo frames that allow frames to use the MTU of 9000 bytes. Jumbo frames can help improve performance when transferring data because the ratio of useful information and service information in the frames is higher. Not all devices support jumbo frames.
Another advantage of using the star network topology is that Ethernet networks using this physical network topology type support VLAN tagging. VLAN tags are used to divide a physical network into logical networks by using the same physical infrastructure. Logical networks are separated on the second layer of the OSI model by using VLAN tags written into the frames. Hardware must support VLAN tagging to use this feature. VLAN ID can range from 0 to 4094. 4094 is the maximum number of VLAN networks in one physical network.
Let me cover frame formats for IEEE 802.3 Ethernet networks that use the star network topology.
- Preamble (7 bytes) indicates the start of the frame and is used for synchronization between a sender and receiver.
- Start of frame delimiter (1 byte) is the field that is always set to 10101011. SFD (start of frame delimiter) marks the end of the preamble and start of the Ethernet frame, preparing for upcoming bits of the destination address. This field is the last chance for the network device(s) to synchronize.
- Destination address (6 bytes) contains the MAC address of a destination network card (for example, E8:04:62:A0:B1:FF). The destination address can be unicast, multicast, broadcast (FF:FF:FF:FF:FF:FF).
- Source address (6 bytes) contains the MAC address of the source network card of the sender device. The source address is always unicast.
- Type (Ethernet type) or length (2 bytes) defines the length of the Ethernet frame. The type field indicates the layer 3 (L3) protocol (0x0800 – IPv4, 0x86DD – IPv6), whether the frame is using 802.1q VLAN tagging (0x8100), etc.
- Data payload (maximum 1500 bytes for standard frames or 9000 bytes for jumbo frames) is an encapsulated L3 packet that is carried by a frame. A packet is a typical PDU (protocol data unit) for the third layer of the OSI model (the Network layer).
- Checksum, FSC or CRC (4 bytes) is used to verify frame integrity. CRC is calculated by the sender, then a recipient receives the frame, calculates this value and compares with the received CRC value in the frame.
A 14-byte header of an Ethernet frame contains the destination address, source address, and type (length). If VLAN tagging is used, an additional 4-byte VLAN tagging field is added to a frame after the source address field.
The star network topology is also used to build networks based on optical cables (optical fiber) if you need to have longer cable segments or lower latency. 10GBASE-S and 10GBASE-E are modern standards for 10 Gbit/s networks using optical fiber to establish connections. A switch with transceivers and SFP connectors is required in this case to build a star topology network.
SR (short reach) transceivers are used for a distance of up to 300 meters.
LR (long reach) transceivers support cable length in the range of 300 m–3 km.
ER (extended reach) transceivers support cable length from 30 km to 40 km.
The multimode (MM) optical cable is used for short distances (less than 300 m).
The single-mode (SM) optical cable is used for long distances (more than 300 m).
There are transceivers that allow you to connect copper Cat 6A cables with RJ-45 connectors to SFP+ ports for maximum compatibility. The optical cable is connected to transceivers by using LC connectors. Building a physical network using optical cables is more difficult than building a network using copper cables of Cat 6A.
Advantages of the star topology
The star network topology is outstanding. The star is the most common network topology type nowadays. Let’s summarize the advantages of this network topology type.
- One network card per station is enough
- Easy installation and maintenance
- Easy troubleshooting
- High reliability and compatibility
- Fast speed
- Support of twisted pair and optical cables
- Flexibility and scalability
If a wireless network connection is used by installing an access point at home or in the office, the wireless network usually uses the star network topology. The 802.11n (a/b/g/n) standard is used in this case. The Wi-Fi access point acts as a switch connected with wireless network adapters of stations and represents the star topology.
Tree Network Topology
The tree network topology is an extension of the star topology and is widely used nowadays. The idea of the tree topology is that you can connect multiple stars like branches into a complex network by using connections between switches. Stations are connected to the ports of these switches. If one of the switches fails, the related segment of the network goes offline. If the main switch located on the top of the tree topology goes offline, network branches cannot connect to each other, but the computers of the branches continue to communicate with each other. Failure of any station connected to the network doesn’t affect the network branch or the entire network. The tree topology is reliable and easy to install, maintain, and troubleshoot, and provides high scalability. There is one connection between each node of the network when using this topology (see the network topology diagram below).
Protocols and standards applicable for the star network topology are used for the tree topology (including switches, cables, and connectors). Also, routers can be used to divide subnetworks from each other on the third level of the OSI model. As a result, network protocols of the third layer are used, and the appropriate configuration of network equipment is performed. The tree network topology is widely used in large organizations because it is easy to install and manage. The hierarchical network structure is present. Opt for connecting all switches of network branches to the main switch to avoid creating a long chain of switches that can cause bottlenecks and reduced network performance when data is transferred via segments between switches.
An example of network configuration
Let’s look at an example of the tree network topology and how this network topology type is used in practice. For example, there is an organization with multiple departments, and each department occupies one office in a building. Departments are located on different floors of the building. Installing a network by using a single star topology is not rational because this would lead to extra consumption of cable to connect all stations in different locations of the building to a single switch. Also, the number of stations can be higher than the number of ports in the switch. In this case, the most rational solution is to install a dedicated switch in the main office of each department, connect all the stations of each department to the appropriate switch, and connect all the switches of departments to the main switch located in the server room. The main switch is on the top of the tree hierarchy in this example. The main switch can be connected to a router to access the internet. If there is a department located in another building, and the distance to your switch in the main building is more than 100 meters, you can use an additional switch with a UTP cable. This switch divides the distance into segments that are under 100 meters. As an alternative, use the optical cable (and the appropriate converters or switches) to connect this remote office to the main switch.
To simplify administration and improve security, you can install routers for each department and create subnets for each department. For example, developers are in the 192.168.17.0/24 network, accountants are in the 192.168.18.0/24 network, testers are in the 192.168.19.0/24 network, servers are in the 192.168.1.0/24 network (the main subnetwork), etc.
What is a router?
A router is a device that operates on the third layer of the OSI model (the network layer) and operates with packets (the PDU is a packet). A router can analyze, receive, and forward packets between different IP networks (subnetworks) by using the IP addresses of source hosts and destination hosts. Invalid packets are dropped or rejected. Different techniques are used for routing, such as NAT (network address translation), routing tables, etc. The firewall and network security are additional features of the router. Routers can select the best route to transfer packets. A packet is encapsulated into a frame.
A router has at least two network interfaces (usually LAN and WAN). There are popular models of routers that are combined with a switch in a single device. These routers have one WAN port and multiple LAN ports (usually 4-8 for small office/home office models). Professional routers have multiple ports that are not defined as LAN or WAN ports, and you should configure them manually. You can use a physical Linux server with multiple network adapters and connect this machine as a router. Connect a switch to the LAN network interface of this Linux router to have the tree network topology type.
Just as for the star network topology, wireless network equipment can be used to create network segments of the tree topology in a mix with wired segments. Two identical Wi-Fi access points can work in the bridge mode to connect two segments of the network (two stars). This approach is useful when you need to connect to offices that are more than 100 meters apart and when it is not possible to install a cable between the offices. The following tree network topology diagram explains this case. A switch is connected to each Wi-Fi access point operating in the bridge mode, two other Wi-Fi access points are connected to the appropriate switch, and client stations are connected to these access points (forming branches of the tree that are networks of the star topology).
Mesh Network Topology
A mesh network topology is a configuration in which each station in the network is connected to the other stations. All devices are interconnected with each other. There are two types of mesh: a full mesh and a partial mesh. In a partially connected mesh, at least two stations of the network are connected to multiple other stations in the network. In a full mesh, each station is connected to all other stations. The number of connections for a full mesh is calculated with the formula Nc=N(N-1)/2 links, where N is the number of nodes in the network (for the full-duplex mode of communication). See the network topology diagram below.
The mesh network topology provides redundancy for a network but can be costly due to the high number of connections and total length of used cable. If one station fails, the network can continue operation by using other nodes and connections. If data was transferred via the failed node, the route is changed, and data is transmitted via other nodes.
Each node is a router that can create and modify routes dynamically to transfer data in the most rational way (dynamic routing protocols are used in this case). The number of hops can vary when changing the route between source and destination device. Routing tables consist of destination identifier, source identifier, metrics, time to live, and broadcast identifier. Routing works at the third layer of the OSI model. Sometimes flooding techniques are used instead of routing. This type of network topologies can be used for the transmission of high amounts of traffic thanks to connection redundancy.
It is difficult to add a new station to the network because you need to connect a new station to multiple other stations. Adding or removing nodes doesn’t interrupt the operation of the whole network. Multiple network cards per station are required to establish all needed connections. After adding a new station, you may need to install additional network cards on other stations that must be connected to the new station. The mesh network topology is scalable, but this process is not straightforward. Administration can be time-consuming. The fault-tolerant topology ensures high reliability. There are no hierarchical relationships.
The mesh network topology is an example of connecting multiple sites on the internet. This network topology is widely used for WAN (wide area network) connections, for networks of mission-critical organizations such as military organizations, etc.
The mesh network topology in Wi-Fi networks is used to extend the coverage of wireless networks that are called wireless mesh networks. The infrastructure mesh architecture is the most common for this type of network topologies. Wireless technologies used to create this network topology type are Zigbee and Z-Wave that are based on the IEEE 802.15.4 protocol, WirelessHART. IEEE 802.11, 802.15, and 802.16. Cellular networks also can work using the mesh network topology.
Hybrid Network Topology
The hybrid topology combines two or more of the network topology types covered earlier. A combination of the star and ring types of network topology is an example of a hybrid network topology. Sometimes you might need the flexibility of two topologies in your network. The hybrid topology is usually scalable and has the advantages of all child topologies. The disadvantages of the topologies are also combined, making installation and maintenance difficult. The hybrid topology adds more complexity to your network and may require additional costs.
The star-ring topology is one of the examples of the hybrid type of network topologies that you can find nowadays. When talking about the ring part, we don’t mean coaxial cables with T-connectors and BNC connectors. In a modern network, a fiber ring is used to connect nodes at long distances. This hybrid network topology (ring + star) is used to build networks between different buildings located far away within one city or in different cities. Using the star topology when the distance between nodes is high is difficult and causes overconsumption of cable.
The advantage of the fiber ring with multiple lines is the absence of a single point of failure. Redundant optical links provide high availability and reliability. In case of one optical link corruption, reserve channels are used. Different fiber lines between nodes of the circle can be traced by using different geographical routes.
Fiber switches/routers that are nodes of the ring are connected to switches/routers that are parts of network segments using the star network topology. That connection has advantages for building local area networks. Fiber media converters are used to connect switches/routers compatible with fiber cables and related connectors to switches/routers compatible with copper cables crimped with the appropriate connectors if a ring and star use different types of cables and network equipment.
Types of Cables
Cables are important components of physical network topology. Network speed and overall costs for network installation depend on the selected network topology, cables, and other network equipment. Different types of cables have been mentioned in the blog post when giving real examples of using different types of network topology. Let’s look at the most used cables for different types of network topology explained in this blog post for a better understanding of physical topologies.
The coaxial cable consists of a central copper wire as the inner conductor. Solid copper or several thin strands of copper can be used for the central conductor in different cable models. This inner conductor is surrounded by an insulating layer protecting the core wire. The insulating layer is surrounded by conductive aluminum tape and weaved copper shield. The external layer is polymer isolation, which is black or white.
RG-58 is a popular version of coaxial cable and has an impedance of 50 Ohm. This cable is also referred to as 10Base2 Thinnet cable. RG in the name stands for “radio guide.” Other examples of coaxial cable are RG-6, RG-8, RG-59. Nowadays, coaxial cables are used to connect Wi-Fi antennas to the appropriate network equipment (5D-FB, 8D-FB, LMR-400 cable types).
Twisted pair cables are widely used for networks due to simplicity of usage, high bandwidth, and affordable price. Two separate insulated copper wires (about 1mm in diameter) are twisted together to form a pair. One to four pairs are used in different cable types and categories. The reason for twisting is to reduce noise signals. Twisted pairs are covered with an external insulated shield that protects a cable against mechanical damage. There are three main types of twisted pair cables: UTP, FTP, and STP.
UTP (Unshielded Twisted Pair) is a cable that comprises of wires and insulators.
FTP (Foil screened Twisted Pair) or F/UTP is a cable in which all twisted pairs together are covered with a metal shield (aluminum foil). An additional single wire less than 1 mm in diameter is included inside the cable. As a result, FTP cables support grounding if the appropriate connectors are used. Individual twisted pairs are unscreened.
STP (Shielded Twisted Pair) contains a weaved metal shield around twisted pairs. Each twisted pair is shielded with aluminum foil. The whole cable is hard and is more difficult to twist (the cable is not as highly flexible as FTP and UTP). The STP cable provides better protection against electromagnetic noise and mechanical damage.
Category 5e or higher is used nowadays to install networks. The higher the category is, the higher data transfer rate is (100MHz, 250 MHz, 500 MHz), and data transferring speed is supported. You can use an FTP or STP cable of the same category instead of a UTP cable. UTP Cat.3 has only two twisted pairs. UTP Cat.5 and higher have 4 twisted pairs. Cable crimping is easy and can be done by anyone who has a cable crimping tool.
Optical fiber cable
Optical fiber cable provides the lowest latency and covers the longer distance with one cable segment (without repeaters). Fiber optic cable is thin and consists of two layers of glass. The core glass layer is a pure glass that is a waveguide for light signals for a long distance. The cladding is a glass layer surrounding the core and has a lower index of refraction compared to the core. The technology is based on the total internal reflection principle.
Single-mode fiber (SMF) cables and multimode fiber (MMF) cables are used. The MMF cables have a bigger diameter and are used to propagate multiple light rays (or modes), but they are better for short distances. MMF cables usually have a blue color. The SMF cables are better for long distances and are yellow. Popular connectors are SC, FC, LC, and ST.
The price for fiber optic cables is high. Welding of optical fiber is difficult comparing to the wiring of twisted pair cables or coaxial cables. The price of transceivers needed to plug an optical cable into a switch or router adds expenses. The ends of optical fibers should always be clean as even a piece of dust can cause significant issues.
This blog post has covered network topologies, including physical topologies, logical topologies, and examples of using them in real life. If you need to build a local area network, use the star topology, which is the most common network topology today, or the tree network topology, which is a highly scalable modification of the star topology. The ring and mesh topologies are mostly used by internet service providers, managed service providers, in data centers. These are more difficult to configure. The variety of network topology types, network equipment, standards, and protocols allow you to install a network of any configuration in your environment depending on your needs.
When you have installed a network and connected servers and a virtual machine to the network, don’t forget to configure data backup and protect your data. NAKIVO Backup & Replication is a universal data protection solution that supports backup of Linux machines, Windows machines, VMware virtual machines (VMs), Hyper-V VMs, Oracle Databases, and Office 365 via a network. Download the free edition of NAKIVO Backup & Replication and try the product in your environment.
|
<urn:uuid:a7b73ab0-c7e7-4597-b788-9d13780b1958>
|
CC-MAIN-2022-40
|
https://www.nakivo.com/blog/types-of-network-topology-explained/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00440.warc.gz
|
en
| 0.903981 | 9,594 | 3.65625 | 4 |
IFS Report Designer Development¶
The IFS Report Designer tool is used to create RDL Layouts, which is basically in XML format and well formatted. RDL objects in the layout is used to create a XSLT style sheet when previewing the report. Then these XSLT style sheets apply to a XML document and results in a XSL-FO (Formatting Objects XSL) document.
The idea behind the tool is to make it easy to create layouts for operational reports (business documents), things like invoices, order confirmations and so on. It wasn't designed to cover every aspect of XSL-FO, nor does it claim to be a general XSLT to XSL-FO design tool. In it's current shape and form the tool is not intended for development of ad-hoc or analytic reports (typically including business graphics such as pie charts, bar charts and so on). For that kind of reports please refer to the IFS documentation for Quick Reports.
The RDL objects in the RDL layout created using the IFS Report Designer tool is used to generate XSL style sheet during the report generation process Then the assembled data is formatted using the XSL objects. The formatted output (XSL-FO) can be rendered using a Formatting Object processor (like FOP from Apache). The rendered output is typically a PDF file that can be used both for preview and to make a print out.
There are a couple of basic concepts that are used during the development as well as modification of a report and its layouts. It's important to understand these basic concepts before getting started developing a report or if any major structural changes are going to be made to an existing report.
There are a couple of different data/file types involved. The XML is the actual report data, this is generated by the business logic layer (the RDF, PL/SQL code). XML schemas (in our case XSD files) provide a means for defining the structure, content and semantics of XML documents. This means a schema can be used to validate that a XML document has the correct structure content and semantics. The schema is generated from the modeled report structure. The schema contains all the information needed to create the layout such as structure, data types, length of attributes and so on.
The XML schema is the input to the Report Designer designer tool. The output is an RDL layout in XML format. The RDL layout objects converted into XSL objects and used to transform a XML document. The generated XSL layout is applied to the XML and in our case the output is another XML document, namely a XSL-FO (Formatting Object) document. The XSL-FO document can be rendered to a PDF document (Portable Document Format created/defined by Adobe).
The structure of the data (the XML) and the structure of the layout is very tightly connected. What this means is that the structure of the data very much determines what you are able to do in the layout. Whether or not it's possible to iterate over a collection of elements, where you are able to put elements in the layout and so on. Due to this it's important to consider the graphical representation when designing the logical structure of the data to be used in the report/layout. It could very well be that two layouts are substantially different and that results in designing this as two separate reports rather than two different layouts for the same report.
It's not possible to alter/change the grouping of the data in the layout. It's still possible to ignore groups, but you can't change the grouping. The following example illustrates what can be done and not. Let's say we have a phone book type of report/data. We have a number of entries consisting of a first name, surname and a phone number. Let's also say that these numbers are grouped by region. From this type of data it's very simple to create a layout that lists all entries per region. We can choose to sort the entries either by surname/first name or number. We can also skip the region grouping and just list all entries independent of region. What's significantly harder on the other hand is to create a layout that groups the phone book entries by name and list region and number for each entry. Using custom XSLT expressions it's close to anything is possible, but not recommended.
Considering this it's extremely important to consider the layout of the report when the structure of the data is defined.
XML is a hierarchic data format, this is reflected by a tables concept in the layout design tool. Connecting a table to a certain node in the schema/XML will result in a loop/iteration. A table object in the tools is used to design how one row in the data. A table can also have a header and a footer part that will not be repeated for all rows, but occurs once (or on every page if applicable). Each child element of the XML node to which the table is linked will result in the designed output once you render it. The other building block is an object called a block container or simply a block. Blocks can be used to logically group content in the layout, for easier editing.
There are three types of pages in the tool, first, rest and last. Each page can consist of three sections, header footer and flow. There is no need to use all page types nor page sections if not necessary to achieve the desired layout. The page type first makes it possible to design specific layouts for the first page. All other pages will get rest layout. If there is a last page in the layout, it may consist of either static header or different header and footer. In each header and footer section of a page you put static content, page header and footers. The layout of a header and footer is fixed, theses sections will not expand as the flow section. In the header and footer it's wysiwyg (what you see is what you get). In the flow section, where tables are inserted, things will expand depending on the amount of data. If you are designing a simple list report the flow section might just contain one table with a couple of columns. What you are designing is the content of one row in the list report. In design mode you will only see one row. Once the report is run or previewed with data containing multiple rows you will see that the row you designed will be repeated for every row in the data set.
Creating and Modifying layouts¶
This section of the developers guide, which by far is the largest section, covers a number of aspects regarding designing, changing and modifying an IFS Report Designer layout. Layout development/modification is something that can be done both by customers and IFS staff and this is why this part of the guide is aimed at both customers doing layout development/modification as well as IFS staff that develops reports from scratch..
Read more in Creating and Modifying Layouts - Details
Deployment and Configuration¶
This section is also covered in the administrator guide, but some parts of special interest to a developer is described here as well. We're focusing on being able to deploy, configure and test the newly developed or modified report. Things that are highlighted are:
- Configure XML generation
- Registering the layout
- Setting up company logotypes
Read more in Deployment and Configuration - Details
|
<urn:uuid:e62d18a0-2955-42cd-8a5d-30d6058e9bf2>
|
CC-MAIN-2022-40
|
https://docs.ifs.com/techdocs/21r2/050_development/050_development_tools/010_report_designer/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00440.warc.gz
|
en
| 0.91699 | 1,534 | 2.75 | 3 |
The Value of Rights, Freedoms, & Protections: Reflecting on Emancipation & our first Mary Prince Day
The year 2020 has given us a strange Cup Match holiday week - thanks to COVID-19, it has been a break without the Cup Match itself. This year is also remarkable for the change in name of the second day, in honour of one of Bermuda's National Heroes: Mary Prince, a key figure in the abolitionist movement. These two factors combined have perhaps given us an added reminder to reflect on the first day's namesake: Emancipation.
The Transatlantic Slave Trade marks one of the worst stains on the soul of the Western world. Slavery had existed in the ancient world, but early modern practices radically changed and institutionalised it. The status of slave became an inheritable trait, and the focus on Africa for indentured servants led to prejudiced treatment of all people based on the shade of their skin.
Atlantic commerce also changed the practice by dramatically industrialising slavery on a scale that had never been seen before, as Europeans sought to exploit entire continents for resources. To do so, slave traders utilised a tool we are all too familiar with today: data.
|
<urn:uuid:42253dd5-6bdc-4671-a5f5-cf5ec9ebf510>
|
CC-MAIN-2022-40
|
https://www.privacy.bm/post/the-value-of-rights-freedoms-protections-reflecting-on-emancipation-our-first-mary-prince-day
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00440.warc.gz
|
en
| 0.950613 | 246 | 3.390625 | 3 |
Autonomous and on-demand transport will form one-third of all urban journeys, according to a new World Economic Forum report, and reduce the need for parking. However, without the right regulations and management in place, the result could be city centres that are more, not less, congested. Chris Middleton reports.
Autonomous cars and driver-assistance systems have had a rough ride in the press this year. Two fatal accidents involving cars under software control, and two high-level reports revealing falling consumer confidence in the technologies have dented industry hopes. Meanwhile, Tesla’s internal woes have hardly helped the cause, given the company’s – and its CEO’s – status as a poster boy of change.
But peaks and troughs are characteristics of every wave, so it’s inevitable that problems will occur when anything as important as transport is being disrupted by new technology, and the very concept of urban car ownership is challenged by on-demand systems and new business models.
So what will autonomous vehicles’ impact really be, once the hype and hysteria have died down? How will they reshape urban mobility, as the West faces a future of ageing populations in ageing cities, while Africa and Asia meet a very different challenge: booming youth populations in young cities? (For more on this, see our separate report: City of Robots: How robotics & automation could solve cities’ most serious problems.)
Three years ago, the World Economic Forum partnered with the Boston Consulting Group to explore every avenue of this critical question in the city of Boston, MA, and the results of their joint research programme have just been published.
Their 36-page report, Reshaping Urban Mobility with Autonomous Vehicles: Lessons from the City of Boston, predicts that mobility on demand will account for one-third of all trips in the city – statistics that are likely to be mirrored in cities throughout the world.
“Consumer research supplemented our collaboration with the City of Boston,” explains the report. “A 2015 consumer survey showed strong interest in autonomous vehicles [AVs] around the world, with 60 percent of respondents indicating that they would ride in an AV.
“Among the many perceived benefits of the new technology, city dwellers valued AVs most because they eliminated the need to find parking.”
“In 2016, we conducted a series of detailed focus groups with residents of the Greater Boston area,” the report continues. “Our findings revealed that families with young children struggle with alternatives to the private car; that consumers are rapidly embracing Uber, Lyft, and other ride-sharing services to fill a gap between public transport and private-vehicle ownership; and that consumers are concerned about the public transportation system.”
Since then, the American Automobile Association (AAA) has produced a report suggesting that consumer support for driverless systems has been severely damaged by March’s fatal accidents, involving Uber and Tesla cars running under software control.
Nevertheless, the Boston researchers claim to have generated a detailed, long-term view of how mobility will evolve in cities. “Research participants were presented with variables, such as the length of the trip and the time of day, and were required to make discrete choices about what mode of transport they would use,” explains the report.
“This approach generated a realistic and granular view of how mobility will evolve in Boston. Our analysis predicts a clear shift to mobility on demand (for both autonomous and traditional vehicles), which will account for nearly 30 percent of all trips in the Greater Boston area, and 40 percent of trips within city limits in the future.
“Driving this shift are the cost-competitive nature of robo-taxis and robo-shuttles – especially on shorter trips – and the added convenience and comfort, compared with mass transit.”
In suburban and other areas outside the city, the WEF and Boston Consulting analysis found that mobility on demand will mainly replace personal car usage. However, in urban areas it will replace the use of both personal cars and mass transit to equal degrees, with the shift creating a risk of increased congestion.
“Policymakers must assess and address the potential challenge and identify the right policy levers to influence this transition,” urges the report. “AVs’ impact on traffic will vary by neighbourhood and be shaped by policy.”
So why will congestion increase?
Three key findings
To understand the effects of AVs in Boston, the researchers built a traffic simulation model that showed the contrasts between current traffic patterns and future scenarios, including personal vehicles, taxis, private AVs, and shared on-demand services.
From this, three important findings emerged:
- Shared AVs will reduce the number of vehicles on the streets and reduce overall travel times across the city. The findings showed that the number of vehicles on the road will decrease by 15 percent, while the total number of miles travelled will increase by 16 percent. Travel times will improve, but only by four percent on average, adds the report.
- However, introducing shared AVs will worsen congestion in downtown areas. Despite an overall reduction in traffic, congestion will worsen in some part of the city, mainly because customers will often choose AVs as substitutes for short public transport trips. As a result, travel time will increase by 5.5 percent in downtown areas. But other areas will see an improvement, adds the report. “In Allston, a neighbourhood outside the city’s core, mobility on demand will mainly replace the use of personal cars rather than mass transit, and travel time will decrease by 12.1 percent.”
- With the new modal mix, Boston will require roughly half as many parking spots, including those on streets and in car parks. As a result, AVs present an opportunity to rethink the overall design of city centres and suburbs.
However, it stands to reason that many cities may lose valuable sources of revenue as a result – something not explored in the report. For example, in the financial year 2015-16, the 353 local authorities in England generated combined revenues of £756 million from on- and off-street parking.
The Boston research findings suggest that some companies’ claims that autonomous vehicles will simply arrive and solve every transport problem are wide of the mark.
In some areas, congestion will fall and mobility will increase, but in others – particularly in crowded areas already served by public transport – congestion will worsen and journey times increase. Even in areas where congestion is predicted to fall, journey times may only see marginal improvements. Meanwhile, cities will need to examine their revenue models.
This is one reason why some companies, such as Uber, believe that the future of urban transportation will be as much about autonomous, on-demand air taxis as vehicles on the street.
That aside, the issues of congestion and speed are where policymakers need to step in and look at the challenges, both individually and holistically. “Local governments hold the key to influencing these results because they have the power to implement the right policies and incentives”, acknowledges the report.
“The greatest effects are likely to come from occupancy-based pricing schemes, in which financial incentives discourage single-occupancy rides. This measure could improve citywide travel time by 15 percent.”
Leadership is critical, continues the report. One of the WEF/Boston Consulting research goals in partnering with the City of Boston was to catalyse AV testing in the city, it adds, noting the numerous partnerships and experimental programmes that emerged as a direct result of that decision.
In short, a determination to effect and explore change within a city galvanises innovators to make that change happen.
The research partners pursued these collaborations to understand how to unlock AVs’ “tremendous potential to generate social value” (saved lives, saved time and enhanced access for people who are elderly, disabled, and disadvantaged). So what was the result?
“We conclude that cities, nations, and the world will need to embrace a regulatory and governance framework for AVs that nudges us towards an ‘AV heaven’ scenario, and away from ‘AV hell’,” it says.
“AVs enable the greatest transformation in urban mobility since the creation of the automobile. However, their social benefits can be unlocked only if governments understand and implement the appropriate policies and governance structures.”
Internet of Business says
With more than 100 AV pilots under way around the world, the lessons learned in Boston are timely and relevant, as the report itself says.
With robo-taxi services fast approaching in some cities, and other companies exploring the potential of autonomous deliveries, the time to get the policy decisions right is now.
|
<urn:uuid:37e41673-be5d-406e-a3d3-d64f83f735e6>
|
CC-MAIN-2022-40
|
https://internetofbusiness.com/one-third-of-urban-journeys-will-be-autonomous-says-wef/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00440.warc.gz
|
en
| 0.945488 | 1,814 | 2.546875 | 3 |
This introductory course is designed for technical and nontechnical learners who are unfamiliar with blockchain and interested in how this technology can solve business problems. The course covers core blockchain concepts, benefits, and potential hurdles. The course also provides real-world examples of how businesses have implemented blockchain to solve a business need.
When: Monday, February 15, 2021 | 9:30 AM IST
Length: 90 minutes
• List blockchain core concepts
• Compare and contrast blockchain and other similar technologies as databases and ledgers
• Explain the benefits of blockchain in solving business problems
• State examples of blockchain technology applied in various industries
• Recognize the challenges in establishing blockchain
• Describe the basics concepts, functionality, and benefits of Amazon Managed Blockchain
Who Should Attend?
• Application developers responsible for implementing blockchain applications that meet the required business and technical specifications
• Solutions architects responsible for designing blockchain applications that follow open standards and best practices for architecture design, performance, reliability, and security
• Business leaders responsible for evaluating and approving blockchain initiatives at their organizations
• DevOps engineers responsible for designing, deploying, and maintaining the cloud infrastructure that powers the blockchain applications
|
<urn:uuid:42b33533-ebcf-4a21-b0e5-d44ab0cbbf61>
|
CC-MAIN-2022-40
|
https://pages.awscloud.com/APAC-event-OE-blockchain-20210215-reg-event.html?sc_channel=em&sc_campaign=%7B%7Bprogram.name%7D%7D&sc_medium=em_%7B%7Bcampaign.id%7D%7D&sc_content=REG_webinar_traincert&sc_geo=apac&sc_country=in&sc_outcome=reg&trkCampaign=mse_feb_2021&trk=olkem_in
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00640.warc.gz
|
en
| 0.906386 | 239 | 2.640625 | 3 |
Remember the famous ad from Sun Microsystems - “The Network is the Computer”? This same sentiment is increasingly being applied to data, but instead of cables connecting many machines, we are uncovering and unlocking relationships in our data.
The majority of data that is created every day contains relationships. Some of these relationships are explicit, like following someone on Twitter, or foreign keys in relational databases. In other cases, the links are less overt, less structured, and have historically been harder to work with. However, new technologies are unlocking this new type of data, and opening the door to entirely new use-cases.
Advances in search and data analysis have given us incredible powers to explore new relationships in even the largest of datasets. However, many commentators have come to realise that “Big Data” analysis is often limited by the imagination of the user, i.e. their ability/creativity to envision what relationships exist and understand which ones are important.
Graph analysis adds a new superpower, by highlighting undiscovered relationships in the data. Graph makes it easy to answer complex questions and address use-cases such as behavioural analysis, fraud, cyber security, drug discovery, personalised medicine, and building personalised recommendations based on continuous real-time data.
How it Works
Exploring and understanding the relationships in your data starts by identifying them. All data has some underlying structure, and it is this structure that lays the foundation for relationship exploration. With modern document stores, it’s easy to store and query structured documents. These documents may represent information about a user, such as their complete purchase history or their music preferences, or it they may represent observations or events in the real world, like a tweet, or individual purchases.
With traditional data analysis, we would look at these data and try to summarise and understand the properties of the aggregated data: What product did we sell the most? Who are our top customers? Which band is most popular? As we learn the answers, we may slice and dice the data further, asking more specific questions like, What was our top product in each category, in each region? What music is most popular with people under 30 in France? Even 5 years ago, these were difficult questions to ask of large amounts of data, but today, if those are the only questions you ask, you are missing a big opportunity.
Rather than summarise whole documents, imagine if you could visualise your data in powerful new ways to see relationships based on how documents or properties relate to each other. This is what graph delivers allowing you to see patterns that you never knew existed. Then, imagine you could ask a different type of question based on the new relationships you discovered...
Questions like, ”What other artists are most like Mozart?” Or even, ”What other products are most often purchased with diapers?” Or, perhaps, “What similarities are there between individual entities in the Panama Papers?”
The Mozart question provides the perfect example of why relevance, which is one of the biggest challenges within graph analysis, is so important. In a music recommendation engine based on the preference data of thousands or millions of users, you run the risk of simply returning the most popular bands, regardless of whether they are meaningful. You wouldn’t want to recommend the Beatles, who are universally popular, for good reason, to someone looking for music similar to Mozart.
The frequency of these so-called “hyperconnected entities” within most people's playlists, means that they would show up as being similar to even the most obscure, niche music genres. Likewise, analysis of grocery shopping would show milk under “What products were most frequently bought together with...”, simply because of the number of people who pick up a carton every time they visit the supermarket.
By combining graph analysis with search techniques, relevance can be used to bring back the important results, and avoid frequent connections. Meaningful importance can be calculated by correlating the significance of each relationship in comparison to global averages.
This idea of using relevance in graph exploration has opened the door to asking more complex and valuable questions. If you have log data from your web server, you have information about the IP address of incoming requests, and the URLs they were requesting. Could you use this information to detect attackers? If you know one attack vector (requests for /admin) could you use this information to find bad actors and other attack vectors? With music preference data, you can now build a personalised recommendation system that suggests the most relevant bands, given your demographics and likes.
While graph databases and search engines have been around for some time, in some ways, intelligent graph exploration is a new frontier in data analysis and understanding. Businesses who use their data more effectively outperform those who don’t, and early adopters of this technology are likely to have a leg-up on the competition.
When you combine search relevance with graph exploration, you will be able to help your company respond more quickly to changes in customer behaviour, market conditions, and solve some of the most complex use-cases where the answers to those problems lie in the relationships in the data.
Steve Kearns, Senior Director of Product Management, Elastic (opens in new tab)
|
<urn:uuid:ee22319e-ecef-41f4-8b17-b7e7e834e13d>
|
CC-MAIN-2022-40
|
https://www.itproportal.com/2016/06/09/search-and-graph-find-new-answers-in-your-data/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00640.warc.gz
|
en
| 0.949478 | 1,082 | 2.59375 | 3 |
A new report by the Commons Science and Technology Committee on the state of digital skills in the UK, among the general public, teachers and professionals, is mildly depressing.
According to the BBC, members of Parliament have called for urgent action, otherwise the country will be facing a productivity and competitiveness crisis.
The report says that approximately 12.6 million adults in the country are lacking basic digital skills. The definition of a 'digital skill' is vague, but basically means people who have it know their way around a computer, know how to use the internet and "to navigate knowingly through the negative and positive elements of online activity and make informed choices about the content and services they use".
Another 5.8 million have never used the internet.
There are big problems in UK's schools, when it comes to digital skills, it was said, including “stubborn digital exclusion” and “systematic problems”. More than a fifth (22 per cent) of IT equipment in schools is ineffective, and just over a third (35 per cent) of teachers are qualified for the position.
Schools are missing almost a third (30 per cent) of computer science teachers.
The problem worsens when looking at the workforce – the UK will be missing 745,000 workers with digital skills by next year, and nine out of ten (90 per cent) of job positions require some extent of digital skills.
All things considered, this could cost the UK economy approximately £63 billion a year in lost income.
"The UK leads Europe on tech, but we need to take concerted action to avoid falling behind. We need to make sure tomorrow's workforce is leaving school or university with the digital skills that employers need," BBC cited the committee's chairwoman, Nicole Blackwood.
Image Credit: Yorkman / Shutterstock
|
<urn:uuid:00cb7baf-6ba0-4a22-8054-6b0c406eb145>
|
CC-MAIN-2022-40
|
https://www.itproportal.com/2016/06/13/uk-to-need-almost-a-million-of-digitally-skilled-workers-next-year/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00640.warc.gz
|
en
| 0.952512 | 373 | 3.015625 | 3 |
Elliptic Curve Cryptography (ECC)
Security providers work continuously to innovate technology to upend hackers who are diligent in their efforts to craft clever new ways to steal data. Advanced ECC, while not new, uses a different approach than standard RSA. RSA draws its strength by using increasingly larger logarithms, which take more time to process. ECC, on the other hand, relies on discovering a distinct logarithm within a random elliptic curve. The larger the elliptic curve, the greater the security. Using a random formula improves the encryption strength while a smaller logarithm increases the performance of digital certificates.
ECC uses a smaller algorithm to generate keys that are exponentially stronger than RSA keys.
The smaller algorithm means less data is being verified between the server and the client, which translates to increased network performance. This is especially important for websites that experience a high level of traffic.
Is ECC Right for You?
Entrust Datacard’s SSL Certificates using ECC technology are ideal for scenarios where server-load performance is critical, and site visitors and the Web/app server are known to be compatible with ECC keys.
|
<urn:uuid:e6a5edb0-f447-45f2-832b-6fa170c94fee>
|
CC-MAIN-2022-40
|
https://www.entrust.com/resources/certificate-solutions/learn/elliptic-curve-cryptography-ssl
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00640.warc.gz
|
en
| 0.908379 | 253 | 2.734375 | 3 |
In this presentation, the risk of privacy in the modern communication technology, both Internet and mobile networks, is analyzed. It turns out, that users have to negotiate the risk of privacy between refraining from services, trusting services, using self-data-protection methods and trusting privacy enhancing technologies. Services, on the other hand, have to present themselves as trustworthy with respect of their competent and decent way to handle user data. This presentation identifies the privacy principles and related trust areas and protection means.
You very much.
Okay. Thank you for this introduction. And thank you, Tom, for your way to speak about risk and security. I would like to talk about the way you do. It's very close to my approach by the way. But of course from academic point of view view, you have another style to talk about it. I have not only learned to take better care about my toilet lit and about my toothbrush, but more important to make a difference between hygienic risk and real risk. Actually, this is my topic, my theme as well. What I would like to tell you today is understanding privacy and its risks. Not only what is risk with privacy, but how we learn to assess this risk, to take measures about this risk, to identify a risk. And we start with understanding what is privacy? What I want to tell you is two things.
Mainly one is how do computer scientists understand privacy and how do we try to make it operational? What are those tools, technical organizational, that work well. And on the other side, those that do not work well. So we have a, a big history of failure and privacy technology. And second, not only technology is responsible or technology and organization is responsible for dealing with the risk of privacy. Also there's change of our societal approach to privacy. Let's start with the risk and where there's risk. There's also opportunity. Let's have a first look on the opportunity of what we can do with all these data and exploiting these data, new business models and well, that's my suggestion for a business on the business side, it's quite clear. They might get a service. They might earn money out of it. The risk side is on the user side, who is confronted with a prediction of his future, which he might have not known before. And this is one of the risks that due to algorithms and data, and yeah, there is a prediction about your own behavior in the future. And you don't know about your own future. And do you want to know it or maybe it's even wrong? So there's an obvious risk with that.
I would like to share one observation, which I made in the recent weeks about our change of our societal change in our approach to privacy. This painting is from the exhibition in Frankfurt, the state, the impression impressionism exhibition CLA Munk is wonderful. So who hasn't seen it yet? I suggest you go and, and, and, and, and see it. This painting is just beautiful. So the today person like me and like you, we approach this painting and we just love it. It's so friendly. It's so intimate. It's family, it's private. I learned from the description at this exhibition, that this painting was a scandal at the time when Claude Monique presented it to the critics, to the salon and Paris, and they refused it. They said, this painting is a scandal. This must not be exhibited to the public. It is absolutely impossible to make such a private thing with this, like this to become public.
Why? So at that time, 1860, I learned paintings of this big format, huge form where reserved only for religious topics or for ethical topics or for historical topics, but for nothing which is private. And this is not only private. As we see family sitting there having a lunch, but it's very private. Everybody knew that in the middle of the woman was the wife of Claude Monier, but he was not married to her an unmarried relationship. That was that, that is their child, an illegitimate child, a bastard. And these things will not be to be discussed in public, not to be shown to the public. So today we don't understand what's the problem with, with such an expression. But now if we shift to, to today, we blame our kids, that they put everything in the public to the Facebook. Why do you put all your private things to the Facebook?
This does not belong to the public, or maybe you do not really know how much public this is, but maybe that these young people, young generation or come and change in privacy will change the same way that not only we have to learn to come back to our old approach to privacy and teach ourselves to restraint ourselves. But the whole approach to privacy is, is due to change. So I would like to call this, this painting, the first Facebook scandal in history. Yeah. A computer sciences understanding of privacy. First step to understand is what is privacy in relation to what obviously hiding something against the public, but what kind of public there at least two very important public in modern life. One public is the governmental public, the state public, and the risk is NSA. And without Eric Snowden, we wouldn't have this huge increasing of interest in, in, in privacy issues.
The other one is the economic public. That is the, the, the, the customer towards this, the service provider or the employee to the employer. And there's a private area, which is to be protected against kind of business public. This is Facebook, this is Google and our relationship and a privacy concerns. And these two directions are very different from one, one another, not very but different enough to take this into respect for building technology. We have to build other technology in protecting our economic privacy and doing e-business eCommerce or being an, an employee other than towards the, the government not being observed in our political behavior.
Privacy in the first place for the computer scientist always was a legal concept. We learned from legal experts from the ustan, what privacy is. And they told us privacy means the following things. It is the right to be left alone. It is a personality, right? Which is very important. It's a personality, right? It's not an ownership, right? And we have a lot of laws already in law tradition, which protects privacy outside of the E area, supposed to telephone, integrity of toile and so forth. And then they told as even more that the legal approach to privacy can be broken down to principles and computer scientists love principles. Because if you have principles, you have an idea what kind of technology you can build. And these principles can be supported by automatic measures with measures. I don't mean metrics. I mean means functions, technology, organizational measures, and means.
So the first principle we learned from the legal experts is purpose. Binding personal data can be processed, collected store. If there is a specified purpose, and then they can be used only for this purpose and for no other purpose. And if you have a purpose principle, then you can also formulate the data minimization principle, because this goes with purpose. Minimization means it is minimal. If it is constrict restricted to the absolute purpose, if it's more data than purpose, then it's not minimal anymore. Wonderful principle, what kind of technology can you build about that? Then content, you can use personal data if the user senses to it and say, yes, you can have it. I give it to you. I agree. Wonderful for computer scientist, because this is communication. And we can build with email or with, with buttons. And we, we know how to deal with that.
There is a control which is done by somebody else, probably better equipped to do that. And if you, you are too weak, you can find help. Or even you have the hub without asking for it. It's already organized by Dan Schutze of garden in different areas who do their work instead and in help and in support of, of the individuals. And then of course, confidentiality is another very important principle and computer scientists love confidentiality because this goes with encryption. Encryption goes with mathematics, and this is right deep in the basics of academic life. And I think it's not a chance. It is quite normal that the confidentiality principle is the one which is best understood in, in, in academic life and in basic research. Well, and if you have these principles, you can also organize your risk assessment around these principles. And that's what we are doing.
Even in consultancy with firms making a risk assessment with privacy, they would always ask, what does that mean? We can say, we make an risk, risk assessment of these principles. And then we go one by one through the principles and make risk assessment. Another very important distinction which we make for technology and organization about privacy is this distinction of self data, protection technology, and system data protection technology. Well, self data protection means tools in the hands of users, what they can use and help them to enforce their privacy. Even if the word outside is bad, like awareness is of course, the very first thing awareness is an individual thing. You get educated, you learn it and you become illiterate or absence. You don't use it. You decide, no, I, I'm not going to the service. You take a choice, this service, but not that service or you download and use tools like no script gory at blocker and your browser, which I, by the way, recommend to you very easy to use tools and very effective self data protection tools.
And then of course, end to end encryption, which can be done by the service provider and the user, and all communication between these two is encrypted and no intermediary, not even the service provi the, the internet provider or the NSA should be able, I don't dare to say is a to look into it. Yeah. Why, why don't I say is, is a hundred percent unable. Well, of course the encryption mechanisms must work. So there's still a remaining trust in something, but at least if you use safe systems, then end to end, encryption is such a self data protection tool system data protection is another thing. This is technology in organization, which is so to speak built in it's embedded. It works without users need to do anything. It takes all the burden from the users. Data minimization, for example, cannot be enforced by a single user unless the user makes sure only to provide those data, which understands this limit, which is only restricted to, to, to the purpose.
But for example, if you would like, just like to, to serve through the internet anonymously, you can't do that as a, with an individual decision. But if there is, for example, a mixed net of analyzing networks, this could help for, for anonymous surfing, but this must be a service provided to the users like deletion of purposeless data. If data are out somewhere in the network, a user has no power to delete it, unless there is some service, some system, which does it for you minimization. I said it already, or, or getting the information. Good example is the IP security method where on the IP level, on the internet protocol level already, there is encryption and the users don't recognize it. It's organized by the, by the internet providers on, on the lowest level, this is a good help users don't need to do anything with that or SSL, which is the browser server encryption with this lock, which you see, which is used with HTTPS.
So if you do your home banking, then you have SSL encryption as a user. You don't have to do anything with it. It's built in, it works on its own. It works fine. Again, it should work fine. So also there, you need some trust that this technology technology is really working. And as we know, it's not always working so that we call privacy enhancing technology. We have the legal principles, we have ideas, how we could, how we could implement these principles by technological means organizational means. And we have done a lot in the past to do that. And I must agree. Most of that, we failed a lot of failure over there. Pet means privacy enhancing technology, and is everything what supports privacy. And the contrary pit would be called the privacy invading technology. The bad guys are pit. The good guys are pit. Yeah, this is a pet typology that computer scientists have built up and using until today, pet which works with communication with users, tools, for encryption tools, for anonymity and pseudonymity filtered tools to keep away, for example, trackers policy tools, which explain privacy policies or negotiate privacy policies.
And then number six, which I would like to emphasize a lot is rights management, especially in the modern mobile world. So if you download app, by the way, the word app, everybody of us know, knows of course, what an app is, what an app is. But I think three or four years ago, nobody knew what an app is. There were no apps, there's a very new thing. Everybody uses it now. And the way apps are built is on which principle on the principle notice and choice. There is a notice, the app tells you before download this app wants to have this and that permission of you. And now you can select yes or no. Then it's the choice you say. Yes. And then you agreed to the access to your personal data as was notice as was specified on this, on this notice rights management.
And I think it is pretty obvious that this way of rights management with mobile phones, smart phones does not really work. This is a flawed way of asking the consent of users more or less. I, I believe, and I can argue for is more or less. We are helpless with this way of decision. We cannot really decide and most often decide the wrong way and must decide the wrong way. Otherwise we would be outta service. Let me tell you a little bit about data sources and traces. What is the reason that all these data are collected? What are the opportunities on the side of the data collectors law and ment law enforcement and fighting crime is a wonderful reason. And we say, yes, we want to be protected against the bad world, predictive policy, drawing conclusion out of data where the next crime might happen and organize police work in order to prevent this, these criminal criminal activities.
And future sounds very interesting if it works well, if it's not misused, we say, okay, but remember the first picture which I showed to you, there might be even some wrong decisions coming out of it. Scoring for insurances. On the one hand we have the feeling. No, I do not want to be prisoner of, of these metrics of algorithms on the one side, on the other side. Yes. But to have a fair insurance maybe, or a fair money credits, maybe that some, some sense in it, customer relationship. Well, I think every customer likes to be behaved, treated well by, by the service. So if there's a well organized customer relationship, well, I agree to that. If there's a two big profiling over different branches, then we have the feeding. That's too much. Again, human resource management, before you are employed, you are already screened completely.
And everything is known about you. If feeling that's too much on the other hand, that's what we all do. That's what I do myself. Before I get into contact of a new employee or even a colleague to be called the university. I look at the internet, what do I know about this guy? What can I get, find, what can I find out? So there is some good opportunity, but a huge area of misuse, a user specific advertisement we hated and political persecution. We hated. But if we take the very first and the very last bullet point, again, it's not so much different. And it depends on the state and the culture and the society in which you are. If this is a political persecution, which we claim happens in China, Russia, or if it is criminal, what they would say, this is not a political issue.
It's criminal act in our country. So these two things, at least technologically is more or less the same. So this is one, one of the reason, one view on opportunities for data collection. And there are three types of data sources as far as, as I observed. And number one is well known and is discussed everywhere. And we are aware of this data source. This is one's own action. I agree. For example, through the apps, or if I get, make a contract through eCommerce, I agree. I provide the data. I act in the internet or with a mobile phone. And I am observed in these data I used okay. Or Facebook. And we tell our kids, don't put so much of your private life into the, into the public. This is your own action. Don't do so much in the internet with it, but this is not the only source and maybe not the most important one.
Number two is very interesting. This is data about the individual, about you, which you have not put into the net, which others put into the net like alumni lists. For example, every school does that. Every university does that. Every club does that, or these credit reports for Shah and Germany. I don't put these data into the network. This is done by the services in the background, or if there is a newspaper about you, then something about you is in the network. These data are not put into the network, but nonetheless, they are still related to individuals. Number three is very interesting. It becomes the most important one. And it's underestimated completely as a very important source. These are data which are not about individuals and they are collected by global measurements like global metrics. So there is a survey of habitation wound, occupation, be Hoover education.
It is all put into metrics and every person, every individual in this huge metric room would build up a individual vector, but is not yet placed as long is not in the network, but the measurement is already there. The metrics is there. And then you do just one action. And then already you are known and the whole metrics, the whole world you are already placed. And there's a full profile about you. Although you have done only very little in order to provide these data, and this is the so to speak most risky way of building up the data because no individual can do anything against it. So if you're asked for example on purchase before you pay, please give me the postal number of your, of the city, where you come from. So postal number of my city is not individual. So you give the postal city, but this is exactly what happens here.
It's, it's another data put into the surveys of the whole world and you provide data which then kept, which then builds up and, and identifies individual vectors. Whenever you do a small action in, in this world, not myself, but these are trans individual clusters after just one match. The vector is in the cluster is built. And then you have a very pre size scoring of the individuals traces by own activity. Well, yes, in the web by email social networks, by telephone, most in interesting and most modern way is by mobility. You know what they always ask as when you download an app full internet access, read contact data. I want to read the telephone status. I want to read your geolocation data. I want to call actively the installed applications I would switch on and off the micro and so forth. These all establish traces by mobility and, and by usage.
Now, how do we do the risk analysis of privacy? I will do it only by example, by a small example, to understand risk. It is important also to understand the other side of risk, that which is trust. If there is a risky situation, you have two ways to deal with it. One is you control the situation and mitigate the risk this way. Or if you cannot control, then you have must trust somebody else who controls it for you. And in this situation where on the left hand side, you have the trustee whom you trust. And this might well be the technology, the privacy technology that you have to trust. There's the trust propensity. The person who has to trust has some individual propensity to more or less to trust. And then there is the situation which you go, the perceived risk. And this might be only perceived, which is the toothbrush, or may be an actual risk, but the decision goes through the perceived risk.
And then you take a decision and go into, into the relationship and then something comes out and either your trust is broken and then you will take another decision the next time, or you feel well with this trust. And then, then you can go into the trust again. So risk is the situation in which you go and with the, with the risk assessment of privacy, you would go through the different protection, privacy protection principles by identifying what is the subject of trust? What is the risk? What can go wrong by likelihood and by damage by amount, how can you limit the risk? What is the measure? And then if you have taken some measures to mitigate the risk, there is still a remaining risk. You will never come to a zero point. Say there is no risk left, even that is to be identified and is also helped to identify the reason now why after having taken all these measures and accepting the remaining risk, why you have a reason to, to trust?
Is there an exchange of goods? Something like tit forted, is there a common value, cultural value, for example, or a law, which helps you, is there a contract? Is there a common interest, which makes it very likely that your partner will is, is able that you, that you trust the partner. So, and you go through the risk with the purpose, binding data minimization, constantly your permission, transparency, and so forth through the single principles. And then you can identify pretty clearly and precisely the single elements of, of privacy risks. And of course, I can't do that here just for an example, to do it with the confidentiality and the risk with confidentiality is even if you have the best measures, the trustee, the one to, with whom you communicate transfers the data to others, even if the line, the communication line was very well encrypted, but your partner of course gets all the information.
And now you have to trust that your partner keeps the confidentiality. So which measures do you have to enforce your partner to keep the confide confidentiality, this a risk, or maybe you trust the partner that he is integer will not transfer, make an onward transfer, but is maybe callous is not really able to keep confidentiality applies. No measures op applies the wrong measures. For example, there is no server protection. When you do your home banking and the server, the SSL doesn't work properly, or there are protection measures, but the protection measures are vulnerable. So these are three examples for risk. With confidentiality, you put to the risks measures in order to limit these risks. And then you come to a, you come to a remaining situation. What is the remaining risks? Still lack of integrity in benevolence of the trustee will always remain. Even if you have the best measures or you need still trust the system.
So even if you built in encryption, you must trust the encryption. So how do you deal with this risk, wonderful tool? I encrypt everything asymmetric and huge keys, but are the keys in the right hands? What does the NSA do in the background and so forth? So there's still a remaining risk for confidentiality. My working hypothesis on this is institutional trust and medium strengths. The, the trust, the strengths trust in the trustee. So we should build better. Medium, should build better tools and make these tools usable. A medium security must work more automatically, not so much that users always have to decide. And very interesting research question, by the way, does technical knowledge strengthen, or we can trust the medium the more, you know, do you feel the better or the worse? I know so much about the medium. I know what can, what can go wrong?
So that weakness might trust me. There, there seems to be an upper limit. So, and the limits of pet, you do the same with all the principles. And I will skip that all for, for time reasons. And again, take just the, the last one. These are all the principles, external control. Again, with confidentiality, there are reasons of distrust business with personal data is profitable. There's a lot of money with it, and that's a big reason to be very, very careful with it. Service security is very hard to detect. So even if confidentiality is broken, there are absolutely little means to find that out. It forensics may be a solution for the future, but we have almost no technology to prove or enforce purpose binding. And then to prove that these data have been leaked at this and that place.
So measures require personal integrity procedure, bene benevolence of the partners, which is maybe a cultural thing or educational thing, technical, organizational integrity, medium working hypothesis, again, combination of network and end to end encryption. It's not one or the other. You must put them together. Network encryption is system data protection end to end is self data protection ethics, which is awareness, which is teaching, which is education plus external control, which helps even if you are helpless, even if you are not well educated, if you're not well informed, purpose binding measures are required is really a huge research gap. Finally, hypothesis one individual actions do not lead to individual disadvantages, but they lead to social damage. This goes with my third data source. If I provide personal data of my myself in the most places, I personally will not have disadvantage. This. I have nothing to hide, which is more or less true, but it doesn't help much because you build up the huge metrics of the world for all the others.
It is a societal problem. And then of course it also damages individuals, mostly others, individual consent, and transparency is an overload of users and remain toothless without external control example, mobile communication permission model. It's an overload. We are not really able to take the right decisions and we need technology, which unburdens users and data networks should be anonymous on a network level already. So data has to be taken out of the network. Of course we need personal information personalization, but not on the network level. This must go on the content level. This is a very nice claim because in modern life, you cannot really separate. The two, for example, Facebook is both is a data provider and is a content provider. So must be able to do both help for an Analyst communication and for personal communication. So that's what I want to tell you. And if you want to learn even more, then I have some references for you, which you can use for further reading. Thank you.
Thank you very much.
Well, I have had the perspective a bit before that everything's happening on the intersection of law and technique law and it, and I think this is what you brought across as well. You started off with the legal ideas. Some people call them pretty ancient. I do sometime too when I don't have a good day believing in my own laws. Not that I made them, but my laws in the sense that I have to defend them sometime. And I had hope that you could give me a message that whatever conception of law there is technique would be able to bridge that into practice. And in a way you gave me a clear yes and no. Yes and no. Is that my right understanding? Yes.
Yes. I think one of the most important work we do now, we do already for a couple of years, is to bring law and technology in another way together than, than was traditional in the old ways. This is the law build technology, which makes sure that the law is kept and we learn the technology breaks, the law, and most things happen outside of the law. And we know law doesn't work. So law also must learn to cope with new reality. And new reality is mostly built up by new, by new technology. So all these apps, WhatsApp, for example, or Facebook is a new reality. And it came in by technology and we have no law about it. At least no law, which really works. So
While the law would work, if the world wouldn't be around it, right. That's right. In theory, it does work as technique always works in theory. That's my personal comment. However,
How can we help you
|
<urn:uuid:4b166e9f-a4fe-4da2-ae70-684bfd39c652>
|
CC-MAIN-2022-40
|
https://www.kuppingercole.com/watch/eic15_session_grimm
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00640.warc.gz
|
en
| 0.97209 | 6,198 | 2.6875 | 3 |
Stiamo ancora imparando la tua lingua
Milestone investe e lavora continuamente per mettere a disposizione le pagine di milestonesys.com nel maggior numero di lingue possibili. Tuttavia il processo richiede tempo. Tutte le nostre funzioni sono già disponibili in versione multilingue, ma alcune pagine, come questa, ancora no.
Ti ringraziamo per la comprensione.
The Smoke Detector
The most common are photoelectric smoke detectors. Each unit contains a chamber lined by screens with a photoelectric light shining at all times. When smoke enters the chamber, optical sensors - typically laser-assisted infrared - detect smoke particles interrupting the light path. To reduce false alarms, pattern recognition is used to differentiate between the way smoke, dust and steam interrupt the photoelectric beam.
Smoke detectors using photoelectric cells or an ionization process (or both) and carbon dioxide detectors are extremely effective in home environments. Optical beam sensors work well in large spaces as they are able to detect the spectral bands of hot gases. While both conventional smoke detection and Co2 detection systems work well in open spaces, they are poorly suited to tall buildings or industrial sites.
Harsh environments – considered ‘high-impact’ with chemicals and high levels of steam, damp or dust interfere with smoke detection, often resulting in late detecting or false alarms.
Smoke is typically seen by humans through its density. Video programs are designed to amplify this detection process. A video smoke detection system is able to detect smoke and fire and pinpoint its location from a great distance. The camera does not require physical contact with smoke or dust and video analytic increases speed and accuracy of detection. When analytics are integrated into a camera, there is no failure point in the security network and minimal bandwidth use.
In high-impact areas where a camera cannot be placed, an aspirating smoke detection system is well suited. It consists of a central detection unit which draws air through a network of pipes to detect smoke and are recommended for areas with high airflow and condensation as they are armed with multi-sensor capable of monitoring air quality for temperature, humidity, hazardous chemicals and smoke.
Unlike smoke or fire, carbon monoxide cannot be detected by sight or smell, and is lethal. Professional grade co2 detection typically uses an electromechanical sensor. The electrodes use chemicals to sense resistance change to trigger an alarm.
Smoke detection analytics can integrate with your LAN, Fire Alarm Control Panel, VMS or Event Notification Server and are beneficial, paired with other detections analytics, especially audio. Local codes and standards apply to placement and consultation with an expert will help you choose the Smoke detection technology best suited to your conditions.
Fornito da Irisity
Indagine,Rilevamento / Scoraggiamento,Analitica,In loco,Servizi Cloud
Fornito da IronYun, Inc.
Aeroporti,Formazione,Infrastrutture critiche,Istituti sanitari,Monitoraggio del traffico,Retail
Fornito da Sprinx Technologies
Analitica,Gestione del traffico,Monitoraggio del traffico,Intelligenza artificiale,Rilevamento / Tracciamento veicoli,Integrazione del sistema
|
<urn:uuid:c69af81c-1a43-4ad6-9024-f0adfc7c1303>
|
CC-MAIN-2022-40
|
https://www.milestonesys.com/it/milestone-marketplace/technologies/smoke-detection/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00640.warc.gz
|
en
| 0.804255 | 726 | 2.578125 | 3 |
Indefatigable tech pioneer Elon Musk of SpaceX and Tesla electric car fame is poised to publish the alpha design of his proposed “Hyperloop” futuristic transportation technology.
“Elon Musk will hold a media conference call after his Hyperloop announcement on Aug. 12,” Christina Ra, communications director of SpaceX, told TechNewsWorld.
Musk first proposed the Hyperloop in an interview with PandoDaily (that portion of the discussion begins at about the 0:43:00 mark).
Hyperloop would be immune to weather, travel three to four times faster than a bullet train or twice the average speed of an aircraft, and take commuters from downtown Los Angeles to San Francisco in less than 30 minutes for much less than any other mode of transport, Musk has suggested.
It would also cost less to build than other modes of transportation. Musk estimates the cost of a Hyperloop between San Francisco and Los Angeles would be around US$6 billion, compared to the $68 billion projected cost of the high-speed rail project proposed for California.
Musk’s estimate “is probably lowball,” remarked Rob Enderle, principal analyst at the Enderle Group.
What’s a Hyperloop Like?
It’s not clear just what Hyperloop, which Musk posits as an alternative to planes, trains and automobiles — and, yes, boats — will be like. He has likened it to a ground-based Concorde jet plane, a cross between a Concorde and a railgun, and an air hockey table. It would not require rails.
Solar panels may figure somewhere in the mix, as well as maglev, or magnetic levitation.
“It sounds like an in-tube maglev train,” Enderle told TechNewsWorld. “You’d have to figure out how to deal with the bow wave, and he indicated it wouldn’t be a vacuum tube, so it would have to be some kind of creative venting, like a huge silencer.”
The cost of producing the power required would be low, especially with solar panels being used, Musk maintained.
Enter the ET3
Musk may be lagging the competition. ET3 has a similar concept in the works for which it has sold more than 60 licenses worldwide — about a dozen of them in China, according to the firm.
ET3’s system consist of automobile-sized passenger capsules traveling on maglev in two-way vacuum tubes 5 feet in diameter. People enter and exit at transfer stations equipped with airlocks.
The capsules are accelerated by linear electric motors when they leave the stations, and coast through the vacuum for the rest of their trip. Most of the energy used is regenerated as they slow down. Capsules will shoot along at 370 mph for in-state trips in the initial ET3 systems, and will accelerate to 4,000 mph for international travel.
An ET3 system can be built for 1/10 the cost of high-speed rail, or 1/4 the cost of a freeway, the company said.
We Gotta Move It, Move It
The United States’ transportation infrastructure is in dire need of an overhaul. Real highway spending per mile traveled has fallen by nearly 50 percent since the federal Highway Trust Fund was established in the 1950s, according to the National Surface Transportation Infrastructure Financing Commission.
The U.S.’ transportation infrastructure problems have been well known for years. In 2007, the U.S. Chamber of Commerce called for its overhaul.
Nearly 74 percent of respondents to an American Public Transportation Association survey in June favored increased investment in public transportation.
Perhaps the government should throw its weight behind Musk’s idea or look at other proposals, such as the one from ET3.
“It was estimated that it would cost $3 million to run a monorail from Anaheim to the Los Angeles airport, and they spent $9 million proving that it wouldn’t be economical,” Enderle mused. “It would probably be cheaper to build the Hyperloop than to study it if that model holds true.”
|
<urn:uuid:3cbc2a71-15bf-4a51-baf6-8632e86c83a9>
|
CC-MAIN-2022-40
|
https://www.linuxinsider.com/story/musk-revs-up-for-super-fast-hyperloop-transport-reveal-78487.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00640.warc.gz
|
en
| 0.953853 | 861 | 2.59375 | 3 |
Tuesday, March 12, 2013
Code, philosophy, integer overflows
A college professor has created a cool contest to teach people about “integer overflows”. Integer overflows are a common bug that allows hackers to break into computers. Even without hackers, it’s a subtle bug that most C programmers don’t know – but should.
This post discusses my entry to the contest. It’s only of interest to C coding geeks who write parsers.
The contest requirements are in a blogpost at http://blog.regehr.org/archives/909. In summary, the rules are to parse a valid “long” integer, reject invalid input, and NOT EVER do any integer overflow, even in an intermediate result. All the submissions and a test program are located on GitHub https://github.com/regehr/str2long_contest. Among submissions by other people, you'll find my own “robert_2.c”.
The problem with software is that coders start the low-level implementation before thinking of the high-level design. That’s what we see here: low-level implementations without a clear abstract, high-level design.
To be fair, the reason has more to do with the contest than the coders. The contest was defined in terms of low-level C, so it’s natural that people submitted low-level C in response.
The point I’m trying to make with my code is that coders should stop being so specific to C. Code should be as language neutral as possible. They need to stop casting things so much. They need to stop doing so much pointer math; they should index arrays instead. The only time low-level C weirdness is needed is spot optimizations, and even in those cases, coders should consider assembly language or “intrinsics” to get at even a lower level of code.
Since my code is high-level, does it perform worse than the low-level implementations? The rough answer is that it should be just as fast.
There are two things to be aware of. The first is how CPUs execute machine code. The second is how C compilers translate high-level code into assembly.
Virtually all submissions have the same inner calculation of multiplying the result variable by 10, then adding the next digit. This translates to the “imul” instruction, which stands for “integer multiplication”. Whereas addition instructions take only 1 clock cycle, multiplication is more expensive, and takes 3 clock cycles to complete. But, CPUs can execute 3 multiplications at the same time. Thus, while the “latency” for any single multiplication is 3 clock cycles, the “throughput” is one completed multiplication per clock cycle.
That’s for the “Sandy Bridge” class processors. Intel has a low-power x86 variant called Atom where multiplications are much more expensive, completing a multiplication only once every 11 clock cycles.
I point out the Atom case because that’s actually what today’s compilers target. Code compiled for the Atom tends to run well on Sandy Bridge, but the reverse isn’t true. The “imul” instruction is one example.
Compilers can change a “multiply by 10” instruction into a “multiply by 8 and add twice”. In other words, instead of “x = x*10”, you get “x = x*8 + x + x”. Since 8 is a power of 2, that means you can replace the multiplication with a logical shift instruction. That means the equation now becomes “x = x<<3 p="" x="">
Intel x86 has the totally awesome instruction called “lea” that combines a shift/multiplication plus an addition for precisely this situation. This allows us to execute the following two assembly language instructions (in Intel assembler format ‘cause I hate GCC assembler format):
lea x, [x*8 + x]
Each of these takes only a single clock cycle, so combined they take 2 clock cycles. This is slightly faster than the 3 clock cycle “imul” on Sandy Bridge, and much faster than the 11 clock cycles on Atom.
Moreover, it doesn’t really even take 2 clock cycles. Sandy Bridge can execute 4 instructions at once, and Atom can execute 2 instructions at once. The only caveat is that instructions must not depend on each other. Thus, these two instructions that are dependent on each other can be interleaved with other instructions that aren’t.
This is precisely what happens on Microsoft’s C compiler optimizing the “robert_2.c” code. It decomposed the multiplication into an LEA and ADD instruction, then interleaved them with other instructions.
The point I’m trying to make here is that both compilers and CPUs are very good at dealing with high-level code. As a coder, you know that multiplication is slower than addition, but you should largely ignore this fact and let the compiler optimize things for you.
The same applies to indexing arrays. Incrementing a pointer is a single operation, but incrementing an index and adding to a pointer is two operations. While this is true, things are different once the code is executed. Therefore, as a coder, you should generally just focus on the high-level code. The only time you should start doing low-level optimizations in C is when you’ve identified hot-spots in your code.
So, I’ve discussed how code should be written at a “high-level” and that you should let the compiler and CPU worry about how best to execute at the “low-level”. In this section, I’m going to attempt to “prove” this with benchmarks, showing how my high-level code performs at essentially the same speed as low-level code.
I downloaded the entire contest project, compiled it, and ran on my laptop. As it turns out, my “robert_2.c” code was twice as slow as the fastest result. This is within the range of meaningless compiler optimizations – simply changing compiler optimizations and versions might easily reverse the results. But still, the slowness bothered me.
Upon further investigation, I found there might be reasons for the speed differential. The test program did two things at once: test for “correctness” and test for “speed”. The consequence is that most of the input was abnormal. For example, almost all the numbers had leading zeroes.
My “robert_2.c” code doesn’t check for abnormalities explicitly. Instead, things like leading zeroes are handled implicitly – and slightly slower. Therefore, I made an update to the code called “robert_3.c” (https://gist.github.com/robertdavidgraham/5136591) that adds explicit checking. I don’t like it as much, but it’ll test my claim that it’s the test input that is at fault.
And indeed, whereas “robert_2.c” was twice as slow as the leading contender, “robert_3.c” was only 10% slower. Clearly, optimizing the code to target the test input is a win.
But instead of changing the parsing code, we should change the benchmark program. It shouldn’t be testing speed based on erroneous input. It should test speed using expected, normal input. Therefore, I changed the benchmark. Instead of using the sprint() format string of “%030ld” to format the input integers, I changed it to “%ld”, getting rid of the leading zeroes. I also got rid of all the erroneous input.
This is impossible. If there is no leading zeroes and no errors in the input in my new benchmark, then “robert_2.c” should be faster in every case than “robert_3.c”. But the reverse is true, with “robert_3.c” being 30% quicker than “robert_2.c”.
This proves my point at the top: otherwise meaningless changes in the build/runtime environment make more difference to the benchmarks than the code itself. Even when there is a 2:1 difference in speed, this is within the range of compiler weirdness, where different compilers, or meaningless changes to the code, can completely flip performance around. Also, if you look at the "stefan" algorithm, you'll find that it's essentially the same as my algorithm, except it uses pointer-arithmetic instead of indexed arrays, thus proving that indexing isn't slower.
There are indeed a few submissions that have clear performance problems, but all the rest perform the same, within the margin of error of compiler weirdness. Even when there is a 2 to 1 difference, it’s still the “same” performance. The only way to truly differentiate the code by speed is to do a wide range of tests using many compilers and many different CPUs, using many kinds of inputs.
Problems with the contest
While an awesome idea, I do want to point out problems with the contest.
The classic problem in parsing is the confusion between “internal” and “external” formats. This contest extends that confusion. Internal types like “long” are abstract and should change from platform to platform. External formats like strings are constant, and don’t change depending on platform. A PDF file is a PDF file regardless if its being parsed on a 32-bit or 64-bit computer. It shouldn’t suddenly fail to parse simply because you’ve copied it to your 32-bit Android phone.
Thus, the contest should clearly define the external format without depending upon abstract internal concepts. This specification could be something like “a 2s complemented signed 64-bit number” or “a number between -9223372036854775808 and 9223372036854775807, inclusive”. This means the internal function prototype changes to something like “int64_t str2long()”.
Everyone assumes the ASCII character set, where we check things a character being between ‘0’ and ‘9’, because in ASCII, ‘0’ is less than ‘9’. Whereas ASCII digits go “0123456789”, you can imagine some other character set being “1234567890”. Since in this new character set the value of ‘0’ is now greater than ‘9’, most all the submitted code will fail. One submission, mats.c, tries to fix this. It actually doesn’t. The problem here is again “internal” vs. “external” format. The internal character set used to write the code may differ from the external character set used to express the number, so Mat’s code still breaks if the compiler expects EBCDIC and the input is ASCII. The absolutely correct way to handle this is therefore to specify that the external format as “ASCII” and for everyone to change their code to use integers like 48 instead of character constants like ‘0’. My code depends upon ASCII because even though the official standard allows any character set, it’s wrong, the de facto standard is that there is only one standard, ASCII.
The contest used a global “errno” type value to signal an error. Globals like this are bad in even single threaded code, and become a nightmare in multi-threaded code. The contest should’ve signaled errors as another parameter, like “int64_t str2long(const char *s, int *error)”.
The contest is very Linux-specific. In order to run on my MacBook, I had to port the clock_gettime() function, because CLOCK_MONOTONIC isn’t supported on the Mac. He should’ve just used the ANSI standard “clock()” function, and the code would work everywhere. Sure, there is a good debate to be had about “wall-clock” vs. “system-clock” (another common problem in code along with integer-overflows and external-internal-formats), but he gets the wrong answer anyway. The proper Linux clock for this is “CLOCK_MONOTONIC_RAW”, as NTP can cause the clock to skew for mere “CLOCK_MONOTONIC”.
It took me 5 minutes to write, test, and submit the actual code. It’s taken me hours to write up the thinking behind it. That’s because I’m more of a philosophical coder, and I’ve spent a lot of time thinking about all these issues, even though it takes only minutes to apply these thoughts.
Software design and language-neutral code is good. Gotos, global variables, #defines, pointer-arithmetic are all bad. Parsing should follow the external/internal pattern. Portability is good, platform specific code is bad. Except for obvious performance problems, you can’t really compare the speed of code without a lot of work.
|
<urn:uuid:90a00bf5-d068-4ddf-b55d-17039c5d7eff>
|
CC-MAIN-2022-40
|
https://blog.erratasec.com/2013/03/code-philosophy-integer-overflows.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00640.warc.gz
|
en
| 0.91564 | 3,076 | 3.125 | 3 |
Internet of Things (IoT) connects devices such as industrial equipment and consumer objects to a network, enabling the gathering of information and management of these devices through software to increase efficiency and enable new services. IoT combines hardware, embedded software, communication services, and IT services.
IoT helps create smart communication environments such as smart shopping, smart homes, smart healthcare, and smart transportation. The major components of IoT include WSN (Wireless Sensor Network), RFID (Radio Frequency Identification), cloud services, NFC (Near Field Communication), gateways, data storage and analytics, and visualization elements. IoT helps to effectively manage and monitor interconnected devices.
IoT has access to organizations’ existing operational technology (OT) networks and information technology in addition to multiple devices, sensors, and other smart objects. Increasing dependence on the existing network connectivity gives rise to challenges, including IoT security threats.
The priority and focus of IT networks is to protect data confidentially and secure access, ensuring operational and employee safety. Thus, there is an increased demand for IoT security solutions in the workplace. Companies such as Cisco systems are trying to develop an approach that combines physical and cyber security components for employee safety and protection of the entire system.
To ensure the efficient functioning of devices such as smartphones, tablets, and PDAs in the workplace, it is crucial to maintain network infrastructure security. The global IoT security market can be segmented into two categories: end users and geography. For the end users category, the market can be segmented into utilities, automobiles, and healthcare, among others. For the geography category, the global IoT security market can be segmented into five major regions which include North America, Asia Pacific, Europe, Latin America, the Middle East, and Africa.
The need for regulatory compliance is one of the major factors driving the IoT security market growth. With the huge amount of digital information being transferred between people, the governments of several economies are taking steps to secure networks from hackers and virus threats by establishing strict regulatory frameworks. Thus, compliance with such regulations is expected to support the demand for IoT security solutions.
Furthermore, with advancements in technologies such as 3G and 4G LTE, threats such as data hacking have increased, which in turn have forced governments across the globe to establish stringent regulatory frameworks supporting the deployment of IoT security solutions. The emergence of the smart city concept is expected to offer sound opportunities for market growth in the coming years.
The governments of developed economies have already taken steps to develop smart cities by deploying wi-fi hotspots at multiple locations within a city. However, the market for IoT security solutions suffers from the high cost of installation. The cost of installation is usually high to provide machine-to-machine communication, which has impeded the market growth in emerging cost-sensitive economies.
Some of the key players in the global IoT security market include Cisco Systems, Infineon Technologies, Intel Corporation, Siemens AG, Wurldtech Security, Alcatel-Lucent S.A., Axeda Machine Cloud, Checkpoint Technologies, IBM Corporation, Huawei Technologies Co. Ltd, AT&T Inc., and NETCOM On-Line Communication Services, Inc. among others.
Click here to get a sample of the Persistence Market Research Report.
Gain complete access to the report here.
|
<urn:uuid:49ee1e07-3962-4e66-99d3-17c00f079919>
|
CC-MAIN-2022-40
|
https://www.iotforall.com/press-releases/iot-security-market-expansion
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00640.warc.gz
|
en
| 0.929055 | 673 | 2.65625 | 3 |
Special command descriptions
The Bourne shell provides the following special built-in commands.
|:||Returns a zero exit value.|
|. File||Reads and runs commands from the File parameter and returns. Does not start a subshell. The shell uses the search path specified by the PATH variable to find the directory containing the specified file.|
|break [ n ]||Exits from the enclosing for, while, or until command loops, if any. If you specify the n variable, the break command breaks the number of levels specified by the n variable.|
|continue [ n ]||Resumes the next iteration of the enclosing for, while, or until command loops. If you specify the n variable, the command resumes at the nth enclosing loop.|
|cd Directory ]||Changes the current directory to Directory.
If you do not specify Directory, the value of the HOME shell
variable is used. The CDPATH shell variable defines
the search path for Directory. CDPATH is
a colon-separated list of alternative directory names. A null path
name specifies the current directory (which is the default path).
This null path name appears immediately after the equal sign in the
assignment or between the colon delimiters anywhere else in the path
list. If Directory begins with a slash (
Note: The restricted shell cannot run the cd shell command.
|echo String . . . ]||Writes character strings to standard output. See the echo command for usage and parameter information. The -n flag is not supported.|
|eval [ Argument . . . ]||Reads arguments as input to the shell and runs the resulting command or commands.|
|exec [ Argument . . . ]||Runs the command specified by the Argument parameter in place of this shell without creating a new process. Input and output arguments can appear, and if no other arguments appear, cause the shell input or output to be modified. This is not recommended for your login shell.|
|exit [ n ]||Causes a shell to exit with the exit value specified by the n parameter. If you omit this parameter, the exit value is that of the last command executed (the Ctrl-D key sequence also causes a shell to exit). The value of the n parameter can be from 0 to 255, inclusive.|
|export [ Name . . . ]||Marks the specified names for automatic export to the environments of subsequently executed commands. If you do not specify the Name parameter, the export command displays a list of all names that are exported in this shell. You cannot export function names.|
|hash [ -r ][ Command . . . ]||Finds and remembers the location
in the search path of each Command specified. The -r flag
causes the shell to forget all locations. If you do not specify the
flag or any commands, the shell displays information about the remembered
commands in the following format:
|pwd||Displays the current directory. See the pwd command for a discussion of command options.|
|read [ Name . . . ]||Reads one line from standard input. Assigns the first word in the line to the first Name parameter, the second word to the second Name parameter, and so on, with leftover words assigned to the last Name parameter. This command returns a value of 0 unless it encounters an end-of-file character.|
|readonly [ Name . . . ]||Marks the name specified by the Name parameter as read-only. The value of the name cannot be reset. If you do not specify any Name, the readonly command displays a list of all read-only names.|
|return [ n ]||Causes a function to exit with a return value of n. If you do not specify the n variable, the function returns the status of the last command performed in that function. This command is valid only when run within a shell function.|
|set [ Flag [ Argument ] . . . ]||Sets one or more of the following
Using a plus sign (
Any Argument to
the set command becomes a positional parameter
and is assigned, in order, to
|shift [n]||Shifts command line arguments to the left; that
is, reassigns the value of the positional parameters by discarding
the current value of $1 and assigning the value of $2 to $1,
of $3 to $2, and so on. If there are more than 9 command
line arguments, the 10th is assigned to $9 and any
that remain are still unassigned (until after another shift).
If there are 9 or fewer arguments, the shift command unsets
the highest-numbered positional parameter that has a value.
The $0 positional parameter is never shifted. The shift n command is a shorthand notation specifying n number of consecutive shifts. The default value of the n parameter is 1.
|test Expression | [ Expression ]||Evaluates conditional expressions. See the test command for a discussion of command flags and parameters. The -h flag is not supported by the built-in test command in bsh.|
|times||Displays the accumulated user and system times for processes run from the shell.|
|trap [ Command ] [ n ] . . .||Runs the command specified by the Command parameter
when the shell receives the signal or signals specified by the n parameter.
The trap commands are run in order of signal number.
Any attempt to set a trap on a signal that was ignored on entry to
the current shell is ineffective.
Note: The shell scans the Command parameter once when the trap is set and again when the trap is taken.If you do not specify a command, then all traps specified by the n parameter are reset to their current values. If you specify a null string, this signal is ignored by the shell and by the commands it invokes. If the n parameter is zero (0), the specified command is run when you exit from the shell. If you do not specify either a command or a signal, the trap command displays a list of commands associated with each signal number.
|type [Name . . . ]||Indicates how the shell would interpret it as a command name for each Name specified.|
|ulimit [-HS] [ -c | -d | -f | -m | -r | -s | -t |-u] [limit]||Displays or adjusts allocated shell
resources. The shell resource settings can be displayed either individually
or as a group. The default mode is to display resources set to the
soft setting, or the lower bound, as a group.
The setting of shell resources depends on the effective user ID of the current shell. The hard level of a resource can be set only if the effective user ID of the current shell is root. You will get an error if you are not root user and you are attempting to set the hard level of a resource. By default, the root user sets both the hard and soft limits of a particular resource. The root user should therefore be careful in using the -S, -H, or default flag usage of limit settings. Unless you are a root user, you can set only the soft limit of a resource. After a limit has been decreased by a nonroot user, it cannot be increased, even back to the original system limit.
To set a resource limit, select the appropriate flag and the limit value of the new resource, which should be an integer. You can set only one resource limit at a time. If more than one resource flag is specified, you receive undefined results. By default, ulimit with only a new value on the command line sets the file size of the shell. Use of the -f flag is optional.
You can specify the following ulimit command flags:
|umask [nnn]||Determines file permissions. This value, along with the permissions of the creating process, determines a file's permissions when the file is created. The default is 022. When no value is entered, umask displays the current value.|
|unset [Name . . .]||Removes the corresponding variable or function for each name specified by the Name parameter. The PATH, PS1, PS2, MAILCHECK, and IFS shell variables cannot be unset.|
|wait [n]||Waits for the child process whose process number is specified by the n parameter to exit and then returns the exit status of that process. If you do not specify the n parameter, the shell waits for all currently active child processes, and the return value is 0.|
|
<urn:uuid:9cd08a35-73da-4e6b-82f9-0c11963d3b4e>
|
CC-MAIN-2022-40
|
https://www.ibm.com/docs/en/aix/7.2?topic=commands-special-command-descriptions
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00640.warc.gz
|
en
| 0.743081 | 1,841 | 2.5625 | 3 |
The Common Weakness Enumeration Specification, shortened as CWE, is an formal list of common, real-world software vulnerabilities to offer one common language to all the different entities developing and securing software. CWE's ultimate goal is to help the security testing industry mature in their application security programs and the security testing of their projects.
The CWE is written in one common language to incl for the causes of security vulnerabilities found in software and applications. It’s a community project which is contributed to and designed by developers and software engineers alike from around the world.
CWE focuses on several areas of software development for enterprise level entities. One area is where Software Assurance and resources are dedicated to ensuring that the supply chain for software is protected from vulnerabilities. This looks at incrementally improving approaches to software assurance that reduce risk and the chance of new code being exposed to known problems.
Each CWE entry drills down into the specifics, including a description summary, the point at which the weakness can be introduced, the coding languages and platforms which could be effected, the most common consequences, real-life examples, relationships to other CWE entries and more.
Like CVE, the CWE is maintained by the MITRE corporation and can be used as a benchmark to test security testing tools against each other. In fact, the CWE was created as a kind of supplement for the CVE, filling in the (many) gaps left up-in-the-air with CVE entries.
CWE has also published guidelines on secure development practices. Risk management for the supply chain is also tackled with an in depth briefing to better adapt the chain to reduce risks to code. Furthermore, there’s a focus on code analysis with a briefing paper from the Software and Supply Chain Assurance branch of the Department of Homeland Security.
Yet another part of the CWE project is guidelines for assessment and remediation tools for use in secure software development for platform management, static analysis, real-time threat prevention and more. Users can also access the full national vulnerability database, which includes a comprehensive listing of known remedies for CWE vulnerabilities.
|
<urn:uuid:10a7a5ac-831c-4567-b456-4fe15457a802>
|
CC-MAIN-2022-40
|
https://checkmarx.com/glossary/cwe-2/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00640.warc.gz
|
en
| 0.959273 | 430 | 2.609375 | 3 |
Technology firms are now linking objects ranging from household appliances to industrial machinery to the internet. With telecommunications firms on board, manufacturers are coming up with new, innovative uses for everyday objects. Yet linking these everyday objects to a wider network out there creates new security risks.
How do you cope with the massively increased flow of data that the Internet of Things requires? At the moment there are already a forecasted amount of 6.4 billion connected objects, but as this technology takes off this number will easily reach into the 11 billion by 2020. As a result, some new priorities are beginning to emerge.
Security begins with the object
When a staggering volume of new objects become connected to the internet, some form of security must be built in at the manufacturing level. This is a shift that’s one of the first major new priorities for the IoT. The shift will most likely move in stages, with the web service focused on first as a security barrier. Cloud security is also important to think about. Yet as IoT innovation from Nokia Networks and other providers continues to develop, the security will eventually become embedded directly into the object.
Securing data en route
Another area where security experts will need to focus is in the route that data takes from the object to the internet. There’s a far greater burden on IT departments with the growth of the IoT, as they must now track exponentially increasing volumes of data. Endpoint security is often focused on, but these mass flows of data must also be secured during transport as a new priority. Data is currently sent to a local data collation hub in many cases, where it is stored before it’s moved on to the next end point. These midway storage points must be secured just like the end points. Securing the endpoints is still important however, as is educating end users who may pose the biggest security risk.
Privacy is another major security issue to consider with the growth of the IoT. Naturally, privacy concerns are already a core issue with cloud systems, but this will grow as every person and object starts transmitting data. Objects will constantly be collecting and aggregating data in real time, which must be stored securely for review.
Systems will need to be put into place to determine how to best delete this data when it is no longer needed. Useful connected objects like health monitors will store sensitive information about consumers, for example. This is a major component of the security puzzle that needs to be addressed before the IoT can become mainstream.
Increasing Security Spending
With everything from industrial machinery to consumer vehicles connected to the web, the consequences of a security breach could be serious. As the IoT becomes increasingly complex over the next few years, stretching out to become a technology in every individual’s home, directing entire smart cities, the consequences of an attack could be considerable. To combat this, IoT security spending has increased rapidly to match the technology’s growth. A recent Gartner report stated that worldwide spending will reach $547 million in 2018, up from $348 million this year and $281.5 million in 2015.
The growth of this new technology will offer society numerous benefits, but it’s important to think about embedding security right from the start. Identifying these priorities and increasing security spending accordingly is a good start.
(Security Affairs – security, Internet of Things)
|
<urn:uuid:7cfaec15-1c03-41c0-a7fc-27b8a445e8ef>
|
CC-MAIN-2022-40
|
http://securityaffairs.co/wordpress/50010/hacking/internet-of-things-security.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00040.warc.gz
|
en
| 0.950883 | 684 | 2.71875 | 3 |
Read Time: 07 min.
Data encryption, especially on the Cloud, is an extremely important part of any cybersecurity plan in today’s world. More companies migrate their data to the Cloud for its ease of use, reduced cost, and better security. The most prominent Cloud Service Providers (CSPs), like Google, Azure, and Amazon, all have different data encryption methods, but they are all secure and user-friendly.
How Data Encryption on the Cloud works
Cloud data resides in two places: in-transit and at-rest.
Data-in-transit encryption refers to using SSL or TLS to create a security “wrapper” around the data being moved. This ensures that it is more challenging to steal data-in-transit, but even if it were successfully stolen, it would be a confusing block of characters that would not make sense to the attacker. Most data-in-transit encryption is done through web browsers or FTP clients, so it does not need to be as managed as data-at-rest. Data-at-rest encryption is done when data is on a disk or another storage method. Similar to data-in-transit encryption, the data is jumbled into a random series of characters to stop attackers from stealing the plaintext.
CSPs have many different ways of providing data encryption to the user. Data can be encrypted by default if the user implements that option. Each section of a Cloud platform handles the encryption of data differently. Some may encrypt all data that they store and allow the CSP to manage the keys involved in encrypting the data, while others may give the user full control over what happens to the data, encryption-wise. Most services on the Cloud have a middle ground, allowing the user to select if the CSP should manage everything, or if they wish to control it all themselves, or something in between that. Many users create their methods of automatically encrypting data since platforms like Google Cloud Platform (GCP) provide so many tools for the creation of encryption methods.
GCP Provided Tools for Data Encryption
GCP uses AES-256 encryption by default when data is at-rest in Google Cloud Storage, and data-in-transit is encrypted with TLS by default. When encrypting data on the Cloud, GCP utilizes DEKs and KEKs, which are used and stored with Google’s Key Management Service (KMS) API. A DEK is a data encryption key, which is used to encrypt the data itself. A KEK, or key-encryption key, is then used to encrypt the data encryption key to ensure an extra security layer. The KMS API works closely with other Google Cloud services, such as Cloud security services, Google Cloud Functions, etc, to store keys used for encryption and decryption on the Cloud. When other APIs attempt to access DEKs and KEKs, the user must first have the necessary permissions to access the keys. Services like IAM provide roles for users to be able to access KMS.
IAM, or Identity and Access Management, creates essential roles for services that they will need to work with different APIs within GCP. IAM offers another layer on top of KMS when protecting encrypted data. Administrators may create their roles for services and users, giving them more control in what they want access to certain users or services. IAM can also connect other GSuite applications, such as Gmail or Google Drive, to applications and services within a user’s Google Cloud account, further authenticating users.
Another example of a GCP API that assists in encrypting data is the Data Loss Prevention (DLP) API. This API can be used within or outside of Google Cloud and helps the user identify potentially sensitive data, such as Personally Identifiable Information, and mask that data from attackers. Google Cloud Platform users can integrate the KMS and DLP APIs to do encryption methods like Format Preserving Encryption, which encrypts data to be misunderstood while keeping the same formatting as the plaintext, allowing the PII data to be used with false values.
These methods and more allow users the freedom to manage their data encryption methods on the Google Cloud Platform. KMS, IAM, and DLP can also be integrated with Google Cloud Functions to encrypt data when uploaded to Google Cloud Storage automatically. Google Cloud Dataflow can use DLP and KMS to encrypt data automatically from several different storage locations. This shows how users can create their own, potentially more robust data encryption methods to assist in the storage of sensitive data on the Cloud.
|
<urn:uuid:f63e85a9-a82a-4a5c-ae3e-6205eb0dcecc>
|
CC-MAIN-2022-40
|
https://www.encryptionconsulting.com/tag/gcp-kms/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00040.warc.gz
|
en
| 0.918666 | 946 | 3.109375 | 3 |
As educational institutions adapt to changing circumstances and employ remote learning policies, Desktop-as-a-Service (DaaS) can be a great investment.
Like VDI — which you can read more about in our blog, The Future of Remote Access — DaaS is an effective way to implement virtualisation and make remote access even easier for educational institutions. This is an especially beneficial solution because of its low cost and reduced maintenance needs.
In this blog, we’ll cover more benefits of DaaS and why it could be a great solution for your educational institution.
What is DaaS?
DaaS is a way to deliver virtual content where the responsibility for system security and management is provided through a cloud-based company. These providers ensure that authorised users are connected to the appropriate desktops and applications. Their solutions interact with virtual infrastructures, gateway services and desktops to ensure a secure operating environment.
Because no software resides on user devices, DaaS provides flexibility in content delivery while lowering costs and reducing time to implementation. Here are five ways DaaS can benefit education.
While more educational content is being delivered online, its effectiveness depends on accessibility. According to the World Economic Forum, 95% of children from countries such as Switzerland or Norway have internet access from personal devices. In contrast, only 34% of children in Indonesia have the same accessibility. The disparity is within countries as well as between them. In the United States, 25% of disadvantaged children do not have access to computers, while almost 100% of privileged students do.
Contrast those numbers with mobile phone usage. Worldwide, almost 70% of the population has a mobile phone. Being able to deliver content to mobile phones could significantly reduce access disparity. DaaS has the ability to display virtual content on desktops, laptops, phones and tablets. Merely deploying a DaaS solution can improve accessibility and reduce learning disparity.
Cybersecurity threats against educational institutions continue to increase as they are seen as easy targets. In fact, the University of Birmingham fell victim to a cyber-attack last year which resulted in an undisclosed ransom being paid to the hackers by the software suppliers.
For the most part, educational institutions lack the financial resources to maintain a highly secured infrastructure. Even if institutions had the financial resources to build a secure network, finding personnel to manage it would be challenging. As of late 2019, over 80% of companies worldwide had trouble filling cybersecurity positions.
Moving to a DaaS environment eases the security burden on educational institutions. DaaS providers are responsible for ensuring that only authenticated users are given access to educational resources. As the number of students connecting to online platforms increases, so do network vulnerabilities. Schools have no control over the security measures present on a student’s device.
Unless there are adequate precautions in place, cybercriminals could gain access to an institution’s network. Having a more secure environment reduces the odds of a successful cyberattack.
Maintaining a physical network can be costly. Equipment fails and must be replaced or repaired. Mission-critical devices require an onsite backup to ensure continuity. Educational services cannot wait for a replacement to arrive.
When IT personnel monitor performance and troubleshoot potential problems, they have little time for anything else. Resources are not available to answer questions or provide support when educators or students have difficulty accessing the system. The lack of responsiveness leads to frustration and degrades the educational experience.
Maintenance is not just hardware. Software has to be updated, and those changes must be tested. Upgrades add to the workload of an IT staff. With DaaS, network maintenance is almost eliminated, because institutions are only responsible for the endpoints. With fewer devices to maintain, IT personnel can focus on improving the student experience.
Lowers operating costs
DaaS does not require a large upfront capital investment. All infrastructure components are in the cloud, so the only equipment costs are for endpoints. If institutions allow students to use their own devices, there are even fewer endpoints to purchase and maintain.
DaaS implementation does not require extensive IT expertise because network management is moved to the cloud. IT workloads are reduced to monitoring endpoints and keeping applications working. Extensive IT departments or outsourced resources are no longer required, further reducing operating expenses.
Of course, lower maintenance means lower operating costs, but it also means better productivity. IT personnel can be re-allocated to spend more time supporting staff and students instead of focusing on keeping the network running. Productivity improves as problems are addressed faster and with less friction. Increased productivity at no additional costs contributes to a positive bottom line.
Most DaaS providers offer a subscription plan, making it much easier to budget IT expenses. Whether the subscription is monthly or annually, the cost is fixed—no need to budget for that unexpected equipment failure or additional service fees.
Prepares for the future
Online learning is expected to grow significantly over the coming years. Recent research found that older students retained 25% to 60% more material when studying online. The retention rates improved because students were able to learn at their own pace and in their own way, demonstrating that there is increasing value in this kind of learning.
DaaS is positioned to scale as more instruction is moved online. Whether it is more courses or more students, DaaS can quickly scale to support the increased demand. Institutions do not have to worry about degraded performance or architecture limitations. This capability gives the educational sector the agility it needs to address changes in enrollment.
The platform provides flexibility in content delivery. Because it supports multiple endpoints, DaaS can expand its connectivity to support new devices. Whether the endpoint is a 5G cell phone or the next generation of laptops, the DaaS architecture can quickly transform to support them. This flexibility makes it easier for educators to tap into a broader market as new endpoints develop.
The educational sector needs to adopt new tech
Flexibility and agility have become critical factors in an institution’s ability to survive. As the demand for more virtual learning grows, institutions will have to find ways to deliver a quality learning experience if they want to survive. Constructing an architecture that can pivot quickly to address a changing educational landscape is one way to ensure an institution’s success.
DaaS could be that architecture for many institutions. Finding the right kind of tech to help adapt to fast-changing circumstances is key, but it’s all about implementing the right solution to address your specific problem. Nexstor offers end-to-end solutions to meet client needs, whether on-premise or in the cloud. Our DaaS solution is designed to help educational organisations traverse the uncertain terrain of the 21st century.
You manage your costs, we'll
manage your IT
In-house IT is expensive, get instant quotes on cost-effective managed IT services using leading backup vendors.
|
<urn:uuid:418297ee-046f-4d48-95e9-8cc39eda9e91>
|
CC-MAIN-2022-40
|
https://nexstor.com/desktop-as-a-service-in-education/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00040.warc.gz
|
en
| 0.943358 | 1,434 | 2.53125 | 3 |
Defining distinct types of hackers, motives, goals, and acts
A hat-based classification of types of hackers distinguishes between different attackers. Some might be pranksters, luring people into traps, or flashing controversial messages on their targets’ screens. Others can identify as activists, exposing hypocrisy, illegal behavior, or scrutiny-deserving acts. The third category can be cybersecurity specialists hacking systems at their owners’ requests. Thus, there are many types of hackers, and relating this word to crime is not a just ruling. Let’s throw our prejudice out the window and determine what types of hackers exist, their goals, and motivations.
What is a hacker?
Hackers are people who use a computer to obtain unauthorized access to information or devices. Other names that could relate to these individuals include tricksters, attackers, perpetrators, or digital vandals. The malevolence of cybercrime has affected thousands of entities, from regular users to companies.
You have probably heard of crafty and sophisticated attacks launched on services. For instance, ransomware is a virus that denies access to the targeted device, encrypts files, and demands ransoms. Over the years, hackers have harmed thousands of institutions, including hospitals and schools.
Hackers are responsible for such malicious acts, but sometimes, they do not even need high-tech skills to cause havoc. In 2020, these malicious individuals had stored thousands of passwords and usernames of Spotify users. However, in this case, they had no reason to execute any attacks or hacking sprees. These hackers simply took the credentials stolen in other data breaches and checked whether users had reused them.
Thus, while we might imagine hackers as crafty programming geniuses, they can be far from it. They are opportunistic individuals, searching for the right moment to strike.
Additionally, not all hackers are malicious. Sometimes they use their skills to improve and guide users to safety. Thus, the different types of hackers represent their role in the digital world. While some seek profits and have no moral code, other hackers can be ethical professionals refining cyberspace.
Types of hackers
All hackers have unique motives for pursuing their goals. Some might break through security defenses strictly as a way to entertain themselves. Others may target companies due to their political or societal views. And, of course, some look for monetary gain and nothing else.
White hat hacker
White hat hackers are the types of hackers that focus on penetration testing. Their goal is to find security loopholes in systems by attempting to invade them. Thus, these hackers make a living by helping other companies keep their systems secure.
Most of them participate in various bug bounty programs, promising earnings for detected bugs. In rare cases, the rewards are astonishing, like when Facebook paid a $30,000 bounty for a bug on Instagram. In 2020, Google paid $6.5 million to bug bounty hunters, proving that the life of a white hacker can be lucrative. The best part is that there is nothing illegal here: all these hunters do everything by the book.
So, white hat hackers are skilled individuals with good intentions. They differentiate themselves from criminal hackers and choose a righteous path. Many of them work for government agencies or corporations, while some prefer testing systems’ resistance independently.
Black hat hackers
Black hat hackers are the ones responsible for all the malicious hacks and other illegal acts. These cunning individuals look for gaps in security to exploit them and steal data. Then, for instance, they can sell their loot to interested parties, and many online forums facilitate this market.
Of course, the thriving ground for these malicious artists is none other than the dark web. For instance, our report has shown that hackers sell SSN (social security number) for as little as $4. There are many different types of data that these malicious individuals can sell. Others might even provide hacking software or other tools for less-skilled counterparts.
Thus, a black hat hacker feels no remorse and does not work for a greater cause. Their primary goal is monetary gain, and the means for getting it are not important.
Gray hat hackers
Gray hat hackers are individuals standing in the middle of white and black hat attackers. These individuals might not obsess about monetary gain. Instead, their hacking skills are the tools they use to entertain themselves. Thus, many attacks initiated by gray hat hackers are for fun or to troll users/companies.
Even though they do not have malicious intent, their activities might still be illegal. Any type of unauthorized access is a crime. Therefore, these types of hackers can face legal repercussions for their actions. In 2017, a gray hat hacker accessed 150,000 printers to warn their owners of dangers associated with keeping their printers exposed. It is only one of the examples of how these attackers cause no significant harm.
Hence, some of these individuals might perform relatively benign activities. However, this does not mean that others welcome their contributions. Others feel that gray hat hackers stand on dangerous ground. They are one step away from turning into black hat hackers.
Green hat hackers
Green hat hackers are the rookies in the hacking realm. They are wannabes attempting to replicate the success of seasoned attackers. However, they might lack technical skills; thus, some even refer to them as “noobs.”
Despite the gaps in their skillset, green hat hackers desperately pursue their goals. And, for the most part, they are not a highly severe digital threat. Nevertheless, many regard them as dangerous not for their intentional hacks but for the accidental errors they might make. In some cases, the damage they inflict on a system might be irreparable and useless to hackers themselves.
Similar to a green hat hacker, script kiddies are also new to the hacking lifestyle. They lack knowledge and ability, and they are unlikely to cause damage to properly-secured systems. Thus, script kiddies choose targets that might be less demanding to compromise.
Additionally, many of these types of hackers exploit scripts or code written by other, likely professional hackers. Therefore, their primary goal might be to get some attention, have a laugh, but their knowledge of the hacking process is very minimal.
Blue hat hackers
There are two ways to see blue hat hackers:
- Security professionals working outside of an organization. Companies might invite them to test software or systems for bugs.
- Amateur hackers seeking attention and popularity amongst their peers. They might also be vengeful individuals, using hacking as means for revenge. For instance, they might commit doxxing on the people who wronged them.
Red hat hackers
Red hat hackers are the knights in shining armor who try to stop black hat hackers. However, despite their noble goal, their actions also typically constitute illegal behavior. Thus, they choose the wrong means for pursuing righteous aims. Also, they often select rather ruthless tactics for stopping black hat hackers. Therefore, just like their targets, red hat hackers will face the appropriate legal consequences.
So, these types of hackers are the caped crusaders of the digital world. Sadly, means do not justify the ends. As a result, according to the law, they are also the ones who break it.
These hackers work for various government institutions. Their primary goal is to defend a nation’s interests and security at home and abroad. In some cases, they might target another country as means of finding controversial or sensitive information. There are many state-sponsored hacker groups pledging allegiance to a specific country.
For instance, Cozy Bear is a big part of the Russian attempts to wreak havoc on their targets. Allegedly, the group was the one to influence the 2016 US presidential elections. The second well-known name is Lazarus Group. Sponsored by North Korea, the group has quite a few dishonorable medals. For one, Lazarus Group is the one allegedly responsible for the devastating WannaCry attacks in 2017.
Hacktivists typically claim to have a solid moral code as their hacking raids reflect their societal or political beliefs. Therefore, government institutions or companies suspected of shady behavior might become their targets. Thus, hacktivists claim to be advocates for social justice. However, they take matters into their own hands and pursue illegal activities to expose conspiracies, crimes, and racial injustice.
Hacktivists divide society into people with two contrasting opinions. One side believes these hackers to be modern-day Robin Hoods, exposing the shady dealings of the rich and powerful. The other side cannot overlook the fact that they break the law, which is illegal no matter the end goal.
Have you heard stories of an ex or current employee exposing the companies they work for? Whistleblowers are these individuals following a moral agenda to uncover illegal or unethical activities. For the most part, whistleblowing does not constitute an unlawful act on its own. However, if these people hack into systems, their actions become criminal.
For instance, Daniel Ellsberg is one of the notorious whistleblowers who released the scandalous Pentagon papers. You might have also heard of Edward Snowden, Chelsea Manning, Reality Winner, and Shadow Brokers. Despite making certain decisions for the greater good, whistleblowers are in no way relieved of the responsibility to obey the law.
Types of hackers: different motivations and goals
Overall, not all types of hackers use their excellence to commit crimes. Some choose good deeds in the hopes of shaping a safer digital tomorrow. Thus, being a hacker does not necessarily relate to malicious intentions. Many of these experts deserve our applause and appreciation for their contributions. However, certain types of hackers do pose a risk to our digital lifestyles. And while others do follow a righteous path, their means for achieving justice turns them into criminals as well.
|
<urn:uuid:98dbe4e8-a825-484d-b91d-222b983cc660>
|
CC-MAIN-2022-40
|
https://atlasvpn.com/blog/defining-distinct-types-of-hackers-motives-goals-and-acts
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00040.warc.gz
|
en
| 0.945325 | 2,003 | 2.75 | 3 |
What is Voice Recognition? The Basics, New Tech Products
Voice recognition is defined as the ability of a computer program or machine to understand and carry out spoken commands or to receive and interpret diction. Voice recognition will be enabled automatically after a user speaks to a given device with voice recognition capabilities. Voice recognition software allows users to perform any number of hands free functions such as making phone calls, setting reminders, setting up a GPS navigation system or setting an alarm for work. Common voice recognition software options on the market today in Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana.
There are many different types of voice recognition software options available to consumers. These include but are not limited to:
- Automatic speech recognition – these systems used AI technology to automatically detect what the speaker is saying.
- Speaker dependent system – these systems require users to complete voice recognition training before use, typically in the form of a series of words and phrases that must be read aloud.
- Speaker independent system – these programs will identify a user’s voice without the need for any training.
- Discrete speech recognition – these systems require that a user pause before speaking each word so that the speech recognition software can accurately identify each word
- Continuous speech recognition – These systems can recognize voices at a normal conversational level of speaking.
- Natural language systems – these systems can not only distinguish between voices but can answer questions and queries as well.
How does voice recognition work?
In order to function properly, voice recognition software running on computers requires that the analog audio be converted into digital signals, a process known as analog-to-digital conversion. In order for a computer to accurately decipher a signal, it must have a digital database of vocabulary, words, and syllables, as well as an expeditious for comparing this data to the digital signals. These speech patterns are stored onto the hard drive of a computer and loaded into memory whenever the voice recognition software is run. Moreover, a comparator then checks these stored patterns against the output of the A/D converter, an action known as pattern recognition.
The size and range of a voice recognition’s effective vocabulary will be dependent upon the random access memory capacity of the computer the software is being run on. For instance, a voice recognition program will run significantly faster if the entire vocabulary can be loaded into RAM. Comparatively, searching the hard drive for some of the word matches is a more tedious and time consuming process. Furthermore, processing speed plays a significant role, as this will affect how quickly a computer can search for these RAM matches.
What are the advantages and disadvantages of voice recognition software?
The primary advantage of voice recognition software is the convenience that it can provide consumers. For example, with the help of an AI virtual assistant like Siri, a user could drive their car, make a phone call, and activate the smart alarm at their house all at the same time. While the original voice recognition systems released with computers during the 1970s could only pick up around a thousand words, current software options can pick up virtually any English word or phrase imaginable. This is done by using sophisticated and nuanced algorithms that quickly transform spoken words into written text.
On the other hand, there are some limitations to voice recognition software. While software offerings and features are constantly evolving and improving, all of these systems will undoubtedly be prone to error. For example, many popular voice recognition programs will struggle to differentiate between similar sounding words such as hear and here. Additionally, background noise can obviously produce false input and cause confusion. As such, voice recognition software must still be used in a quiet and undisturbed environment, limiting some of its applications further.
What are the differences between voice recognition and speech recognition?
While the difference between voice recognition and speech recognition may seem minute and arbitrary at first glance, they are in fact two distinctly different functions within a computer program or verbal assistance system. To put it simply, voice recognition is looking to pick the unique voice of the speaker, while speech recognition aims to pick up the specific words and diction that a person uses when speaking. Voice recognition allows for security features such as voice biometrics to be enabled. Conversely, speech recognition software allows for accurate commands and automatic transcription. As such, voice and speech recognition respectively are used in two completely different contexts.
Voice recognition software listens to your voice in real time and responds instantly. However, this is at the cost of both accuracy and functionality, as these features are usually limited to speaking about the task at hand. Alternatively, speech recognition is most often used in the context of audio transcription. The words and phrases contained in such transcriptions will almost always be more complicated and complex than the speech given to voice recognition software. The decision on which feature to use will depend upon the specific needs of the consumer using the particular software or program at hand.
|
<urn:uuid:661c6fa0-8f10-4a82-8e2e-00bec432a1a2>
|
CC-MAIN-2022-40
|
https://caseguard.com/articles/what-is-voice-recognition-the-basics/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00040.warc.gz
|
en
| 0.940065 | 989 | 4.125 | 4 |
Most often measurements made on electric circuits are that of current, voltage, resistance, and power. The base units—ampere, volt, ohm, and watts—are the values most commonly used to measure them. Table 1 lists these basic electrical quantities and the symbols that identify them.
Table 1: Electrical Units, Symbols, and Definition
In certain circuit applications the basic electrical units—volt, ampere, ohm, and watt—are either too small or too big to express conveniently. In such cases metric prefixes are often used. Recognizing the meaning of a prefix reduces the possibility of confusion in interpreting data. Common metric prefixes are shown in Table 2.
Table 2: Common Metric Prefixes and their Symbols
A metric prefix precedes a unit of measure or its symbol to form decimal multiples or submultiples.
Prefixes are used to reduce the quantity of zeros in numerical equivalencies. For example, in an electrical system, the signal from a sensor may have strength of 0.00125 V, while the voltage applied to the input of a distribution transformer may be in the 27,000-V range. With prefixes, these values would be expressed as 1.25 mV (millivolts) and 27 kV (kilovolts), respectively. Figure 1 shows examples of prefixes used in the rating of electric components.
Figure 1 Prefixes used in the rating of electric components.
Knowing how to convert metric prefixes back to base units is needed when reading digital multimeters or using electric circuit formulas. Figure 2 and the following examples illustrate how many positions the decimal point is moved to get from a base unit to a multiple or a submultiple of the base unit.
Figure 2 Movement of the decimal point to and from base units.
To convert amperes (A) to milliamperes (mA), it is necessary to move the decimal point three places to the right (this is the same as multiplying the number by 1,000).
To convert milliamperes (mA) to amperes (A), it is necessary to move the decimal point three places to the left (this is the same as multiplying by 0.001).
To convert volts (V) to kilovolts (kV), it is necessary to move the decimal point three places to the left.
To convert from megohms (MΩ) to ohms (Ω), it is necessary to move the decimal point six places to the right.
To convert from microamperes (μA) to amperes (A), it is necessary to move the decimal point six places to the left.
- What is the base unit and symbol used for electric:
- Write the metric prefix and symbol used to represent each of the following:
- One million
- One millionth
- One thousand
- Convert each of the following:
- 2,500 Ω to kilohms
- 120 kΩ to ohms
- 1,500,000 Ω to megohms
- 2.03 MΩ to ohms
- 0.000466 A to micro-amps
- 0.000466 A to milliamps
- 378 mV to volts
- 475 Ω to kilohms
- 28 μA to amps
- 5 kΩ + 850 Ω to kilohms
- 40,000 kV to megavolts
- 4,600,000 μA to amps
- 2.2 kΩ to ohms
Review Questions – Answers
- (a) Ampere A, (b) Volt V, (c) Ohm Ω, (d) Watt W
- (a) Milli m, (b) Mega M, (c) Micro μ, (d) Kilo k
- (a) 2.5 kΩ, (b) 120,000 Ω, (c) 1.5 MΩ, (d) 2,030,000 Ω, (e) 466 μA, (f) 0.466 mA, (g) 0.378 V, (h) 0.475 kΩ, (i) 0.000028 A, (j) 5.85 kΩ, (k) 40 MV, (l) 4.6 A, (m) 2,200 Ω
|
<urn:uuid:aa772631-b60b-4bdd-9874-2284a9eff224>
|
CC-MAIN-2022-40
|
https://electricala2z.com/electrical-circuits/electrical-units-and-metric-prefixes-examples/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00040.warc.gz
|
en
| 0.786418 | 972 | 4.34375 | 4 |
The information security market has grown tremendously over the past two decades as major cyber attacks have resulted in over ten billion dollars in damages1, revealed millions of personally identifiable information2, and new legislation begins fining companies for failure to take appropriate measures3. Yet, the vast majority of defensive systems are designed solely for distributed platforms and fully ignore the mainframes which run the vast majority of the world’s financial transactions and maintain the bank’s most sensitive data.
“But the mainframe is the most secure platform, it has never been hacked!”
I’ve heard this line repeatedly since I’ve entered the mainframe market. The mainframe has earned years of security through obscurity because the original hacker culture simply lacked access to the prohibitively expensive system to break and pen test. Today, the cost of access for nation state sponsored Advanced Persistent Threats (APTs), ubiquity of z/Linux, and access to a virtualized z/OS platform have lowered the barrier to entry for malicious actors and necessitate companies begin protecting the backbone of the IT infrastructure.
Two companies learned this lesson the hard way when a small group of hackers, led by the Pirate Bay co-founder Gottfrid Svartholm Warg, broke into their IBM mainframes and made transfers of up to $858,500.4 Both the Swedish Nordea Bank and Logica, a Swedish IT firm that provides tax services to the Swedish government were compromised by the four suspects who leveraged two 0Day exploits that they developed themselves using a z/OS virtual machine. Both Windows and Unix operating systems have heavily paid teams of researchers looking for vulnerabilities in their operating systems which increases the overall security posture as bugs are found and patched.
How many researchers are currently attacking the z/OS operating system?
What does that say for the likelihood that further bugs exist that have yet to be discovered?
One of these researchers, Phil Young, gave a tremendous talk on this hacking operation and explained that the hackers were quickly able to:
- Escalate their privileges from a normal user to an administrator/super user with z/OS special and operations access
- Modify the Authorized Program Facility (APF) to give themselves persistence to the machine
- Upload programs and scripts both in REXX and C to extend their toolset
- Read Personally Identifiable Information (PII) stored in their datasets and exfiltrate the files off the mainframe
- Transfer money to external accounts
Within a few hours, the hackers had full control of the victim mainframe and all the data it controlled. At this point, the hackers had the full capability to encrypt and destroy the company’s most vital information which would have left a devastating impact on the company as they scrambled to recover their tape files. The only way to effectively manage and stop attacks like this is to use tools like BMC AMI Security to have real time notification and monitoring of mainframe events aggregated into the enterprise SIEM where security analysts can immediately spot unauthorized users escalating privileges, access to the mainframe’s most sensitive files, and modification of the APF.
Luckily, the aftermath only cost them around $700,000 to conduct their incident response and investigation as the hacker’s greed in transferring over $800,000 in raised enough audit red flags that the company could catch them before further damage could be done. The Companies were additionally lucky that they managed to avoid the ever-growing trend of compliance regulations like GDPR which would have levied significant fines for the loss of such sensitive user PII.
The time to prevent stories like this from being about your organization is now. Treat your mainframes like the crown jewels of your IT infrastructure that they are and include them in your security architecture like your distributed systems while training your security analysts on indicators of compromise before you are in the headlines.
For more information on how to protect your mainframe from malicious breaches, download 11 Guidelines for Minimizing Vulnerability for IBM z/OS while Improving Compliance today.
These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.
See an error or have a suggestion? Please let us know by emailing [email protected].
|
<urn:uuid:8ff13dd8-37db-4f51-9ef0-2342d3a90e96>
|
CC-MAIN-2022-40
|
https://www.bmc.com/blogs/are-mainframes-your-weakest-link/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00040.warc.gz
|
en
| 0.950806 | 871 | 2.546875 | 3 |
With the demands on data centers increasing, fueled in part by the rise of Internet of Things (IoT) technology, expectations are rising for uptime to remain consistent — at 100 percent 24/7. Yet, there is also an increasing expectation that data centers will address the equally rising demand for power.
The U.S. Department of Energy, for example, has launched a “Better Buildings Challenge” initiative that encourages commercial buildings, institutions, and multi-level housing developments to reduce energy consumption 20 percent by 2020. Within a five-year period, partners in the program have contributed to a reduction in energy costs that has resulted in $1.3 billion in savings as well as the reduction of 10 million tons of harmful carbon emissions, according to the DOE.
Currently, 310 Better Buildings Challenge partners — which represent 34,000 buildings and facilities — have set goals to reduce energy use by at least 20 percent by 2020.
As part of that challenge as well as other initiatives,, companies are seeking innovative ways to decrease data centers’ environmental footprint. “The focus for data centers is reliability, and that requires uninterruptible electricity,” Equinix’s David Rinard said in an article for Reit.com.
Rinard, the senior director of global sustainability for the company, said innovations are focusing on reducing power dependency through infrastructure, air conditioning, and temperature control.
Enterprise companies that largely rely on data centers are leading the way in testing out innovations involving hydro, solar and wind to generate electricity — reducing dependence on carbon dioxide generating fuels. Google, eBay, Microsoft and Facebook are among those that are committing to use all green operations in their data centers.
Here are just a few recent initiatives:
– Apple, Facebook, eBay and Microsoft are among the companies who have stated a commitment to transition to all green data center operations. That trend is expected to become mandatory for other companies as the United States has pledged to have 50 percent of its electricity generation to come from clean sources by 2025.
– Apple is testing out two massive solar farms — each on 100 acres — to provide fuel using a multiple fuel-cell electricity generator that relies on bio-gas produced from nearby landfills. According to Apple, each of the farms is able to produce 42 million kilowatt hours of electricity.
– A data center operated by eBay in Utah relies on 30 Bloom Energy fuel cells, the same type that Apollo spacecraft depended upon for its flight to the moon. It also depends upon solar panels installed on the data center’s roof, and coal power in the area for backup only.
Want to learn why EMP shielding, FedRAMP certification, and Rated-4 data centers are important?
Download our infographic series on EMP, FedRAMP, and Rated-4!
|
<urn:uuid:c9abec8b-9a79-4e12-b4e4-3412a875efaa>
|
CC-MAIN-2022-40
|
https://lifelinedatacenters.com/data-center/green-trends/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00240.warc.gz
|
en
| 0.946521 | 575 | 2.953125 | 3 |
What’s a Bitcoin? A Bitcoin is a form of digital currency whose creator called it “a peer-to-peer, electronic cash system.” It is presently used where the parties to a transaction do not want to leave a digital trail, like credit card transactions, or any other trail for that matter. For this reason, the Bitcoin is presently mostly used for gambling, drug transactions, or speculation.
Background of the Bitcoin
On Wikipedia, one can find a detailed analysis of the creation of the Bitcoin, some historical timelines, and some of the caveats that one should be aware of before using this digital form of currency.
There are also many other articles and analyses available online describing this interesting creation of complex calculation and cryptographic protocol.
It’s newsworthy to note that some high-profile people have taken a serious interest in Bitcoins and have invested heavily in them. Remember the Winklevoss brothers? They are the wealthy twins who were able to successfully sue for a share of Facebook. They became more famous when their involvement with Facebook was portrayed in the movie “The Social Network.”
One of the Winklevoss brothers recently spoke about their investment in Bitcoins: “We have elected to put our money and faith in a mathematical framework that is free of politics and human error.”
As I see it, they are choosing to turn away from common currencies that are backed by the various countries that issued those currencies, and turn to a mathematically created currency that is based on a very complex model, one free of politics and government.
Flaws of the Bitcoin
It’s true that there is no electronic trail when dealing with Bitcoins. It’s sort of like delivering or receiving a bag of cash — no trails to be concerned with.
Yes, you can use Bitcoins to make online transactions where both parties agree to this form of currency. But presently, most online transactions are being done through credit cards and companies such as PayPal. In fact, online transactions are currently approaching 20 percent of total sales in the U.S. So in my estimation, there is no need for a newly created form of cash.
In any event, Bitcoins have their setbacks. On August 6, 2010, a major vulnerability was found in the Bitcoin protocol. Simply put, users could bypass Bitcoin’s restrictions for creating additional currency. Someone found this vulnerability a few days later, on August 15, and generated or created more that 184 billion Bitcoins that were sent to two addresses on the Bitcoin network.
Fortunately, this untoward event was rapidly spotted and the bug in the system was quickly addressed and fixed. However, the point is that it was exploited, and such exploitation can quickly affect the ever-changing value of the Bitcoin.
Gyrations in the Bitcoin’s Value
First off, let me state that most of us realize that all currencies fluctuate in value. For example, some countries are so economically hard-hit that they have to devaluate their currencies in order to sustain their country’s viability.
Also, most of us realize that currencies normally fluctuate on a day-to-day basis. Just take a look at the dollar versus the euro. This is a natural phenomenon based upon the everyday economic dynamics that affect countries and their currencies.
But how does Bitcoin’s value fluctuate? To answer that, I’m going to quote economist and Nobel Laureate Paul Krugman, from a recent statement about valuating the Bitcoin:
“It’s very peculiar, since Bitcoins are in a sense the ultimate fiat currency, with a value conjured out of thin air. Gold’s value comes in part because it has non-monetary uses, such as filling teeth and making jewelry; paper currencies have value because they’re backed by the power of the state, which defines them as legal tender and accepts them as payment for taxes. Bitcoins, however, derive their value, if any, purely from self-fulfilling prophecy, the belief that other people will accept them as payment.”
Bitcoin’s value is based upon perception, without regard to economics, unlike the way the currencies of most countries fluctuate. Since the countries of the world ultimately back their currencies, the economic and political fortunes of the various countries do in fact affect the perceived values of the various currencies. This is something that we can understand and deal with — we have been doing so for centuries.
What about the Bitcoin? How is it doing vis–vis fluctuations in its value? Not so well as far as I’m concerned. In early spring of this year the value of the Bitcoin gyrated like an out of control dervish. Its value at one point went up more than 300 percent, and then fell about 50 percent in a few short hours. Not a stable currency as far as I can tell.
As to other people accepting Bitcoins as payment, I read a CNN iReport dated April 16, 2013, that the country of Zimbabwe will make the Bitcoin its official currency. Please note that the CNN iReport producer states that the story has not been verified. As of this writing, I can’t find any valid independent verification of this story. I do note, however, that bitcoinmoney.com had a posting on its site that talked about Zimbabwe and said: “Why not imagine a situation where Bitcoin merges with M-pesa so you get mobile telephone money backed by a quasi-commodity standard like the Bitcoin? I think most Africans readily would accept that money.”
So, I’m not sure whether the story about Zimbabwe converting to Bitcoins was prompted by bitcoinmoney.com or not. Still, it’s all a very interesting phenomenon, created by mathematically gifted people coupled with modern-day technology.
As for me personally, I’m sticking with the dollar!
|
<urn:uuid:072ad016-4231-4045-8360-23053b3dfbf9>
|
CC-MAIN-2022-40
|
https://www.crmbuyer.com/story/the-digital-desires-that-bank-on-bitcoins-78053.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00240.warc.gz
|
en
| 0.962388 | 1,209 | 2.703125 | 3 |
We’re all bound to work with PowerPoint to create presentations at one point or another in our academic or professional lives. Unfortunately, many of us are still unsure of how to best utilize PowerPoint to create a presentation that is both engaging and informative.
Most professional presenters that use PowerPoint are in agreement that the most important thing to remember when creating a presentation is not to overwhelm your audience. Slides flooded with facts often confuse and bore, not engage, your audience. And even worse, text heavy slides are commonly read by the audience, and in the process, the speaker is ignored.
With this in mind, here are 6 suggestions to consider in order to create a more effective presentation:
- Telling a story is always an amazing way to present an idea. If you are a good storyteller, stick to a small number of slides to accentuate the story and let your speech do the talking.
- If you are presenting how-to’s or other instructional information, try creating simple slides that use a small number of words to make your point, and don’t be afraid to use a lot of slides.
- If you are presenting structured information, use bullet points, not sentences or paragraphs to help your viewers’ eyes scan through the slide and catch each point.
- If you can avoid it, don’t use Clip Art, animations or fancy slide transitions. When it comes down to it, they distract from your presentation and reduce the effectiveness of your message.
- Focus on branding your business by putting your logo on every page and keeping your presentation consistent with your color palette.
- Practice first! It’s usually quite obvious to an audience if you haven’t practiced your presentation using your slides before getting on stage. Avoid the embarrassment and get your timing and speech dialed in before you present.
By using these tips for your next presentation you should end up with a well-balanced and effective presentation that will intrigue your viewers and help establish you as an accomplished presenter.
For more information on how to best utilize PowerPoint for presentations, check out this Inc. article.
|
<urn:uuid:a8720f23-8f55-44c8-806e-ff7763a63b9a>
|
CC-MAIN-2022-40
|
https://www.ittropolis.com/6-tips-for-creating-more-effective-powerpoint-presentations/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00240.warc.gz
|
en
| 0.905295 | 434 | 2.65625 | 3 |
The Experimental Therapeutics Branch accelerated its research by turning to machine learning, AI and high-performance computing.
Developing new drugs is costly, complex and generally takes years to reach a point where novel treatments are ready to be validated by clinical tests—but scientists within the Walter Reed Army Institute of Research turned to high-tech tools to more rapidly funnel out drug compounds that could effectively target COVID-19.
“Rightfully, the people of the United States and the world want an answer fast,” Deputy Director of the institute’s Experimental Therapeutics Branch Maj. Brandon Pybus told Nextgov during a recent virtual panel. “So we looked into ways that we could shave time off of the front end of the drug discovery process, which can be pretty lengthy when it takes its normal course, because you, of course, have to not only prove that the drugs that you're developing are safe, but you have to prove that they're effective—and then they have to clear all of these regulatory paths.”
The Experimental Therapeutics Branch holds significant clout as a drug discovery and development program, including having a hand in every Food and Drug Administration-approved malaria prevention drug. As the pandemic emerged, Pybus said inside officials opted to partner with Southwest Research Institute, which had developed what he deemed “a really interesting and novel machine learning- and [artificial intelligence]-based approach to find molecules that can successfully bind to a target of interest in the virus.”
The Experimental Therapeutics team had identified rare, derived structures of the spikes on the surface of the novel coronavirus that are responsible for attaching and invading cells. Researchers shared them with SRI to try to use AI and machine learning to screen through a very large number of compounds that they “threw at the spikes” to see if they would bind in the virtual space to inhibit attachment. But the team hoped to confront 41 million compounds, a computationally intensive number that ultimately warranted the support of a supercomputer to more quickly puzzle out those with the possibility to be effective.
Pybus and his team turned to the Defense Department’s High-Performance Computing Modernization Program, which he noted is essentially the Pentagon’s portal into all-things supercomputing. With the group’s publicly-provided resources, which are also being offered up to the U.S.-led high-performance computing consortium, Pybus and the team were able to swiftly screen those millions of compounds and narrow down and advance more than 100 into benchtop testing against the virus.
“What we saw is that—like we had hoped—it did really enrich the hit rate, which shaved a tremendous amount of time, years, off the front end of the drug discovery development,” he explained. “I mean, it's really remarkable. As you can imagine, a whole-of-government effort, when they first take off, is a little bit like the wild west. But everybody knows the desired end state and everybody was very open—and still is very open—to collaborating to achieve that end state faster.”
A traditional program would have required screening each of the millions of compounds but supplemented by the tech, Pybus told Nextgov Wednesday that the team at this point has whittled 41 million down to 13 hits for more advanced screening.
“That’s pretty great for three or four months,” he noted. “The [high-performance computing] really made that possible.”
Some on Pybus’ team have had to work inside the facility to create and test new chemical entities and more, but a large part of the work—as with most federal agencies and industry shops across the U.S.—can be completed remotely. The department really “stepped up,” according to Pybus, in terms of providing technologies to enable teams to work efficiently from out-of-office locations.
“I think that it was a trial that we had to go through—it was necessary because we had to maintain physical distance for the safety of all the staff and in the community-at-large,” he said. “But it has gone very well, and I think that it has shown us that we can do a lot more remotely than we would have imagined before.”
Loads of lessons are being learned along the way, but top of mind is the impact of leaning into a little creativity and technical support to move faster and drive new outcomes. Pybus isn’t a computer scientist—but as a biochemist and structural biologist, he’s now experienced firsthand how emerging and advanced resources offer researchers across many fields, exploring a range of therapeutics, “a very powerful toolset.”
“It's a convergence of a lot of technologies that aren't necessarily new, but I do like to think that, right now, they are converging in a new way and very aggressively, which is good to see,” he said.
Though he considers the real measure of success for the work to be “when that compound that the computer predicts goes into that plate and kills the virus,” the metrics that the team has used to date involve considering whether the number of hits the team’s uncovered has been boosted by the effort.
“And I would say it has been—so it’s been successful so far,” Pybus said.
|
<urn:uuid:99ba4109-6207-454f-b5f6-b21e1d34c5b8>
|
CC-MAIN-2022-40
|
https://www.nextgov.com/emerging-tech/2020/07/walter-reed-scientists-use-artificial-intelligence-screen-drugs-potentially-treat-covid-19/166774/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00240.warc.gz
|
en
| 0.973739 | 1,127 | 2.515625 | 3 |
In the era of the Internet, TCP/IP and its suite of associated protocols has become the de facto standard for networking heterogeneous computer systems. All major operating systems now support TCP/IP and it often serves as the bridge that allows otherwise incompatible systems to communicate and share information. FTPone of the oldest members of the TCP/IP suiteis still the most useful member of that suite. Originally introduced in Request for Comment (RFC) 454 back in 1973 (and revised several times since then), FTP remains an elegant and simple way to transfer files between all types of computers. The current FTP RFC is 959, which was released in October 1985. It can be seen at http://info.internet.isi.edu/in-notes/rfc/files/rfc959.txt. Several other RFCs have also been added to the original protocol.
Many AS/400 shops have already discovered FTP as a way to move files between computers. FTP can be used to transfer database query result sets, relocate system logs, or back up files to a centralized system. Transfers can be scheduled to run overnight, often as part of nightly batch processing. (Editors note: For a list of AS/400 Network Expert articles that discuss FTP in an AS/400 environment, see the Related Materials section at the end of this article.)
However, since most AS/400 shops also have Windows NT installed, the question becomes, How do we set up FTP on Windows NT 4.0? NT 4.0 can act as an FTP client, copying files to and from an AS/400 FTP server. FTP scripts and the Windows NT scheduler service can be used to automate the process. Windows NT 4.0 can also act as an FTP server by installing Internet Information Server (IIS) on your NT Server or the Personal Web Server component on an NT Workstation. IIS and Personal Web Server also install other Internet services, however, and security becomes a concern once they are installed. To answer these questions, this article covers the right way to use Windows NT
4.0 as both an FTP client and server in your AS/400 network.
Windows NT As an FTP Client
If you have installed Windows NT 4.0 and the TCP/IP protocol, you already have an FTP client. Windows NT includes a command-line FTP client as part of its standard TCP/IP network installation. Simply type FTP on the Windows NT command line and it will bring up the FTP client. Assuming that the AS/400 has its FTP server installed and running, you can then open a connection from the client to the server and begin transferring files. If you prefer a graphical interface, there are many Windows NT freeware and shareware FTP
clients that are available on the Web, such as WS_FTP from Ipswitch (www.ipswitch.com/index.html). The command-line client will suffice for most transfers, however.
If youre not familiar with the standard FTP client operation, typing help at the FTP> prompt will display a list of available commands. Because these commands are relatively self-explanatory even to someone who has not used FTP before, I will not go into all of them. However, several commands should be noted, as they are especially helpful when constructing FTP scripts. The standard commands of PUT and GET are used to transfer files to and from an FTP server. Be sure to use the BINARY or ASCII command to tell the client what kind of data you are transferring. When in doubt, set the data type to BINARY, as that performs a bit-by-bit transfer of files. ASCII mode changes the carriage returns/line feeds in a file depending upon the receiving platform, which could corrupt a non-ASCII file.
Also, if you use the PROMPT command followed by either the MPUT or MGET command, you will be able to use wildcards when transferring files, while the standard PUT and GET require a specific file name. Other commands (such as OPEN, CLOSE, CD, LCD) are used to open and close connections, change directories, and manipulate file names.
One particularly useful feature of the Microsoft FTP client is its ability to use scripts to automate common procedures. To use a script, create a text file that lists the FTP commands to be executed, and pass that file as an argument to the FTP client by using thes switch. For example, the following command will execute the ftpcmd.txt file:
C:> ftp -s:ftpcmd.txt
Figure 1 shows the listing of ftpcmd.txt. The FTP client will open a connection to ftp.myserver.com and sign in using the listed ID and password. Once connected, it will change to the appropriate directory on both the server and the client and change the transfer mode to binary. Then it will use the PROMPT and MPUT commands to upload the entire contents of the directory to the server. This will not upload the contents of any subdirectories, however. This whole process could be automated by incorporating the above command line into a batch file and using the Windows NT Scheduler service. For example, the following command will cause the Windows NT Scheduler to execute the c:ftmcmd.bat script at midnight on the first day of every month.
C:> at 00:00 /every:1 c:ftpcmd.bat
Heres one final note on security: FTP transfers all user names and passwords over the network as clear text. Be careful using automated transfers over an insecure network as a hacker could capture the network packets and read the login information as well as the data from the file. If security is a concern, be sure to encrypt the data file before sending it over the network. User IDs and passwords are still vulnerable however, so it is best to use a custom ID that has no other rights on the network. A secure FTP was outlined in RFC 2228 in October 1997, but this has not been incorporated into the standard Microsoft client at this time.
Windows NT As an FTP Server
Windows NT 4.0 server does not come with an FTP server installed by default. In order to use NT as an FTP server, you must first obtain the Windows NT Option Pack. This addon is now included with new Windows NT purchases or you can download it for free from the Microsoft Web site at www.microsoft.com/ntserver/nts/downloads/recommended/NT4OptPk/default.asp. The Option Pack includes several utilities for Windows NT, including Microsoft IIS 4.0 and
Microsoft Transaction Server 2.0. On Windows NT Server, all of the features of the Option Pack can be used, including IIS with virtual site hosting. If you install the Option Pack on Windows NT Workstation, however, many of the more advanced IIS features are not available, such as IP address filtering and multiple site hosting. Microsoft has dubbed this more limited version Personal Web Server 4.0. For the purposes of setting up a simple FTP server, either platform will do. The CD-ROM version of the Option Pack also includes Windows NT Service Pack 3 and Internet Explorer 4.01, both of which must be installed before you can begin the Option Pack setup. Of course if you already have the latest versions of these products, such as Service Pack 4 and Internet Explorer 5, you are ready to begin.
Not only does the Option Pack include some extra features, but IIS also installs several services onto your Windows NT 4.0 server. Start installing the Option Pack by running the X:NTOPTPAKEnx86Winnt.SRVSetup.exe program where X: is equal to the drive letter of your CD-ROM drive. If you are just interested in the FTP service, you will want to do a Custom install after starting the setup program. From the list of available components, only select IIS, Microsoft Management Console, and the NT Option Pack Common Files. You should also limit the Internet services installed by clicking on IIS and using the Show Subcomponents button to prevent the installation of a Web server. By default, IIS will try to install a Web server along with the FTP server. In June 1999, a company made headlines by publishing a program on the Web that exploited a security hole in the IIS web server. Although Microsoft released a patch soon after, the best protection is to simply avoid installing the Web server if youre not intending to use it. For basic FTP, only the FTP Server and Internet Service Manager need to be checked in the IIS subcomponents. IIS setup will now prompt you to create a directory for the default FTP site. The default is C:INETPUBFTPROOT. All files and directories destined for FTP should be placed under this directory.
With the setup program completed, an icon will be created for the Internet Service Manager. (You can find this icon on the Start menu under Programs/Windows NT 4.0 Option Pack.) This administrative tool is really a snap-in for the Microsoft Management Console (MMC). With Windows 2000, the MMC will be used to control all functions on the server. IIS 4.0 was one of the first programs to use the Microsoft Management Console, so most of the same configuration options Ill discuss will also apply on the Windows 2000 platform. Once the console is started, the Internet Information Server screen shown in Figure 2 will appear. Multiple virtual FTP sites can be created on the same server, as long as the proper Domain Name Server (DNS) entries exist for each entry. The FTP service can be controlled by using the play, pause, and stop buttons that appear on the toolbar. Right-clicking on a particular FTP site, followed by a left-click on the Properties menu item, will allow you to configure that sites properties (Figure 3), including setting up security.
The FTP configuration settings have options that can be used to secure an FTP site. On the first tab of your FTP properties panel, FTP Site, an administrator can specify which TCP port is used to connect to the server. The default port of 21 is well known, but, for security purposes, you can select a higher port number so that only those people who know the correct port will be able to access the server. The users will have to reconfigure their FTP clients to connect using the new port number. Figure 3 shows the Security Accounts tab, which contains one of the most important security settings for IIS. By default, anonymous logins to the FTP server are allowed. Unless the server is to be used by the general public, this option should not be enabled. Clearing the check box means that only users with a valid Windows NT logon account will be able to access the server. If you do decide to allow anonymous connections, you will want to be sure to limit the rights of the IUSR_servername account (where servername is equal to the name of your Windows NT server, IUSR_PETPC32 in my example). This account is created during the IIS install, and it is used to define the permissions that an anonymous FTP user will have to files on an NTFS partition, as well as to define system rights for that user. For security purposes, this
account should be configured so that it only has access to the directories you will be publishing.
There is one other FTP security mechanism on your IIS that you need to control. Figure 4 shows the Directory Security tab, which allows an administrator to specify which IP addresses will be allowed to connect to this FTP server. Once again, the default setting allows anyone to access the FTP site. This tab also illustrates one of the differences between IIS and Personal Web Server: Personal Web Server on Windows NT Workstation is not able to implement IP address restrictions.
Once security is configured, the only remaining task is to define which directories are to be published by the FTP server. Figure 5 shows the Home Directory, which specifies the local directory that will act as the root for the FTP server. A network drive may also be used as the local directory by specifying the Universal Naming Convention (UNC) path instead of a local directory. Additional security can be implemented by making the directory read-only. With security configured and the directories published, the FTP site is now open for business.
The FTP Common Ground
While the FTP protocol lacks many of the finer features of proprietary file sharing systems, it remains the one of the easiest and most readily available ways to transfer files between computers. FTP can be used to solve numerous problems on any network, such as the nightly transport of a DB2 query set to a users desktop PC. Using FTP on Windows NT will provide yet another way for your AS/400s and desktop PCs to coexist peacefully
Internet Engineering Task Force Web Site: www.ietf.org (Contains links to RFC pages.) Microsoft Training and Certification Web page: www.microsoft.com/train_cert/Courses/936dfinal.htm (Contains information on Microsoft Official Curriculum Course 936: Creating and Managing a Web Server Using Microsoft Internet Information Server 4.0.) Advice: Client Access Data Transfer vs. FTP: When to Use Each Technique, Joe Hertvik, AS/400 Network Expert, July/August 1999 How to Create Blazing FTP File Transfers, Daniel Wood, AS/400 Network Expert, March/April 1999
Figure 1: This is a sample listing of a FTP scripting file that can be run under the Windows NT Scheduler service.
Figure 2: You can view and control your Windows NT FTP sites through the Internet Information Server screen.
Figure 3: Be sure to configure your anonymous connections correctly on the Security Accounts panel to lock down your Windows NT FTP server.
Figure 4: The Directory Security panel allows you to grant FTP access based on a clients IP addresses. (This panel is not available for Personal Web Server users.)
Figure 5: The Home Directory panel specifies the local directory that acts as the root directory for your FTP server.
|
<urn:uuid:e787f6f5-caf8-4ca0-a80f-b7d2d900c389>
|
CC-MAIN-2022-40
|
https://www.mcpressonline.com/programming-other/microsoft/setting-up-windows-nt-to-ftp-to-the-as400/print
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00240.warc.gz
|
en
| 0.900209 | 2,938 | 2.78125 | 3 |
Sites of colleges and universities are constantly subject to DDOS attacks. But who does it? do students really need to drop sites?”
Hacking is a major vice of this generations young adult. Usually, it starts with the basics like hacking Wi-Fi password or a Facebook or Instagram account for pranks or just to be mean to another person. However, depending on what is at stake the hacking becomes more sophisticated. The average man’s question is usually “is this website safe?” when dealing with special information.
What is a DDoS attack? You know, yea?
DDoS is an abbreviation for Distributed Denial of Service. It is a category of DOS attack where multiple compromised systems, which are often infested with a Trojan, are used to target a single system producing a Denial of Service (DoS) attack.
Victims of a DDoS bout are usually both the end targeted system and all systems maliciously used and controlled by the hacker.
Several types of distributed denial-of-service attack techniques exhaust or saturate the targeted system in diverse ways. There are three common types of attacks: protocol, volumetric and application attacks.
Each of these can stay as long as minutes to months and can range from an inconspicuous amount of traffic to more than the highest on record, reported at 1.35 terabits per second.
Who makes DDoS attacks on College and Universities
A critical and thorough analysis has shown that major cyber-attacks on college and university sites are primarily people on the inside. This suggests that the perpetrators of such acts are students or even staffs of the university.
A student who would not study or write an essay for an exam but would rather spend time on the computer or decide to hack the system sometimes to protest the exams, rebel against the system or even to get the exam questions beforehand.
Accrediting cyber-attacks is often a difficult task but JISC (a not-for-profit digital support service for advanced education) studied a lot of DDoS attacks on universities and concluded that indistinct patterns show that these occurrences happen during term-time and working days and drops when students have a break.
According to John Chapman, head of security operations at JISC, this style could indicate that the perpetrators are students or staff, or other people acquainted with the academic cycle.
There are different approaches a hacker could use. The hacker may decide to Command-and-control server or make use of Botnet and bots to gain control of the system and weaken functionality.
Why makes DDoS attacks on College and university sites?
An assault to a person usually stems from a feeling of wanting to hurt someone. When a college or university site is attacked there may be a number of reasons considering that most times the perpetrators are student or staff.
One school of thought is of the opinion that the occurrences are a means of settling grudges of which in time past there have been cases of such.
For example, the University of London survived 15 years of the attack until September 2015, when an ex-staff member started a cyber-attack against the senior manager. According to him, the senior manager was responsible for dismissal, so he attempted to take the organization down in the process.
Sometimes grudges may be held by a group of students, however not all are hackers, therefore some of the buy DDoS in order to make the attack.
It took combined forces of law enforcement agencies around the world to take down ‘Webstresser’. This is a DDoS for hire service which illegitimately traded kits that were used for overpowering networks. With this taken down, there was seemingly a decline in DDoS outbreaks in universities and college sites
Other times, the attacks are done just for the fun of it or the credit to a hacker. One instance was when the attacker launched an attack in order to disadvantage a rival in online games, in this case, the DDoS attack against a university network which took place across four nights in a row was found to be specifically targeting halls of residence. The hacker was doing this just for the fun of it.
Who benefits from DDoS attacks on College and University sites?
Cyber attacks are done for different reasons and in the end, there are different benefits. However, the benefits accrued mostly depend on the motive for the initial attack.
For most strikes, the perpetrator gets to quench the thirst for revenge. If the perpetrator buys DDoS, the seller gets paid and also takes credit for it.
Since the agenda for the hack is fulfilled the organizer of the campaign is satisfied and if the organizer is the hacker get such a person gets to take credit for is work although most times on the dark web. These occurrences usually put the college and university sites in a precarious position as they are vulnerable and open to other forms of attack on the system.
|
<urn:uuid:e32cf281-8b2a-4941-bf8f-5a3c81e6970a>
|
CC-MAIN-2022-40
|
https://gbhackers.com/ddos-attack-colleges-universities/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00240.warc.gz
|
en
| 0.966106 | 1,002 | 2.65625 | 3 |
The IPSEC NAT Traversal feature introduces IPSEC traffic to travel through Network Address Translation (NAT) or Port Address Translation (PAT) device in the network by addressing many incompatibilities between NAT and IPSEC.
NAT Traversal is a UDP encapsulation which allows traffic to get the specified destination when a device does not have a public address.
IPSEC provides confidentiality, authenticity and integrity. However, problem occurs when a NAT device does its NAT translations, however the address of the source within the IP payload does not match the source address of the IKE packet as it is replaced by the address of the NAT device during NAT translation. Authenticity, integrity will break which will cause the packet by the remote peer to be dropped.
NAT and IPSEC are incompatible with each other and it can be resolved by using NAT Traversal. NAT Traversal adds a UDP header which encapsulates the IPSEC ESP packet. New UDP packet is not encrypted and is treated just like a normal UDP packet the NAT device can make the required changes and process the message which would now overcome the problem.
Related – Proxy vs NAT
NAT and IPSEC Incompatibility and Solution
- Internet Key Exchange (IKE) IP Address and NAT
This incompatibility applies only when IP addresses are used as a search key to find a pre shared key. Modification of the IP source or destination addresses by NAT or reverse NAT results in a mismatch between the IP address and the pre shared key.
- Embedded IP Addresses and NAT
Because the payload is integrity protected, any IP address enclosed within IPSEC packets cannot be translated by NAT because embedded IP address is used by FTP, SNMP, LDAP and SIP.
UDP encapsulation addresses incompatibility issues between IPSEC and NAT.
- Incompatibility between IPSEC ESP and PAT Resolved
To prevent this situation UDP encapsulation is used to hide the ESP packet behind the UDP header. PAT treats the ESP packet as a UDP packet and the ESP packet as a normal UDP packet.
- Incompatibility between Checksums and NAT Resolved
The checksum value is always zero. This value prevents an intermediate device from validating the checksum result against the packet checksum. Resolving the TCP UDP checksum issue by NAT changes the IP source and destination addresses.
- Incompatibility between IKE Destination Ports and PAT Resolved
PAT changes the port in the new UDP header for translation and leaves the original payload as it is. In phase 1 setup, three ports must be open on the device that is doing NAT for VPN –
- UDP port 4500 for NAT traversal
- UDP port 500 for IKE and
- IP protocol 50 or ESP
After this, the data is sent using IPSEC over UDP which is effectively NAT Traversal. The receiving peer first De-capsulate the IPSEC packet from its UDP packet and then processes the traffic as a standard IPSEC packet.
Benefits of NAT Traversal
Before the NAT traversal, a standard IPSEC virtual private network (VPN) tunnel would not work if there were one or more NAT or PAT device in the path of the IPSEC packet. NAT IPSEC feature aware allows remote access users to build IPSEC tunnels to home gateways. The IPSEC NAT Transparency feature permits IPSEC traffic to travel through NAT or PAT device in the network by encapsulating IPSEC packets in a User Datagram Protocol (UDP) wrapper, which allows the packets to travel across NAT configured devices.
- Configuring NAT Traversal
NAT Traversal is a feature that is auto detected and enabled by default. There are no configuration steps. If both devices are NAT-T capable, NAT Traversal is auto detected and auto negotiated.
- Disabling NAT Traversal
To disable NAT traversal, following command is used –
#no crypto IPSEC NAT-transparency udp-encapsulation
NAT-T is a method of assigning Public IP address and encountering problem when data protected by IPsec passes through a NAT device and changes to the IP address cause IKE to discard packets. During the Phase 1 exchanges, NAT-Traversal adds a UDP encapsulation to IPsec packets so they are not discarded after address translation. NAT-T encapsulates both IKE and ESP traffic within UDP port 4500 used as both the source and destination port.
Related – NAT CHEATSHEET
|
<urn:uuid:ff9b2b56-54f2-4543-b13c-a4216061331a>
|
CC-MAIN-2022-40
|
https://networkinterview.com/what-is-nat-traversal/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00240.warc.gz
|
en
| 0.894328 | 917 | 2.65625 | 3 |
Researchers this week published information about a newfound, serious weakness in WPA2 — the security standard that protects all modern Wi-Fi networks. What follows is a short rundown on what exactly is at stake here, who’s most at-risk from this vulnerability, and what organizations and individuals can do about it.
This author has long advised computer users who have Adobe’s Shockwave Player installed to junk the product, mainly on the basis that few sites actually require the browser plugin, and because it’s yet another plugin that requires constant updating. But I was positively shocked this week to learn that this software introduces a far more pernicious problem: Turns out, it bundles a component of Adobe Flash that is more than 15 months behind on security updates, and which can be used to backdoor virtually any computer running it.
Researchers have uncovered an extremely critical vulnerability in recent versions of OpenSSL, a technology that allows millions of Web sites to encrypt communications with visitors. Complicating matters further is the release of a simple exploit that can be used to steal usernames and passwords from vulnerable sites, as well as private keys that sites use to encrypt and decrypt sensitive data.
On Thursday, the world learned that attackers were breaking into computers using a previously undocumented security hole in Java, a program that is installed on hundreds of millions of computers worldwide. This post aims to answer some of the most frequently asked questions about the vulnerability, and to outline simple steps that users can take to protect themselves.
|
<urn:uuid:3c675abe-836d-4c5a-b25f-ed46fb3452c0>
|
CC-MAIN-2022-40
|
https://krebsonsecurity.com/tag/cert/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00440.warc.gz
|
en
| 0.945413 | 303 | 2.71875 | 3 |
Data center energy efficiency and sustainability have become mandatory considerations among data center managers over the past few years. While big tech firms, such as Amazon, Facebook, Google, and Microsoft, run their data center operations very efficiently and have committed to transitioning to 100% renewable energy, globally, data center power consumption amounts to approximately 416 terawatts — roughly 3% of all electricity generated on the planet.
Worldwide, internet traffic tripled between 2015 and 2019 and is expected to double by 2022, according to Cisco. Meanwhile, the proliferation of AI applications among enterprises and the anticipated surge in the number of sensors expected to be deployed with the mainstream adoption of the IoT raises the questions of how much more energy usage in data centers will increase, and what this growth will mean for the release of carbon emissions globally.
Concerns about sustainability and the environment have led to renewed focus on varying metrics that assess the energy efficiency of data centers and the emergence of new standards. So, let’s take look at the most important and newest metrics to measure, determine, and validate the efficiency of the data center.
Many Metrics, But Much Progress Needed in Data Center Efficiency
Perhaps the most well-known metric used to gauge the energy efficiency of a data center is
power usage effectiveness (PUE). Introduced by The Green Grid in 2007, PUE is a ratio of the amount of power needed to drive and cool the data center versus the power draw from the IT equipment in the facility.
For the mathematicians in our audience, such an equation looks like this:
PUE = Total Facility Energy ÷ IT Equipment Energy
Expressed as a ratio, the overall efficiency of a given data center improves as the quotient gets closer to 1. Put another way, a lower PUE is better, because it indicates a smaller amount of energy is used for operations other than running the processors. To counter both rising electricity costs and concerns about carbon dioxide emissions, hyperscale cloud companies and larger colocation facilities have achieved annual or design PUE figures between 1.1 and 1.4.
The Uptime Institute has tracked industry average PUE numbers at various intervals for more than a decade, and its Global Data Center Survey 2019 report found that, for the first time, there was no recorded improvement in energy efficiency. In fact, the efficiency ratings deteriorated slightly, from an average PUE of 1.58 in 2018 to 1.67 in 2019. Possible explanations range from extreme temperatures in regions of the world that necessitated increased use of cooling to data centers with higher density racks that may have caused cooling systems to work harder to cooling inefficiencies due to poor layout of servers and data centers operating below their optimal designs.
Data center infrastructure efficiency (DCiE) is another Green Grid energy efficiency benchmark that compares a data center’s infrastructure to its existing IT load. Expressed as a percentage, DCiE is calculated by dividing IT equipment power by total facility power. For example, if a data center’s total IT load is 94 kW and its total facility load is 200 kW, that data center yields an efficiency score of 47%.
Developed by the U.S. Green Building Council (USGBC), Leadership in Energy and Environmental Design (LEED) is the most widely used green building certification system in the world. LEED uses a set of rating systems to provide independent verification of a building’s green features and sustainability initiatives, offering building owner-operators and designers the framework and metrics to ensure their properties are environmentally responsible and use resources efficiently. In addition to data center energy efficiency, LEED considers indoor environmental quality, location, materials, and water usage. Interestingly enough, only a small percentage of data centers worldwide achieve LEED certification.
Founded in 1894, ASHRAE is a global society advancing human well-being through sustainable technology for the built environment. For the data center sector, ASHRAE publishes specific guidelines and standards for temperature and humidity operating ranges of IT equipment.
Additionally, there is the PAR4 Energy Efficiency Rating System, which was developed by data center software provider, Power Assure, and is tested by the Underwriters Laboratories (UL). PAR4 is unique in that it measures server power consumption in several meaningful and more nuanced ways, including monitoring idle power, peak power, and total utilization power. This approach is not only logical but timely, since underutilized servers are a primary source of energy waste in data centers, especially during periods of low application demand.
What these and other standards and metrics make clear is that there are many opportunities to reduce energy consumption in data centers and lower costs while maintaining a safe environment for IT equipment and, in the process, curtail carbon emissions. But because you can’t manage what you don’t measure, software solutions, such as Intel® Data Center Manager, are needed to enable facilities and IT professionals to intelligently assess energy consumption. These tools provide real-time power and thermal consumption data, giving data center managers the clarity needed to lower power usage, safely increase rack density, and prolong operation during outages.
Moreover, many data center administrators seeking to prevent infrastructure failures will overcool their systems and waste valuable energy because they lack visibility into actual server temperatures and power consumption. By aggregating server-inlet temperature data into thermal maps, data center management software solutions allow IT staff to monitor the effectiveness of their cooling solutions and airflow design. Data center management solutions also help identify underused servers, which not only saves power but helps to identify opportunities for consolidation and virtualization.
According to a new Uptime Institute report, efforts to improve the energy efficiency of the mechanical and electrical infrastructure of the data center are producing only marginal improvements. Now is the time for IT staff and facilities administrators to turn to data center management solutions to gain clarity into power consumption, significantly reduce energy use and carbon footprint, and slash mounting energy bills.
|
<urn:uuid:343ada0b-8414-4826-90ef-9e9e276a58bb>
|
CC-MAIN-2022-40
|
https://www.missioncriticalmagazine.com/articles/92881-data-center-management-solutions-reduce-energy-consumption
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00440.warc.gz
|
en
| 0.93303 | 1,204 | 3.09375 | 3 |
The most obvious and recognizable difference between IPv4 and IPv6 is the IPv6 address. An IPv4 address is 32 bits and expressed in dotted-decimal notation, whereas an IPv6 address is 128 bits in length and written in hexadecimal. However, there are many other differences between the two protocol addresses. IPv6 includes new address types as well as changes to familiar address types.
In this chapter, you will become familiar with reading IPv6 addresses. You will also learn how to represent many IPv6 addresses with fewer digits, using two simple rules.
This chapter examines all the different types of IPv6 addresses in the unicast, multicast, and anycast categories. Some addresses, such as global unicast, link-local unicast, and multicast addresses, have more significance in IPv6. These addresses are examined more closely in Chapter 5, “Global Unicast Address,” Chapter 6, “Link-Local Unicast Address,” and Chapter 7, “Multicast Addresses.”
Representation of IPv6 Addresses
IPv6 addresses are 128 bits in length and written as a string of hexadecimal digits. Every 4 bits can be represented by a single hexadecimal digit, for a total of 32 hexadecimal values (016 through f16 ). You will see later in this section how to possibly reduce the number of digits required to represent an IPv6 address. The alphanumeric characters used in hexadecimal are not case sensitive; therefore, uppercase and lowercase characters are equivalent. Although IPv6 address can be written in lowercase or uppercase, RFC 5952, A Recommendation for IPv6 Address Text Representation, recommends that IPv6 addresses be represented in lowercase.
As described in RFC 4291, the preferred form is x:x:x:x:x:x:x:x. Each x is a 16-bit section that can be represented using up to four hexadecimal digits, with the sections separated by colons. The result is eight 16-bit sections, or hextets, for a total of 128 bits in the address, as shown in Figure 4-1. Figure 4-1 also shows an example of IPv6 addresses on a Windows host and a Mac OS host. These addresses and the format of these addresses will be explained in this chapter.
Figure 4-1 Preferred Form of IPv6 Address
The longest representation of the preferred form includes a total of 32 hexadecimal values. Colons separate the groups of 4-bit hexadecimal digits.
The unofficial term for a section of four hexadecimal values is a hextet, similar to the term octet used in IPv4 addressing. An IPv6 address consists of eight hextets separated by colons. As Figure 4-1 illustrates, each hextet, with its four hexadecimal digits, is equivalent to 16 bits. For clarity, the term hextet is used throughout this book when referring to individual 16-bit segments. The following list shows several examples of IPv6 addresses using the longest representation of the preferred form:
0000:0000:0000:0000:0000:0000:0000:0000 0000:0000:0000:0000:0000:0000:0000:0001 ff02:0000:0000:0000:0000:0000:0000:0001 fe80:0000:0000:0000:a299:9bff:fe18:50d1 2001:0db8:1111:000a:00b0:0000:9000:0200 2001:0db8:0000:0000:abcd:0000:0000:1234 2001:0db8:cafe:0001:0000:0000:0000:0100 2001:0db8:cafe:0001:0000:0000:0000:0200
At first glance, these addresses can look overwhelming. Don’t worry, though. Later in this chapter, you will learn a technique that helps in reading and using IPv6 addresses. RFC 2373 and RFC 5952 provide two helpful rules for reducing the notation involved in the preferred format, which will be discussed next.
Rule 1: Omit Leading 0s
One way to shorten IPv6 addresses is to omit leading 0s in any hextet (that is, 16-bit section). This rule applies only to leading 0s and not to trailing 0s; being able to omit both leading and trailing 0s would cause the address to be ambiguous. Table 4-1 shows a list of preferred IPv6 addresses and how the leading 0s can be removed. The preferred form shows the address using 32 hexadecimal digits.
Table 4-1 Examples of Omitting Leading 0s in a Hextet*
|Leading 0s omitted|| 0: 0: 0: 0: 0: 0: 0: 0
|Leading 0s omitted|| 0: 0: 0: 0: 0: 0: 0: 1
|Leading 0s omitted||ff02: 0: 0: 0: 0: 0: 0: 1
|Preferred||fe80: 0000: 0000: 0000:a299:9bff:fe18:50d1|
|Leading 0s omitted||fe80: 0 : 0: 0:a299:9bff:fe18:50d1
|Preferred||2001: 0db8: 1111:000a:00b0:0000:9000:0200|
|Leading 0s omitted||2001: db8: 1111: a: b0: 0:9000: 200
|Preferred||2001: 0db8: 0000: 0000:abcd:0000:0000:1234|
|Leading 0s omitted||2001: db8: 0: 0:abcd: 0: 0:1234
|Preferred||2001: 0db8: aaaa: 0001:0000:0000:0000:0100|
|Leading 0s omitted||2001: db8: aaaa: 1: 0: 0: 0: 100
|Preferred||2001: 0db8: aaaa: 0001:0000:0000:0000:0200|
|Leading 0s omitted||2001: db8: aaaa: 1: 0: 0: 0: 200
|* In this table, the 0s to be omitted are in bold. Spaces are retained so you can better visualize where the 0s were removed.|
It is important to remember that only leading 0s can be removed; if you deleted trailing 0s the address would be incorrect. To ensure that there is only one correct interpretation of an address, only leading 0s can be omitted, as shown in the following example:
Incorrect (trailing 0s):
Correct (leading 0s):
Rule 2: Omit All-0s Hextets
The second rule for shortening IPv6 addresses is that you can use a double colon (::) to represent any single, contiguous string of two or more hextets (16-bit segments) consisting of all 0s. Table 4-2 illustrates the use of the double colon.
Table 4-2 Examples of Omitting a Single Contiguous String of All-0s Hextets*
|(::) All-0s segments||::|
|(::) All-0s segments||::0001|
|(::) All-0s segments||ff02::0001|
|(::) All-0s segments||fe80::a299:9bff:fe18:50d1|
|(::) All-0s segments||2001:0db8:1111:000a:00b0::0200|
|(::) All-0s segments||2001:0db8::abcd:0000:0000:1234|
|(::) All-0s segments||2001:0db8:aaaa:0001::0100|
|(::) All-0s segments||2001:0db8:aaaa:0001::0200|
|* In this table, the 0s in bold in the preferred address are replaced by the double colon.|
Only a single contiguous string of all-0s segments can be represented with a double colon; otherwise, the address would be ambiguous, as shown in this example:
Incorrect address using two double colons:
Possible ambiguous choices:
2001:0000:0000:0000:0000:abcd:0000:1234 2001:0000:0000:0000:abcd:0000:0000:1234 2001:0000:0000:abcd:0000:0000:0000:1234 2001:0000:abcd:0000:0000:0000:0000:1234
As you can see, if two double colons are used, there are multiple possible interpretations, and you don’t know which address is the correct one.
What happens if you have an address with more than one contiguous string of all-0s hextets—for example, 2001:0db8:0000:0000:abcd:0000:0000:1234? In that case, where should you use the single double colon (::)?
RFC 5952 states that the double colon should represent:
The longest string of all-0s hextets.
If the strings are of equal length, the first string should use the double colon (::) notation.
Therefore, 2001:0db8:0000:0000:abcd:0000:0000:1234 would be written 2001:0db8:: abcd:0000:0000:1234. Applying both Rules 1 and 2, the address would be written 2001:db8::abcd:0:0:1234.
Combining Rule 1 and Rule 2
You can combine the two rules just discussed to reduce an address even further. Table 4-3 illustrates how this works, showing the preferred address, application of rule 1, and application of rule 2. Again, spaces are left so you can better visualize where the 0s have been removed.
Table 4-3 Examples of Applying Both Rule 1 and Rule 2
|Leading 0s omitted||0: 0: 0: 0: 0: 0: 0: 0|
|(::) All-0s segments||::|
|Leading 0s omitted||0: 0: 0: 0: 0: 0: 0: 1|
|(::) All-0s segments||::1|
|Leading 0s omitted||ff02: 0: 0: 0: 0: 0: 0: 1|
|(::) All-0s segments||ff02::1|
|Leading 0s omitted||fe80: 0: 0: 0:a299:9bff:fe18:50d1|
|(::) All-0s segments||fe80::a299:9bff:fe18:50d1|
|Leading 0s omitted||2001: db8:1111: a: b0: 0:9000: 200|
|(::) All-0s segments||2001:db8:1111:a:b0::9000:200|
|Leading 0s omitted||2001: db8: 0: 0:abcd: 0: 0:1234|
|(::) All-0s segments||2001:db8::abcd:0:0:1234|
|Leading 0s omitted||2001: db8:aaaa: 1: 0: 0: 0: 100|
|(::) All-0s segments||2001:db8:aaaa:1::100|
|Leading 0s omitted||2001: db8:aaaa: 1: 0: 0: 0: 200|
|(::) All-0s segments||2001:db8:aaaa:1::200|
Table 4-4 shows the same examples as in Table 4-3, this time showing just the longest preferred form and the final compressed format after implementing both rules.
Table 4-4 IPv6 Address Preferred and Compressed Formats
|Preferred Format||Compressed Format|
Even after applying the two rules to compress the format, an IPv6 address can still look unwieldy. Don’t worry! Chapter 5, “Global Unicast Address,” shows a technique that I call the 3–1–4 rule. Using that rule makes IPv6 global unicast addresses (GUAs) easier to read than an IPv4 address and helps you recognize the parts of a GUA address.
|
<urn:uuid:e02497ad-fa4c-4acb-891e-29290c510994>
|
CC-MAIN-2022-40
|
https://www.ciscopress.com/articles/article.asp?p=2803866&seqNum=3
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00440.warc.gz
|
en
| 0.768449 | 3,628 | 3.890625 | 4 |
What is SCADA and SCADA System?
Supervisory control and data acquisition (SCADA) is an architecture that enables industrial organizations to manage, monitor, and control processes, machines, and plants.
SCADA systems use computers, networks, and graphical human-machine interfaces (HMIs) to provide high-level control, management, and supervision of industrial processes. SCADA networks are crucial to industrial operations but are made up of hardware and software that can easily fall prey to hacking, which makes SCADA security increasingly important.
To define SCADA, it is an industrial control system (ICS) that monitors and controls infrastructure processes. SCADA systems communicate and interact with devices and industrial equipment as part of control systems engineering processes. They gather data, record and log it, and present information through HMIs.
SCADA systems are typically deployed by organizations involved in the provision of electricity, natural gas, waste control, water, and other necessary utility services. SCADA networks are therefore highly valuable but also highly vulnerable. The government agencies and private companies responsible for managing these services must ensure SCADA security is in place to protect them.
Core Functions of SCADA
A SCADA is a system of hardware and software components. The hardware is responsible for gathering and feeding data into a computer with SCADA software installed on it. The computer processes the data, then records and logs events into a file stored on a hard disk or sends it to a printer. A SCADA application will also issue a warning or sound an alarm when conditions become dangerous or hazardous.
SCADA enables industrial organizations to control industrial processes; gather, monitor, and process data in real time; interact with critical devices like motors, pumps, sensors, and valves; and record events in log files.
SCADA solutions are essential to many industries as they enable organizations to ensure efficiency, make smarter decisions, and eliminate the risk of downtime.
Automated Control of Industrial Processes and Machines
Most of the control actions with a SCADA system are performed automatically by the system’s remote terminal units (RTUs) or programmable logic controllers (PLCs).
SCADA systems enable organizations to automate control of the industrial process and machines, which would be too complicated or complex for humans to manage manually. Systems use measuring devices and sensors to automatically detect alarms and abnormal behavior and provide a response through a programmed control function.
For example, if an alarm occurred as a result of too much pressure occurring in an industrial line, then the SCADA system would issue a response to open a valve and restore regular pressure levels.
Data Collection and Analysis
SCADA is often used as a term defining data collection, analysis, and presentation. A SCADA system takes analog data and presents it in graphs. It also collects digital data, which may have alarms that can be triggered, and accumulates pulse data, which typically involves counting the revolutions of a meter.
These data collection and analysis processes help SCADA control the infrastructure processes of critical facilities and utilities. However, SCADA typically coordinates processes in real time, as opposed to controlling them in real time.
SCADA systems can be used to monitor industrial equipment, machines, systems, or buildings, such as power plants. This process can either be automatic or can be initiated through operator commands.
Event and Alarm Notifications
Most SCADA systems include an alarm supervision and management feature that supports the software within the system. It is important for this to be configured to either be managed by the SCADA system itself or be triggered by users.
SCADA systems integrate with measuring devices like sensors across industrial and manufacturing organizations’ infrastructure. It collects data in either analog or digital form, then sends it to RTUs or PLCs so it can be translated into actionable and usable information. This information is then sent to an HMI or other displays that enable operators to analyze and interact with the data.
Main Components of SCADA
SCADA systems are comprised of multiple components, hardware, and software that enable the collection and transmission of data necessary to control and monitor industrial processes. These key components include:
Remote Terminal Units (RTUs)
RTUs collect and store information from sensors then send it to the master terminal unit (MTU), which is composed of a computer, PLC, and a network server that forms the core of a SCADA system. An RTU collects and stores data until it receives the appropriate command from the MTU, then transmits the necessary data. The MTU is then able to communicate with operators and share data with other systems.
Human-Machine Interface (HMI) SCADA
An HMI is a user interface or dashboard that enables a person to connect to a device, machine, or system. This enables operators to monitor machine input and output, oversee their key performance indicators (KPIs), track production time and trends, and visually display data across the SCADA system.
HMIs are used by the vast majority of industrial organizations to interact with machines and optimize their processes. They can take the form of computer monitors, tablets, and screens built onto machines, which provide insight into the performance and progress of the mechanical system. For example, an operator on the floor level of an industrial plant could use an HMI to control and monitor the temperature of a water tank or monitor the performance of a pump within the facility.
The communications network is the connection between the RTU and the MTU, which enables data to be transmitted between the two units. This wireless communication channel is bidirectional and used for networking purposes alongside other communication processes and equipment, such as fiber optic cables and twisted pair cables.
SCADA systems rely on inputs that are read and written by a PLC to log and store data. What is a PLC, you may ask. It is a mini computer that sits within a SCADA network and collects inputs and outputs from devices in the system. The PLC monitors the state of inputs, such as the speed and performance of a motor, then uses this insight to output signals to devices, such as stop or slow down the motor.
Industries that Widely Use SCADA Systems
SCADA systems control and monitor industrial processes and machines across a wide range of industries. These include:
Food and Beverage Processing
SCADA systems are crucial to helping food and beverage processing companies improve their production quality and quantity processes. SCADA also ensures they reduce costs and keep wastage to a minimum.
Pharmaceutical and biotech firms use SCADA systems to ensure equipment works at an optimal level and to reduce maintenance costs. They also rely on SCADA to maximize their production processes.
Water management firms use SCADA systems to ensure their plants operate efficiently and to monitor the performance of storage tanks, pump stations, treatment facilities, and other equipment. SCADA is crucial to preventing cyberattacks and ensuring appropriate safety measures remain in place at water treatment plants.
HVAC and Commercial Building Management
SCADA systems are crucial to regulating the performance of heating, ventilation, and air conditioning (HVAC) systems, as well as the lighting and input systems at commercial buildings.
Energy Pipelines and Utilities
The operations of most energy pipelines are automated, but manual intervention can be required. SCADA systems provide the monitoring and alarm processes that allow energy firms to intervene should the plant’s activity deviate from normal. SCADA systems also help utility firms ensure reliability and continuous performance monitoring to minimize human error.
SCADA systems enable seafood processing firms to improve consistency and guarantee the quality and yield of their product. It also helps maximize machine performance in factories to reduce costs.
Sorting and Fulfillment
Sorting firms can monitor and control the performance of their machines using SCADA. This ensures mistakes do not occur and that machines are performing to a high standard.
Energy Management and Refrigeration
SCADA systems enable energy management firms to detect machine performance, monitor circuit operations, and manage availability. Through SCADA, they can see real-time parameters, monitor alarms and trends, generate reports, and keep costs down.
How Fortinet Can Help
Fortinet provides protection of critical infrastructure for defense, energy, food, manufacturing, and transportation firms with our ICS/SCADA solution. FortiGate Rugged NGFWs deliver enterprise security for operational technology environments with full network visibility and threat protection.
The Fortinet solution integrates OT solutions with threat protection, which provides control, visibility, and automated high-speed detection of potential security risks. It also reduces the complexity and operating expense of OT security.
What Is SCADA Security?
SCADA security protects SCADA networks and prevents vulnerabilities from being exploited by cyber criminals. SCADA networks are used to control and monitor vital systems and infrastructure, so SCADA security is crucial to safeguarding these services.
However, some of these networks are particularly vulnerable to attacks by hackers, insider threats, and even terrorists. For example, ICS firm Schneider Electric was attacked by sophisticated hackers who launched a targeted zero-day attack on Schneider's systems in 2018. The attack used a remote access Trojan, the first of its kind to infect safety-instrumented systems equipment, which is crucial to monitoring utility firms’ critical systems. The firm released a firmware update and issued advice and tools for customers to detect and mitigate the attack.
Common weaknesses in SCADA systems include a lack of security around application development, issues with SCADA systems monitoring, and a lack of maintenance or updates to the software, thus creating security gaps. Another key threat to SCADA systems is a lack of security training for employees, who need to understand the potential threats they face and how to spot a potential cyberattack.
Avoiding potential security issues is reliant on documenting and mapping where systems connect to the internet and other internal networks and the people who have access to them. This provides insight into all potential data entry and exit points, which helps organizations monitor for cyberattacks.
Organizations also need to implement appropriate detection and monitoring systems that can prevent attacks and malware injection. They also must ensure procedures are in place around network security, including report monitoring, standard protocols, and security checks, which will help them address new and existing vulnerabilities.
|
<urn:uuid:33a38dc3-773b-4dbb-aaee-1ea0149e5907>
|
CC-MAIN-2022-40
|
https://www.fortinet.com/resources/cyberglossary/scada-and-scada-systems
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00440.warc.gz
|
en
| 0.923843 | 2,116 | 3.5 | 4 |
Ransomware attacks are happening all over the world at an alarming speed. One of the more publicized attacks that has recently occurred happened on a larger scale at Carleton University. This time, there have been several computers that have been infected by ransomware.
What is Ransomware?
Ransomware is a computer virus that uses a type of encryption to hold files hostage and request payment in order for them to be returned.Since many computers have been infected at Carleton University, it is very likely that the main network was compromised during the attack. This is what has enabled the attackers to reach multiple computers on the same network and is one of the ways ransomware attackers find their victims. Their network is Windows based, which is also a component of ransomware attacks as of late.
How it Was Discovered and What is Being Done
The ransomware attack was actually discovered by a graduate student at the university. The student emailed the university to notify them that they had received a request for payment in bitcoin. Bitcoin is a digital currency and it is often very difficult to trace, which makes is very popular for cyber attackers who have focused in on ransomware attacks. The student reported that he had seen the message on a school computer that he was using. The attackers were requesting either two bitcoin per machine or a total of 39 bitcoin to release the files they had encrypted. 39 bitcoin equates to around $38,941 at the standard bitcoin exchange rate today. Currently, the university has told all students about the messages and has instructed them to report them and ignore them. Students are also instructed to stay off of the school’s wireless network and shut their computers down. They have not been able to do much research on the attack because their computers are all either infected or shut down. They do not know all of the details but they do know that multiple systems, such as registration, admissions, payroll, and other administrative tasks, are affected by the attack.
What to Learn from This Attack
Without the right kind of protection, ransomware attacks are not just a possibility, but they are likely to happen. Carleton University may have had some type of protection but in today’s world, it is even more important to have extra layers of protection to ensure that if one form fails, the next one won’t. You should have multiple layers of defense in place if you want to prevent this kind of attack on your network.
To find out more about what you can do to protect yourself from a ransomware attack, contact Fuelled Networks in Ottawa either by (613) 828-1280 or by [email protected]. It is better to protect yourself than to have to clean up the mess after an attack, just like Carleton University is trying to do now.
Published On: 6th December 2016 by Ernie Sherman.
|
<urn:uuid:90ad462c-8ca3-4e0c-a2cb-e01336775491>
|
CC-MAIN-2022-40
|
https://www.fuellednetworks.com/ransomware-attack-on-carleton-university/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00440.warc.gz
|
en
| 0.982885 | 578 | 3.109375 | 3 |
In today’s world data is everywhere. We are surrounded by it in the form of spreadsheets, sales figures, energy consumption, personnel data, budgets, etc. Market data drives company strategy, production data informs about manufacturing processes and costs, and sales figures contribute to profitability. Data is essential and the lifeblood of business.
Understanding data, categorizing it, using it, analyzing it, tracking it, storing it, and deriving conclusions from it are therefore critical for the success of any organization.
This is where data lineage becomes important.
What is Data Lineage
Data lineage is, essentially, a map of the journey data takes through a company. It starts with its origin, references each process and transformation along the way, including an explanation of how and why the data has moved over time, through to its delivery to an endpoint or end-user.
Typically Data Lineage can be split into two, distinct, types:
Solution / Datawarehouse data lineage, comprising:
specific data origin
the logic that decides data transformation steps
Enterprise data lineage, comprising:
where data resides
which apps read or write data
where in the world the data is physically located
how the data is used within the business
Solution data lineage is, therefore, more transactional/process related whereas Enterprise data lineage focuses on the high-level and bigger picture. This primer will focus on Enterprise data lineage.
An important part of data lineage is understanding how the data exists within your technical- and business landscape. There are three ways or “master data management patterns” in which data is used and moved around within an organization:
These are explained in more detail in our Data Lineage Metamodel document.
In some instances, data lineage can be documented visually to show the source of the data, each process or change it encounters on its journey within the organization, and its destination.
Why Is Data Lineage Important
As the volume of data streams grows, especially via the cloud, it is becoming ever more critical for organizations to fully understand the data lifecycle: mapping how and where data is sourced, its transformation, and where it is stored. This knowledge is becoming increasingly important from a compliance and governance perspective.
Data governance requires that data be stored and processed in line with organizational policies and, importantly, regulatory standards.
Data lineage can be used to provide compliance auditing, improve risk management, and ensure data is stored and processed as required. An example of this is GDPR compliance, especially in the processing and storage of data.
Benefits of Data Lineage
Understanding everything about your data including its provenance, what information it actually provides, how your organization processes it, and where it finally resides is critical for a number of reasons.
Specifically, data lineage will help your organization with the following:
Providing the necessary core data, including its source, its flow, its storage, and its use within the business to ensure compliance with existing and future regulatory and legal directives.
Enabling improved risk and compliance management
Making transparent where in the world data is physically stored. For example, if Personal Identifiable Information is stored within the European Union (EU).
Gaining visibility about how data is processed on its journey through the organization compared to how the business actually needs the data to be processed
Ensuring that critical data is from a reliable source
As the volume of data, together with societal attention on ethical data usage, increases it is essential that businesses know how and where their critical data comes from, is processed, and stored.
Data lineage provides this information through two streams: Enterprise data lineage, which provides a high-level view of the data, how it flows between applications (the patterns), and where it is stored, and Solution data lineage, which provides a lower-level, transactional, view of the data, where it comes from and how it is processed.
As compliance and governance directives become stricter having knowledge about data flows is essential.
For more detailed information about how Ardoq deals with data lineage please follow this link.
|
<urn:uuid:8044a0e5-1b6b-4a0a-afbb-21fecaad3343>
|
CC-MAIN-2022-40
|
https://help.ardoq.com/en/articles/6541944-data-lineage-primer
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00440.warc.gz
|
en
| 0.920744 | 856 | 2.765625 | 3 |
By now, you have probably heard about the massive cyberattack involving SolarWinds that slammed the United States in 2020. What you might not have heard is some of the dramatic but not unjustified comparisons that have been drawn to it: Steven J. Vaughan-Nichols with ZDNet called it the “Pearl Harbor of IT,” and Gilman Louie writing for The Hill called it “a cyber 9/11.”
How could a simple cyber incident possibly be comparable to two of the worst attacks on American soil in history? No one was killed, after all. Isn’t this all just a matter of some “computer people” fighting over some lines of code? Well, no. When you hear some of the details, you will begin to understand why the SolarWinds hack is such a big deal, even if the media is largely focused on other issues of the day. It all comes down to the scale of the attack and the long-lasting implications it will have on cybersecurity in the country.
The SolarWinds Hack: What Happened?
SolarWinds, if you’re unfamiliar, is a major US information technology firm that provides software to government agencies and big corporations. One of its software products, Orion, is used by some 33,000 SolarWinds customers to manage their IT resources.
Sometime between October 2019 and March 2020, Russian hackers successfully compromised Orion. They did so by hacking into a system used to prepare updates for the software. Unbeknownst to SolarWinds or its customers, these hackers uploaded malicious code to otherwise legitimate updates to Orion between March and June 2020, according to the SEC.
Releasing updates to products is nothing new for software providers and developers. They do this to fix bugs, add new features, and patch vulnerabilities. Most customers receiving the updates install them quickly—as they should. A ServiceNow study from 2019 found that a majority of breaches are the result of unapplied security patches.
Between March and June 2020, approximately 18,000 unsuspecting SolarWinds customers installed the tainted updates to Orion, believing them to be routine, beneficial software patches. In reality, the patches left them vulnerable to hackers. The code, when installed, created backdoors to customers’ IT systems, allowing hackers to install even more malware and spyware.
In total, around 250 federal agencies and businesses are believed to have been affected, including:
- The Pentagon (and, by extension, the Department of Defense)
- The Department of Homeland Security
- The State Department
- The National Nuclear Security Administration
- The National Institute of Health
- The Department of the Treasury
- The Department of Commerce
- The Department of Energy
- State and local governments
- Tech companies like Microsoft, Cisco, FireEye, Intel, Nvidia, VMWare, and Belkin
- California Department of State Hospitals
- Kent University
The attack was sophisticated, stealthy, and went undetected for months. According to the Wall Street Journal, some victims may never know if they were hacked or not—a terrifying prospect to say the least. It is also difficult to determine exactly what the hackers made off with. Federal agencies and companies are conducting large–scale investigations to determine, among other things:
- If they hackers are still inside, and if they are, how to remove them
- What was viewed, changed, stolen, deleted, or exploited
- If any of their partners or customers were affected as a result
This last point is particularly scary and helps bring the scale of the attack into focus. Take Microsoft for instance. They are a SolarWinds customer. On December 17, Microsoft acknowledged that it found indications that malware had been uploaded to its systems.
According to Reuters, “Microsoft’s own products were then used to further the attacks on others,” though Microsoft denied the claim. Specifically, they wrote in a December 31 blog that they had “found no evidence of access to production services or customer data.”
Despite this, Microsoft acknowledged in the same blog that they detected “unusual activity with a small number of internal accounts” including one which was “used to view source code in a number of source code repositories.” Source code is the secret, proprietary code that makes software products tick. In Microsoft’s case, this means the Windows operating system as well as Office 365 products. There are more than 1 billion devices that use Windows 10, according to Microsoft, and another 1.2 billion use Microsoft Office.
Even assuming that Microsoft is correct and no customer data or products were affected, the underlying idea of continued spread is viable. Once hackers used Orion updates to gain access to SolarWinds customers, they could turn around and do the same thing for that company’s customers (without detection, if they are crafty enough). IT service companies often have broad access to their customers’ networks. It doesn’t take a lot of imagination to see how things could quickly spiral out of control; this one malicious hack spread like wildfire first to SolarWinds’ customers and next to customers of SolarWinds’ customers. This, you’ll remember, includes the federal government.
For their part, SolarWinds and Microsoft have teamed up to contain the spread, with the latter apparently taking control of the hackers’ infrastructure to stop them dead in their tracks. SolarWinds is also cooperating with the FBI, the U.S. intelligence community, and other government agencies, according to the SEC. Though to be fair though, a New York Times report notes that SolarWinds ignored basic security practices and cut costs by moving software development to Eastern Europe.
What Does It All Mean?
In the words of CNN contributors Paul Leblanc and Jeremy Herb, “it is already becoming clear that this marks one of the most significant breaches of the US government in years.” This comment comes despite the fact that we may never know the full extent of this attack. Why? First of all, the scale is massive, making it inherently hard and complex to investigate. Next, there’s the fact that the foreign intelligence personnel who carried out this attack are skilled hackers and “incredibly hard to kick out of networks,” according to cybersecurity expert Dmitri Alperovitch. As mentioned before, there’s also the concern that some victims may never even know if they were hacked or not. Complicating this is the fact that small companies with fewer resources may struggle to tell whether they remain vulnerable.
The Wall Street Journal reports that big companies that keep detailed logs of activity on their systems should be able to tell if backdoors into their networks are used or not. But for others, performing this level of scrutiny “will be a difficult and expensive task that many are likely to ignore,” meaning the hackers could stick around in some networks indefinitely.
In the opinion of Gilman Louie, the SolarWinds hack also exposes our organizational faults. The intelligence functions of the Department of Homeland Security and the FBI “are too small to deal with the growing attacks from nation-state actors, leading to inefficient intelligence enrichment, collaboration and information-sharing,” he writes. His recommendation is to create a National Cyber Protection Center “to improve our ability to fuse and share cyber-related intelligence across the public and private sectors, advise on coordinated responses, and proactively prevent attacks like this from occurring again.”
Other writers also criticize the status quo. Bruce Schneier with CNN argues the U.S. must “prioritize minimum security standards for all software sold in the United States” since, as it stands now, we are too ignorant about the software that runs on our devices, what it’s sending, and where it’s connecting geographically. Without greater regulation and transparency in the software industry, Schneier believes we will continue to jeopardize our personal safety.
Should I Be Worried?
Probably not, but you should be vigilant. Arming yourself with knowledge will help keep you from being surprised the next time a large-scale cyber incident occurs. While you obviously cannot influence huge geopolitical forces, you can stay informed to guide your business through turbulent waters.
Ultimately, what you should be focused on is making the same security steps we’ve been recommending at Machado for some time: following industry best practices, backing up your critical files and systems, enabling two-factor authentication, using strong password habits, and yes, updating and patching regularly.
How can we still be recommending regular updating and patching? After all, wasn’t this whole SolarWinds mess caused by customers downloading tainted updates? A logical conclusion therefore might be, hey, let’s do away with software updates altogether, at least until we can be sure of their integrity. That sounds reasonable, right?
Well no, actually. Here’s why delaying your software updates is not a good idea. As we saw earlier, the majority of breaches in 2019 were the result of unapplied security patches. What’s more, the spike we’re seeing in healthcare ransomware attacks can be partially attributed to poor patching practices. Think of it this way: a piece of software is just a really long, complex piece of code. The developer does their best to think up every edge case the code may encounter and every weird behavior that might result. However, since the code is so complex, software very frequently gets released with issues, issues that get revealed over time. Patches and updates are the developer’s way to fix those issues. The longer a version of software has been out, the more time cyber criminals have had to analyze it and find those issues. When they attempt to hack you, they’ll try to exploit those older vulnerabilities first. If your software is not up to date, then you won’t have the developer’s latest fixes and will be totally exposed.
You can think of the SolarWinds hack as an extreme outlier. In the vast majority of cases, software updates and patches are not only safe but one of the best ways to keep yourself and your business safe online. Delaying updates, even for a little while, is dangerous. The longer you use outdated software, the more opportunities hackers have. Developers eventually drop their support of older software (rest in peace Internet Explorer) once they don’t want to keep updating it any longer. As software matures, more and more issues get exposed. In conclusion, using the developer’s latest release of their in-support software is the best way to fend off cyberattacks.
When Garmin was hacked in July 2020, we had some concrete advise for our readers. Since ransomware was the suspected attack method in that incident, we could tell our readers to back up their important data to the cloud as a means to protect themselves. When Twitter was hacked around the same time, we learned that the company’s people, not its systems, were vulnerable, proving the importance of zero standing privileges. This time around, however, we don’t have so many practical takeaways. The lesson from this massive attack might be that anything can be hacked. When the source of your vulnerability comes from the software developer itself, it is almost impossible to guard against. You should continue to trust software updates from reputable developers.
Keep an eye on the news for any security alerts for software you use. An easy way to do this might be to set up Google alerts to your phone. When you hear about a new vulnerability, be sure to have your team install the latest fix from the official developer as soon as possible.
If you could use a hand managing your updates and your cyber infrastructure at large, reach out to us a Machado Consulting here or by phone at (508) 453-4700. We can’t wait to show you what we can do for your business.
|
<urn:uuid:92039166-bcc1-4ae9-9e6d-3a2bce4d18a5>
|
CC-MAIN-2022-40
|
https://gomachado.com/what-you-need-to-know-about-the-solarwinds-hack/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00440.warc.gz
|
en
| 0.955159 | 2,460 | 2.515625 | 3 |
Pandemic-Related Fraud and How to Catch It
The COVID-19 pandemic has forced many people to adapt to a new way of living. These changes included social distancing, public masking, business closures and stimulus checks to help people struggling to make ends meet.
These new circumstances have also created a new avenue for scammers to trick people out of their money. Whether by selling fake coronavirus testing kits and vaccination cards or posing as a stimulus check distributer asking for financial information, people have to be warier than ever of potential threats. Knowing how to recognize these scams is critical in keeping yourself and others safe.
The Rise in COVID-19 Fraud
According to the U.S. Department of Justice, several types of fraud have occurred throughout the pandemic. Recognizing them when they appear is crucial in avoiding them and protecting your money and information. Some popular types include:
- Stimulus check fraud: The introduction of stimulus checks led to scammers calling potential victims and asking for their information to give them financial aid. Government officials will not ask for sensitive information over the phone. If you receive one of these calls, it’s a scam.
- Healthcare fraud: With active cases staying stagnant, people are anxious to provide proper healthcare for themselves and their loved ones. Scammers take advantage of this by calling and posing as healthcare dealers, offering seemingly beneficial deals to steal personal information.
- Supply fraud: Some scammers will call you claiming to sell at-home COVID-19 tests or supplies. These supplies are either entirely fake or unauthorized and should not be purchased.
- Vaccination card fraud: In some cases, scammers will contact you and offer to sell you a COVID-19 vaccination card. Others will offer to buy your card from you. Others will use information from your card to impersonate you. Using a false vaccination record is a crime, as is using someone else’s. Don’t share your personal information with strangers, especially over the phone, and don’t post any pictures of your vaccination card if you have one.
- Contact tracer fraud: Officials use contact tracing to track those who may have come into contact with someone infected with COVID-19. This allows them to track the spread of the virus. Legitimate contact tracers will not ask for your personal information or payment over phone, so if you receive a call claiming to be from a contact tracer, don’t give out your personal information.
The best way to spot a scam is if someone claiming to be from the government or a major business asks for personal information or for a payment over the phone.
Who Is Responsible for COVID-19 Scams
There are three main groups of COVID-19 scammers:
- Hustlers: These are individuals who usually fraudulently file for benefits they haven’t earned. They can also recruit friends and family to their scams. These scammers are usually found out if and when local law enforcement arrests them for a different crime.
- Gangs: Street gangs and organized crime groups will usually focus on more systematic types of fraud. They will often steal information, through interpersonal connections or by buying it off of the dark web, and use it to commit benefit fraud.
- Hackers: Professional hackers will attack the target’s systems directly, making it more difficult for these systems to detect fraud, and steal any personal information saved to their databases.
Notice that these groups all tend to target the system instead of individuals. Even if they try to file for benefits from government groups doesn’t mean they won’t attempt to swindle the individual, so it’s important to be aware.
It’s also important to know who’s most at risk of being scammed. Two easy targets for COVID-19 are the elderly and those anxious about the sudden onset of new illnesses and treatments. The elderly are at a higher risk of fraud, but people who are nervous about the disease or the vaccine are prime targets for fake tests and healthcare deals and vaccine card fraud.
Challenges in COVID-19 Fraud Prevention
Prevention methods are still struggling to catch up with the sudden onslaught of COVID-19 related fraud. Scammers take advantage of scared people and coax information or money from them. Since many of these scams take place over the phone, it can make them harder to track.
Many systems are overworked and underprepared to handle these fraud cases, making them more difficult to track and solve. Many small businesses, for example, don’t use case management software in their operations, leaving them more vulnerable to fraud.
Why Prevention Is Key
Many government agencies are swamped with work nowadays. For example, prior to the pandemic, unemployment agencies would only receive some hundred thousand claims per week and could process them within two to three weeks. With the onset of the pandemic and closure mandates, the number of unemployment claims has skyrocketed.
With government agencies so backlogged, they have to prioritize those who need the most help. This means that they’ll have very little time to help fraud victims. By the time they’re able to look at a fraud victim’s case, the scammer may already be out of reach. Some government agencies don’t have the resources to track scammers, meaning they can’t help you no matter how much time they have.
Even if your local agency can help you recover your money or information, doing so will take a considerable amount of time. Preventing fraud at the start saves you time and a considerable amount of emotional distress. The Federal Trade Commission has advice for preventing fraud before it happens.
How Kaseware Helps in the Fight Against COVID-19 Fraud
Some software exists to prevent COVID-19 fraud. Agencies can use pre-existing fraud software against newer fraud forms, such as COVID-19 fraud.
Kaseware was founded by former FBI agents with experience in corporate software development. Our fraud investigation software makes reporting and managing fraud cases quick and efficient. We combine all necessary software to a single platform. It also helps to prevent fraud before it happens, utilizing technology such as intuitive dashboards, link analysis charts and data centralization software.
Stop COVID-19 Fraud with Kaseware
The onset of the pandemic has severely impacted the modern world. This upheaval includes the onset of COVID-19 fraud, whether healthcare scams, information stolen off of vaccination cards, fake testing kits or more. Whether these scams affect individuals or businesses, they can strike hard and vanish almost immediately. With Kaseware, there is hope.
Kaseware is a premier protective software service with a complete set of tools to document each step of fraud investigation. With our fraud software, you’ll have everything you need to open and organize your fraud case, all with a single software package. Contact us and request a free, personalized demo today.
Share this post on social media.
|
<urn:uuid:568df8e7-c50a-4bec-b96c-67d519d25c51>
|
CC-MAIN-2022-40
|
https://www.kaseware.com/covid-19-fraud/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00440.warc.gz
|
en
| 0.9362 | 1,443 | 2.78125 | 3 |
Whether you've read up on Greek mythology or you're simply a big fan of Marvel comics, the name "Zeus" should be familiar to you. In the context of cybercrime though, ZeuS (aka the Zbot Trojan) is a once-prolific malware that could easily be described as one of a handful of information stealers ahead of its time. Collectively, this malware and its variants infected millions of systems and stole billions of dollars worldwide.
ZeuS was primarily created to be a financial or banking Trojan, otherwise known as crimeware. But, as you’ll see, the extent of its information stealing ability could easily go beyond covertly pilfering financial information, making it a real threat to individuals and organizations of all sizes.
First spotted in-the-wild in 2007, the earliest known version of the ZeuS Trojan was caught stealing sensitive information from systems owned by the United States Department of Transformation. It was believed that ZeuS originated in Eastern Europe. ZeuS affiliates focused their efforts away from corporations and large banks, going after small- to medium-sized organizations, including towns and churches, according to the Federal Bureau of Investigation (FBI).
ZeuS usually arrives via phishing campaigns, spam campaigns, and drive-by downloads. However, this is easy to change and anyone motivated to conduct financial fraud can easily change who they target and how they want their ZeuS to be delivered. Victims have been infected by ZeuS variants via instant messengers (IM), messaging features in social media platforms, and even a pay-per-install (PPI) service—a way to distribute ads to users that a ZeuS user employed for their campaigns.
Once a machine gets infected, ZeuS immediately steals information from web browsers and Windows’ protected storage (PStore), such as banking or financial information and stored account credentials, respectively. All stolen data are siphoned off via a command & control (C&C) server.
Furthermore, any system infected with ZeuS also becomes a bot in a botnet. A kind of illegal Cloud computing platform that can be rented out to other criminals. These bots were also used to remotely update the ZeuS variants residing in them.
To date, there are 545 versions of the ZeuS Trojan, according to a website called ZeuSMuseum.com.
How mighty is the ZeuS Trojan?
A ZeuS Trojan toolkit can be fashioned to do a number of things both for the fledgling and adept fraudster.
ZeuS lurks inside infected machines as it stealthily monitors the websites users visit. It recognizes when a user is on a banking website, for example, and then records keystrokes when the user logs into the site. Because of this, fraudsters can easily log back into that banking account using the recorded keystrokes.
Some variants of ZeuS also affect mobile devices that run Android, Symbian, and Blackberry. ZeuS is the first information stealing malware that steals Mobile Transaction Authentication Numbers (mTANs), a type of two-factor authentication (2FA) method that banks use when you want to perform transactions. An mTAN, also called SMS TAN code, is usually a 6-digit number that is unique per transaction and is sent via SMS.
ZeuS steals information in a number of ways, including: Stealing user keystrokes; collecting the text users enter into web forms; taking screenshots whenever the mouse is clicked; so-called man-in-the-browser (MiTB) attacks that add new elements to web forms asking for things like social security numbers or bank PINs.
As to what, exactly, ZeuS steals, here is non-exhaustive a list provided by the SecureWorks security researchers:
- Data submitted in HTTP forms
- Account credentials stored in the Windows Protected Storage
- Client-side X.509 public key infrastructure (PKI) certificates
- FTP and POP account credentials
- HTTP and Flash cookies
ZeuS is also capable of re-encrypting itself every time it infects a system, making each infection "unique" and therefore harder to detect.
Many researchers attribute ZeuS’s ability to stay under the radar for long periods of time as the main reason why it became the most sought-after info-stealer kit in the underground market during its time. It's likely that ZeuS infected millions of computers, with many victims not realizing that their sensitive data had fallen into the hands of criminals and that their computer was part of a botnet.
The ZeuS developers also put a lot of effort into protecting their malware. According to SecureWorks, ZeuS 1.3.4.x, a privately sold version of the kit, is protected via a hardware-based licensing system. Also known as hardware-locked licensing, this system allows the kit to be installed on only one computer.
The "fall" of ZeuS Trojan
In 2011, the source code for ZeuS 22.214.171.124 was leaked. Some groups or individuals started offering the use of ZeuS botnets on a subscription basis. According to a case study on ZeuS from students at the University of Cambridge, this "maximises earnings by providing the same service to multiple users. For the user of the service, the benefits are in a reduction in the initial financial outlay, while outsourcing the logistical and maintenance requirements, and reducing the risk of failure to achieve results.”
Cybercriminals also began creating their own ZeuS-based information stealers, make ZeuS itself something of a footnote. Citadel, GameOver, Panda Banker, Terdot, Floki, and Sphinx are some of the known ZeuS variants to date.
Before the code leak, it was rumored that the ZeuS creator would be retiring and then selling his code to a competitor called SpyEye, an up-and-coming information stealer that made heads turn for being able to remove ZeuS infections. There had been reports of a code hand-over, yes, further confirming the merging of the two malware, but the ZeuS creator didn’t quit. According to a report from Brian Krebs, the creator merely stopped selling it publicly and started creating “a more robust and private version of Zeus” instead.
In 2013, the FBI charged and arrested Aleksander “Harderman” Panin, a 24-year-old Russian male believed to be the creator of the SpyEye Trojan. That same year, Hamza Bendelladj, a 24-year-old Algerian male, was arrested and charged for developing components of SpyEye, operating botnets infected with SpyEye, and of course, fraud charges.
Is ZeuS dead?
As long as criminals continue to use bits and pieces of its code to create their own malware, ZeuS can't be considered dead, so much as fading away slowly. However, ZeuS's purpose, data theft, is making a comeback.
Banking trojans haven't gone away, but in recent years their activity has been eclipsed by an epidemic of ransomware. Recently though, major ransomware operators have taken to stealing victims' data before encrypting it, so they can threaten to leak it.
The tactic has been so successful that some ransomware actors claim to be moving away from encrypting files, and focussing entirely on finding and exfiltrating sensitive data from organisations.
In fact, following a devastating attack on Ireland's public health system, the Conti ransomware gang issued the Health Service Executive (HSE), a free decryption key to unlock all of their affected files, convinced that simply publishing and selling the data they had stolen was leverage enough.
How long I wonder, before information stealers are another thing Biden will be phoning Putin for?
|
<urn:uuid:3d018f7c-1a12-4116-87e9-5b5be1ad6c98>
|
CC-MAIN-2022-40
|
https://www.malwarebytes.com/blog/news/2021/07/the-life-and-death-of-the-zeus-trojan
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00440.warc.gz
|
en
| 0.95145 | 1,644 | 2.84375 | 3 |
BT and Toshiba have shown everybody what the future of secure communication looks like. At BT’s research and development centre in Ipswich, the two companies have opened the UK’s first secure quantum communications showcase, where quantum cryptography was presented. The cutting edge in secure communication, this technology can be used to protect digital information communicated from banks and similar financial services organisations.
According to the press release, it works by delivering ‘secret keys over fibre optic cable’. It uses the tiniest possible packets of light. That allows the user to easily detect any eavesdroppers, because any monitoring will disturb the photons sending these keys and result in errors in encoding. The technology has been in development for the past two years, at Toshiba’s research lab in Cambridge.
Firstly, they managed to use quantum cryptography on ‘lit’ installed fibre carrying 10Gbps data signals. More recently, however, they discovered that quantum key distribution and 100Gbps data can be combined on the same fibre.
Professor Tim Whitley, head of research for BT, and MD of Adastral Park, said: “We’ve been conducting research into quantum cryptography for several years now so this is a great step forward in demonstrating how our research can benefit businesses. Businesses and organisations today face a tide of ever increasing and highly sophisticated attacks from cyber criminals so ensuring the secure transfer of critical data is more important than ever. We’re confident that quantum cryptography will play an increasingly important role in helping companies guarantee that their secure communications remain water-tight in the future.”
Image source: Shutterstock/violetkaipa
|
<urn:uuid:a35d2ac7-6452-41a9-b140-d0138a0d5fdf>
|
CC-MAIN-2022-40
|
https://www.itproportal.com/news/bt-and-toshiba-showcased-quantum-cryptography-future-of-secure-comms/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00440.warc.gz
|
en
| 0.915537 | 338 | 2.640625 | 3 |
A group of tech giants has come together to develop an open-source code that is applicable to all
wireless—connected devices. Under the name “Open Internet Consortium”, companies such Dell,
Samsung, Atmel, Broadcom and Intel’s Wind River are working together on a communal concept of the
Internet of things.
Their first target group of devices are smart home and office solutions, and if all goes according to plan, the code will be implemented in 212 billion devices by 2020. To accelerate the Internet of Things (consisting of devices such as smartphones, PCs and tablets) the participating hardware producers and everyone else involved in the production of such devices will have to agree on a standardised way in which wireless devices connect.
The goal of this venture is to create a structure that works for every device regardless of the service provider in use or other factors that have caused fractions so far, such as operating systems or hardware. But there is competition: The AllSeen Alliance, consisting of companies such as Qualcomm, Sharp, LG, Panasonic and Microsoft are taking the same path.
Read more here.
(image source: Betsy Weber)
Interested in more content like this? Sign up to our newsletter, and you wont miss a thing!
|
<urn:uuid:cdc0f9f8-a1c8-43f2-9974-c48aa5b4230b>
|
CC-MAIN-2022-40
|
https://dataconomy.com/2014/07/internet-things-way-universal-language/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00440.warc.gz
|
en
| 0.935723 | 266 | 2.625 | 3 |
In this article you will understand what a DVR is and what are the key factors you should evaluate when looking at a catalog of a product.
When you finish reading this article you will also have a better understanding of what are the transmission and recording technologies used by a DVR.
Technical definition of a DVR
DVR stands for Digital Video Recorder. It's a device that converts the analog signals from a CCTV camera to digital format, store the information into a hard drive and also send live video stream to other devices on the network.
A DVR is not 100% analog
A CCTV system that has a DVR is not totally analog. The cameras send video signal in analog format and the DVR converts these signals to the digital format before recording and sending over the network.
So when you're working on a CCTV project that has at least one DVR, your project is not analog, it's actually a hybrid CCTV project.
The different technologies of DVRs
Now you know that DVRs have hybrid technology (analog / digital), and that most of the security cameras that are connected to DVR input have analog technology. Let us take a better look at this subject.
There are several different DVR options on the market and this can generate some confusion at the time of making your decision on which one is better.
You may have noticed that some DVR dealers offer different types of DVRs always stating that they have the best option bu using acronyms like HD-TVI, HD-CVI and AHD for example.
These acronyms represent the transmission technologies used by the analog cameras that those recorders can work with.
In this article, however, apart from the above mentioned technologies, I will explain what are the differences between different DVRs related to the recording and playback capacity.
DVR recording resolution
The quality of recording and viewing depends on several factors and one of them is the resolution, which is very important to see details.
When you are shopping for cameras and recorders, one of the things you should be aware of is the resolution that you should use in your project.
CIF, 2CIF and 4CIF Resolution DVR
Common Interchange Format (CIF) is a format that was created in 1988 for video teleconferencing systems and has been adopted in CCTV systems.
It's a digital resolution. If you have a camera or recorder that can work with CIF resolution, this means that the image generated has 352 x 240 pixels.
Below is an example of a catalog that shows a 16 channel DVR with CIF.
At 2CIF resolution the amount of pixels in the horizontal is doubled, so they are 720 x 240 pixels, which makes the image long so it is not so popular.
The 4CIF resolution doubles the amount of pixels both horizontally and vertically for a total of 720 x 480 pixels, which makes the image bigger and more interesting for recording, so it is more commonly used in CCTV.
720p and 1080p resolution DVR
These resolutions are very common now in cameras and DVRs that have the HD-TVI, HD-CVI and AHD technology mentioned above.
720p originates from the amount of pixels in the image that is 1280 x 720 horizontally and vertically respectively. If you multiply these pixels you will get a total of 921,600 pixels, that is equivalent to 0.9 Megapixel,
1080p originates from the amount of pixels in the image that is 1920 x 1080 horizontally and vertically respectively. If you multiply these pixels you will obtain a total of 2,073,600 pixels that is equal to 2 Megapixel,
See an example of a DVR with 1080p resolution:
These resolutions are not unique to a single brand, in the market it is very common to find different DVR brands with these resolutions.
MegaPixel resolution DVR
So you can say that the resolution of your DVR is 1080p or 2MP, But there are still higher DVR resolution on markets that goes beyond the 2MP.
Manufacturers such as Dahua and Hikvision have 3MP, 5MP and 8MP DVRs.
These devices were quickly spread around the world through these manufacturers and through other brands that use their technologies.
So it's common to find catalogs with camera and recorders with resolutions up to 4K. But obviously the price will be much higher.
DVR Frame rate
Another important factor for a DVR is the amount of frames per second it can record. This information is found in the catalogs as FPS (Frames Per Second).
See an example of a 30 FPS DVR (per channel)
The higher the number of frames, the lower the robotization effect of the image, which means you can see a smoother, smoother image.
The DVR will use an algorithm to convert to the digital format and compress the image to save storage and bandwidth over the network. This process is done by CODECs
Most modern DVRs use more advanced CODECs like H.264+ or newer H.265 and H, 265+ to compress video without losing quality.
To learn more about CODECs read the following article:
It's necessary to learn more about DVRs before go shopping. Obviously there are numerous other functions and details to take in consideration such as brand, dealer warranty and technical support.
Anyway, now you have enough information to make a better choice
Want to learn more ?
If you want to become a professional CCTV installer or designer, take a look at the material available in the blog. Just click the links below:
Please share this information with your friends...
|
<urn:uuid:e8f9b892-c35f-4fda-9bb7-afedf5b0cc82>
|
CC-MAIN-2022-40
|
https://learncctv.com/cctv-dvr-digital-video-recorder/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00440.warc.gz
|
en
| 0.946778 | 1,188 | 2.84375 | 3 |
Instead of ditching the SSN as an identifier, the government could take steps to modernize it.
As cyberattacks and data breaches make Social Security numbers increasingly insecure, the government needs to explore new ways to verify people’s identities, according to a recent report.
“This nine-digit number has become the core credential for government and commercial purposes—things for which it was never designed,” cybersecurity researchers at McAfee and the Center for Strategic and International Studies wrote in a report published Wednesday. “The [Social Security number] faces significant problems as an identifier, and after 80 years, it is time to modernize it.”
In 2015, experts estimated between 60 and 80 percent of Social Security numbers had at some point been stolen by hackers, and that was before the massive breach at Equifax exposed information on 143 million Americans last year. As a result, for most people, the number “is no longer a secret,” researchers said.
Still, the government needs some mechanism to authenticate identity and connect records to a specific individual, they said. Instead of exploring brand new authentication system, researchers argued for modernizing the Social Security number to make it harder to steal and easier to secure if it does get compromised.
They concluded creating a “smart” Social Security card would be the best strategy.
Like a credit card, the modernized Social Security card could be embedded with a readable chip that connects to an external account. The chip itself would carry a proxy number that links to someone’s encrypted Social Security number, creating a second layer of security that reduces the risk of fraud. If that proxy number is stolen, the government could generate a new one without changing the actual Social Security number, they said.
The report explores different strategies for linking encrypted Social Security numbers to such proxies—like blockchain, public key infrastructure, mobile authenticator apps or biometric identifiers—but they said the private sector would likely play a role in designing the new system.
“Modernization should change the SSN from its current form and replace it with a dynamic credential that relies on online processes for confirmation and provides a path forward for the adoption of new technologies,” researchers wrote. “We recommend smart cards as the best path.”
Lawmakers and industry experts have long agreed Social Security numbers could use an upgrade for the digital age. At a hearing in May, cyber advocates detailed the risks of bundling so many valuable assets into a single nine-digit number before a House panel.
“Social Security numbers are so deeply compromised and so widely available to the public...that they can no longer be used as an authenticator,” said Paul Rosenzweig, a cybersecurity expert at the R Street Institute. “Using my Social Security number as an authenticator is as stupid as using the last four letters as my last name as authenticator, or the last four digits of my phone number.”
|
<urn:uuid:e062ffaa-c767-4db5-9056-5aefbd871b57>
|
CC-MAIN-2022-40
|
https://www.nextgov.com/cybersecurity/2018/10/cyber-researchers-propose-smart-social-security-card/151898/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00640.warc.gz
|
en
| 0.930479 | 606 | 2.75 | 3 |
New Training: Understand and Configure Spanning Tree Protocol (STP)
In this 10-video skill, CBT Nuggets trainer Knox Hutchinson explores Spanning Tree Protocol and how it can be deployed on Junos devices. Watch this new training.
Learn with one of these courses:
This training includes:
1.2 hours of training
You’ll learn these topics in this skill:
Everyone Hates Spanning Tree Protocol
STP Key Terms
How Spanning Tree Builds the Tree
RSTP Port States and Roles
Deploy Junos VSTP and Cisco PVST+
What is Spanning Tree Protocol?
Spanning Tree Protocol (STP) is a network protocol for ethernet networks that creates a loop-free logical topology. The foundational feature of STP is preventing bridge loops and the broadcast radiation that tends to cause. Additionally, STP network designs tend to be more resilient, leveraging backup links to provide fault tolerance if an active link fails.
The basic design of a spanning tree includes connecting nodes with a series of layer-2 bridges, disabling any links that are not part of the spanning tree.This leaves a single active path between any two network nodes.
The concept of STP was adopted by numerous vendors before the IEEE published a set of standards, leading to numerous compatibility issues. Cisco equipment dominated the market, but most non-Cisco tech had limited compatibility. Juniper Networks, however, developed a VLAN Spanning Tree Protocol (VSTP) to work with Cisco switches, allowing both to be included in a single large area network.
|
<urn:uuid:96cd0e2c-bcd0-4aeb-a673-3a3b2f7dcecc>
|
CC-MAIN-2022-40
|
https://www.cbtnuggets.com/blog/new-skills/new-training-understand-and-configure-spanning-tree-protocol-stp
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00640.warc.gz
|
en
| 0.887321 | 360 | 2.921875 | 3 |
Data is not sedentary. Once data has been created, it gets moved around to support many different purposes. One type is production data versus test data, and it is likely that there are multiple test environments, for example, to support unit testing, integration testing, new product deliveries, training, and so on. But there may be multiple copies of the “same” data in each of those environments to support different applications, different geographies, different users, different computing environments, and different DBMSs.
Rarely is a single copy of any piece of data good enough. Data is copied and transformed and cleansed and duplicated and stored many times throughout most organizations. Different copies of the same data are used to support transaction processing and analysis; test, quality assurance, and operational systems; day-to-day operations and reporting; data warehouses, data marts, and data mining; and distributed databases. And who manages all of this moving data? Typically, it is the DBA group.
There are many techniques that can be used to facilitate data movement. One of the simplest ways for the DBA to move data from one place to another is to use the LOAD and UNLOAD utilities that come with the DBMS. The LOAD utility is used to populate tables with new data (or to add to existing data), and the UNLOAD utility is used to read data from a table and put it into a data file. Each DBMS may call the actual utilities by different names, but the functionality is the same or similar from product to product. For example, Oracle offers SQL*Loader and Microsoft SQL Server provides the BCP utility.
Although unloading data from one database and loading it to another is a relatively simple task, there are many factors that can complicate matters. For example, specifying the sometimes-intricate parameters required to un/load the data in the correct format can be daunting, especially when the underlying source and target table definitions differ. Furthermore, unloading and loading is not very efficient for large tables, or even when many small-to-medium-sized tables need to be moved or refreshed.
Some DBMS products offer export and import utilities. Although similar to unload and load, import and export facilities typically work with more than just the data. For example, exported data may contain the schema for the table along with the data. In such cases, the import facility can create the table and import the data using just the export data file. Another difference is that an export file may contain more than just a single table. Nevertheless, export and import typically suffer from many of the same issues as unload and load, including inefficiency.
Another method for moving large quantities of data is ETL (extract, transform and load) software, which is primarily used to populate data warehouses and data marts from other data sources. Although many DBMS vendors offer ETL software, it usually is not included with the base DBMS license. Using ETL software, the DBA can automate the extraction of data from disparate, heterogeneous sources.
For example, data may need to be extracted from legacy IMS databases and VSAM files from the mainframe; relational databases such as Oracle, SQL Server, and IBM Db2 on various platforms; spreadsheets stored on the LAN; as well as external data feeds. The ETL software can be set up to recognize and retrieve the data from these many different sources. Once retrieved, the data may need to be transformed in some fashion before it is sent to the target database. ETL software can be more flexible for complex data movement than simple unload/load or export/import utilities.
Yet another method of moving data is through replication and propagation. When data is replicated, one data store is copied to one or more different data stores, either locally or at other locations. Replication can be implemented simply by copying entire tables to multiple locations or by copying a subset of the rows and/or columns, and it can be set up to automatically refresh the copied data on a regular basis.
Propagation is the migration of only changed data. Propagation can be implemented by scanning the database transaction log and applying the results of data modification statements to another data store. Initial population of a data warehouse can be achieved by replication, and on-going changes by propagation.
Messaging software, also known as message queuing software or application integration, is another form of data movement. When using a message queue, data is placed onto the queue by one application or process; the data is read from the queue by another application or process. Messaging software works by providing APIs to read and write formatted messages to and from a queue. An application can read or write messages to and from the queue from any platform supported by the software.
Of course, many other methods exist for moving data—from the simple, such as using a table editing tool to highlight and copy data, to the complex, such as writing programs to read the database and write to external files or directly to other databases. There are also vendor products that bypass database control to copy and move data at the file level, which typically outperform standard database utilities. Furthermore, some DBMSs provide additional built-in methods for copying and moving data, such as Oracle Transportable Tablespaces.
The bottom line is that data is constantly being moved from one place to another in most companies. Indeed, it is not hyperbole to say that a lot of CPU power is dedicated to moving data all over the place. And DBAs are constantly being asked to move more data—and to do it faster and more efficiently. DBAs should keep up with the data movement needs of their organization and deploy the proper techniques and products to keep that data moving effectively.
|
<urn:uuid:a9c4bd2e-fba1-48eb-908d-8fbc7d8b33e0>
|
CC-MAIN-2022-40
|
https://www.dbta.com/Columns/DBA-Corner/Managing-Data-That-is-Constantly-on-the-Move-149724.aspx
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00640.warc.gz
|
en
| 0.924039 | 1,176 | 3.140625 | 3 |
Startup in Laser Control Breakthrough
When radios evolved from AM (amplitude modulated) to FM (frequency modulated), the quality of radio transmissions leaped upwards. Startup FiberSpace Inc. figures its technology could have a similar impact on optical networks. Today's optical networks are amplitude modulated, that is, the amplitude of the signal carries the information. Optical networks don't use frequency modulation because the frequency of the lasers can't be controlled accurately enough at present.
FiberSpace has developed a technique that can improve the accuracy of a laser's frequency (or wavelength) by a factor of a thousand, it claims. It does this by taking any off-the-shelf laser, and adding a proprietary opto-electronic circuit. The circuit measures the laser's wavelength and uses an optical feedback loop (rather than an electrical one) to keep it stable.
The most obvious benefit of a frequency-stable laser is that it can help signals go farther on an optical fiber without optical regeneration, says Leonardo Berezovsky, FiberSpace's founder and CEO. That's because chromatic dispersion – an effect that causes different wavelengths to travel at different speeds in the fiber – is less of a problem when the wavelength is more accurately defined. Berezovsky doesn't know how much farther signals would go, but thinks it would be a significant improvement. "Part of our near-term work is to quantify this," he says.
Frequency-stable lasers could also help squeeze DWDM (dense wavelength-division multiplexing) channels closer together by reducing interference between neighboring channels.
The above improvements can be made without changing the basic nature of the optical signal, i.e., it is still AM. But ultimately, Berezovsky believes that frequency modulation will play a more important role.
"In metro applications there may be thousands of different channels, and the challenge is to deliver the right wavelength to the right customer," says Bob Welch, FiberSpace's VP for marketing. One way is to use optical crossconnects or switches. Another would be to let the receivers pick out which wavelength they're receiving. "For this you need a 'heterodyne receiver', like the one in the radio on your kitchen table," Welch explains. A heterodyne receiver would contain a frequency-accurate laser, which it tunes to the incoming signal.
FiberSpace received first round funding of $12.5 million from J.P. Morgan & Co. (Nasdaq: JPM) and one of its affiliates and Morgenthaler Ventures (see Laser Startup Has $12M First Round).
It all sounds impressive, but the startup is very early stage. It hopes to be in a beta testing phase by the end of 2001. Don't touch that dial.
-- Pauline Rigby, senior editor, Light Reading http://www.lightreading.com
|
<urn:uuid:a0684aad-e5f0-4354-9825-49c16bf4d457>
|
CC-MAIN-2022-40
|
https://www.lightreading.com/startup-in-laser-control-breakthrough/d/d-id/572240
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00640.warc.gz
|
en
| 0.947309 | 587 | 2.875 | 3 |
SCCM (A.K.A ConfigMgr) Explained
SCCM is Microsoft Microsoft Endpoint Manager Configuration Manager. This solution is used by most of the organizations in the world to manage their enterprise devices. This is the best resource to Learn and troubleshoot on issues.
How is SCCM (A.K.A ConfigMgr) Used?
SCCM solution is mainly used to manage Windows devices. But it has rich capability to manage and Mac OS devices as well. As per Microsoft, this tool is managing more than 75% of enterprise devices of the world. Linux and Unix devices are not supported by MEMCM (A.K.A Microsoft Endpoint Manager Configuration Manager)
How CAN SCCM Be Applied to Your Organization?
This solution can be used to install the application within your organization. OS deployment is another feature of this solution used within most of the enterprises. Another important use of this solution is to deploy patches across the enterprise and secure those devices.
There are 1000000 devices managed by this solution around the world. And SCCM device management solution is used within organizations to deploy millions of applications.
Server Client Application
This solution is a server-client application. All the managed clients’ inventory is stored in the CM SQL database.
SCCM Core infrastructure, Updates for Configuration Manager, Supported configurations for Configuration Manager, Cloud-attached management of CM, Co-management for Windows 10,
Manage clients on the internet, Windows as a service, CMPivot, Application management.
Other Uses for SCCM
SCCM can used for Manage apps from the Microsoft Store for Business, OS deployment, Introduction to OS deployment, Upgrade to Windows 10, Phased deployments, Software update management, Introduction to software updates management, Manage Office 365 ProPlus updates.
SCCM MVP community group is one of the known community groups in the IT Industry.
|
<urn:uuid:6753c763-eac8-4965-986b-b1703b6697b4>
|
CC-MAIN-2022-40
|
https://www.anoopcnair.com/sccm/page/160/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00040.warc.gz
|
en
| 0.894358 | 397 | 2.5625 | 3 |
Recently the Securing Energy Infrastructure Act (SEIA) bill passed the Senate floor. This act calls for the US to improve power grid security by using analog and manual technologies as a defensive measure against foreign cyber-attacks. At this time, the bill needs the approval from the US House of Representatives.
If the SEIA bill is approved, it would establish a two-year pilot program with the National Laboratories where researchers would study power grid operators and their vulnerabilities. They would also develop new analog devices.
How this works is that the analog technology would allow the US to isolate the most important control systems in the grid which would help limit any catastrophic outages. This “new” system would design ways to replace any automated systems with human operators. The whole point is to make cyber-attacks much more difficult. By using a manual operation approach, an attack would require someone to physically touch the equipment instead of hacking into an online grid.
There have not been any large-scale attacks that lead to this approach, it is a preventative measure. The SEIA bill was inspired by an attack on Ukraine’s power grid back in 2015. Russian hackers crashed a part of the power grid and more than 225,000 Ukrainians were left without power. However, because Ukraine operates its grid with manual technology the attack was less catastrophic than it could have been with automated operations.
The other motivation comes from a 2017 report showing a Russian-linked hacker group that was going after power grid operators, and the US was on its targeted list.
As it stands, experts and industry insiders believe that SEIA is a good step forward in preventing and protecting against future cyber-attacks. Though they also believe that it is not the end-all, but only one step in the whole approach.
|
<urn:uuid:7cd03b4d-8d46-4975-b71d-b64b7a482816>
|
CC-MAIN-2022-40
|
https://www.glocomp.com/how-the-us-plans-to-improve-power-grid-security/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00040.warc.gz
|
en
| 0.970805 | 361 | 2.59375 | 3 |
The EXPLAIN PLAN command allows developers to view the query execution plan that the Oracle optimizer uses to execute a SQL statement. This command is very helpful in improving the performance of SQL statements because it does not actually execute the SQL statement—it only outlines the plan and inserts this execution plan in an Oracle table. Prior to using the EXPLAIN PLAN command, a file called utlxplan.sql (located in the same directory as catalog.sql, typically ORACLE_HOME/rdbms/admin) must be executed under the Oracle account that will be executing the EXPLAIN PLAN command.
The script creates a table called PLAN_TABLE that the EXPLAIN PLAN command uses to insert the query execution plan in the form of records. This table can then be queried and viewed to determine if the SQL statement needs to be modified to force a different execution plan. Oracle supplies queries to use against the plan table, too: utlxpls.sql and utlxplp.sql. Either will work, but utlxplp.sql is geared toward parallel queries. An EXPLAIN PLAN example is shown next (executed in SQL*Plus).
Q. Why use EXPLAIN without TRACE?
A. The statement is not executed; it only shows what will happen if the statement is executed.
Q. When do you use EXPLAIN without TRACE?
A. When the query will take an exceptionally long time to run.
The procedures for running TRACE vs. EXPLAIN are demonstrated here:
Q. How do I use EXPLAIN by itself?
A. Follow these steps:
1. Find the script; it is usually in the ORACLE_HOME/rdbms/admin directory:
2. Execute the script utlxplan.sql in SQLPLUS:
This script creates the PLAN_TABLE for the user executing the script. You can create your own PLAN_TABLE, but use Oracle’s syntax—or else!
3. Run EXPLAIN PLAN for the query to be optimized (the SQL statement is placed after the FOR clause of the EXPLAIN PLAN statement):
4. Optionally, you can also run EXPLAIN PLAN for the query to be optimized using a tag for the statement:
Use the SET STATEMENT_ID = ‘your_identifier’ when the PLAN_TABLE will be populated by many different developers. I rarely use the SET STATEMENT_ID statement. Instead, I explain a query, look at the output, and then delete from the PLAN_TABLE table. I continue to do this (making changes to the query), until I see an execution plan that I think is favorable. I then run the query to see if performance has improved. If multiple developers/DBAs are using the same PLAN_TABLE, the SET STATEMENT_ID (case sensitive) is essential to identifying a statement.
5. Select the output from the PLAN_TABLE:
Use EXPLAIN instead of TRACE so you don’t have to wait for the query to run. EXPLAIN shows the path of a query without actually running the query. Use TRACE only for multiquery batch jobs to find out which of the many queries in the batch job are slow.
You can use the utlxpls.sql and utlxplp.sql queries provided by Oracle to query the plan table without having to write your own query and without having to format the output.
|
<urn:uuid:7e26767b-dc43-42f6-8ebc-fec716fbcb10>
|
CC-MAIN-2022-40
|
https://logicalread.com/oracle-11g-using-explain-plan-alone-mc02/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00040.warc.gz
|
en
| 0.868628 | 738 | 3.015625 | 3 |
The United States' monopoly on satellite positioning systems will soon end as other nations build their own constellations of navigation satellites.
The U.S. military’s Global Positioning System is the only fully operational global navigation satellite system (GNSS) in orbit, but that will end during the next decade, as other nations bring their own global navigation systems online.
Accurate real-time location and navigation provided by GPS has become a popular consumer service. But military applications are driving other countries to free themselves from dependence on the U.S. system.
An article in China Daily about the launch of China’s third navigation satellite in January emphasized that point. “Modern weapons, including guided missiles and missile defense systems, all need information supported by navigation satellites,” Peng Guangqian, a senior military strategist, said in the article. “Relying on other navigation satellite systems for such information is impossible in wartime.”
China expects its Compass navigation system to be globally operational by 2020. The European Union is building Galileo, a civilian alternative to GPS, and Russia is rebuilding its Glonass system, which fell into disrepair after the fall of the Soviet Union. India and Japan also are making GNSS plans.
A satellite positioning system uses a constellation of satellites, usually in middle-Earth orbits, that continuously broadcast orbital information and time signals. Earth-based receivers pick up those signals and compare the time a signal was sent with the time it was received to determine its distance from the satellite. By comparing signals from several satellites at the same time, a triangulated position can be obtained, usually with an accuracy of several meters.
That information can be augmented with data from other satellite or ground-based systems to provide greater accuracy. In the United States, the National Oceanic and Atmospheric Administration uses the Continuously Operating Reference Stations network of permanent ground-based GPS receivers to augment GPS data. By combining GPS data from a user’s receiver with data from permanent and precisely located positions, a user’s location can be established to less than a meter.
New satellite navigation systems are planned to be interoperable with one another and with GPS so the increased number of satellites can provide more accurate information. But they will also be able to operate independently.
Here is a list of the major GNSS projects.
Global Positioning System: The first GNSS became operational in 1978 and became generally available for civilian use in 1994. It is a joint services effort led by the U.S. Air Force Space Command at Los Angeles Air Force Base, Calif., and contains from 24 to 32 satellites at a time. The U.S. Coast Guard runs the Navigation Information Service, the GPS point of contact for civilian users.
Glonass: The Russian military’s answer to GPS fell into disrepair with the fall of the Soviet Union. With the loss of some of its satellites, it could not maintain global service. The country is building the constellation back up to the necessary complement of 24 to 30 satellites to restore global service, and as of June 17, it contained 21 operational satellites and two backups that have reached the end of their operational lives.
Galileo: A civilian program of the European Space Agency in collaboration with a number of non-European countries, Galileo is intended to ensure independence from foreign military systems, such as GPS, which could cut off service during a war or time of crisis. Planning began in 2002, and an experimental satellite is in orbit. Plans call for it to be operational by 2012 and eventually to include as many as 30 satellites, 27 of them operational and three for backup.
Compass: This is a follow-on to China’s regional Beidou satellite navigation system. China said it expects the system to be globally operational with as many as 35 satellites by 2020. It has three satellites, one launched into a middle-Earth orbit in 2007 and two in geosynchronous Earth orbits that were launched in 2009 and 2010.
NEXT STORY: Fixing the flaws in North American maps
|
<urn:uuid:c8dc62e4-ca83-4637-b146-347e881a01b0>
|
CC-MAIN-2022-40
|
https://gcn.com/data-analytics/2010/07/new-space-race-is-on-for-satellite-positioning-systems/292244/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00040.warc.gz
|
en
| 0.954494 | 833 | 3.203125 | 3 |
5G technology, especially in its purest mmWave form, is incredibly fast, and boasts almost real-time latency, but it has one major issue: it struggles getting through walls, trees, and pretty much any other physical object that sits in the way.
This means that planning for the introduction of 5G, especially in urban areas, requires businesses and local authorities to think about where 5G small cells will be located, which enable 5G to be accessed indoors.
To deal with the headache of planning for 5G, Liverpool-based tech company CGA Simulation (opens in new tab) has created a planning tool that enables you to create a ‘digital twin’ of an area, so planning can take place in a digital environment first, and this technology is now being used to help the ‘Liverpool 5G Create (opens in new tab)’ project.
“The tool creates a 3D digital copy of the network build area using local data from Ordnance Survey, local authority mapping, and Office of National Statistics, to accurately access where houses, roads, lamp posts and street furniture are located - replicating these in a visual display,” explained Jon Wetherall, managing director of CGA. “Using the planning tool we can analyse how a 5G connection penetrates through walls, and where 5G nodes should be placed to navigate around obstructions like trees.”
Liverpool 5G Create
Liverpool 5G Create is a £7.2m project, funded by DCMS (Department for Culture Media and Sport), as part of its 5G testbed and trials (opens in new tab) programme, which is tasked with showcasing how 5G can support different sectors in the UK.
The cutting edge planning tool, designed by Liverpool tech company CGA Simulation, saves time and resources for projects planning a 5G network (5G testbeds, local authorities, and transport hubs, including train stations), allowing them to plan their network build online first.
“The tool maps where on lamp posts, the side of buildings, or street furniture, the 5G nodes should be placed to communicate effectively - via ‘line of sight’,” said Liverpool 5G’s technology lead, Andrew Miles. “This reduces planning time as the hard work can be done online rather than by foot.
"It is a cost effective, efficient and easy to use alternative for teams on a tight budget. The planning tool can also generate a ‘kit pack’ for planning teams, which lays out the exact technical parts a team needs to erect the working 5G network. They know exactly which parts to order and when,” Miles concluded.
|
<urn:uuid:0d2a8f89-efce-4b27-8385-3d8a71978c23>
|
CC-MAIN-2022-40
|
https://www.5gradar.com/news/uk-government-backed-trial-uses-digital-twin-technology-to-plan-5g-networks
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00040.warc.gz
|
en
| 0.939802 | 554 | 2.921875 | 3 |
NASA has granted Psionic rights to use a light detection and ranging technology that the agency originally developed for use in robotic servicing of a satellite in low-Earth orbit.
Under a signed licensing agreement, the company can further develop and integrate the Kodiak lidar system with similar commercial technologies to support future space missions, NASA said Saturday.
The agency's Goddard Space Flight Center in Maryland designed the technology to generate a 3D image for data collectors to measure the altitude and position of the target’s surface.
Kodiak will function as part of a relative navigation system for the On-orbit Servicing Assembly, and Manufacturing 1 spacecraft on a mission to automatically refuel the government-owned Landsat 7 satellite.
Hampton, Virginia-based Psionic licensed a Doppler lidar technology from NASA's Langley Research Center in 2016.
|
<urn:uuid:3b620b6e-dc1e-48d0-b50d-838591cd3fc2>
|
CC-MAIN-2022-40
|
https://executivegov.com/2021/08/nasa-licenses-3d-lidar-technology-to-psionic/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00040.warc.gz
|
en
| 0.886104 | 171 | 2.890625 | 3 |
Is Microservices Architecture The Next Best Move For Businesses?
Technology is getting better and better each day. Several technologies and architectural patterns have emerged and evolved during the past few years, and it only gets better with time. Microservices architecture or microservices is one of those patterns. It emerged from the world of domain-driven design and persistence.
In this article we will cover:
- What is microservices architecture?
- The difference between microservices, monolithic architecture, and service-oriented architecture (SOA)
- The benefits and examples of implementing Microservices Architecture.
What Is Microservices Architecture?
Microservices architecture is a specific method of designing software systems that can structure a single application as a collection of loosely coupled services.
Microservices architecture is made up of several components in their own individual compartments in the software. This makes them independently upgradeable or replaceable.
Microservices architecture simplifies the process of building and maintaining certain types of applications by breaking them down into many smaller pieces that work together. Though this increases the complexity, it offers greater advantages over the monolithic structure.
Now you may wonder: Isn’t Microservices just another name for monolithic architecture and service-oriented architecture (SOA)?
Let’s clarify that for you!
Read more: Progressive Web App Development: 10 Benefits
Microservices Architecture Vs. Monolithic Architecture
In the current age of Kubernetes, Monolithic architecture faces many limitations. Please note a few:
- Monolithic architecture is a single application. It is generally released once a year with the newest updates. Whereas, Microservices architecture is cloud-based and can be updated as required.
- Monolithic architecture is slow. Modifying a small section may require complete rebuilding and deployment of the software. Microservices on the other hand are faster to deploy and quick to isolate any defects.
- Monolithic architecture is harder to adapt to the specific or changing product lines while individual models of Microservices architecture enable scaling and development.
Microservices Architecture Vs. SOA (Service-oriented Architecture)
Microservices architecture is distinct from SOA. Here are a few differences:
- SOA model is dependent on ESBs and so it is slower. Whereas, microservices is faster as it leverages faster-messaging mechanisms.
- SOA focuses on imperative programming style, while microservices focuses on a responsive-actor programming style.
- SOA has an outsized relational database. But Microservices architecture tends to use NoSQL or micro-SQL databases.
Business Benefits Of Microservice Architecture
Microservices architecture can help your business grow quicker, increase productivity, and innovate better to deploy competitive products into the market. Here are some specific benefits of the Microservices architecture:
1. Better organization for efficiency
Microservices architecture organizes business applications. It can extend those applications to support plugins for new features, devices, etc. You can easily add more features to each of those popular applications to generate more revenue.
2. Increased scalability
Microservices architecture divides applications into smaller modules. Each of these modules can operate independently enabling businesses to scale applications up or down, as required. As these modules operate independently, a fault in the single module does not mean disruption of the entire system.
If one module fails due to outdated technology or the inability to further develop the code, developers can use another module. In other words, the applications continue to function even when one or more modules fail.
This capability allows developers the freedom to build and deploy services as needed without having to wait for the entire application to be corrected.
3. Easy to maintain
It is easier to maintain and test a single module as opposed to an entire system. Since each module has its own storage and database, organizations can build, test, and deploy all the modules with less complexity.
4. Faster development
Since all modules are loosely coupled, change in one module does not affect the performance of the other. This means you can update a single module at a time leading to faster development.
5. Enhanced performance
Microservices architecture can enhance the performance of the application. It reduces downtime while developers take their time to troubleshoot the issue and bring the system back to normalcy.
6. Dynamic yet consistent
The individual modular approach in Microservices architecture is easy to replicate. This allows for consistency in applications, which in turn makes managing these modules simple and easy.
Prominent Examples of Successful Microservices Implementation
Prominent examples of Microservices architecture are Amazon, Netflix, Uber, and Etsy. Over time these enterprises refactored their monolithic applications into Microservices-based architectures. This move has helped to quickly achieve scaling advantages, greater business agility, and unimaginable ROIs.
In the early 2000s, untangling dependencies was a complicated process for Amazon developers. It faced development delays, coding challenges, and service interdependencies.
However, Amazon assigned ownership to each independent service team. This allowed the developers to identify the bottlenecks and resolve issues more efficiently. Also, it helped them create a very highly decoupled architecture.
Within a year of starting its movie-streaming service, Netflix was suffering from service outages and scaling challenges. It experienced major database corruption and was on standstill for three days! That is when it decided to move towards more reliable, horizontally scalable systems in the cloud.
First, Netflix moved its movie-coding platform to cloud servers as an independent microservice. This allowed Netflix to overcome its scaling challenges and service outages.
Uber, the ride-sharing service faced growth hurdles. It struggled to launch new features, fix bugs, and integrate its global operations. Besides, it became increasingly difficult to make minor updates and changes to the system.
Uber then decided to move to cloud-based microservices. This allowed its developers to build individual functions like trip management or passenger management. This boosted the speed, quality, and manageability of their services. Among other things, they achieved more reliable fault tolerance.
Etsy experienced poor server processing time. However, with the help of Microservices architecture, Etsy created a variety of developer-friendly tools and went live in 2016. From that point forward, Etsy benefits from a structure that supports continual innovation, faster upgrades, and more.
How Fingent Can Help You Implement Microservices Architecture
Microservices architecture supersedes SOA and monolithic models. However, it has its challenges. This is where Fingent comes to your assistance.
Fingent can help you implement Microservices Architecture correctly to improve your productivity and ROI. Designing your architecture is not just a technological option. It is a necessity! It is a business decision that can directly affect your business growth. Fingent can help you take care of the technical aspect while you concentrate on your business goals. Give us a call and let’s get talking.
|
<urn:uuid:cf89c328-7ec3-4ae5-a76e-f256331773ce>
|
CC-MAIN-2022-40
|
https://www.fingent.com/blog/is-microservices-architecture-the-next-best-move-for-businesses/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00241.warc.gz
|
en
| 0.924109 | 1,429 | 2.875 | 3 |
Finally a viable Layer 1 solution!
You no longer have to choose between performance/features and affordability. ColdFusion is the ONLY Layer 1 Switch to offer:
How does a Layer 1 Switch Work?
The easiest way to think of a Layer 1 switch, also known as a physical layer switch, is as an electronic patch panel. Completely transparent connections between ports is performed based on software commands sent over the control interface. In testing environments, this allows the tests to be as accurate as if there were a patch cord between the devices.
Why is Layer 1 Important?
Automating the physical connectivity in a lab environment is the final roadblock to total automation. If the physical layer is manually configured, human intervention will always be required. This also supports a low-contact, work-from-anywhere environment.
Test configurations executed in seconds vs. hours
Secure, remote access to lab from anywhere
Human error, errant results, and retests are eliminated
Equipment use is maximized
What is a Layer 1 Switch
The easiest way to think of a Layer 1 (L1) switch, also known as a physical layer switch, is as an electronic patch panel. Completely transparent connections between ports is performed based on software commands sent to the L1 switch over its control interface. In testing environments, this allows the tests to be as accurate as if there were a patch cord between the devices.
Using Layer 2 (L2) switches for connection purposes in a test environment can cause a number of problems. In L2 switches, the output bit stream is different from the input bit stream in that MAC control frames, such as pause frames, are discarded by the MAC layer. Also the differing clock timing forces the PHY on the L2 switch to add/delete idle characters to compensate, making it impossible to compare input data streams to output data streams when testing using an L2 switch.
L1 switches on the other hand, are fully transparent to the traffic going through them. Once a connection is made between two ports the attached devices are essentially directly connected. L1 switches have very low latency and do not store or manipulate a single bit in the data stream. The internal hardware architecture of the L1 switch allows the L1 switch to duplicate any incoming data to any number of output ports at full wire speed without dropping a single bit. This allows testing of multiple devices from a single test set or output. Software simulation of cable breaks (port flapping) can also be simulated using software-defined duration times and repetitions of the simulated cable break.
|
<urn:uuid:bc32f6aa-9cae-42fc-b3e5-519d60077bb2>
|
CC-MAIN-2022-40
|
https://www.leptonsys.com/home
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00241.warc.gz
|
en
| 0.94469 | 523 | 2.984375 | 3 |
A cybercrime investigator’s main goal is to collect information from digital networks that can be used in the investigation of internet-based, or cyberspace, illegal activity. Many crimes in today’s world include the use of the internet. A cybercrime investigator may help collect vital information to aid in the investigation of these crimes.
Although a cybercrime investigator has many of the same capabilities as a computer forensics investigator, they are particularly focused on and skilled at solving crimes that use the internet as the primary attack vector.
Cyber-attacks by hackers, foreign rivals, and terrorists are investigated by the cybercrime prosecutor. Cybercriminals pose a substantial — and – — danger. Cyber-attacks are becoming more frequent, more dangerous, and more sophisticated.
Every minute of every day, adversaries threaten both private and public sector networks. Universities are criticized for their research and growth, while companies are punished for trade secrets and other sensitive data. Identity thieves target citizens, and online predators target children. For the effective prosecution of these crimes, the ability to store and retrieve digital evidence is crucial.
Steps to becoming a cybercrime investigator
To work as a cybercrime investigator, you’ll need a mix of knowledge and experience. Both cybersecurity and investigations should be covered by this education and practice, or a mixture of both.
Education is quite important. To work as a cybercrime investigator, you’ll typically need a bachelor’s degree in criminal justice or cybersecurity. Few community colleges offer two-year associate degrees in criminal justice, which can be transferred to a four-year college or university to obtain a bachelor’s degree. A computer science degree is also advantageous for work as a cybercrime investigator.
Choosing a career path A typical career path for this investigation specialty involves working as a member of a cybersecurity team for many years. A solid understanding of cybersecurity protections gives the applicant a solid foundation for predicting how cybercriminals would behave in a number of situations. Employment in a discipline that has helped the candidate develop investigative expertise is highly valued in the industry.
Certifications for professionals Although no industry-wide technical credential is needed to work as a cybercrime investigator, two certifications stand out as desirable qualifications.
The Certified Information Systems Security Professional (CISSP) credential validates an applicant’s knowledge of security architecture, engineering, and management. The Certified Ethical Hacker (CEH) also shows a thorough understanding of cyberattacks and countermeasures.
Experimentation Since the knowledge base necessary to be a good cybercrime investigator is cross-functional in several ways, it is a job ideally suited for cybersecurity or criminal investigations professionals with experience. Even if an applicant has one of the above-mentioned bachelor’s degrees, he or she is unlikely to have the required expertise in both cybersecurity and investigations. Experience in the field will allow you to combine your cybersecurity skills with a strong understanding of investigative principles and procedures, or vice versa.
What is a cybercrime investigator?
An investigator or detective who specializes in cybercrime is known as a cybercrime investigator. These investigators are in high demand in both the private and public sectors because they have the expertise to solve today’s complex internet crimes.
Every year, billions of dollars are spent fixing networks that have been harmed by cyberattacks. Some take down critical infrastructure, causing hospitals, banks, and emergency call centers around the country to experience disruptions and, in some cases, outages. The cybercrime investigator collects the requisite information to prevent cybercriminals from carrying out their nefarious activities.
Cybercrime investigator skills and experience
In order to properly collect and retain evidence for later trial, this is a multi-functional position that requires both forensic techniques and cybersecurity skills.
It’s crucial to be able to function in a multi-jurisdictional or cross-jurisdictional setting. The nonlocal nature of cybercrime is a significant feature. Illegal activity may take place across large distances between jurisdictions. This presents significant difficulties for cybercrime investigators, as these crimes often necessitate international cooperation. Is it, for example, a crime if an individual accesses child pornography on a computer in a country that does not prohibit it from being accessed in a country where such materials are prohibited? The cybercrime investigator must be able to pose and answer questions in order to determine the precise location of cybercrime.
What do cybercrime investigators do?
The majority of cybercrime analysts work for law enforcement, consulting firms, or businesses and financial institutions. Like white hat hackers, cybercrime investigators may be recruited full-time or on a freelance basis in some cases. The investigator’s job entails examining the defenses of a particular network or digital device, while also offering penetration testing (pen testing) services. The aim is to find security flaws or vulnerabilities that could be abused by real-world adversaries.
Investigators must archive and catalog digital evidence after gathering it. The proof is often used to compile reports and is presented in court. A cybercrime investigator may perform all of these tasks.
Cybercrime investigator job description
A cybercrime investigator is a specialist who focuses solely on cyber, or internet-based, crimes. While a detective or law enforcement investigator can investigate a variety of crimes, a cybercrime investigator is a specialist who focuses solely on cyber, or internet-based, crimes.
A cybercrime investigator looks at a variety of crimes, from retrieving file systems on compromised or destroyed computers to investigating crimes against children. Cybercrime authorities also recover data from devices that can be used in criminal prosecutions.
Cybercrime investigators write reports that can be used in court until they have obtained all of the relevant electronic evidence. In addition, cybercrime analysts are required to testify in court.
Large companies may hire cyber crime investigators to test security systems that are already in place. Investigators do this by attempting to break into the company’s data networks in a variety of ways.
Job responsibilities may include:
- Following a crime, examining operating systems and networks.
- Data that has been lost or corrupted must be recovered.
- Obtaining proof
- Information about computers and networks is gathered.
- Cyberattacks are being reconstructed.
- Working in a cross-jurisdictional or multi-jurisdictional environment.
- Creating expert reports on extremely complicated technological issues.
- Giving evidence in court.
- Law enforcement officers are being trained on cyber-related problems.
- Expert testimony, affidavits, and reports are all written by me.
- Clients, bosses, and administrators were all consulted.
- Via research and training, I’m constantly honing my investigative and cybersecurity skills.
- Recovering password-protected/encrypted files and information that has been hidden.
- Detecting security bugs in software programs, networks, and endpoints.
- Determine and propose strategies for proof preservation and presentation.
- The willingness to work and communicate effectively as part of a team.
Outlook for cybercrime investigators
Since computers and the internet were widely adopted in the United States early on, the majority of the first victims of cybercrime were Americans. By the twenty-first century, however, there was scarcely a culture on the planet that had not been affected by cybercrime of some kind. The demand for cybercrime investigators is now global and increasingly increasing. The market for cybercrime investigators does not appear to be slowing down in the near future.
The rise in online criminal activity, such as identity theft, spamming, email abuse, and unauthorized downloading of copyrighted materials, would increase the demand for investigators. For cybercrime investigators, the prospects are expected to be outstanding.
How much do cybercrime investigators make?
Information security specialists (a closely related specialty to cybercrime investigators) earned a median annual salary of $98,350 in 2018, according to the US Bureau of Labor Statistics (BLS), while police and detectives, in general, earned a median salary of $63,380. (www.bls.gov). According to the Bureau of Labor Statistics, demand for this closely related specialty is expected to rise 32% from 2018 to 2028, much faster than the national average.
According to other reports, career growth would be at least 22% (the expected rate of growth for private investigator jobs) and likely higher than 27%. (the projected rate of growth of computer-support-related jobs).
|
<urn:uuid:50b3e200-009a-4dc4-927d-4aabb47382cb>
|
CC-MAIN-2022-40
|
https://cybersguards.com/a-step-by-step-guide-to-being-a-cybercrime-investigator/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00241.warc.gz
|
en
| 0.933802 | 1,724 | 2.984375 | 3 |
When you snap a digital photo with your camera or phone, it stores more than just the pixels and colors that make up the image. Each image file also contains metadata, which includes details ranging from creation date and copyright info to the location where the photo was taken.
The same goes for images modified with many photo editing programs. Image editing programs often add metadata to images including modification timestamps, system info, and tracked changes.
Metadata can pose a privacy threat to people who share and post photos online. Although some social networks and photo storage and sharing sites scrub metadata from uploaded photos, many fail to do so, Comparitech researchers say, which could allow attackers to gather personal information from images posted online. For example, if someone posts a vacation photo with GPS coordinates and a timestamp in the metadata, an attacker could easily find when and where they traveled.
Metadata can be categorized into three broad categories:
- System metadata is generated when the image is stored (i.e. when a photo is taken or edits are saved). It includes specific labeled criteria, like the date and time the image was created and details about the camera and/or editing software used
- Substantive metadata include the contents of the actual file, such as tracked changes to an edited image
- Embedded metadata include data entered into a document that is not normally visible, such as formulas in an Excel spreadsheet
Image metadata can be embedded internally in common image file formats like JPEG and PNG. Such image data is usually stored in Exif (exchangeable image file format). But it can also exist outside the image file in a digital asset management (DAM) system. These are sometimes referred to as “sidecar” files, and are often stored in the XMP format.
Metadata has three broad use cases:
- Describing the contents of the file, including keywords, names of persons pictured, and location coordinates
- Administrative data can include the creation date, modification date, location, and other system metadata mentioned above.
Which image sharing services scrub metadata and which ones don’t?
Comparitech researchers analyzed the metadata scrubbing practices of 12 popular image storage and sharing services online. They uploaded an image of the Mona Lisa loaded with metadata to each of the services. After the upload, they then downloaded the image from each respective service to see if the metadata remained intact or not.
Let’s start with the most popular places to share images on the web. Imgur, Facebook and Instagram all scrub all metadata from photos upon upload. You don’t have to worry about leaking metadata when uploading images to these sites. Bear in mind, however, that even though users of those sites don’t have access to metadata, the sites themselves do.
Flickr keeps all of the original metadata data and even displays a lot of it on each photo’s web page.
Photobox.co.uk tags photos in the metadata comments section to indicate that uploaded images are compressed. The rest of the metadata is intact. It was the only service that actually added or modified data.
The remaining image sharing and storage services we examined didn’t remove or modify any metadata except for “date modified” timestamps:
If you don’t want to expose EXIF metadata on those sites, you’ll have to scrub images beforehand. More on how to do that below.
How you can be tracked using EXIF metadata: research examples
Comparitech researchers proved the sensitivity of image metadata by using publicly available images to track down image subjects and creators. (Note: we’ve scrubbed all of the following images of their original metadata).
Let’s start with a simple example. Using the GPS metadata in the above photo, we determined it was taken near Sørstranda, Norway.
The next subject was a photo of a man’s face. Using the image metadata, reverse image search, and a bit of open-source intelligence (OSINT), researchers were able to identify him as a previous game-show contestant. They found his country, date of birth, wedding date, spouse’s name, Facebook profile, Twitter account, LinkedIn page, Instagram account, work experience, skills, education, and interests. Researchers were also able to identify and find info about the subject’s game-show teammates as well.
Another subject was a passport-style headshot featuring a man in what appear to be military fatigues. Researchers were able to track down the image to a site with photos of the subject’s school graduation. Using the school name and graduation gallery, researchers retrieved the names of everyone in his graduating class. With the possibilities narrowed down, they found a man with a name similar to that of the image filename. Researchers went on to find the man’s Facebook and Instagram profiles. Using these images, they further discovered he was indeed a soldier. They learned his division and brigade, and info about his closest relatives.
Lastly, researchers identified a Philippine national using a photo of herself posted on an image-sharing site. The subject is holding up photo identification. Such photos are often used to verify the subject’s identity to a digital service, such as an online bank. Researchers were able to find out the subject’s country, birth date, weight, height, blood type, address, Facebook profile, job, education, that she recently had Covid-19, and her Youtube channel.
Metadata used as court evidence
Metadata from images and other files has been used as evidence in courts of law and police investigations, demonstrating metadata’s value from a privacy perspective. Here are a few prominent examples:
- In 2016, two Harvard students used GPS coordinates stored in the metadata of photos posted on the dark web to identify drug dealers 229 drug dealers. Dark web drug dealers often post images of their products online to help prove their credibility, but they often forget to scrub EXIF data beforehand.
- In 2017, an employee of Bio-Rad Laboratories filed a suit against his employer alleging he was fired for telling authorities about potential bribery in China. A performance review with a metadata timestamp dated after he was fired served as evidence in the case, resulting in a higher payout for violating laws against firing whistleblowers. This is the biggest metadata-linked payout to date at $10.8 million in damages.
- In 2015, a judge threw out a case in which a woman accused her spouse of physical abuse. The plaintiff provided several photos as evidence of abuse, but the metadata indicated the date that the wife had claimed the abuse occurred three months after the photos were taken.
- Digital forensics company Legility published a case study (PDF) describing a lawsuit it investigated. In that case, a healthcare company acquired another business. Employees from the original company left to start their own company. The now-acquired company sued the new company, alleging it poached employees and stole trade secrets and proprietary documents including customer lists. Using the metadata of those documents as evidence of when documents were copied and transferred, the acquired company was rewarded $7 million (GDP £5.1 million).
How to remove metadata from images
Cameras and camera apps vary quite a bit, but many of them have an option to turn off or limit the generation of metadata. Check your camera or app settings.
Most cameras and image editing programs store image metadata in the EXIF format. You might be able to edit EXIF data on exiting images through your camera or photo editing app.
Windows 10 comes with a built-in option to remove metadata. However, this will only remove metadata that Windows 10 understands, which means it could leave some metadata behind. Still, it should at least help minimize the information stored in images. If you’re a PC user, just follow these steps:
- Right-click the image file and select Properties to open a new window
- Click the Details tab at the top
- Click the link that says Remove Properties and Personal Information at the bottom. Another new window will pop up.
- In the Remove Properties window, select Remove the following properties from this file:
- Click Select All, then OK
|
<urn:uuid:283db780-4ff8-4acd-bc91-c61fe8a95172>
|
CC-MAIN-2022-40
|
https://www.comparitech.com/blog/vpn-privacy/exif-metadata-privacy/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00241.warc.gz
|
en
| 0.939968 | 1,765 | 2.921875 | 3 |
Enterprises large and small are aware of the threat of cyberattacks and security breaches. Many spend large portions of their annual budgets on mitigation tools like hardware and device firewalls, antivirus software, and tools for constantly monitoring. However, companies may be missing one major avenue of breaches and cybercrimes—people.
Unsuspecting employees are often the targets of malicious actors using social engineering techniques. According to an Accenture study, the average annual cost to a company as the result of phishing and/or social engineering was $1.4 million per year in 2018.
In a recent survey conducted by Electric, 71% of IT professionals indicated an employee at their organization had succumbed to a social engineering attack since the pandemic began.
All companies, and in particular SMBs, need to take the threat of social engineering attacks seriously. As we continue to live in a world with more remote-first and dispersed workforces, it’s likely that nefarious actors will continue to find new ways to exploit the situation.
What is a Social Engineering Attack?
Social engineering attacks are breaches or incidents that initially target people rather than devices or software. The attacks attempt to exploit human behavior and weaknesses rather than try to “break in” to a company’s cybersecurity defenses using technical skills. They can take place in person and over the phone, but more recent successful social engineering attacks have been facilitated via email or social media.
Some of the most infamous hacks in recent years — Sony Pictures, Target, and the Democratic Party in 2016 — were the result of social engineering attacks.
The 4 Most Common Types of Social Engineering Attacks
Below we discuss some of the most common types of social engineering attacks, and how to prevent them.
Phishing is likely the most widely used type of social engineering attack. Scammers use emails (and increasingly text messages) to trick victims into divulging sensitive information. These emails and messages appear to come from a trusted source like an IT employee or a known vendor or contractor.
The messages often appeal to a sense of urgency by informing the reader that “something is wrong with an account” or an “invoice needs to be paid immediately.” Readers may be encouraged to click on a link where they will inadvertently enter credentials or financial information. Phishing emails may also direct the reader to download a file which usually contains malware.
There is a variant of phishing known as “whaling” or “spear phishing.” If you were to think of phishing as casting a wide net, whaling is more targeted. Instead of sending hundreds of employees a generic email, whaling and spear phishing attacks target a small number of employees, usually ones with a high level of authority.
One of the most infamous spear phishing attacks in recent years was of John Podesta, chair of Hillary Clinton’s 2016 presidential campaign. Podesta received a fraudulent email appearing to be from the Gmail security team. He followed a URL to a fake log-in page where he entered his credentials. The group behind this social engineering attack was a Russian hacking group that gave the contents of the email account to Wikileaks.
Baiting is a social engineering attack that takes advantage of our natural curiosity and desire for information. The “bait” is often insider information that the victim would not normally have access to. One way that this social engineering attack is performed is by a hacker leaving a USB drive in a conspicuous place inside or near an office. It usually has an enticing label (e.g., board meeting minutes, employee salaries) that will tempt the finder into taking the device and plugging it into their machine. However, the USB likely contains malware that will give the hacker more access to a company’s network.
With fewer people in offices due to the rise of remote work, other forms of baiting are becoming more common. Similar to a phishing attack, a victim may be lured into downloading a digital file that also contains malware.
Tailgating is an old-fashioned hacking technique, but malicious actors still find it effective. Someone posing as an employee of a company will follow an actual employee inside of a building or restricted area by pretending to have forgotten their key card. The malicious person may also pose as a delivery person attempting to drop off a package. Once inside, the attacker may try to install malicious software on unsupervised terminals or plant USB keys around for a future baiting social engineering attack.
You can think of pretexting as a more sophisticated step up from phishing. Hackers engaging in pretexting build a seemingly trusting relationship with their victim by impersonating someone known to them. This might be through a series of emails, text messages, and possibly phone calls. Once the relationship is established, the hacker may ask the victim to disclose sensitive information, usually in the guise of needing it to be able to do their job. The victim assumes that the request is legitimate and there is nothing out of the ordinary about it.
According to the Wall Street Journal, a hacker recently used a mix of pretexting and an AI-generated voice of the CEO of a German company to convince the CEO of its UK subsidiary to transfer $243,000 to a Hungarian supplier. The victim thought he received a call from the actual CEO of the parent company in Germany. In actuality, the AI-generated call replicated the voice and German accent of the impersonated CEO well enough to get the UK subsidiary CEO to perceive it as his boss’s voice.
How Can I Prevent Social Engineering Attacks?
No amount of antivirus software or network firewalls is going to prevent an employee from giving information to someone that they think that they know and trust. The first step in defending against social engineering attacks is educating your workforce on its existence and the problems it can cause. This should include ongoing training about commonly used and new cyberthreats so employees know what to look for.
Although education is key, here a few simple steps you can take today to avoid falling victim to social engineering attacks:
Hover over all hyperlinks before clicking on them to confirm the URL directs to a legitimate site
Tell anyone who asks for sensitive information that you will call them back at their phone number or email address listed in the company directory.
Be wary of messages asking for sensitive information; forward them to your IT or security department.
In addition, use real-world examples to further explain the threat of social engineering. Many people know about the large hacks and data breaches that companies have dealt with. However, few know the actual facts of the cases, and that many were the result of someone simply being fooled by a phishing email.
We understand how grievous a social engineering attack can be to your organization and are always focused on providing you with the best-practice recommendations for security management that will keep your organization’s data well-protected. Figuring out all your bases to cover is not an easy process to navigate, especially in times like these— and that’s why Electric is here to support your organization.
|
<urn:uuid:397c24dd-d6db-437b-b55a-74524b5e1979>
|
CC-MAIN-2022-40
|
https://www.electric.ai/blog/types-of-social-engineering-attacks-how-to-protect-against-them
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00241.warc.gz
|
en
| 0.962643 | 1,468 | 3.109375 | 3 |
There are three basic principles to consider when deciding how to provide access to sensitive data in a secure manner, namely: Confidentiality, Integrity, and Availability. These principals are collectively known as the CIA triad.
The level of confidentiality will naturally determine the level of availability for certain data. Confidentiality is a question of how, and where, the data can be accessed. To ensure confidentiality, one must safeguard the data using encryption as well as protecting the physical network and storage devices. However, it’s not only attackers monitoring the network for sensitive information that we need to be concerned about, but we also need to watch out for ‘social engineering’ attacks. Social engineering attacks are when a user is deceived and manipulated in a way that encourages them to hand over certain sensitive information. Such attacks are becoming increasingly more common, and increasingly more sophisticated. Since such attacks are based on erroneous human actions, they are not easy to monitor and prevent. Training must be provided which ensures that staff members are vigilant and able to identify such attacks.
Data has integrity if it is accurate and reliable. To maintain the integrity of the data, we need to focus on both the ‘contamination’ and ‘interference’ of the data – or in other words – the data that is stored on disk, and the data that is transmitted. While we are often made aware to the existence of certain viruses circulating the web, it is often the case whereby a disgruntled or troublesome employee – such as a programmer – installs a back-door, leaving the data open to attack. Network monitoring, encryption, and strict access controls can be used to protect against these kinds of attack. The integrity of the data can also be compromised in various non-malicious ways, such as incorrectly entering data or using the wrong applications to edit the data. The system should be setup to check against such eventualities and alert the users accordingly. Encryption techniques can also be used to ensure that information isn’t being tampered with during transit.
There are many factors which may affect the availability of your system, such as: faulty or mismanaged network devices, network congestion, configuration changes, power outages, denial of service (DoS), as well as various environmental factors such as fire’s, hurricanes etc. According to the University of Michigan, 23 percent of total network downtime is attributed to router failure, which is often the result of configuration changes. Availability of information doesn’t necessary imply that all information must be available on request. If you are frequently storing large amounts of data, you may not have sufficient storage space and may be required to utilize an offline storage unit.
|
<urn:uuid:33636731-84d6-4757-9ba3-3bc7ade13f1e>
|
CC-MAIN-2022-40
|
https://www.lepide.com/blog/cia-triad-the-basic-principals-of-data-security/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00241.warc.gz
|
en
| 0.951822 | 612 | 3.609375 | 4 |
Interest in broadband connectivity and accessibility has a direct impact on education and opportunity through all levels of a person’s education. Some people are raised in communities with Advanced Placement (AP) courses throughout their high school years and a computer for every student. Other districts struggle to have updated computers students must share and no exposure to AP courses.
One area in this example can be seen is with coding, an early requirement of any IT curriculum. Early exposure to coding shows a direct impact on real-world problem solving, critical thinking, and creativity skills, according to the national education nonprofit Project Tomorrow. These courses may be taught in school but use online resources that can stretch an impoverished district’s resources; access to the internet in remote areas can complicate a budding IT mind’s future path and limit exposures readily accessible to others. “Student interest in learning coding transcends gender, grade level, community type and home poverty, and that interest is growing–middle school student interest in learning coding increased by 23% in just three years,” according to Project Tomorrow.
In March 2020, many schools had to turn classroom-based curriculum into online learning on a dime and major at-home connection issues with multiple children needing equipment and bandwidth became readily apparent. Years before the pandemic, the United Nations General Assembly passed a resolution in 2016 that internet connectivity was a human right and “expressed concern that many forms of digital divides remain between and within countries and between men and women, boys and girls, and recognizing the need to close them.”
The UN also strongly emphasized its partner states should echo this concern and look within themselves to the “importance of applying a comprehensive human rights-based approach when providing and expanding access to the Internet and for the Internet to be open, accessible and nurtured by multi-stakeholder participation.”
Steps had already been taken in this direction by individual countries such as Finland, which passed an amendment in 2010 to its Communications Market Act. While stopping short of calling it a human right, Finland became the first to declare broadband “a legal right for all its citizens, entitling them to a one megabit per second broadband connection now, with a 100-Mbit/s connection to become a right by the end of 2015.”
Looking beyond the entertainment value that broadband connection provides one can examine the broader picture of what internet access allows. The World Bank Group, an observer of the United Nations Development Group wrote a brief on the subject highlighting the economic and human development byproducts of broadband connection. It even specifically called out workforce growth in information and communication technologies. Online outsourcing of task-based work could provide millions of jobs worldwide, along with job skill development access.
With integrated accessibility not currently in place, “The challenge is to expand broadband access, especially in rural areas. Even ‘digital divides’ in access exist across regions and countries, such divides within countries have a disproportionate impact on rural communities and the poor.”
The World Bank Group is made of five institutions that make leveraged loans through a charter based on reducing poverty, increasing shared prosperity, and promoting sustainable development. It is headquartered in Washington, D.C.
Focusing on stateside matters, the United States legislation is moving on President Joe Biden’s call to Congress in June to pass a $1.2 trillion Bipartisan Infrastructure Framework as part of his Build Back Better vision. Now called the Infrastructure Investment and Jobs Act, it has $65 billion appropriated for a broadband package. Approved by the Senate in August, the House of Representatives is set to continue debates Monday, September 27. As of press time, the House was still debating the bill, which it has tied to other legislation pending approval.
broadband infrastructure that provides minimally acceptable speeds – a particular problem in rural communities throughout the country.”
Key Broadband Portion Elements:
Help lower prices for internet service by:
- Requiring funding recipients to offer a low-cost affordable plan
- Creating price transparency and helping families comparison shop
- Boosting competition in areas where existing providers aren’t providing adequate service
- Pass Digital Equity Act
- End digital redlining
- Create a permanent program to help more low-income households access the internet
The idea sounds great, but the funding should be scrutinized, according to University of Florida Director, Digital Markets Initiative, Dr. Mark A. Jamison. Reducing the digital divide requires proper oversights of broadband appropriations from the act to avoid a repeat of 2009’s $7.4 billion in broadband stimulus projects through the America Recovery and Reinvestment Act, he told AOTMP® in a phone interview.
That interview, September 2, was coincidentally scheduled on a day that Jamison was waiting to read his copy of the Gainesville Sun. Jamison, who is also a non-resident Fellow with public policy think tank American Enterprise Institute, had written an op-ed piece for the paper calling on the state to not waste any funds sent its way.
“If you live in rural Florida, you are 20% less likely to have broadband available to you than if you live in an urban area. If you are Black or Hispanic, you are 13% to 20% less likely to have broadband than if you were white,” Jamison stated in his editorial.
He told AOTMP® that many types of communities can benefit from broadband, and while some populations are thirsty for advanced technologies, others may be unaware of its value. A provision of the program should include targeted efforts to help people better understand what ways and how broadband can benefit them.
He offered these suggestions to guide those who may be in positions to disburse funds:
- Define broadband gaps using mapping from the Technology Policy Institute
- Write project descriptions to include clear measurables
- Establish transparent, efficient process for competition of funds
- Measure results, successes, and failures
He is a strong proponent of the reverse auction process for broadband. “What people should do, once you’ve identified the projects, is use the FCC’s reverse auction process. They’re the best in the world. The first auction saved 70% monetary savings of what the spending is for broadband. If you’re a state that gets money through the bill, if you run it like a beauty contest you will spend it all. Instead of $1 billion, you would only spend half a billion and be able to do additional programs and get a lot more bang for your buck. “I would encourage people in areas where there is no broadband, talk with ISPs and ask them to apply for the subsidy to serve your area. If you do it yourself, chances are they can come in at an auction with a lower price.”
And as simple as it sounds, Jamison recommends foregoing paying until service is delivered, a practice that has not been historically followed.
“Paying before is the way broadband has been done in the past. Make sure the providers are delivering broadband before you pay them. Pay after the service is delivered, just like construction projects work. That’s what I would encourage the states to do as well.” Passing the bill is not so much as important he emphasized as what happens after. “Whether this is important will depend on how it is handled. If it is spent on developing infrastructure through competitive processes, we’ll get more bang for our buck than if it follows traditional government grant processes.”
How will individuals benefit in any specific form? This was our closing question for Jamison. His answer?
“Can’t answer until we know what will be done.”
|
<urn:uuid:efc3fee9-aff4-4db6-9050-418cba3d719e>
|
CC-MAIN-2022-40
|
https://aotmp.com/insights-article/broadband-questions-yet-to-be-answered-in-infrastructure-bill/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00241.warc.gz
|
en
| 0.953314 | 1,585 | 3.46875 | 3 |
Developing Supervisory Control and Data Acquisition (SCADA) troubleshooting skills is a subset of any good "SCADA Training" program.
SCADA troubleshooting is a key component of keeping your SCADA Remote Monitoring and Control system online. If you can't quickly diagnose and solve the problems that crop up, you really can't do your job effectively.
The basis of effective SCADA troubleshooting is a solid grasp of SCADA fundamentals. What are Remote Telemetry Units (RTUs)? How are they different from Intelligent Electronic Devices (IEDs)? How does a SCADA master collect network alarm management data from remotes? If you don't fully understand how your SCADA monitoring system works, how will you keep it running properly?
To develop the SCADA background knowledge you need, you must find a source you can trust to provide you with good information.
The best information comes from experts with hundreds of successful system installations. Not only do they know what they're talking about, but they also know what information is relevant and what can be left out. That's the kind of real-world expertise you need for effective SCADA troubleshooting.
You need to see DPS gear in action. Get a live demo with our engineers.
Download our free SCADA tutorial.
An introduction to SCADA from your own perspective.
Have a specific question? Ask our team of expert engineers and get a specific answer!
Sign up for the next DPS Factory Training!
Whether you're new to our equipment or you've used it for years, DPS factory training is the best way to get more from your monitoring.Reserve Your Seat Today
|
<urn:uuid:d8740d6c-642c-4956-bc79-51930d1b0b8a>
|
CC-MAIN-2022-40
|
https://www.dpstele.com/scada/troubleshooting.php
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00241.warc.gz
|
en
| 0.924062 | 342 | 2.546875 | 3 |
You already know IoT can help your business. But did you know that IoT can help the planet at the same time?
To achieve global climate goals, we must reduce methane emissions while also urgently reducing carbon dioxide emissions,” explains climate scientist Drew Shindell. “The good news is that most of the required actions bring not only climate benefits but also health and financial benefits, and all the technology needed is already available.”
IoT and location-based services are part of the technological solution. They allow fleet managers to set smart sustainability goals and meet them — all while improving efficiency and productivity.
Download the eBook, “IoT for Good: Location-Based Services Aids in Sustainability”
|
<urn:uuid:fee543fc-0183-4aea-be8f-98136d251cbb>
|
CC-MAIN-2022-40
|
https://www.korewireless.com/resources/ebooks/iot-for-good-how-data-driven-solutions-can-increase-fleet-sustainability
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00241.warc.gz
|
en
| 0.95227 | 150 | 2.5625 | 3 |
What is a Knowledge Base?
A Knowledge Base (KB) is a crucial part of knowledge management in any business. It is a collection of information that is organized and accessible to employees so they can find what they need on-demand. In this blog post, we will discuss the benefits of knowledge bases and how they can help improve IT management and online library services.
Benefits of Knowledge Management
The following are some primary knowledge management system benefits:
- Having a knowledge base will ensure that your customers have the information they need at their fingertips. This will help improve productivity and efficiency as they will not have to spend time searching for information or contacting IT or the contact center for assistance.
- Customers can access it to learn about new products, services, or procedures.
- They can also use it to find answers to common questions and problems.
- The knowledge base can be customized to meet the specific needs of your organization.
- It can even lower your cost because your IT helpdesk will be more efficient since they will be able to focus on more complicated customer tickets.
The benefits of knowledge bases are numerous, and they can help improve IT management, online library services, and employee productivity and efficiency.
Common Mistakes in Creating a Knowledge Base
However, there are some mistakes you need to avoid when it comes to a knowledge base. The following is a list of 11 most common ones:
No Knowledge Base At All
By not having a knowledge base, your customers will waste time searching for information on their own. They may also rely too much on email or other communication tools to share knowledge, leading to inconsistency and confusion.
Not Organized Well
If your knowledge base is not well organized, it will be difficult for customers to find the information they need. This confounding setup can lead to frustration and wasted time.
Incomplete or Outdated Information
If the information in your knowledge base is incomplete or outdated, it will be of no use to your customers. It is essential to make sure that all of the information in the knowledge base is accurate and up-to-date. You can assign a knowledge base owner who can be responsible for updating the KB whenever there is a platform or service update.
Difficult to Use
If users do not know how to use the knowledge base, it will be ineffective. Make sure the knowledge base is user-friendly and intuitive to use by performing usability tests. It is not enough to simply create a knowledge base - you also need to provide adequate training materials, so users know how to make the most out of it. If users do not understand how to use the knowledge base or where to find specific information, they will quickly abandon using it, or worse, turn to other sources that may be unreliable.
Not Enough Use
If your knowledge base is not used often enough, it will become irrelevant. It is important that you promote and encourage customers to use the knowledge base whenever necessary. You can do this by adding a chatbot that automatically directs users to the relevant KB sections.
Having Developers Write Your Knowledge Base
A KB is meant to be a collaborative endeavor that collects and organizes knowledge from various subject matter experts in your organization. It is a disservice to your customers and website visitors if the KB becomes inaccessible to the ones who will be using it the most. A developer may use language that is too specialized for the layman to understand. To avoid unnecessary dissatisfaction, it is a great idea to use natural language tools to make your KB information easier to understand.
Too Much Information
If there is too much information under one topic, employees will become overwhelmed and frustrated. It is crucial to find a balance between too much and not enough information. It is likely to become an organizational nightmare if one Q & A hosts information from multiple problems instead of each problem having a dedicated information bank. A KB is supposed to make it easier for users to find information, not make it more difficult.
If the KB is not well organized, or if the navigation is confusing, users will be unable to find what they are looking for. This can lead to frustration and dissatisfaction with the KB. It is important to test your KB design before launch to ensure that everything works as intended.
A search bar is another helpful feature you can add to your KB. This way customers with queries can find all applicable information faster using a few keywords instead of having to scroll multiple questions to find the answers they need. If a knowledge base does not have good search capabilities, it becomes almost useless to its users. Without the ability to easily find what they are looking for, people will quickly give up on using the resource. Make sure your knowledge base has a robust search feature so that users can easily find the answers they need.
Formatting issues can also occur in a knowledge base and cause user frustration. Incorrectly formatted information can look unprofessional and make it difficult for users to read. Fonts that are too small or dense text blocks will discourage people from reading the content. Make sure you use easy-to-read fonts, and space out the text so it is easier on the eyes.
Incorrect information due to disorganization or outdated data is another common issue in knowledge bases. When users come across this information, it can be frustrating and confusing. In fact, your customers can feel distrust towards your brand and business. To avoid losing your customer's confidence, you can run regular audits of your knowledge base to ensure there are no inaccurate instructions and all intelligence is the latest, factual material.
Not Using Visuals
Screenshots, diagrams, and infographics are all great ways of breaking down complex information into easy-to-follow steps. You can enhance the user experience by providing visual cues for your customers to follow. This is a powerful way to engage your customers further.
A knowledge base can be an extremely valuable asset to any organization when used correctly. By providing clear and concise instructions, customers can learn how to use various tools and applications quickly and effectively. To make the most of your knowledge base, avoid making the above common mistakes.
Learn more about how Giva's knowledge base software can help you provide the best in customer care!
|
<urn:uuid:392db491-a736-4bcf-b7ff-07727b862d66>
|
CC-MAIN-2022-40
|
https://www.givainc.com/blog/index.cfm/2022/2/24/avoid-these-common-mistakes-when-building-knowledge-base
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00441.warc.gz
|
en
| 0.928407 | 1,290 | 2.609375 | 3 |
For over a decade agile software development practices are considered a key to the productivity of software development teams. IT giants and world leaders in software development are using agile methodologies as part of a DevOps (Development and Operations) approach. DevOps is the mainstream methodology to developing, deploying and releasing high-quality software by teams of developers. The popularity of the DevOps approach is currently reflected on the fact that it is currently extended to cover all aspects of software systems. Prominent examples of such extensions can be founded in the MLOps and DevSecOps methodologies, which enhance DevOps to support Machine Learning and Security applications respectively.
DevOps is largely about using the right tools for all phases of software development, integration and deployment. Specifically, DevOps developers tend to be proficient in a wide range of tools that facilitate teams to implement agile practices. While there is a very large number of DevOps tools, some tools of them are used more frequently than others. In this post, we present seven of them.
DevOps is much about transparent and responsive collaboration between development teams. Jira is one of the most popular collaboration tools for DevOps teams. First and foremost, it facilitates the planning of complex software projects. Specifically, teams use Jira to create user stories and issues associated with their projects. Moreover, Jira enables the agile planning of iterations, including the planning of “Sprints” in Scrum terminology. It also enables project managers and developers to distribute tasks across the members of the software development team. Furthermore, Jira can facilitate the tracking of stories and development tasks. Tracking boosts the prioritization of critical tasks, while providing visibility and transparency on the progress of user stories.
Jira is also useful for the actual release of the product. It facilitates stakeholders to access up-to-date information about the scope and contents of each release. Likewise, the tool improves the performance of teams through enabling them to access visual data about the development tasks, but also to use these data in order to identify and remedy the pale points of the team’s development pipelines.
Nowadays many teams use Slack for their business communications. It is a communication platform that provides conventional IRC (Internet Relay Chat) like functionalities, such as topic-specific chat rooms (i.e. “channels”), private groups, and direct messaging. The Slack platform is therefore a collaboration hub which is much more versatile and much more powerful than e-mail. It facilitates natural collaboration between team members, which resemble face-to-face communications in intelligence, user-friendliness and flexibility. Conversations are segmented in public and private channels, as well as direct messages, rather than being restricted to one-to-one or group chats like how other popular tools operate. Overall, Slack eliminates conventional overstuffed inboxes and distributes messages in dedicated spaces that are conveniently called channels. Based on the channels’ mechanism, Slack facilitates members of DevOps team to follow conversations about specific development threads, while at the same time increasing the productivity of information searches.
DevOps is largely about automating development and deployment. In the development forefront there is a need for automating build processes. Maven is one of the oldest and most popular tools for builds’ automation. It is used to build and manage complex projects and their configurations. Maven was originally developed to support automation of Java projects. Nevertheless, over the years support for other popular languages (e.g., C#, Ruby, Scala) has been added. The operation of Maven relies on the specification and use of a specific configuration file, while is called POM (Project Object Model). Each POM file provides information for handling builds, dependencies and documentation of complex projects.
Continuous Integration (CI) and Continuous Delivery (CD) are the main productivity drivers of DevOps. Jenkins is a free and open source automation server for Continuous Integration and Continuous Delivery tasks. Specifically, the server automates building, testing, deployment, as well as other CI/CD tasks. It helps developers to build and test software projects by setting an automated CI/CD environment. Jenkins integrates very nicely with tools that manage versions and dependencies such as Git and Maven.
Software teams need to be flexible in setting up and deploying applications on a given host. To this end, they leverage the concept of containers, which package and run an application in a loosely isolated environment over a server. Docker is probably the most popular platform for setting up and managing containers. It uses virtualization at the Operating System level in order to package and deliver software within containers. Each container bundles its own software, libraries and configuration files. Furthermore, Docker allows communication between containers. The concept of containers bears similarities to traditional Virtual Machines (VMs). However, they exhibit much better performance than VMs given that they share the host kernel rather than emulating a full operating system. This is one of the main reasons why Docker-based applications exhibit very good performance. However, they are not quite as fast as applications running on the native Operating System.
A great deal of DevOps work involves deployment configurations and application delivery. DevOps does not look only at development tasks. It also caters for high performance, fine-tuned operations. In this context, the Chef tool helps to treat infrastructure as code. Specifically, it provides the so-called “Chef recipes” that describe the configuration of the servers where a software product is deployed. In this way, Chef boosts application delivery and facilitates DevOps developers and administrators’ collaboration. Specifically, it enables them to automatically provision and configures new machines for their DevOps projects. It supports solutions for both small and large scale systems while supporting most of the popular cloud-based platforms that are commonly used in DevOps projects.
Non-trivial DevOps applications use multiple containers and create a need for container-orchestration solutions. Kubernetes is an open source platform for container orchestration. It supports automated application deployment, scaling, and management of container applications. Kubernetes manages containerized workloads and services while supporting declarative configuration and automation of containers’ deployments. Kubernetes is associated with a rapidly growing ecosystem, which includes a wide range of support services and tools. Using Kubernetes, DevOps teams are capable of eliminating several of the manual steps that are involved in the deployment and scaling of complex containerized applications.
There are many more tools for DevOps development. However, the above-listed ones provide a sound basis for understanding and implementing DevOps projects. We strongly recommend that prospective DevOps developments become proficient with these tools, while at the same time investing on how to best combine them in the scope of their development and deployment pipelines. Overall, practicing these tools is one of the best ways to start DevOps on the right foot.
Significance of Customer Involvement in Agile Methodology
The role of CIOs in fostering an agile and innovative DevOps culture
Microservices: A Powerful tool for Business Agility
DevOps: A Modern Vehicle for Business Growth and IT Excellence
An Introduction to Continuous Integration and Workflows
Quantum Computing for Business – Hype or Opportunity?
The emerging role of Autonomic Systems for Advanced IT Service Management
Why is Data Fabric gaining traction in Enterprise Data Management?
How Metaverse could change the business landscape
We're here to help!
No obligation quotes in 48 hours. Teams setup within 2 weeks.
If you are a Service Provider looking to register, please fill out this Information Request and someone will get in touch.
Outsource with Confidence to high quality Service Providers.
If you are a Service Provider looking to register, please fill out
this Information Request and someone will get in
Enter your email id and we'll send a link to reset your password to the address
we have for your account.
The IT Exchange service provider network is exclusive and by-invite. There is
no cost to get on-board;
if you are competent in your areas of focus, then you are welcome. As a part of this exclusive
|
<urn:uuid:d7d1d2ef-15da-4498-8767-9413dc495573>
|
CC-MAIN-2022-40
|
https://www.itexchangeweb.com/blog/7-popular-tools-for-agile-software-development/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00441.warc.gz
|
en
| 0.925787 | 1,663 | 2.546875 | 3 |
Over the past decade, businesses and organizations have come to rely on the competitive edge afforded by predictive analytics, business modeling, and behavioral marketing. And these days, enlisting both data scientists and citizen data scientists to optimize information systems is an effective way to save money and squeeze the most from data sets.
What is a Citizen Data Scientist?
Citizen data scientist is a relatively new job description. Also known as CDSs, they are low- to mid-level “software power users” with the skills to handle rote analysis tasks. Typically, citizen data scientists use WYSIWYG interfaces, drag-and-drop tools, in addition to pre-built models and data pipelines.
Most citizen data scientists aren’t advanced programmers. However, augmented analytics and artificial intelligence innovations have simplified routine data prep procedures, making it possible for people who don’t have quantitative science backgrounds to perform a scope of tasks.
Except in the rarest of circumstances, citizen data scientists don’t deal with statistics or high-level analytics.
At present, most companies underutilize CDSs. Instead, they still hire experts, who command large salaries or consulting fees, to perform redundant tasks that have been made easier by machine learning.
What is a Data Scientist?
Data scientists — also known as expert data scientists — are highly educated engineers. Nearly all are proficient in statistical programming languages, like Python and R. The overwhelming majority earned either master’s degrees or PhDs in math, computer science, engineering, or other quantitative fields.
In today’s market — where data reigns supreme — computational scientists are invaluable. They’re the brains behind complex algorithms that power behavioral analytics and are often enlisted to solve multidimensional business challenges using advanced data modeling. Expert data scientists work with structured and unstructured objects, they also often devise automated protocols to collect and clean raw data.
Why Should Companies Use Both Expert and Citizen Data Scientists?
Since CDSs cost significantly less than qualified scientists, having both citizen and expert data engineers in the mix saves money while allowing your business to maintain a valuable data pipeline. Plus, data engineers are in short supply, so augmenting their support staff with competent CDSs is often a great solution.
Some companies outsource all their data analytics needs to a dedicated third party. Others recruit citizen data scientists from within their ranks or hire new employees to fill CDS positions.
How to Best Leverage Citizen Data Scientists and Expert Data Scientists
Ensuring your data team hums along like a finely tuned motor requires implementing the five pillars of productive data work.
- Document an Ecosystem for CDSs: Documenting systems and protocols makes life much easier for citizen data scientists. In addition to outlining personnel hierarchies, authorized tools, and step-by-step data process rundowns, the document should also provide a breakdown of the company’s goals and how CDS work fits into the puzzle.
- Augment Tools: Instead of reinventing the wheel, provide extensions to existing programs commonly used by citizen data scientists. The best augmentations complement CDS work and support data storytelling, preparation, and querying.
- Delegate: Pipelines that use both expert and citizen data scientists work best when job responsibilities are clearly delineated. Tasks that require repetitive decision-making are great for CDSs, and the experts should be saved for complex tasks.
- Communication: Communication is key. Things run smoother when all levels share results and make everyone feel part of the team.
- Trash the Busy Work: People perform better when they feel useful. Saddling citizen data scientists with a bunch of busy work that never gets used is a one-way road to burnout — and thus a high turnover rate. Try to utilize every citizen data scientist to their highest ability.
Implementing a Comprehensive Data Team
Advancements in machine learning have democratized the information industry, allowing small businesses to harness the power of big data.
But if you’re not a large corporation or enterprise — or even if you are — hiring a full complement of expert and citizen data scientists may not be a budgetary possibility.
That’s where data analysis software and tools — like Inzata Analytics — step in and save the day. Our end-to-end platform can handle all your modeling, analytics, and transformation needs for a fraction of the cost of adding headcount to your in-house crew or extensive tech stacks. Let’s talk about your data needs. Get in touch today to kick off the conversation. If you want your business to profit as much as possible, then leveraging data intelligence systems is the place to start.
|
<urn:uuid:b448cf34-a355-403e-98a3-b7a45d6dde96>
|
CC-MAIN-2022-40
|
https://www.inzata.com/citizen-data-scientist-vs-data-scientist-whats-the-difference/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00441.warc.gz
|
en
| 0.91698 | 967 | 3.109375 | 3 |
Zero trust is a security architecture that trusts no one by default. In a zero trust model, anyone trying to access a company network must be continuously verified via mechanisms like multi-factor authentication (MFA) and adaptive authentication. It’s used to enable digital transformation while tightly controlling user access and protecting against data breaches.
Explore additional zero trust topics:
The core logic of a zero trust security is essentially “never trust, always verify.” In a world of complex cybersecurity threats and hybrid workforces equipped with numerous applications and devices, zero trust aims to provide comprehensive protection by never assuming an access request comes from a trustworthy source—even if it originates from within the corporate firewall. Everything is treated as if it comes from an unsecured open network and trust itself is viewed as a liability within the zero trust framework.
Zero trust may also be called perimeterless security. This term shows how it is the polar opposite of traditional security models, which follow the principle of “trust, but verify” and regard already-authenticated users and endpoints within the company network perimeter, or those connected via virtual private network (VPN), as safe. But such implicit trust increases the risk of data loss caused by insider threats, since it allows for extensive, unchecked lateral movement across the network.
A zero trust security architecture instead is built upon:
As the way people work changes, having a zero trust security strategy in place is critical. It’s the most reliable cybersecurity framework for defending against advanced attacks across complex IT ecosystems, with dynamic workloads that frequently move between locations and devices. A zero trust architecture is especially important as multi-cloud and hybrid cloud environments become more common and expand the range of applications that companies use.
With the number of endpoints in the typical organization on the rise and employees using BYOD devices to access cloud applications and company data, traditional cybersecurity methodologies can’t reliably prevent access from bad actors. A malicious insider who has already connected to the company network via a VPN would be trusted from then on, even if their behavior were unusual—if they were to download enormous amounts of data, for example, or access files from an unauthorized location.
In contrast, the zero trust model is always evaluating each identity on the network for risk, with a close eye on real-time activities. At the core of this approach is the concept of least-privilege access, which means each user is given only as much access as they need to perform the task at hand. Zero trust frameworks never assume that an identity is trustworthy, and accordingly require it to prove itself before being allowed to move through the network. Another way to think of zero trust is as a software-defined perimeter that is continuously scaling and evolving to protect applications and sensitive data, no matter the user, device, or location.
The zero trust model’s origins go back at least to the early 2000s, when a similar set of cybersecurity concepts was known as de-perimeterization. Forrester research analyst John Kindervag eventually coined the term “zero trust.” The zero trust approach came to the fore around 2009, when Google created the BeyondCorp zero trust architecture in response to the operation Aurora cyberattacks, which involved advanced persistent threats (APTs) that had eluded traditional network security architectures.
The main benefits of a zero trust security are:
Implemented properly, a zero trust security model is closely attuned to behavioral patterns and data points associated with access requests made to a company network. Zero trust solutions may grant or deny access based on criteria such as geographic location, time of day, and device posture.
Effective zero trust security will be highly automated, and its protections may be delivered via cloud or from an on-premises implementation. Identity providers and access management are key components of any zero trust framework since they provide critical measures like adaptive authentication and single sign-on and streamline workflows like employee onboarding.
For these reasons, zero trust is often associated with zero trust network access (ZTNA), which is used specifically to protect access to corporate applications and the data stored in them.
Ready to learn about the latest zero trust network access security solutions? See how vendors stack up in the guide from Gartner.
Cybersecurity solutions such as next-generation firewalls and secure browsers help isolate traffic from the main corporate network. This segmentation curbs lateral movement, improves the organization’s security posture, and minimizes the damage of a breach even if it does occur. Because risky users are confined to a relatively small subnet of the network, they cannot move laterally without authorization. Under normal circumstances, microsegmentation security policies also help limit access by user group and location.
Traditional VPNs do not align with zero trust principles, since one-time access gives a user the metaphorical keys to the kingdom. Instead of this castle-and-moat security approach, the zero trust model uses a dedicated VPN-less proxy that sits between user devices and the full spectrum of applications, from web and SaaS apps to client/server (TCP and UDP) based apps, and even unsanctioned web apps. This proxy can enforce granular cybersecurity measures, such as adding a watermark and disabling printing, copying, and pasting on an endpoint if the contextual evidence supports doing so.
Adaptive access and adaptive authentication allow organizations to understand the state of end user devices without having to enroll them with a mobile device management (MDM) solution. Based on a detailed device analysis, the system intelligently offers the user with a suitable authentication mechanism based on their role, geo-location, and device posture.
Remote browser isolation redirects the user session from a local browser to a hosted secure browser service when the access occurs on an unmanaged device. This ensures users can access their apps in a sandbox environment and allows them to stay productive. At the same time, this protects endpoints and networks from malicious content from the internet with browser isolation capabilities, creating an airgap from corporate resources.
Security analytics solutions amass the valuable data needed for determining what counts as anomalous activity on a network. Networks can intelligently evaluate in real time whether a request is risky and help automate security enforcements based on user behavior and anomalies detected in the system. This helps reduce manual work for IT, provides timely enforcement, and reduces the risk of breaches.
Implementing zero trust does not involve a single product. Rather, it’s an overarching security framework for continuously evaluating risk and controlling secure access across an environment. Accordingly, multiple solutions, including but not limited to those described above, may be deployed in tandem to support a zero trust model.
The exact process for designing and building zero trust security will vary by organization and solution set, but a common progression will involve:
Citrix provides a range of solutions to help organizations at every stage of the zero trust journey:
In North America:
1 800 424 8749
|
<urn:uuid:d8525b00-2667-413d-89a8-18ea278fc941>
|
CC-MAIN-2022-40
|
https://www.citrix.com/en-au/solutions/zero-trust-network-access/what-is-zero-trust-security.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00441.warc.gz
|
en
| 0.936223 | 1,430 | 3 | 3 |
Google Glass is being used by patients with Parkinson’s disease in early trial experiments aimed at finding new ways to help people affected by this debilitating disease. The work, which is being conducted at Newcastle University in England, is showing early promise by helping patients remember to take their medications and giving users more confidence as they fight the disease.
For Parkinson’s patients, the eyewear-mounted Glass devices are so far showing to be easier to use than smartphones to stay in touch with family members and to get through their daily lives, Roisin McNaney (pictured), a Ph.D. student at the university’s Digital Interaction Group in the School of Computing Science, told eWEEK in a telephone interview.
“We know from the specific patient symptoms related to Parkinson’s that smartphones can be a hindrance” for easy communications with others, said McNaney. Hand tremors and unmovable limbs can be traits of the disease, which can make it hard for patients to operate the often small controls and buttons on today’s smartphones, she said.
Google Glass, however, doesn’t have those drawbacks due to their voice activation, head-tilt features, easy gesture controls and hands-free capabilities, said McNaney. “With the different gestures used in Glass, we thought this could be useful.”
The university was able to acquire three Glass devices from Google in August 2013 after successfully entering a research competition, according to Dr. John Vines, a senior research associate with the university’s Digital Interaction Group in the School of Computing Science. The first small trial happened immediately upon receiving the units, but it was quickly expanded to 20 patients in another test project that is still ongoing, said Vines.
Because the Digital Interaction Group conducts research to find ways in which technologies can remove the stigmatization that people with Parkinson’s or other diseases can feel in society, Google Glass was a perfect fit for the experiments, said Vines.
“They have a huge array of different types of sensors, a forward-facing camera, a rudimentary eye tracker, accelerometers and a gyroscope,” said Vines, all of which can help Parkinson’s patients share what they are experiencing and seeing with others at any given moment. “These are all useful things.”
The patients who have been participating in the latest Glass trials are receiving the devices for about a week and then are meeting with the researchers and their team for collaborative design sessions where the patients give feedback and ideas about other features that could help them.
“Apps can then be developed to address those needs,” then given to the patients for later follow-up, said Vines.
Google Glass Put to Work to Help Parkinson’s Patients
Two Parkinson’s patients who are participating in the latest study shared their personal experiences with Glass in statements to the university.
Ken Booth, 56, of County Durham in England, was first diagnosed with Parkinson’s in 1991. He underwent a surgical procedure called deep brain stimulation last year in a bid to relieve some of the side effects of the condition, according to the university. The surgery helped him control the symptoms of the disease after medications lost their effectiveness.
Soon after, he began trying Glass, and their helpfulness cannot be overstated, he told the university. “They’re just fantastic. The potential for someone with Parkinson’s is endless. For me the biggest benefit was confidence. When you freeze your legs stop working but your body carries on moving forward and it’s easy to fall.”
That becomes less worrisome when wearing Glass, he said. “Because Glass is connected to the Internet you can link it to computers and mobile phones. So if you’re alone, you just have to look through the Glass and [caregivers], friends or relatives will be able to see exactly where you are and come and get you. Or you just tell it to call someone and it rings them.”
His girlfriend, Lynn Tearse, 46, also has Parkinson’s and is also a fan of using Glass in the Newcastle University experiments. Tearse, a retired teacher, was diagnosed with Parkinson’s in 2008.
“People would probably say you can do all these things on a smartphone but actually, with Parkinson’s, negotiating a touch screen is really difficult,” Tearse said. “It’s not just the tremor. During a ‘down time’ when the medication is starting to wear off and you’re waiting for the next lot to kick in, it can be like trying to do everything wearing a pair of boxing gloves. Your movements are very slow and your body won’t do what you want it to.”
That’s where Glass can be helpful to help unlock the brain when it “freezes,” she said. “No one really understands why it happens, but it happens when the flat surface in front of you breaks up or the space in front of you narrows such as a doorway. Revolving doors are particularly bad. Your legs gradually freeze up and the difficulty is getting started again. The brain seems to need a point beyond the blockage to fix on and people use different things—Ken will kick the end of his walking stick out in front of him but many people use laser pens to create a virtual line beyond the barrier. This is where Glass could really make a difference.”
Glass units are also being used to give medication reminders to patients. “The drugs don’t cure Parkinson’s, they control it so it’s really important to take the medication on time,” said Booth. “I was taking two or three different drugs every two hours, different combinations at different times of the day; some with water, some with food, the instructions are endless. Having a reminder that is literally in your face wherever you are and whatever you are doing would really help.”
Google Glass Put to Work to Help Parkinson’s Patients
McNaney told eWEEK that because Glass is a new technology and because Parkinson’s patients are usually older, researchers expected to get negative feedback from them about using the newfangled devices. Instead, she said, rather than being intimidated by Glass, the patients have been very positive about using the devices.
“We were learning as well,” she said. “I think that they really could see quite a lot of potential for people with Parkinson’s [by using Glass]. One thing you find when you work with people like this is that they are really the experts in this disease. So the next stage is to work with them to see what other apps and patient needs can be filled. It’s really through working with the people with Parkinson’s that we’re going to be able to see the potential in doing that.”
Asked about the work being done with Glass at the school, a Google spokesman told eWEEK in an email reply, “Newcastle University is an excellent example of how people and institutions are thinking creatively about how to unlock the potential of wearables like Glass. We’re excited about their work and look forward to seeing it develop in the months and years ahead.”
Other medical patients have also been experimenting with Glass in separate experiments. In August 2013, eWEEK reported on a young woman in the United States, Alex Blaszczuk, who was in a severe car crash in 2011 that left her a quadriplegic. Blaszczuk has been using Google Glass to take photos, send messages to friends and more. She was selected by Google to participate in the company’s Google Glass Explorer Program, which allowed prospective users to submit ideas for why they should be chosen to buy and test out one of the first Glass devices. Blaszczuk’s entry was selected from the thousands of submissions to the #ifihadglass competition.
Google Glass has been a topic of conversation among techies since news of it first arrived in 2012. The first Google Glass units began shipping in April 2013 to developers who signed up at the June 2012 Google I/O conference to buy an early set for $1,500 for testing and development, where it was the hit of the conference. Google also then began shipping Glass units to lucky users who were selected in the #ifihadglass contest for the opportunity to buy their own early versions of Glass.
In February 2013, Google expanded its nascent test project for its Glass eyewear-mounted computer by inviting interested applicants to submit proposals for a chance to buy an early model and become part of its continuing development. In March, Google also began notifying a pool of applicants who were selected to purchase the first 8,000 sets of Google Glass when they become available for real-world use and testing later this year by consumers. Those selected applicants have been receiving their units in waves.
Each Google Glass device includes adjustable nose pads and a high-resolution display that Google said is the equivalent of a 25-inch high-definition screen from 8 feet away. The glasses also feature a built-in camera that takes 5-megapixel photos and video at 720p. Audio is delivered to wearers through their bones, using bone-conduction transducers.
Google Glass isn’t yet ready for the general public, but sales of the devices are expected to begin sometime later in 2014.
|
<urn:uuid:0bd0c1c7-c774-45db-baa4-12aa4d2434b2>
|
CC-MAIN-2022-40
|
https://www.eweek.com/mobile/google-glass-put-to-work-to-help-parkinson-s-patients/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00441.warc.gz
|
en
| 0.968435 | 1,974 | 2.921875 | 3 |
While developing and maintaining software applications, having knowledge of Object Oriented Programming (OOP) can be valuable.
Here, we are going to explain basic principles of OOP in an easy to understand way.
What is Object Oriented Programming?
OOP is computer programming model that organizes:
- Software design around data / objects instead of logic and function.
- Well-suited approach for large, complex and actively maintained programs.
- Allows developers to focus on objects rather than logic required for data manipulation.
- In simple terms, an object can be thought as a data field that has unique behavior and attributes.
Object oriented approach allows developers collaborative development: dividing projects into groups.
Object Oriented Programming Benefits for programmers:
These include scalability, reusability and efficiency.
What is scalability? Basically it is an attribute of algorithm and its implementation design.
OOP is a programming paradigm and scalability/ maintainability / testability are independent of it.
Data Modeling: It is the very first step in OOP, involves the collection of all the objects that a programmer wants to manipulate. Then identify how these objects are related to each other.
Once an object is identified, it is labeled with a class. A class is a collection of objects that defines the type of data and any logic sequence that can manipulate objects.
Now, each unique logic sequence is called a method.
Objects can communicate with each other via well-defined messages called interfaces.
4 basic principles of Object Oriented Programming:
In Object oriented programming paradigm, encapsulation is achieved by keeping state of each object private inside a class.
To make changes, other objects do not have access to this class. We can say that encapsulation helps in achieving data hiding.
Ultimately, it provides greater data security feature and prevents data from being corrupted.
Suppose a scenario. We have a game in which there is a cat and there are people. They communicate with each other.
Now, to apply encapsulation, we encapsulate all cat logic into a class named “Cat”.
Here, the state of the Cat is some private variables named mood, hungry and energy.
It also has a private method called meow (). It can call it when it wants. Other classes can’t tell the Cat when to meow ().
Now, what people can do with Cat is defined via public methods Feed (), Play () and Sleep (). Each of them may modify internal state of Cat class and may invoke meow () method.
Hence, the binding between the private state and public methods is made.
This is called encapsulation.
It means hiding un-necessary implementation code. Objects show only those internal operational mechanisms that are relevant for the use of other objects.
Abstraction can be thought as an extension of encapsulation. In OOP, programs are extremely large and objects communicate with each other a lot.
So, maintaining a larger database, with changes along the way, over the years, can be difficult.
Abstraction is a concept to ease this problem.
Think once how you use your mobile phone?
Another common problem in OOP design is objects are very similar and they share common logic. But they are not entirely the same.
So, how can we reuse the common logic and extract the unique logic into a separate class?
One way to achieve this is inheritance…
In simple terms, inheritance means we form a hierarchy. We have a main class called “Parent class” and we create class from it called “Child class”.
Now, child class is richer as compared to its parent class because it reuses all the fields and methods of its parent class + it has its own unique fields or methods.
This feature of OOP allows developers ensuring higher level of accuracy. It reduces development time too.
It means an object can take one or more than one forms depending on the situation.
Which form an object will take upon execution? It is determined by program logic.
In real life, a person behaves like student while in class room. The same person behaves like customer while in market, behaves like son while at home.
This is polymorphism – one object, many forms – one person, multiple behaviors, depending on the context.
List of Object Oriented Programming Languages
Most popular OOP languages are:
- Visual Basic.Net
Criticism on Object Oriented Programming
Main criticism is: Object oriented programming does not focus on algorithms and computations. It over-emphasizes the data component about software development.
It may take long time to write and compile OOP code.
Functional Programming and Structured Programming are some alternative ways to Object Oriented Programming.
Frequently Asked Questions (FAQs)
- What is difference between OOP and Structural Programming?
In structural programming, programs are divided into functions. It provides logical structure. Its top-down approach and does not provide code reusability.
Whereas, OOP is a programming paradigm, that is based on objects instead of just functions and procedures. Its bottom-up approach and provide code reusability.
- Why OOP?
Code maintenance is easy due to encapsulation. This programming
Paradigm is mainly used for relatively big software.
- What is the purpose of Inheritance?
The purpose of inheritance is Code Reusability.
- What are the limitations of inheritance?
To execute a program, programmer requires more time and efforts due to jumping back and forth between many different classes. Inheritance demands careful implementation otherwise it may lead to incorrect results.
- What is the difference between private, public and protected access specifiers?
Let’s have a look at some other MAJOR Programming Paradigms in detail:
Subscribe to FinsliQ Blog:
If you have enjoyed and find our blogs informative, then please support the platform by subscribing to our daily newsletters. Benefits of becoming a subscriber:
- Get daily updates with the latest blogs/article
- New updates within the same subject area are release every day (release dates can be found next to the link in the blog)
- Stay up to date with the latest Tech news
- Variety of different types of blogs
Visit FinsliQ | Tech Academy. A variety of course are available in cloud computing, Dev-ops, Cloud Architecture, Cyber Security and much more.
|
<urn:uuid:58a420c9-9d34-4343-a9dd-07323aeac407>
|
CC-MAIN-2022-40
|
https://www.finsliqblog.com/programming-languages/object-oriented-programming-what-should-you-know/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00441.warc.gz
|
en
| 0.912208 | 1,381 | 4.125 | 4 |
Isaac Asimov’s three laws of robotics are safe, sensible rules. First laid out in 1942, rule number one prevents a robot from harming a human being. The second forces it to obey orders given it by people, except where such orders would conflict with the first law. Finally, it must protect its own existence as long as such protection does not conflict with the First or Second Law.
Those are some pretty sound, sensible rules – and Stefano Zanero has persuaded industrial robots to break all of them.
Zanero, an associate professor at Italian university Politecnico di Milano, will explain how in his talk at SecTor later this month. He has found flaws in industrial robots and developed theoretical attacks that could dramatically affect corporate users, and worse.
“I was taking coffee with a colleague that works on robotics, and was looking at their labs. While we were talking, I was going through all the research that I had read in the last few years. I realized that I had not seen anybody look into one of these things,” he says. That’s not surprising. “Not everybody has an €80,000 robot sitting above their lab.”
Zanero did. Robotics is a key research area at the Politecnico di Milano, so he set to work investigating several robots’ underlying security protections. He found some common flaws.
“Most of the components in the robot were relatively weak. They were not designed to withstand hacking attacks,” he says.
In one of the robots he investigated, he found a default user that couldn’t be disabled, and a default password that couldn’t be changed. “When you compromise the first Internet-facing component, all the other components are basically also yours,” he explains. Those components all download the firmware from the first, compromised component without checking code signatures.
Breaking law number two
In compromising this software, an attacker is able to violate Asimov’s second law by giving it new instructions that its original programmers didn’t intend.
Industrial control systems built to this level of security are not meant to be Internet facing, he adds, and yet the move towards ‘Industry 4.0’ – an increasingly connected factory environment in which robots and other industrial systems are accessible via IoT-based networks – is increasingly putting them there. Many industrial robots today are a browser away from the Internet or in some cases directly connected, he warns.
What could he make a robot do with these vulnerabilities? He came up with several possibilities. The first was the introduction of micro-defects.
“If you get control of a robot, you can introduce in a subtle way a lot of micro-defects into the parts being manufactured. These defects would be too small to be perceived,” he says. “Since the robot isn’t designed with this attack model in mind, there is absolutely no way for the people programming the robot to realise that it has been put off centre and miscalibrated.”
A slight offset in a welding algorithm could produce a structural flaw that could have significant implications for product safety. Imagine a production line altered to produce unsafe automotive components. A year after the attack, the attacker could make the flaw known and force a product recall, costing the victim millions and trashing their brand. Worse still would be not making the flaw known, waiting instead until road accidents started happening.
Goodbye, law number three
“The second big area of concern is that using the same manipulations, you can actually make a robot destroy itself,” says Zanero.
There goes Asimov’s third law, and with it, your factory’s profit. Production lines have a high downtime cost, running into thousands of dollars per minute. Robots are also custom-configured and difficult to source, making them difficult to replace.
This also raises the possibility of ransomware, says Zanero. An attacker could incapacitate a robot and then demand a ransom payment to set it going again. That would change the attacker’s business model from industrial sabotage to pure profit.
Violating the most important law of all
Another possibility is that the robot could be programmed to violate the first law, harming a human directly. This would admittedly be difficult for an attacker to do. Robots working alongside humans are tightly monitored and designed not to make movements that could harm their coworkers. Nevertheless, there is scope for abuse, Zanero says.
“Even if the robot moves slowly and doesn’t really harm you by moving, if the point of the tool is toward you, it could harm you,” he says. Robots are programmed to keep pointy things away from people. “They are super good at that. There is a lot of safety around that, but it is software, not hardware,” he points out, adding that an attacker could change that software.
To its credit, the industrial robot vendor that Zanero’s team contacted about the flaws was responsive and quick to react. It thanked the team and patched the bug in its products, which is an encouraging sign. Nevertheless, there is more work for the robotics industry to do.
“We have tested one specific robot, and then we tested others just to see if our architectural considerations would generalize,” he says. “And they did.”
Zanero will talk more about his work at SecTor, which runs Nov 13-15 2017 at the Metro Convention Centre in Toronto. In the meantime, read more about it at robosec.org.
|
<urn:uuid:32642f3f-775f-4c1c-98f9-96befdace897>
|
CC-MAIN-2022-40
|
https://sector.ca/sabotage-and-subterfuge-hacking-industrial-robots/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00441.warc.gz
|
en
| 0.970092 | 1,171 | 2.90625 | 3 |
Machine learning (ML) is taking the IT world by storm. One of the hottest phrases in the 21st century, the term was actually coined in 1959 by Arthur Samuel, who defined ML as a “field of study that gives computers the ability to learn without being explicitly programmed.” While the concept of ML first came to prominence in the 1950s and 1960s, particularly in academia, there were only a few applications that could take advantage of such sophisticated technology at the time.
ML saw a resurgence shortly after the year 2000. With the costs of storage and compute power starting to drop, ML suddenly became an affordable and scalable solution. Today, ML is on the rise, thanks to a few key factors:
- Access to cloud computing
- Open source tools that evolve with technology
- Increased demand for enhanced product development, customer management and process automation
ML has advanced greatly since its origin, and every business venturing into ML needs to understand how to build successful models. As such, it is essential to know what the possibilities of ML are today. Even more important is to understand the common myths associated with ML and how to avoid them in order to best utilize and benefit from its capabilities.
Machine Learning Today
Today, ML is complemented by a plethora of open source workbenches and utilities, including Python, R, TensorFlow, scikit-learn and many more. It’s also bolstered by the current state of cloud computing as well as scalable data and data storage capacities. In frameworks like Keras for TensorFlow, algorithms are contained in prepackaged software libraries and are widely available to anyone who wants to delve into the world of ML.
ML first took hold in academia and it remains a staple of IT learning in colleges and institutions around the globe. With undergraduate and masters programs dedicated to developing skills in data science and ML the technology will only improve in the coming years.
In its current state, ML has many different uses in a multitude of professional disciplines:
- Financial services: ML is useful in credit underwriting, fraud detection, and portfolio risk management.
- Marketing and advertising: Tech-savvy marketers use ML for response modeling, dynamic AdWord bidding and banner ad targeting.
- Healthcare: The healthcare industry relies on ML for drug research and development, pathology screening using computer vision, and genomic/transcriptomic research.
- Retail: ML is used in retail for product pricing, purchasing and logistics.
With all of ML’s potential uses, however, it is important to acknowledge some potential pitfalls so that you know to avoid them in the future.
Myth #1: We have all this data, now all we need is a data science team to make sense out of it and put it all to work.
It’s often thought that a data science team is all that’s needed to make the most of ML. The reality is the use cases must originate from the product or strategy teams. A data science team certainly should be used to push the organization’s agenda forward and help refine the use cases. For best results, connect the data science team with strategy and execution teams to create and establish value.
Myth #2: Machine learning functionality requires expensive infrastructure and support systems.
While there was a time when ML was cost prohibitive, today’s systems are actually quite affordable. Just be sure to establish objectives that align with your resource capabilities. With a reasonable investment, you can accommodate your high-value use cases, which typically consume relatively low volumes of data anyway. With such an approach, you can realize high project ROI.
Myth #3: Machine learning results are “black box” solutions.
Although it is true that MLalgorithms can be quite complex, they are all pretty standard in the worlds of data science and information technology. The rise of explainable AI and relevant literature has given immense insight about what is going on under the hood of widely accepted ML algorithms.
Myth #4: Machine learning applications get smarter and smarter over time.
It’s a common misconception that all ML systems actually gain intelligence over the course of time. While this is possible, certain cases can be architected as “self-learning” processes; however, it’s not the primary goal of every application. The vast majority of ML applications use a static algorithm to classify/predict/forecast with the expectation that model refits or updates will be a manual process.
The complexity associated with self-learning solutions often leads to investments that render the process unprofitable. In short, avoid the allure to develop and deploy a solution that unnecessarily detracts from the business strategy. Static algorithms tend to solve a substantial number of problems.
Myth #5: The data science team alone will develop a self-sustaining machine learning solution.
In most cases, the average data science team will not devise, develop, and implement the end-to-end solution. Not only is a highly qualified data science team required to develop and produce a working model, but product teams are also needed to define business problems and envision solutions. Furthermore, software teams are needed to implement the solution. As such, a complex, multi-stage process, multiple teams within your organization must collaborate consistently to drive results.
The Future of Machine Learning
Modern ML is full of promise. It’s a reliable engine that drives innovation, creates new products, enhances the customer experience and automates tedious tasks, but pioneers of ML need to be wary of the common pitfalls. For more information, or to find out how you can benefit from a team of experts, please contact DecisivEdge today.
|
<urn:uuid:a416c78e-f233-40c0-8745-98262507b20e>
|
CC-MAIN-2022-40
|
https://www.decisivedge.com/blog/machine-learning-pitfalls-5-common-myths-and-how-to-avoid-them/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00441.warc.gz
|
en
| 0.948513 | 1,164 | 2.859375 | 3 |
Let’s be blunt: two-factor authentication isn't very secure.
It’s a common misconception that two-factor authentication is the most secure way to protect your accounts from malicious threat actors or vulnerabilities. Two-factor authentication, also known as 2FA, is the most primitive form of multi-factor authentication, and while newer solutions are far safer, many organizations still take the traditional route to multi-factor authentication.
2FA gives a sense of security, but not necessarily a practical means of keeping hackers at bay. Let’s discuss how and why two-factor authentication isn’t as secure as you may think.
How 2FA Works
As one of the most common methods of verifying identity, 2FA is often touted as a secure authentication solution (despite lukewarm adoption), but it leaves open many opportunities for hackers to infiltrate your most mission-critical applications and systems, putting your login credentials at risk. 2FA uses two different factors to authenticate a login attempt, and these factors fall into three categories—either something you know, something you have, or something you are.
Something you know is one of the most familiar forms of 2FA, utilizing information such as personal identifying questions, or even as simple as a username and password. Something you have could be any number of things, but among the most common include a verification text message code, a security key, or an authenticator app. Finally, the last and most secure category of the three, something you are—this includes any biometric from your fingerprint, to a scan of your retina, to facial recognition software.
Although these 2FA authentication methods sound secure, they largely just help to give the individual a false sense of security. There are newer solutions that are far safer and more secure than 2FA.
How 2FA Can Be Compromised
While 2FA does add an extra layer of protection to your accounts, there are still many ways in which 2FA can be hacked, including utilizing social engineering, information theft, and other methods. For example, if you are baited to visit a fake or counterfeit website, a malicious threat actor could easily get you to verify the text message or authentication app code you received. Once the hacker has that information, they can use your credentials to access the site you originally intended to, leaving your account vulnerable to attack. This type of attack is known as phishing.
Another example of how 2FA can be compromised is due to force of habit. A user that is distracted may simply hit “approve” without thinking when they receive an email, text message, or other alert asking the user to confirm their identity. Many users simply approve these notifications or verify the application out of habit, without looking into where the request is coming from. This gives the hacker easy access to your account just by knocking on the door, and because you were inattentive, you’ve let them inside your home and given them a key. MFA can be costly to an organization, for more reason than one, and is almost never worth it.
An example of a more strategic 2FA hacking attempt could be a SIM swap, where attackers take control of a victim's phone number by persuading a mobile phone provider account representative to allow the switch, then hijack their personal information, social media accounts, and more. This most notably happened in 2019 to Twitter CEO Jack Dorsey after he admitted to “falling behind” on some of his security protocols, and numerous highly offensive Tweets were sent from his account. As a result, Twitter was forced to issue numerous apologies and there was a large public backlash from the incident. Successful hacks don’t just leave your personal information at risk, but also affect public opinion and often result in a loss of consumer trust. Learn more about how MFA can be hacked in our blog post here.
How Passwordless Authentication Is More Secure
Now that we know multi-factor authentication, and more specifically two-factor authentication, is not as secure as many believe, Beyond Identity is here to help you keep all your applications and resources safe and secure.
Unlike traditional MFA, which is subject to many forms of password-based attacks, Beyond Identity eliminates passwords leaving no credentials for hackers to attack in the first place. Beyond Identity verifies users and identities the same cryptography tools that TLS uses to secure trillions of dollars of transactions daily. Organizations of all industries and sizes reduce risk by eliminating all passwords-based attacks.
Going passwordless may seem like a daunting task, but Beyond Identity makes it easy. Beyond Identity provides secure authentication without adding friction for users.
|
<urn:uuid:5696874d-afe6-43db-93c0-84a2529b340d>
|
CC-MAIN-2022-40
|
https://www.beyondidentity.com/blog/how-secure-two-factor-authentication
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00641.warc.gz
|
en
| 0.948171 | 947 | 3.171875 | 3 |
By Ludimila Centeno
Flying drones opens a whole new world of opportunity. But there isn’t much opportunity when you lose control and crash into a tree or your drone flies away never to be seen again. In order to take advantage of that opportunity, you need to be able to control them, which requires situational awareness and an improved drone navigation system.
Before we tell the solution, we need to look at where the problem developed.
The problem with early drones was the lack of sophisticated sensors and associated control systems. A very common scenario was for the operator to have to be in the line of sight with the drone. The control system typically involved two joysticks and worked something like the following: pushing the left joystick forwards or pulling it backwards increased or decreased the throttle (power to the rotors), respectively. Also, pushing the left joystick to the left or the right controlled the yaw (the direction the front of your drone is facing). Meanwhile, pushing the right joystick forwards or pulling it backwards controlled the pitch (whether the drone tilts upwards or downwards), and pushing it left or right controlled the roll (movement in any direction on a horizontal axis).
What do these terms mean? In some respects, it’s a bit difficult to wrap your brain around this in the case of a drone because the machine tends to look much the same from all directions, but one side is definitely considered to be the front. If we assume that we are standing behind the drone and that the front of the drone is facing away from us, then pushing the right joystick away from us will cause the drone to rotate around the pitch axis and go forward, while pulling toward us will cause the done to go backward. Similarly, pushing the right joystick to the left will cause the drone to rotate around the roll axis and go left, while pushing it to the right will cause the done to go right.
Things became complicated if the operator needed to fly the drone some distance and then turn it around before bringing it back home — perhaps due to a fixed-position payload camera hanging underneath. In this case, from the perspective of the operator, once the drone had turned, the actions of the pitch and roll controls became reversed, which lead to no end of confusion.
A related problem was that the drone might slowly rotate around the vertical (yaw) axis. Furthermore, any gusts of wind could cause the drone to wander off course.
How do you Improve a Drone Navigation System?
One solution to all of this is for the operator to don a virtual reality (VR) style headset, and to equip the drone with a forward-facing camera that streams a video signal to the headset. This means that the operator’s point of view (POV) is looking to the front of the drone, so all the controls continue to work as expected.
Another approach that was used to improve the ease of flying drones was to add inertial guidance systems in the form of microelectromechanical systems (MEMS) that contain both mechanical and electronic components. For example, it’s now possible to purchase a nine degrees of freedom (9DOF) MEMS sensor that boasts a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer — all presented in teeny-tiny package that’s only a couple of millimeters in size. The inertial guidance system can use the information from these sensors to detect and correct any unplanned rotation around the yaw axis and to detect and correct any deviations caused by gusts of wind.
In many cases, a drone’s inertial navigation system will be augmented with a global positioning system (GPS) sensor, which can be used for tasks like retracing a complicated flying path or determining the shortest route back to home base. Having said this, GPS may be restricted from drones in some countries, it’s efficacy may be degraded in metropolitan areas with large buildings, and its usefulness may become non-existent if the drone is flying under a large structure like a bridge, or in an enclosed space like a warehouse or a tunnel.
What is the Optimal Way to Improve an Existing Drone Vision System?
The best way to augment an existing drone navigation system is to improve the drone’s vision system by means of optical cameras coupled with artificial intelligence (AI). In addition to navigation, such systems are extremely efficacious with respect to object detection, recognition, and avoidance. Today, it’s possible to create extremely small and lightweight camera modules that feature ultra-wide-angle lenses coupled with high-resolution sensors.
And although we can create optimal lightweight cameras, we don’t live in an ideal world, therefore flying drones cannot always be done in ideal conditions. When looking to use your UAV (Unmanned Aerial Vehicle) from dawn-to-dusk, platforms with these navigation camera modules must also be able to work in daytime and nighttime conditions both indoors and outside.
Having said this, not all lenses and lens-sensor combos are created equal, and this is especially true when it comes to flying drones in low-light conditions. Here at Immervision, for example, we’ve created a state-of-the-art wide-angle camera module that — in addition to meeting stringent size, weight, and power requirements — has been designed from the ground up to satisfy the demanding requirements of operating in low-light conditions.
Drones equipped with these drone navigation systems can not only fly from dawn-to-dusk (daytime), but they can also operate from dusk-to-dawn, both outdoors and indoors.
At Immervision, we are working on the latest and greatest in drone camera technology. We also love working with our customers on their unique drone vision system projects. So, if you are planning on developing drone camera system of your own — or if you are creating the drone and need some help with its camera systems — feel free to Contact Us and let’s start collaborating together!
All about Drones
|
<urn:uuid:4ac81284-5ad5-4bf8-9c8d-fdc204ad2a9e>
|
CC-MAIN-2022-40
|
https://www.immervision.com/drone-navigation-system/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00641.warc.gz
|
en
| 0.9349 | 1,249 | 2.53125 | 3 |
When it comes to network security, the main focus is to of course protect your IT infrastructure. But if you had a choice, which is really important; the program or the data you have gathered?
At first glance it would have to be the data for sure. Software programs can be easily modified and replaced depending on the requirements of any organization. Database protection is important since without it, companies have no basis for analysis and comparison as far as actual performance and reference for clients stored in the database is concerned. If you had to rate both, it would be data first and software second.
There are other people who put premium on software of course. But this will depend on their contingency plan better known to most IT professionals as backed up or archived data. Normally this is so basic that you don’t need to remind anyone the need to have archived historical data in cases where system crashes or intrusions may occur. There will always be scheduled backups and archiving for any program using entity since these are valued and important as far as linking all transactions and tracing revenue.
But the actual safeguarding of these two IT elements is how you expose it. There are usually policies governing the actual level of exposure such as net presence or the use of external storage devices like CDs and floppy disks. Normally, these are discouraged but knowing people who are hard headed today, some of them still ignore these policies and even get away with it.
|
<urn:uuid:93d5aec4-8377-41d3-a2ea-2eb4a0f5fbb9>
|
CC-MAIN-2022-40
|
https://www.it-security-blog.com/it-security-basics/data-or-program-which-are-you-really-safeguarding/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00641.warc.gz
|
en
| 0.960808 | 287 | 2.59375 | 3 |
Anyone would often be advised to regularly change his password in any access points such as e-mails, log on servers and websites. Reason for this is to increase the need for security as far as gaining access and safeguarding files and pertinent information that is usually stored.
With the large number of hackers that have been cropping up one by one, various means to steal passwords,also known as phishing, or hack accounts have been their main course of action. While some would disregard such acts, the real pain begins once important messages, attachments and relevant information are tampered. True that some would not need to change passwords regularly, but just to be on the safe side, it is best to maintain a regular schedule of updating password security and make it a combination of numbers and letters to establish a more secure and harder way of being cracked or accessed by anyone today.
[tags]password theft, passwords, hacking, cracks, codes, security[/tags]
|
<urn:uuid:481160a9-6298-4911-9bce-24b11d67a770>
|
CC-MAIN-2022-40
|
https://www.it-security-blog.com/it-security-basics/why-users-should-change-their-password-regularly/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00641.warc.gz
|
en
| 0.972851 | 193 | 2.671875 | 3 |
Wastewater surveillance systems are helping communities predict COVID-19 outbreaks, but lack of national coordination and standardized methods pose challenges to wider adoption, according to the Government Accountability Office.
More cities and states are using wastewater surveillance to detect the COVID-19 virus and other pathogens and chemicals in human waste, making it an effective method for detecting community-level outbreaks, according to an April 11 tech spotlight from the Government Accountability Office.
As home testing grows popular and fewer people seek tests from public health agencies, wastewater analysis is set to become an important factor for monitoring virus spread.
In small environments like college dormitories or long-term care facilities, officials can use wastewater to pinpoint infection rates in specific buildings and subsequently target clinical testing in those locations. In medium and large populations, the data can inform how health officials allocate resources to hot spots and enable large-scale surveillance and monitoring programs, GAO said.
Despite wastewater surveillance’s high potential as a public health tool, some aspects may need further development, GAO said.
A lack of standardization is hindering agencies’ ability to support wider use cases and produce more useful data. While many state health departments are testing wastewater, “the lack of a standardized approach complicates efforts to aggregate, interpret, and compare data across sites and develop large-scale public health interventions,” GAO said.
Rainwater and industrial discharge can often dilute samples, and contaminants like animal waste can complicate efforts to determine sample origin or quality.
The potential savings of wastewater surveillance are also unclear. “The general lack of cost-benefit analyses makes it difficult to determine how and when to use it,” GAO said.
Additionally, wastewater surveillance raises privacy issues and ethical concerns. Human genetic data can be identified in wastewater and potentially misused, GAO said. Also, communities whose wastewater indicates pathogen spread or illicit drug use may be stigmatized, it added.
Nevertheless, wastewater monitoring and surveillance programs have helped map COVID hot spots and identify trends in several states.
Virginia’s wastewater monitoring initiative launched in September 2021 to sample infection trends at the community level. Missouri’s Sewershed Surveillance Program obtained samples from more than half of the state’s population to create a map for residents to see where cases are highest. In Michigan, data and analysis from a coordinated network of labs, local health departments and universities populates a dashboard that shows regional and statewide COVID-19 wastewater monitoring data.
|
<urn:uuid:4996ddcb-b9b7-4f9e-ad65-666c41762415>
|
CC-MAIN-2022-40
|
https://gcn.com/data-analytics/2022/04/wastewater-testing-programs-need-better-coordination-data-standards-gao-says/365628/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00641.warc.gz
|
en
| 0.920514 | 509 | 3.234375 | 3 |
Geographic Information Science and Technology (GIST) plays a key role in scientific research, with a wide array of applications for spatial data and visualizations in earth science.
FREMONT, CA: Geographic Information Science (GIS) offers a robust means of looking at the world and tools for solving complex problems. Innovations continue to rise from GIS, whether transportation companies are optimizing logistics or manufacturers are tracking equipment locations with the Internet of Things sensors. Geographic Information Science and Technology (GIST) is also instrumental in scientific research, with a wide range of applications for spatial data and visualizations in earth science. The professionals who utilize these methods to gather, analyze, manipulate, and visualize geographic data can display fascinating details about the world and even other planets. When exploring how GIST is employed in different fields, it’s easy to see why geospatial reasoning is vital to understanding earth science and pursuing new inquiry areas.
GIS in Geology
Geologists investigate the planet’s structure, composition, and changes over time. However, it is not always practical for scientists to visit a location for field observation. The application of remote sensing in geology means scientists can use electromagnetic radiation to gather detailed information from all over the world. Interpreting and visualizing the data from remote sensors are among the primary uses of GIS for geologists.
GIS in Meteorology
Mapping and modeling the weather and climate with GIS yields valuable insights for meteorologists as they study the atmosphere’s work processes. Scientists pinpoint the locations of weather events and analyze how systems move over time. Identifying meaningful patterns and trends in GIS weather data leads to more accurate predictions.
GIS in Oceanography
The vastness and complexity of the Earth’s oceans mean that the scientists who specialize in the field might focus on topics ranging from marine ecosystems to plate tectonics. The application of GIS in oceanography moves around assisting researchers by giving them broad perspectives on the underwater world.
GIS in Astronomy
Scientists have incorporated GIS to understand the universe, mapping from space to teach more about their world and explore other planets and objects in the solar system. Employing GIS in astronomy means revealing the mineral composition, topography, tectonic activity of celestial bodies. That’s why NASA has made collecting and analyzing spatial data a crucial part of unmanned observation missions.
|
<urn:uuid:919b2f47-2f83-4d16-a338-fcc72cfd6847>
|
CC-MAIN-2022-40
|
https://www.enterprisetechnologyreview.com/news/how-can-geographic-information-science-be-applied-in-earth-sciences-nwid-1003.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00641.warc.gz
|
en
| 0.895438 | 489 | 3.5 | 4 |
In the last 50 years, the shift from an industrial economy to an information economy has caused data to become increasingly important. This has highlighted the costly impact that poor quality data can have on a company's financial resources . Poor data quality contaminates the data in downstream systems and information assets, which increases costs throughout the enterprise. Moreover, customers' relationships deteriorate as a function of poor quality data leading to inaccurate forecasts and poor decision making. Recently, Harvard Business Review reported that out of 75 companies sampled, only 3% had high-quality data.
However, the financial impact of poor data quality is not measured solely in decreased revenue for a company but it can also entail financial losses and significant legal repercussions.
How can a company start its Data Quality journey?
1. How bad is your data?
Just knowing that the quality of the data is not high is not enough to start a data quality program. It is essential to understand how much of the data is of poor quality and what type of issues are present. The first step to address this is data profiling. Data profiling can guide in data quality analysis by uncovering the issues in each of the fields analyzed. Data profiling can be done using either SQL or data quality profiling tools like IBM InfoSphere Information Analyzer or AbInitio Data Profiler.
Here are some examples of issues detected by data profiling within a field:
- Special characters
- Numeric values
Let's take a look at the special characters issue. Sometimes, the presence of special characters in a specific field can be a sign of poor data quality. However, it is essential to define which special characters constitute an abnormality in that particular field before making a specific determination. As an example, let's look at the field First Name with the following values:
In Jørund, the ø detected as a special character is the same as the * in Nicol*tta. However, ø is a valid "character" in Scandinavian countries, but * constitutes a real data quality issue. It is essential to understand the data quality issue and analyze it, given the nature and source of the data.
2. Who has what?
Not all the records in a specific system have the same type of information (corresponding fields). Even for the standard fields, there are different priorities and sensitivity. Why is this important? Let's consider this: John Doe is a customer of Acme (Bank). To open an account, John has to provide the following information:
Jane Doe is a third party for Acme, which means she can play some role in Acme's products. However, she does not own any Acme product herself (for example, Ultimate Beneficial Owner). In her case, the same attributes are available, but Acme requires only some:
The presence of blanks in any mandatory field constitutes a data quality issue. At the same time, the same is not true for optional fields. Not having the driver's license information for John's record is a data quality issue. However, it is inconsequential for Jane's record. Not knowing these distinctions between different customers can create false positives in detecting data quality issues for Acme and are also a waste of time and resources.
3. Oh, the possibilities!
Once a data quality issue is identified, data quality rules/controls need to be created to analyze it. For each data element, data quality rules can be created for each data quality dimension that needs to be analyzed. A data quality dimension is a characteristic of the data that can be measured to analyze the data's quality. Some examples of data quality dimensions are:
- Completeness (whether a field is populated or not)
- Consistency (whether a field is the same across several systems)
- Conformity (whether the data is conforming to the set of standard definitions, i.e., the date should be mm/dd/yyyy)
Data quality dimensions are selected since not all the data quality dimension available are relevant for each field. Data quality rules dimensions are selected based on the There is not a 1:1 relationship between data quality rules and data quality dimensions. This is determined by the nature of the data quality dimension. A data element can have only one completeness data quality rule and several consistency data quality rules.
Let's go back to the example of John and Jane. If we create a data quality rule for completeness for Driver's License, the issue detected for John is a real data quality issue because Driver's License is a mandatory field to create a customer record. The same completeness data quality issue identified for Jane is not an actual data quality issue, since the Driver's License is an optional field.
Another example is the Name field. When analyzing the quality of the data in Client Referential Systems, it is worth learning the system's basics. First and Last Name are mandatory fields in any Client Referential Systems. This rule is enforced at the database level. It means that no record will be stored in the database without First and Last Name. Therefore, creating a completeness data quality rule for the First and Last Names will not bring any insight into the quality of the data.
4. Where does your data really come from?
Analyzing the quality of data in a system is not limited only to what we see but also from where it comes. Data quality assessment is a substantial part of every project that requires data migration since we want to avoid a GIGO (Garbage-In-Garbage-Out) scenario. Let's look at the Name and Identifier data for John Doe at Acme. To open an account at Acme, John has to provide the following:
- First Name
- Last name
Workflow 1: John goes to the closest Acme branch, fills in the information form, and presents his Driver's License as a valid document. This document provides verification of the name and address provided by John. It is a manual process, and the Driver's License is scanned and kept with John's record.
Workflow 2: John goes to the closest Acme branch and presents his ID card with a microchip. All his personal information is downloaded from it.
In both workflows, we can find the data with anomalies (in this case, we are looking at special characters) in the First Name. However, the percentage of these anomalies is higher for workflow 2 than workflow 1.
How can that be possible?
With workflow 1, we can see that a manual error can be attributed to this data quality issue. But what about workflow 2? It's hard to believe, but it is still a manual error. It is a two-step manual error. Obtaining a legal document that can be used as proof of identity is a manual process. The forms are created to suit the data of a country's majority population. Some countries do not have the parsing of First and Last Name. When it comes to storing foreign names, the government's form may not allow the proper storage of the name. One common "remedy" is to avoid any attempt at parsing the string provided into First, Middle, and Last Names and store the string with the complete name in the Last Name field. And since the First Name must be present on the ID document, a default value of ?? is allocated to the First Name field. This, however, is not the only "manual" entry point. There is a second "manual" entry point that allows for such a data quality issue. The issue is created by how the data provided with the chip is "migrated" to the Client Referential System. The problem is due to the constant changing of the default value to "replace" the ?? from the government ID. The most common of values found in the First Name fields are *, X, XX, XXX. These default values are caused by a different type of "manual manipulation”: no single default value has been defined at enterprise level for First Name (a data quality issue that stems form the lack of data governance).
It's not just about ROI in the end.
An increase in revenue is the most quoted benefit of achieving high-quality data. It also helps prevent financial loss and other adverse and costly legal outcomes.
In the past 15 years, several major financial institutions have paid hefty fines and legal fees for proceedings that found them guilty of irregular activities. The common denominator was poor quality data. The poor quality of data is not as much at the customer level as it is at the third-party level, facilitating almost completely undetected and illicit money transfers.
By not taking care of their data (including quality), the financial institutions were held responsible, liable, and ordered to improve the quality of their data worldwide. The legal fines and the verdicts have been particularly harsh.
The fines and the expenses of cleaning the Client Referential Systems worldwide were not the only financial negative impact. The Financial Institutions had to regain the trust of their customers and develop new strategies to attract potential/new customers.
Data Quality initiatives alone can be the first step to improve data utilization within a company. However, the benefits of Data Quality can be fully leveraged only when carried out within Data Governance, Master Data Management, and Data Analytics frameworks.
We all have to start somewhere, and as we say in Italy, Well begun is half done.
Your data is more valuable than you think!
|
<urn:uuid:d73512a3-b296-491f-a621-e4770d0e8508>
|
CC-MAIN-2022-40
|
https://mastechinfotrellis.com/blog/there-is-more-to-data-quality
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00641.warc.gz
|
en
| 0.934651 | 2,012 | 2.890625 | 3 |
The Justice Department has reportedly shut down hundreds of websites that have defrauded consumers with fake deals that were taking advantage of the global pandemic crisis. Security analysts are working tirelessly to chart the magnitude of the danger we face. There are a few data trends that shed light on the largest cybercrime surge that we have seen in the last few years.
Researchers have found that the majority of the world’s cyberattacks are coming from only three countries. The originating countries for most of the cyberattacks in 2020 have come from Russia, Ukraine, and the Philippines. These three countries had the most real people working on attacks rather than using bots executing the attacks.
But not all attackers are solely using humans to do their dirty work; many are using bots to spearhead their attacks. The humans behind the bot attacks program them to target social media. Usually these social media bots aren’t being used to attack specific people but rather to spread political propaganda and misinformation – aka fake news.
Social media aren’t the only platforms that are experiencing cyberattacks during these times, though. The gaming industry has seen a massive spike in usage since the beginning of the COVID-19 crisis with so many people staying indoors away from crowds and working remotely from home. With this dramatic increase in gaming, the gaming platforms have also experienced an increase in attempted cyberattacks. According to the Arkose report, “gaming websites and communities associated with gaming are experiencing as many as 65 cyberattacks per second during the first half of 2020.”
So, what can you do about these cyberattack attempts? It’s crucial to be skeptical of the information that you see on social media, particularly if it’s leaning towards politics during election times. Always ask yourself what the intention behind the post is. Is the post/information based in fact or in emotion? Usually, if it’s an emotion-invoking post, it can be viewed as propaganda and should be viewed with caution. When it comes to online games, be cautious and suspicious of free items, subscriptions, or other perks that the game does not normally provide. It is especially important to be wary of this for games that regularly involve financial transactions.
As consumers, we can look at the trends in cybercrime listed above and reasonably assume the possible outcomes if they continue – cybercrime attacks will only increase in the coming months. All that you can do as a consumer is educate yourself and have a plan in place for when something happens to prevent or recover from a devastating identity theft incident.
LibertyID provides expert, full service, fully managed identity theft restoration to individuals, couples, extended families* and businesses. LibertyID has a 100% success rate in resolving all forms of identity fraud on behalf of our subscribers.
*Extended families – primary individual, their spouse/partner, both sets of parents (including those that have been deceased for up to a year), and all children under the age of 25
|
<urn:uuid:eb57bd2e-2747-4272-8695-7a7263f29320>
|
CC-MAIN-2022-40
|
https://www.libertyid.com/blog/looking-ahead-the-future-of-cybercrime-based-on-recent-trends/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00641.warc.gz
|
en
| 0.95843 | 604 | 2.953125 | 3 |
Modernizing industries depend heavily on emerging technologies. These technologies, like artificial intelligence, are primarily impactful for the manufacturing, energy, and transportation sectors. Enterprises are being transformed into a digital environment with emerging technologies. Every time the phrase “technology” is used, something new is always being developed or put into use that could benefit organizations.
A few years ago, no one thought emerging technologies would soon take over our lives. The users’ quick development has impacted the business ecosystem in wants and expectations for real-time interaction on these applications. These technical advancements significantly impact how we respond to global concerns. These new technologies can improve people’s lives, alter the course of the international economy, and improve the quality of life for present and future generations.
Table of Contents
What is artificial intelligence?
Artificial intelligence is, generally speaking, the activity of creating computer systems that can make intelligent decisions based on context rather than direct input. Understanding that AI systems always act following programmed rules is crucial. Consider a computer playing chess. While many people today would not consider this to be artificial intelligence (AI), it certainly satisfies the description of a system with rules that make decisions and estimates probability depending on the opponent’s movements.
Today, AI is becoming more popular as capabilities get closer to resembling sentience. This predicament results from various trends, which are essential elements needed when businesses incorporate AI into their strategies.
What do you mean by emerging technologies?
The word “emerging technology” is typically used to describe a new technology, but it can also refer to the ongoing evolution of existing technology. It can have slightly different meanings when used in different contexts, such as in the media, business, science, or educational fields.
What are the most emerging technologies?
PwC examined more than 250 of them to determine which emerging technologies would have the biggest business impacts across industries. The technologies with the most potential were dubbed the Essential Eight. They consist of robotics, 3D printing, blockchain, drones, augmented reality (AR), artificial intelligence (AI), augmented reality (AR), virtual reality (VR), and the Internet of Things (IoT).
“Companies across industries are using, investing in or planning to invest in artificial intelligence (AI). AI is improving industrial processes and making machines “smart.” It is expected to be one of the most disruptive technologies impacting industry and business. As the market for AI grows, boards should understand how this technology will affect their company’s strategy.”
The pandemic is speeding the adoption of emerging technologies, and the Essential Eight are still evolving and leaving their imprint today. Some, like AI, are becoming essential to all business types. Others, like 3D printing, have focused on specific industries, including manufacturing. While doing so, we have been monitoring yet another significant change: the revolutionary ways these many technologies combine.
Although there are other emerging technologies, such as quantum computing and nanotechnology, the Essential Eight will continue to have the most significant and practical effects over the next five years. However, the way they will collaborate to produce this impact will differ.
Is artificial intelligence an emerging technology?
In almost every sector, artificial intelligence influences how people will live in the future. It already serves as the primary driver of emerging technologies like big data, robotics, and the Internet of Things, and it will continue to do so for the foreseeable future. So, the answer is yes. Artificial intelligence is an emerging technology.
The role of artificial intelligence in the future of technology
AI will revolutionize how we work and live. It increases productivity by assisting us in several ways and saving a great deal of time when performing boring duties at home and work.
For instance, we can now set the robot vacuum to clean the floor, saving us time and effort. The vacuum robot effectively cleans the surface while avoiding collisions with obstacles, thanks to artificial intelligence.
The requirement for repetitive labor will also be eliminated thanks to advances and AI. By removing the risk of human error, autonomous cars will ease our burden of driving and probably lower the likelihood of car accidents.
In addition to decreasing repetitive work and increasing employee productivity, artificial intelligence has the potential to change many industries due to its ability to make quick decisions. We must take the necessary precautions to prevent ethical and safety issues as AI becomes increasingly ingrained in our daily lives.
Top emerging technologies in artificial intelligence
Many people’s perceptions of AI are distorted or constrained because of the long history of AI in popular culture and prominent instances in the consumer sector. Although intelligent chatbots and natural language interfaces are undoubtedly a part of the AI ecosystem, they are currently one of the least common ways for organizations to use new types of artificial intelligence.
It is no great surprise to learn that the Internet of Things, another cutting-edge technology, is frequently linked to AI. IoT system complexity essentially necessitates some level of automation and network learning. While lesser implementations of IoT can yield some benefits, large-scale systems are likely to incorporate AI as part of the solution.
Examples of artificial intelligence technologies
Machine learning: Machine learning gives computers the ability to interpret large data sets without having to be explicitly trained. Business decision-making is aided by machine learning techniques when data analytics are carried out utilizing statistical models and algorithms. Businesses invest extensively in this field to gain from machine learning’s use in various fields.
Healthcare and the medical profession need machine learning algorithms to analyze patient data for disease prediction and efficient treatment. Machine learning is necessary for the banking and financial industries to analyze client data, discover and recommend customer investments, and reduce risk and fraud. Retailers use machine learning to analyze customer data and forecast shifting consumer preferences and behavior.
Virtual agents: The use of virtual agents by instructional designers has increased significantly. A computer program that communicates with people is referred to as a virtual agent. Web and mobile applications use chatbots as customer care representatives to communicate with people and respond to their inquiries.
Both Amazon’s Alexa and Google Assistant make it easier to plan meetings and go shopping. A virtual assistant performs similar functions to a language assistant by taking cues from your preferences and choices. The common customer care inquiries posed in various ways are comprehended by IBM Watson. Virtual agents function similarly to software as a service.
Speech recognition: Another significant branch of artificial intelligence is voice recognition, which transforms spoken language into a form that computers can utilize and comprehend. The bridge between human and computer interactions is speech recognition. The technology can translate and recognize human speech in a variety of languages. A well-known example of speech recognition is Siri on the iPhone.
Deep learning: Another area of artificial intelligence that relies on artificial neural networks is deep learning. This method encourages computers and other devices to learn by doing, much like people do. Because neural networks have hidden layers, the word “deep” was created. A neural network typically contains two to three hidden layers and up to 150 hidden layers.
Deep learning is effective on large amounts of data when training a model with a graphics processing unit. A hierarchy of algorithms is used to automate predictive analytics. Deep learning has gained traction in a variety of industries, including aerospace and military, to recognize things from satellites, worker safety by identifying dangerous situations when a worker is near a machine, cancer cell detection, etc.
Natural language processing: Machines transmit and interpret information differently than the human brain. A popular method called “natural language generation” transforms structured data into the user’s native tongue. Algorithms are programmed into the machines to transform the data into a format that the user will find appealing. A subset of artificial intelligence called natural language assists content creators in automating content and delivering it in the desired format.
To reach the desired audience, content creators can employ automated content to promote on different social media platforms and other media platforms. The amount of human intervention will be greatly reduced as data is transformed into the desired formats. Charts, graphs, and other visual representations of the data are available. Today natural language processing models are heavily utilized by AI art illustrators. For instance, emerging technologies like DALL-E 2, Midjourney AI, and many more utilize this method.
How intelligent is artificial intelligence today?
Large quantities of fuel, natural resources, and labor from humans are used to create AI. Additionally, it lacks any human-like intelligence. Without considerable human instruction, it cannot make distinctions, and the statistical reasoning it uses to determine meaning is entirely different. Since the inception of AI in 1956, we have committed this dreadful fault—a kind of original sin for the field—by assuming that minds are similar to computers and vice versa. Nothing could be further from the truth than our assumption that these objects are analogs of human intelligence.
However, AI is now well-established in many different corporate sectors and progressively influences decisions in fields like human resources, insurance, and bank lending, to mention a few.
Machines learn about us and what we like by analyzing our online behavior. After removing less important information, recommendation systems suggest movies to watch, articles to read, or clothing we would enjoy on social media.
Mathematical models have been greatly improved thanks to the development of ever-more-powerful computers and the digitization of information. The rest has been completed by the internet and its limitless datasets, advancing the capabilities of AI systems.
What is the future of artificial intelligence?
While it is impossible to predict how artificial intelligence will affect our lives fully, some things are certain:
AI will become a cornerstone of international relations
The National Security Commission on Artificial Intelligence came to the conclusion that the US government must significantly speed up the development of AI. There is little question that AI will be essential to the United States’ continued economic strength and geopolitical leadership.
AI and ML will evolve the scientific method
Large-scale clinical trials and the construction of particle colliders are examples of important science that are costly and time-consuming. There has been significant, well-founded worry about the stalling of scientific advancement in recent decades. The era of great scientific discovery may be finished.
We may anticipate orders of magnitude improvements in what can be done with AI and machine learning (ML). Humans are limited in the range of concepts they can computationally investigate. Humans and computers can discuss a wider range of concepts.
AI will pave the way for next-generation CX
The metaverse and cryptocurrency are two examples of next-generation consumer experiences that have generated a lot of hype. AI will be essential in making these experiences and more like them possible. Because humans lack the perception necessary to overlay digital things on actual physical surroundings or comprehend the spectrum of human activities and their related impacts in a metaverse setting, the metaverse is intrinsically a problem for AI.
These are organic drivers for AI to close feedback loops between the virtual and real worlds. For instance, at their foundation, distributed finance, cryptocurrencies, and blockchain are all about incorporating frictionless capitalism into the economy. Distributed apps and smart contracts will need a better grasp of how financial operations interact with the actual world, an AI and ML challenge, to make this vision a reality.
AI is necessary to address the climate change
We, as a society, still have a lot to do to reduce the socioeconomic risks brought on by climate change. Policies for pricing carbon are still in their infancy, and their usefulness is debatable. AI is necessary for many promising new concepts to be practical. One potential new strategy is prediction markets powered by AI that can link policy to impact and take a comprehensive perspective of environmental knowledge and interdependence.
AI will also change the future of healthcare
Since the human genome was decoded, personalized medicine has been an ambition. Tragically, it nevertheless remains an aspiration. Creating personalized treatments for patients is an intriguing new use of AI. In addition, AI can one day synthesize and forecast tailored treatment modalities in close to real-time without the need for clinical trials.
In other words, AI is ideally qualified to build and assess “digital twin” rubrics of personal biology and can do so in the context of the communities a person lives in.
What is the end goal of AI?
AI seeks to create machines that emulate human thought processes and actions, such as perception, reasoning, learning, planning, and prediction. One of the key traits that sets humans apart from other animals is intelligence. Industrial revolutions have resulted in the displacement of human labor from all walks of life by an ever-increasing variety of machines. The impending replacement of human resources by machine intelligence is the next major obstacle to be addressed.
The research in the field of AI is rich and diversified because so many scientists are concentrating on it. Search algorithms, knowledge graphs, natural language processing, expert systems, evolutionary algorithms, machine learning (ML), deep learning (DL), and other areas of AI research are just a few examples. There’s an insightful research addresses this topic called “Artificial intelligence: A powerful paradigm for scientific research”
Which industries will AI change?
Specifically, “narrow AI,” which executes objective functions using data-trained models and frequently falls into the categories of deep learning or machine learning, has already had an impact on practically every significant business. The proliferation of connected devices, strong IoT connectivity, and ever-faster computer processing have contributed to a significant increase in data collection and analysis during the past few years.
While some industries are just beginning their AI journey, others are seasoned travelers. Both still have a ways to go. Whatever the case, it’s difficult to ignore AI’s impact on our daily lives.
- Education: Artificial intelligence (AI) is used for digitizing textbooks, early-stage virtual tutors support human instructors, and facial analysis measures student emotions to identify better who is struggling or bored and better adapt the experience to their specific needs.
- Healthcare: Diseases are diagnosed more quickly and accurately, drug discovery is accelerated and streamlined, virtual nursing assistants keep an eye on patients, and big data analysis helps to provide a more individualized patient experience in the relatively young field of AI in healthcare.
- Transportation and logistics: Autonomous vehicles will transport us someday, albeit they may take some time to develop.
- Manufacturing: For a restricted range of operations like assembly and stacking, AI-powered robots collaborate with humans, and predictive analysis sensors keep equipment in good working order.
- Customer services: Last but not least, Google is developing an AI assistant that can make calls that sound human-like to schedule appointments. The technology is capable of comprehending context and nuance in addition to words.
- Media: Journalism is utilizing AI as well and will continue to do so. In order to assist readers quickly understanding complex financial information, Bloomberg uses Cyborg technology. The Associated Press now generates nearly four times as many earning report pieces yearly (3,700) using Automated Insights’ natural language processing capabilities.
Modernizing industries depend heavily on emerging technologies. Over the next ten years, artificial intelligence applications will significantly impact our society and economy. We are currently in the early stages of what many reliable experts consider to be the most promising period for technological innovation and value creation in the near future.
|
<urn:uuid:d62393c1-0e9e-4807-8a97-7597077629aa>
|
CC-MAIN-2022-40
|
https://dataconomy.com/2022/09/emerging-technologies-artificial-intelligence/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00641.warc.gz
|
en
| 0.933876 | 3,123 | 3.203125 | 3 |
In response to the epidemic, organizations throughout the world changed gears to virtual workplace models and remote working practices to ensure company continuity. Increased cost-cutting opportunities, increased staff productivity, and the lack of a vaccination all point to this theme continuing for the foreseeable future. The epidemic emphasized investing in digital technology including video conferencing tools, cloud platforms, and Learning Management Systems (LMS). While the Digital India campaign planted the seeds of a digital economy, it was COVID-19 that pushed large-scale digital adoption and transformation, necessitating the acquisition of new skills.
Organizations are increasingly striving to implement capability-building projects in light of ever-changing business conditions. As new developments such as Machine Learning (ML) and Artificial Intelligence (AI) continue to disrupt methods of working, upskilling and lifetime learning have become critical.
Challenges related that companies are facing due to COVID19
A paradigm changes in the learning environment
According to our study, the environment related to training and learning in organizations has changed changing significantly. And we noticed:
Workplace dynamics are changing because of the epidemic.
The connection of learning to business goals was rated as a priority by 57% of the companies polled. The changing corporate environment and innovations brought about by COVID-19, such as remote working and virtualization, have forced organizations to reconsider the role of learning. Organizations are attempting to develop learning techniques to effectively incorporate learning into the company. As disruptors continue to influence practically every industry, organizations' learning departments are redefining how they generate and distribute learning material to align learning with organizational goals.
Author Name: Nitish P.
|
<urn:uuid:ffe35d3a-3ac4-43a4-9835-01b22fbfd068>
|
CC-MAIN-2022-40
|
https://www.alltheresearch.com/white-paper/role-of-lms-to-monitor-resources-post-covid-era
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00641.warc.gz
|
en
| 0.950759 | 331 | 2.578125 | 3 |
Introduction to TCP & UDP
TCP and UDP both are used for transferring data or packets on the internet or Intranet. Both perform the same job but the way is different. In this blog, we are discussing the difference between TCP and UDP i.e. TCP vs UDP.
TCP stands for “Transmission Control Protocol”. UDP stands for “User Datagram Protocol”. The main difference between UDP vs TCP is that the TCP is connection-oriented while UDP is connectionless.
In TCP after the connection is set up, bidirectional sending of data is possible but in UDP, packets are sent in chunks. TCP is more reliable than UDP but UDP is faster than TCP.
Header size of TCP is 20 bytes while Header size of UDP is 8 bytes. UDP doesn’t have error checking mechanism that is why it is less reliable but is faster in data transferring while TCP is reliable but comparatively slow as it keeps the data smooth and checks error.
Related- Common TCP
Key points of difference between TCP and UDP
- TCP stands for “Transmission Control Protocol” while UDP stands for “User datagram Protocol”.
- TCP is the connection-oriented protocol while UDP is connectionless protocol.
- TCP is more reliable than UDP.
- TCP uses both error detection and error recovery. UDP works on a “best-effort” basis
- UDP is faster for data sending than TCP.
- UDP makes error checking but no reporting but TCP makes checks for errors and reporting.
- TCP gives a guarantee that the order of data at receiving end is the same as on sending end while UDP has no such guarantee.
- Header size of TCP is 20 bytes while that of UDP is 8 bytes.
- TCP is heavyweight as it needs three packets to set up a connection while UDP is lightweight.
- TCP has acknowledgement segments but UDP has no acknowledgement.
- TCP is used for an application that requires high reliability but less time-critical whereas UDP is used for an application that is time-sensitive but requires less reliability.
Comparison table: Difference between TCP and UDP
A More detail comparison between TCP vs UDP is shared below –
|Full Form||Transmission Control Protocol||User Datagram Protocol or Universal Datagram Protocol|
|Connection||TCP is a connection-oriented protocol.||UDP is a connectionless protocol.|
|Half-Closed connection||TCP allows half closed connections||Not applicable for UDP protocol|
|Function||As a message makes its way across the internet from one computer to another. This is connection based.||UDP is also a protocol used in message transport or transfer. This is not connection based which means that one program can send a load of packets to another and that would be the end of the relationship.|
|Usage||TCP is suited for applications that require high reliability, and transmission time is relatively less critical.||UDP is suitable for applications that need fast, efficient transmission, such as games. UDP's stateless nature is also useful for servers that answer small queries from huge numbers of clients.|
|Use by other protocols||HTTP, HTTPs, FTP, SMTP, Telnet,SSH||DNS, DHCP, TFTP, SNMP, RIP, VOIP,IPTV|
|Multiplexing & Demultiplexing||Using TCP port number||Using UDP port numbers|
|Ordering of data packets||TCP rearranges data packets in the order specified.||UDP has no inherent order as all packets are independent of each other. If ordering is required, it has to be managed by the application layer.
|Speed of transfer||The speed for TCP is slower than UDP.||UDP is faster because error recovery is not attempted. It is a "best effort" protocol.
|Reliability||There is absolute guarantee that the data transferred remains intact and arrives in the same order in which it was sent.||There is no guarantee that the messages or packets sent would reach at all.|
|Header Size||TCP header size is 20 bytes||UDP Header size is 8 bytes.|
|Common Header Fields||Source port, Destination port, Check Sum||Source port, Destination port, Check Sum|
|Streaming of data||Data is read as a byte stream, no distinguishing indications are transmitted to signal message (segment) boundaries.||Packets are sent individually and are checked for integrity only if they arrive. Packets have definite boundaries which are honoured upon receipt, meaning a read operation at the receiver socket will yield an entire message as it was originally sent.|
|Weight||TCP is heavy-weight. TCP requires three packets to set up a socket connection, before any user data can be sent. TCP handles reliability and congestion control.||UDP is lightweight. There is no ordering of messages, no tracking connections, etc. It is a small transport layer designed on top of IP.|
|Data Flow Control||TCP does Flow Control. TCP requires three packets to set up a socket connection, before any user data can be sent. TCP handles reliability and congestion control.||UDP does not have an option for flow control|
|Error Checking||TCP does error checking and error recovery. Erroneous packets are retransmitted from the source to the destination.||UDP does error checking but simply discards erroneous packets. Error recovery is not attempted.|
|Fields||1. Sequence Number|
2. AcK number
3. Data offset
5. Control bit
7. Urgent Pointer
11. Source port
12. Destination port
2. Source port
3. Destination port
4. Check Sum
|Acknowledgement||Acknowledgement segments||No Acknowledgment|
|Handshake||SYN, SYN-ACK, ACK||No handshake (connectionless protocol)|
Download the comparison table here.
|
<urn:uuid:509c7edd-f2dc-422b-bb71-a1b50944510e>
|
CC-MAIN-2022-40
|
https://ipwithease.com/tcp-vs-udp/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00641.warc.gz
|
en
| 0.885948 | 1,287 | 3.6875 | 4 |
Many of us would concede that buildings housing data centers are generally pretty ordinary places. They’re often drab and bunker-like with few or no windows, and located in office parks or in rural areas. You usually don’t see signs out front announcing what they are, and, if you’re not in information technology, you might be hard pressed to guess what goes on inside.
If you’re observant, you might notice cooling towers for air conditioning and signs of heavy electrical usage as clues to their purpose. For most people, though, data centers go by unnoticed and out of mind. Data center managers like it that way, because the data stored in and passing through these data centers is the life’s blood of business, research, finance, and our modern, digital-based lives.
That’s why the exceptions to low-key and meh data centers are noteworthy. These unusual centers stand out for their design, their location, what the building was previously used for, or perhaps how they approach energy usage or cooling.
Let’s take a look at a handful of data centers that certainly are outside of the norm.
The Underwater Data Center
Microsoft’s rationale for putting a data center underwater makes sense. Most people live near water, they say, and their submersible data center is quick to deploy, and can take advantage of hydrokinetic energy for power and natural cooling.
Project Natick has produced an experimental, shipping-container-size prototype designed to process data workloads on the seafloor near Scotland’s Orkney Islands. It’s part of a years-long research effort to investigate manufacturing and operating environmentally sustainable, prepackaged datacenter units that can be ordered to size, rapidly deployed, and left to operate independently on the seafloor for years.
The Supercomputing Center in a Former Catholic Church
One might be forgiven for mistaking Torre Girona for any normal church, but this deconsecrated 20th century church currently houses the Barcelona Supercomputing Center, home of the MareNostrum (Latin for Our sea, the Roman name for the Mediterranean Sea) supercomputer. As part of the Polytechnic University of Catalonia, this supercomputer is used for a range of research projects, from climate change to cancer research, biomedicine, weather forecasting, and fusion energy simulations.
The Under-a-Mountain Bond Supervillain Data Center
Most data centers don’t have the extreme protection or history of the The Bahnhof Data Center, which is located inside the ultra-secure former nuclear bunker Pionen, in Stockholm, Sweden. It is buried 100 feet below ground inside the White Mountains and secured behind 15.7 in. thick metal doors. It prides itself on its self-described Bond villain ambiance.
We previously wrote about this extraordinary data center in our post, The Challenges of Opening a Data Center — Part 1.
The Data Center That Can Survive a Class 5 Hurricane
Sometimes the location of the center comes first and the facility is hardened to withstand anticipated threats, such as Equinix’s NAP of the Americas data center in Miami, one of the largest single-building data centers on the planet (six stories and 750,000 square feet), which is built 32 feet above sea level and designed to withstand category five hurricane winds.
The MI1 facility provides access for the Caribbean, South and Central America and to more than 148 countries worldwide, and is the primary network exchange between Latin America and the U.S., according to Equinix. Any outage in this data center could potentially cripple businesses passing information between these locations.
The center was put to the test in 2017 when Hurricane Irma, a class 5 hurricane in the Caribbean, made landfall in Florida as a class 4 hurricane. The storm caused extensive damage in Miami-Dade County, but the Equinix center survived.
The Data Center Cooled by Glacier Water
Located on Norway’s west coast, the Lefdal Mine Datacenter is built 150 meters into a mountain in what was formerly an underground mine for excavating olivine, also known as the gemstone peridot, a green, high- density mineral used in steel production. The data center is powered exclusively by renewable energy produced locally, while being cooled by water from the second largest fjord in Norway, which is 565 meters deep and fed by the water from four glaciers. As it’s in a mine, the data center is located below sea level, eliminating the need for expensive high-capacity pumps to lift the fjord’s water to the cooling system’s heat exchangers, contributing to the center’s power efficiency.
The World’s Largest Data Center
The Tahoe Reno 1 data center in The Citadel Campus in Northern Nevada, with 7.2 million square feet of data center space, is the world’s largest data center. It’s not only big, it’s powered by 100% renewable energy with up to 650 megawatts of power.
An Out of This World Data Center
If the cloud isn’t far enough above us to satisfy your data needs, Cloud Constellation Corporation plans to put your data into orbit. A constellation of eight low earth orbit satellites (LEO), called SpaceBelt, will offer up to five petabytes of space-based secure data storage and services and will use laser communication links between the satellites to transmit data between different locations on Earth.
CCC isn’t the only player talking about space-based data centers, but it is the only one so far with $100 million in funding to make their plan a reality.
A Cloud Storage Company’s Modest Beginnings
OK, so our current data centers are not that unusual (with the possible exception of our now iconic Storage Pod design), but there was a time when Backblaze was just getting started and was figuring out how to make data storage work while keeping costs as low as possible for our customers. It’s a long way from these modest beginnings to almost one exabyte (one billion gigabytes) of customer data stored today.
The photo below is not exactly a data center, but it is the first data storage structure used by Backblaze to develop its storage infrastructure before going live with customer data. It was on the patio behind the Palo Alto apartment that Backblaze used for its first office.
The photos below (front and back) are of the very first data center cabinet that Backblaze filled with customer data. This was in 2009 in San Francisco, and just before we moved to a data center in Oakland where there was room to grow. Note the storage pod at the top of the cabinet. Yes, it’s made out of wood. (You have to start somewhere.)
Do You Know of Other Unusual Data Centers?
Do you know of another data center that should be on this list? Please tell us in the comments.
|
<urn:uuid:1daef688-73a9-47f8-89b4-fcb30b3d82fe>
|
CC-MAIN-2022-40
|
https://www.backblaze.com/blog/these-arent-your-ordinary-data-centers/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00641.warc.gz
|
en
| 0.941767 | 1,457 | 3.171875 | 3 |
A solid-state drive (SSD) is one of the most popular, fastest, and energy-efficient solutions for…
By Mike Cobb, Director of Engineering
Advancements in data storage technology have not ended with flash. Have you heard of storing data in DNA?
Growing Amount of Data
As computers, smartphones and tablets become more and more widespread, the amount of data being produced is growing at an exponential and staggering rate. It seems that every two years, there’s a new report saying 90% of the world’s data was created in the last two years. Today, there are 3.7 billion people using the internet and we are creating 2.5 million terabytes of data every single day.
It goes without saying that as more data is created each year, more storage is needed to hold that data. Cloud storage companies, corporations and government entities are perpetually busy adding to their servers, adding new servers and otherwise increasing capacity for stored data. And so far, storage device manufacturers have kept up with demand.
But with the exponentially increasing need for storage devices and data storage space, will production continue to outrun demand? According to scientists at data storage company Catalog, it will not.
Hyunjun Park, CEO and co-founder of Catalog, told Digital Trends that he expects data produced to far outweigh worldwide storage capacity as early as 2025. He expects that, if we continue with current storage technology, by 2025 we will only be capable of storing less than 15% of the world’s data. Catalog’s answer? DNA.
Watch Daily Mail’s interview with Catalog co-founder Hyunjun Park.
What is DNA Storage and How Can It be Used?
As science fiction as this concept may sound, nobody is taking blood from a person or an animal and storing data in their DNA. Instead, synthetic DNA will be manufactured for the purpose of data storage.
There are currently several companies in business who produce synthetic DNA that never belonged to any living thing. Applications are far ranging and include everything from developing antiviral medications to making fabric for clothing. Data storage is just one more practical use.
If you’re thinking that in a few years you’ll be buying DNA pools instead of external hard drives to back up your home computer, you may have the wrong idea. The process of writing data to DNA is lengthy and expensive. The process of reading data from DNA is even more so. This form of data storage will likely be limited to huge archive data centers for the purpose of saving space and keeping data in cold storage for years at a time with no access.
Benefits of DNA Data Storage
So why DNA? Three reasons:
- Data density
- Timeless technology
Data storage devices that hold more and more data are already getting smaller and smaller using electronic components, such as monolithic flash. However, the growing amount of data stored is outpacing even the miniaturization of storage devices and data centers are still having to purchase hundreds of thousands of square feet – millions in some cases – to house it all.
Add to this the prediction referenced above that data produced is expected to outpace devices manufactured and you have an even bigger problem. We will eventually run out of both data storage devices and physical space to house them.
Catalog claims that they are able to store up to a terabyte of data on a single gram-sized DNA pellet. With this technology, a million square foot data center of today may be able to store all of its data in a single domestic-size refrigerator. Space issue solved…for now.
With the increasing rate that humans are producing and storing data, how long until we’re storing refrigerators of synthetic DNA data storage in space? But that’s a question for another time.
Currently, the longest lasting data storage technology in regular use is tape storage, which can last up to 30 years in ideal conditions. Optical storage has seen some incredible advancements in the last few years, with some disks theoretically able to last up to 10,000 years; however, optical storage is not widely used in large data centers because they take up so much more space to store the same amount of data as other data storage technologies.
DNA, on the other hand, has been proven to last thousands of years and, as mentioned above, is extremely compact. In fact, it is estimated that DNA storage can last up to 6.8 million years in optimal conditions.
When was the last time you used a 3.25” floppy? When I brought up floppy disks to a 23-year old recently, he had never heard of any such thing. Showing him pictures did not ring any bells either.
Although there are a few storage technologies that have stood the test of time, such as tape and HDD, most have come and gone. With archival storage, this is a constant fear: we might store irreplaceable data only to find that one day we cannot retrieve it because the storage technology that was used to preserve it has become obsolete.
With DNA, this will theoretically never be an issue. Even when one day data storage technologies advance further, there will always be a need to read DNA for other purposes, such as genetic testing or criminal investigations. Therefore, there will always be a way to read it for data storage purposes as well.
History of DNA Data Storage
The first snippet of data – a 23-character text-based message – was stored on synthetic DNA in 1999. Since that time, several universities and technology companies have been working together to improve the technology. As recently as 2016, Microsoft Research and the University of Washington (UW) released a study showing significant advancements: storage of 200MB of data on DNA in a space the size of a pencil tip.
So why didn’t it immediately go mainstream? Two reasons: cost and length of process.
The process of DNA data storage and retrieval, as used by Microsoft and UW, required three separate pieces of equipment: a DNA synthesizer to translate digital data into the components of DNA, storage for the DNA, and a DNA sequencer to translate the DNA components back into readable data. The entire process takes an insane amount of time and costs thousands of dollars per MB of data.
Boston-based Catalog is keeping details of its improvements on DNA data storage technology mostly a secret. However, they claim to have dramatically reduced length of process and cost by using a different approach to data-to-DNA translation. They intend to begin providing their improved technology to the entertainment industry and other big data industries by next year – 2019.
The average 2 terabyte hard drive currently weighs a few hundred grams. Catalog claims to be able to store up to 250 petabytes on a pellet of DNA weighing only one gram! One petabyte, by the way, is equal to 1,000 terabytes. So 100,000 2TB hard drives may one day soon be replaced by a single tiny pellet.
Data Loss in DNA Data Storage
A handful of data loss scenarios have already been identified for DNA data storage. Data loss can be caused by logical corruption, decay, oxidation, high temperature, truncation, errors in the conversion process and other factors.
It is possible that data may one day be recoverable in some of these data loss situations. At DriveSavers, we always enjoy a challenge and eagerly await the opportunity to explore data recovery techniques specific to this new technology.
Whatever new data storage innovation arises, it is always important to back up, back up, back up! Even with this amazing advancement, if the customer archives the data in only one place, then they are vulnerable to losing data. This is true of HDD, SSD, DNA and any other technology that may be developed. If you or your company one day choose to take advantage of DNA data storage, we recommend multiple backups or else DriveSavers will have to retool for recovering data from DNA strands!
|
<urn:uuid:a158306c-a239-4b0e-bdac-b78171d995e1>
|
CC-MAIN-2022-40
|
https://drivesaversdatarecovery.com/growing-data-dna-data-storage/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00641.warc.gz
|
en
| 0.945452 | 1,645 | 2.78125 | 3 |
A switch is a device that takes in packets of data being sent by devices that connect to its physical ports and sends them out again. However, it sends them only through the ports that lead to the devices the packets are intended to reach. In other words, switches connect Ethernet IP devices and forward information between them. In it’s simplest form, an unmanaged switch is a fancy ethernet hub, and can be had quite cheaply. As such, an unmanaged switch requires no configuration. They are typically for basic connectivity.
On the other hand, a managed switch is communicates across multiple networks while simultaneously providing built-in network security and improving the network’s bandwidth by prioritizing packet requests.
Why a managed switch?
There is no ability to configure an unmanaged network switch because it lacks a “brain.” A managed switch gives you the ability to manually configure, monitor, and manage the devices on your network.
The reason an organization would spend good money on a managed switch with all these heavy-duty features is simply this: managed switches give you greater security, more features, and flexibility. Consequently, you can configure them to custom-fit your network. With this greater control, you can better protect your network and improve the quality of service for those who access the network.
Plus, a managed switch Like the FS 124E has more software on it that allows the user to have more control over their network. For example, you can use it to control the LAN to create VLANs (Virtual Local Area Networks).
In addition, these switches provide network status, diagnostics, and data prioritization. And, unauthorized devices cannot connect to the network, thus preventing malicious threats.
How do I manage all these functions?
A good switch should be simple to set up and manage. The FortiSwitch 124E (and all FortiSwitches) is tightly integrated into the Fortinet Security Fabric via FortiLink, an innovative proprietary management tool. This allows any FortiGate firewall to seamlessly manage any FortiSwitch. This enables it to become a logical extension of the FortiGate. As a result, the 124E is integrated directly into the Fortinet Security Fabric. At that point, the FortiSwitch can be managed directly from the familiar, easy-to-grasp FortiGate interface. The nice thing is, it’s a “single pane of glass” interface, and provides complete visibility and control of users and devices on the network no matter how they connect.
The integration between the FortiGate and FortiSwitch — the reporting and monitoring — is superb. One of the most valuable features is that you can completely isolate devices that are compromising the network.
Performance of the FortiSwitch 124E
The FS-124E has 24 GE RJ45, and 4 GE SFO ports. It delivers 56 Gbps of Switching Capacity, and 83 Mpps Packets per Second. Additionally, it has 4 ms latency. Lastly, it supports 4000 VLANS. A FortiGate firewall will be able to manage anywhere from 8 to 300 FortiSwitches, depending on the model.
Who is the FS-124E designed for?
The 124E is ideal in converged network environments; enabling voice, data and wireless traffic to be delivered across a single network. Specifically it will suit threat-conscious small to mid-sized businesses and branch offices.
Networks today are essential for supporting businesses, providing communication, delivering entertainment—the list goes on and on. And building a small business network is not possible without switches to tie devices together. It unites the computers, printers, and servers in a small business network seamlessly and safely.
|
<urn:uuid:8fbb63d1-0548-4e19-bc17-a6a1a3db87f9>
|
CC-MAIN-2022-40
|
https://www.corporatearmor.com/fortiswitch-124e/fortiswitch-124e-and-the-function-of-switches/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00641.warc.gz
|
en
| 0.927747 | 759 | 2.515625 | 3 |
Searches for vulnerabilities require special knowledge, extensive experience, and a sixth sense. But what about novice security researchers? They have no experience and cannot gain it because don’t know where to start from. This is where automated vulnerability scanners come into play. In this article, I will present the main types of such programs and explain how to use them.
Today, I will explain how to hack the CTF virtual machine available on Hack The Box training grounds. For the purposes of this article, the abbreviation “CTF” refers to Compressed Token Format, not Capture the Flag. This VM is vulnerable to various types of LDAP injections, while its authentication mechanism is based on stoken, a generator of one-time passwords. In addition, the target machine uses a loose Bash script, and I will exploit it to fool the 7z archiver and gain root access.
This article addresses a vulnerability in Apache Tomcat that enables the attacker to read files on the server and, under certain conditions, execute arbitrary code. The problem lies in the implementation of the AJP protocol used to communicate with a Tomcat server. Most importantly, the attacker does not need any rights in the target system to exploit this vulnerability.
Web security is a very broad term. It includes bugs in old protocols, usage of dangerous techniques, trivial human errors made by developers, and more. It is difficult to test products in such a broad area without a plan. The Open Web Application Security Project (OWASP) made the life of pentesters easier by producing the OWASP Testing Guide.
As you are aware, any penetration test starts from information collection. You have to find out what operating system is running on the remote host, and only then you can start looking for vulnerabilities in it. This article presents seven useful tools used inter alia for OS detection and explains their operation principles.
A critical vulnerability resulting in a denial-of-service error has been recently discovered in ModSecurity, a popular web application firewall (WAF) for Apache, IIS, and Nginx. The bug is truly severe: not only does the library stop working, but applications using it as well. Let’s see what was the mistake of the ModSecurity developers and how we, ethical hackers, can exploit this vulnerability in our penetration tests.
Firmware of popular routers often contains errors identified by security researchers on a regular basis. However, it is not enough just to find a bug – it must be neutralized. Today, I will explain how to protect your network against known and yet-unknown vulnerabilities in RouterOS.
|
<urn:uuid:14a31d3f-3438-4f85-bbf5-ecc8565cd2ab>
|
CC-MAIN-2022-40
|
https://hackmag.com/page/11/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00041.warc.gz
|
en
| 0.921128 | 533 | 2.578125 | 3 |
- Comparison between Different Network Devices – Ethernet Switch vs Hub vs Router
- Ethernet Switch vs Hub vs Router Comparison table
- What is a splitter?
- What is a repeater?
- What is a Hub?
- What does a hub do?
- Types of HUB
- Advantages and Disadvantages of HUB
- What is a Switch?
- What does an Ethernet switch do?
- Types of Switch
- Advantages and Disadvantages of an Ethernet Switch
Comparison between Different Network Devices – Ethernet Switch vs Hub vs Router
This blog post will help you understand the comparison between Ethernet switch vs hub vs router and how they work. You’ll also learn which device is best for your home or office. Whether it’s an ethernet hub, repeater, splitter, ethernet switch, or router, all of this equipment is utilized to extend the network‘s reach. Devices may have similar functions, yet they all have unique applications. So let’s get started straight with the ethernet hub vs switch vs router comparison table.
Ethernet Switch vs Hub vs Router Comparison table
|Criteria||Hub or Repeater||Switch||Router|
|OSI Layer||Layer 1 - Physical layer||Layer 2 - Data link layer||Layer 3 - Network layer|
|Function||Hub - Hub repeats every signal it receives via any of its ports out every other port. For Example - To connect a network of personal computers, you can join them|
through a central hub. A hub can be a repeater.
Repeater - Repeaters are usually used to extend a connection to a remote host or to connect a group of users who exceed the distance limitation of
10Base-T. A repeater cannot be a Hub.
|A switch, in contrast, keeps track of which devices are on which ports and forwards frames only |
to the devices for which they are intended. Switches allow connections to multiple devices, manage ports, manage VLAN security
|A router connects two or more networks together for example - it connects a LAN to the WAN.|
|Data Transmission form||electrical signal or Bits||frame & packet||packet|
|Port||2/4/12 ports||Multiport, usually between 4 and 48||2/4/5/8 or more ports depends on the vendor and type|
|Transmission type||Frame flooding, unicast, multicast, or broadcast||The first broadcast, then unicast or multicast, depends on the requirement||At Initial Level Broadcast then Uni-cast and multicast|
|Device type||Non-intelligent and Passive device - Do not alter frames |
or make decisions based on them in any way.
|Intelligent and active device - Can alter frames|
or make decisions based on them.
|Intelligent device - Can alter IP received data packets
or can make decisions based on them.
|Possibility of Collision||High possibility because Hubs repeat inbound signals to all ports, regardless of type or destination||No Possibility - Switch can only forward a broadcast to a particular network segment or VLAN.||No Possibility - Operates outside of the ethernet segment and on Layer 3.|
|Used in(LAN, MAN, WAN)||LAN (Hubs are mostly used on user desks or in-home where 2-3 devices need to be connected)||LAN (You can use switches inside the home as well if you have heavy network usage applications like multiplayer games or music file sharing.)||LAN, MAN, WAN|
|Transmission mode||Half-duplex||Half/Full duplex||Full duplex|
|Speed||10Mbps||10/100Mbps, 1Gbps||1-100Mbps(wireless); 100Mbps-1Gbps(wired)|
|The address used for data transmission||MAC address||MAC address||IP address|
|STP - Spanning Tree Protocol||STP is not possible.||STP is possible.||STP is not possible.|
|Store MAC Address||NO||Yes||Yes|
|Broadcast Domain||Single Broadcast Domain||A switch can create multiple Broadcast Domain through VLANs.||Broadcast Domain ends at the router.|
|Collision Domain||A hub generally expands a collision domain through all ports||An Ethernet switch creates a collision domain on each port|
|Cost||Less Costly than a Switch||Costly, it mostly depends on the number of ports and speed of the ports.||Costly|
|Collisions||Collisions occur mainly in setups using Hubs.||No Collisions.||No Collisions.|
|Possibility of Broadcast Storm||High Chances||Fewer chances.||No Chance of broadcast storm.|
|Which is faster switch or hub or a router?||Hub is slower than switch. Hub ports are usually 10 Mbps or 100 Mbps maximum.||Switch is faster than Hub. They can provide 100 Mbps, 1 Gbps, 10 Gbps and 100 Gbps port speeds.||Routers are almost equally fast as switches or, in some scenarios, faster than switches.|
After going through the above comparison table for ethernet switch vs hub vs router, let us look at each device separately and understand what they have to offer.
What is a splitter?
When separating a network connection, Ethernet splitters are the most user-friendly solution. The gadget is a coaxial cable transmission system that divides a single cable into two.
Unlike hubs and switches, which require an external power source, a splitter is a passive device. In addition, they’re the most straightforward and most user-friendly to set up and utilize. Splitters are primarily used on a user desk or in a house where a single connection is coming, and you need to connect two devices. The optimum application for splitters in a home or business is to limit the number of long cables that you must run between rooms.
What is a repeater?
A repeater is a device that repeats a signal. Repeaters are typically used to extend a connection to a remote host or connect a group of users separated by a distance greater than the 10Base-T maximum distance limitation, 100 meters.
What is a Hub?
An ethernet hub is essentially a device with multiple ports that connects two or more Ethernet cables, allowing their signals to be repeated to every other port in a single network.
For this reason, hubs are sometimes referred to as repeaters; nevertheless, it is crucial to note that while a hub is a repeater, a repeater is not always considered a hub.
In other words, a hub is like a repeater, except that a repeater may only have two connectors, but a hub can have many more, and it repeats a signal over many cables instead of just one.
What does a hub do?
A network hub connects multiple ethernet devices on a network and provides a conduit for data broadcast between them.
In today’s network, hubs are typically used on a user desk where the single cable comes to the desk, and the user wants to split and be connected to multiple devices, like two laptops, a projector, a printer on the same desk. Although this is not a good practice, you must connect one cable to a single device. For this reason, You can also use hubs to expand the size of a network by allowing more devices to be connected.
Hubs main features are:-
- Hubs are predecessors of ethernet switches.
- A hub is a multi-port repeater that has many ports. The several ports on the device accept ethernet connections from various network devices.
- A Hub is considered the least intelligent device because it does not filter data and does not know to which destination the data is supposed to be transmitted.
- When a data packet arrives at one of the ports, it is automatically copied to all the other ports. As a result, the data packet is received by all the devices, even though not intended for them.
- A hub generally has 4 to 12 ports.
- Hubs are purely physical and electrical devices
- Hub works on the Physical layer of the OSI model.
- Hubs are simple devices and not as smart as switches as routers.
- The majority of packet collisions occur within a hub.
- Hubs are primarily used on a user’s desk, home networks, or smaller LAN environments.
Types of HUB
- Here are two kinds of Hub: Active Hub and Passive Hub
- An active hub has a power supply. It is a multi-point repeater and can regenerate signals in the network.
- It can also clean, improve, and relay the signal along with the network.
- The device can be used both as a repeater and a wiring center.
- As the name implies, it is a passive device meaning it does not have its power supply and relies on the active hub or external device.
- A passive hub is a connector that connects wires coming from other systems.
Advantages and Disadvantages of HUB
|Advantages of a Hub||Disadvantages of a Hub|
|It is a simple-to-use device.||It is an unintelligent device as they do not alter frames or make decisions based on them in any way.|
|It can be used as a repeater to extend a connection strength distance greater than 100 meters.||Hub can only support half-duplex.|
|Hub can use it for network monitoring.||It has only one collision and broadcast domain.|
|You can use hubs inside small home networks, or only sharing and signal repetition is required.||Hubs cannot differentiate devices and are not smart, so you should not use them in large networks.|
|It is not secure as it can broadcast data and send it to an unintended user.|
|Hubs don't offer dedicated bandwidth.|
|You cannot reduce or increase network traffic when using Hubs.|
|It does not support VLANs or STP.|
What is a Switch?
The switch was the next logical step in the development of Ethernet. Compared to hubs, switches are more involved in the frame forwarding process. Remember that a hub repeats every signal it receives via any of its ports out of every other port. On the other hand, a switch maintains a record (mac-address-table) of connected devices and ports and forwards frames exclusively to those devices.
Ethernet Switch is also a multi-port network bridge or a bridging device since it connects network segments.
Besides, a network switch can accomplish everything a hub can do more efficiently, including identifying the source information’s intended destination.
What does an Ethernet switch do?
Ethernet switches send frames only to specific devices having specific mac addresses and only intended to receive that frame.
So now the question comes:-
How does the switch decide where to send the frames transmitted from different devices on the network?
Every frame has a source and destination MAC address field, and a switch opens up (if it needs to) that frame and looks at that information. Then it cross-verify that source MAC address in its table. If the source MAC address is not present in the table, it adds the MAC address and the port into the table. This table is referred to as the content-addressable memory table ( CAM table) in CatOS and the MAC address table in IOS holds a map of which MAC addresses have been detected on which ports and is used to identify which ports have been discovered.
The switch then identifies the destination MAC address of the frame and examines the table to see whether there is a match. If there is a match, that frame is only forwarded to that port. The frame is transmitted to all ports if a match is not found.
Ethernet switches can segregate network devices into groups by using VLANs.
Types of Switch
Below are the main types of switches:-
As the name implies, manageable switches are manageable locally and remotely.
You can assign an IP address on a console port of a manageable switch to manage it remotely.
Manageable switches are mainly two types.
Fixed-configuration switches are smaller, usually 1 RU and up to 48 ports. These switches are designed for situations where larger switches are unnecessary.
Cisco 2950, 3550, and 3750 switches are fixed-configuration switches.
A Modular gigabit switch is bigger, usually 2-10 RU, and has more than 100 ports. These switches are designed for larger and high-speed networks.
Cisco 6500,4500 are modular switches.
You cannot manage an unmanageable switch from a remote location.
It is impossible to assign an IP address as there is no console port.
Advantages and Disadvantages of an Ethernet Switch
|Advantages of an Ethernet Switch||Disadvantages of an Ethernet Switch|
|Switches have port ranges between 8-48 ports suitable for small and large Local Area Networks. You can use switches inside the home as well if you have heavy usage applications like multiplayer games or heavy music file sharing.||Switches with more ports are costlier.|
|Switches are more Intelligent than hubs and have many features, including device identification, layer2 security, flood identification, prevention, Spanning Tree Protocol (STP), etc.||You need networking knowledge to configure a switch properly. A wrong or improper switch configuration can do disaster in the network.|
|The switch reduces the number of broadcast domains.||Switches are not as good as routers in limiting a broadcast; however, nowadays, there are layer3 switches that can handle broadcasts like routers.|
|The switch supports VLANs for logical port segmentation.||Some switches only support normal VLAN ranges from 1-1005 and do not support extended VLAN range(1006 to 4094). Therefore, you should always check a switch VLAN limits as per your requirement.|
|Switches can use the CAM database or MAC address tables to map the port to MAC.||Some switches have a specific limit for MAC address tables; you should always check the maximum CAM database size before deploying switches.|
|Switches are robust and can handle broadcast and multicast packets.||Although switches can handle broadcast and multicast packets well, handling Multicast packets in switches requires careful planning and design.|
|Administrators can manage VLAN security and turn ports on and off using intelligent ethernet switch features.||Again these advanced features need proper networking knowledge and careful planning.|
Conclusion: Which One Should You Use – An Ethernet Switch or A Hub?
To conclude, the ethernet switch vs hub debate is that you should always pick a switch if you have to decide between these two. But if a hub is readily available, you can still use it for a tiny network but should not connect more than three to four devices to it. On the other hand, you must use a manageable Ethernet switch with all the functionalities you need for a more extensive network, as discussed in the switch advantages section.
I hope you liked the topic, please share.
Are hubs still used today?
Hubs have been largely replaced by network switches, except in very old installations or for very specific applications, since the introduction of network switches. Repeaters and hubs have been deprecated by IEEE 802.3 as of 2011. More details on this Wikipedia article about ethernet hubs.
Why are hubs slower than switches?
Hubs operate only at half-duplex and cannot send and receive data at the same time, that’s the reason they are slower than switches and routers.
Each port on a 10/100Mbps switch gets a full 10/100Mbps, unlike a Hub. In other words, no matter how many computers are sending data, everyone will always be able to use their full share. When it comes to network connectivity, switches are superior to hubs because of these reasons.
Does an Ethernet hub reduce speed?
If you are the only one (or two to three other devices) utilizing it, the hub will not slow down your internet speed. But if the same number of users start to send and receive traffic at the same time, hubs will drastically reduce the network speed.
And the reason is hubs can only work on half-duplex and have less speed on ports (not more than 100 Mbps), so you can either send data or receive data at one point in time.
|
<urn:uuid:d6cf24f5-6165-4847-8db0-99bfbaa935e5>
|
CC-MAIN-2022-40
|
https://afrozahmad.com/blog/ethernet-switch-vs-hub-vs-router/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00041.warc.gz
|
en
| 0.907998 | 3,506 | 2.828125 | 3 |
Read time 3 minutes, 47 sec
Microsoft had given its digital imprimatur to a rootkit that has decrypted all the encrypted communications and sent them to the attacker-controlled servers. This malicious driver has been spread within gaming environments. This driver is known as “Netfilter,” whose primary purpose or critical role is as a rootkit that communicates with the Chinese command-and-control (C2) IPs.
Karsten Hahn, who is a researcher at security firm G Data, discovered this driver using his company’s malware detection system. The initial observation was declared a false alarm as Microsoft had digitally signed Netfilter under the company’s Windows Hardware Compatibility Program. After further testing and research, Karsten concluded that this was not a false warning or positive. He and his fellow researchers discovered that “The core functionality seems to be eavesdropping on SSL connections. In addition to the IP redirecting component, it also installs and protects a root certificate to the registry” (by reverse engineer Johann Aydinbas on Twitter)
What is a Rootkit?
A rootkit is a type of malware written to prevent or stop itself from being shown in file directories, other standard OS functions, and task monitors. A root certificate is usually used to authenticate traffic sent through connections protected by the Transport Layer Security protocol; this helps encrypt the data in transit and ensures the server whether a user connected is genuine or an imposter.
Typically, these TLS certificates are issued by a Windows-trusted Certificate Authority (or CA), and by installing these root certificates in Windows, hackers can bypass the CA requirement.
The driver Netfiler was seen communicating with China-based C&C IPs providing no legitimate functionality, which further led to suspicions. Around this time, G Data’s malware analyst Karsten Hahn shared the signature info publicly on Twitter and contacted Microsoft.
Figure 1 Malicious binary signed by Microsoft
According to Hahn, any code that runs in kernel mode must be tested and signed before being released publicly to ensure stability for the Operating System. At this time, BleepingComputer also began observing the behavior of C2 URLs and contacted Microsoft for a valid reason or explanation.
The first few C2 URL returns a set of more routed separated by the pipe (“|”) symbol:
Figure 2 Navigating to the C2 URL
Each of these serves a purpose:
- The URL which is ending in “/p” means it’s associated with proxy settings
- “/s” denotes encoded redirection IP addresses.
- “/h?” is for representing CPU-ID.
- “/c” gave a root certificate
- “/v?” denotes the malware’s self-update functionality.
According to BleepingComputer, the “/v?” path provided the URL to the malicious Netfilter driver in the question itself (at “/d3”):
Figure 3 Path to malicious Netfiler driver
The G Data researcher, Hahn spent quite some time sufficiently analyzing the driver, result concluded that this driver has self-update functionality. According to him, the sample has a self-update routine that sends its MD5 hash to the server through “hxxp://188.8.131.52:2081/v?v=6&m=”.
The server then replies with the URL for the latest sample with “OK” if the model is up-to-date and the malware replaces its file accordingly.
Figure 4 Malware self-update functionality analyzed
Microsoft said they investigate a malicious actor who distributed these negative drivers (Netfilter) within gaming environments. This actor submitted drivers for certification through the Windows Hardware Compatibility Program. These drivers were built by a third party, so Microsoft has suspended their account and reviewed their submissions for additional signs of malware. The company (Microsoft) could not find evidence of either the Windows Hardware Compatibility Program signing certificate or its WHCP signing infrastructure being compromised. So, they have since added Netfilter detections to the Windows Defender AV engine built into Windows and provided this detection to other AV providers.
Update regarding the Malware
- Jun 26th, 12:26 PM ET: Clarified that BleepingComputer did not see the DoD list explicitly mentioning the alleged Chinese company, contrary to the details in the researcher’s report. Also reached out to Hahn for clarification.
- Jun 27th, 04:58 AM ET: A previous version of the blog post mentioned another researcher, @cowonaut alleging that the company mentioned above was previously marked by the U.S. Department of Defense (DoD) as a “Communist Chinese military” company. The claim has since been retracted from the original blog post, and we have updated our article to reflect the same. However, BleepingComputer did not see Ningbo Zhuo Zhi Innovation Network Technology Co., Ltd. present on any of the DoD lists available.
Despite the limitations, this security lapse was a serious one. Microsoft’s certification program was designed to block precisely the kind of attack which G Data first discovered. Microsoft has yet to say how they came to sign the malware digitally; company representatives also declined to explain.
|
<urn:uuid:4af4e5ba-273c-4f4c-a376-8d60c42e1212>
|
CC-MAIN-2022-40
|
https://www.encryptionconsulting.com/microsoft-signed-rootkit-malware/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00041.warc.gz
|
en
| 0.94114 | 1,117 | 2.625 | 3 |
What are incident response scenarios?
These exercises are a practical way for businesses to test their incident response plans (IRP) and educate their teams on the importance of cybersecurity and what to do in the event of a data breach. This is done by setting out a realistic scenario and asking participants questions like:
How would you respond?
What tools would you use?
What is your role in reporting the breach?
Who would you speak to in order to resolve the issue?
How would you report the problem?
5 incident response scenarios you can use to test your team
Set out a made-up scenario and give your team a bit of context behind it. They’ll then need to identify the cause of the problem and how they’d approach it.
Most of these are simple tests that can be completed in as little as 15 minutes, so you don’t need to set aside hours for these scenarios. They are, however, perfect for getting your team thinking about cybersecurity and ensuring they’re equipped to deal with a breach.
1. A patching problem
The key issue: a member of your support team deploys a critical patch in a hurry making the internal network vulnerable to a breach.
An example of the scenario you could present: it’s last thing on a Friday, and your network administrator receives a ticket looking for a critical patch on one of your systems. They quickly put something together, deploy it, and go home for the weekend. Hours later, the weekend service desk technician starts receiving calls saying the system is down and nobody can log in.
What’s being assessed: participants will have to identify the risks of an untested patch and how this could lead to a cybersecurity incident. They’ll also have to work out whether these patches can be recalled and who they need to contact to solve the issue.
2. A malware problem
The key issue: crossover between work and home devices has led to an employee infecting the company systems with malware.
An example of the scenario you could present: a member of the marketing team borrowed a company USB drive so they could take their presentation home and continue working on it. They plugged the USB into their home laptop, and while connected, it was infected with malware. Once back at the office, they re-inserted this into their work computer, infecting the systems with the same malware.
What’s being assessed: this tests how quickly/whether the employee can work out what’s happened and also whether your team are aware of security issues such as malware. This highlights the importance of keeping work and home devices separate as much as possible.
3. A potential cyberthreat
The key issue: A hacker is threatening to break into the company systems and access sensitive data, but how they plan to attack is unknown.
An example of the scenario you could present: after believing they have been wronged by the company, a hacker starts emailing members of staff threatening to hack the company database. However, the nature of the attack is unknown, and the business needs to act fast to ensure all systems are protected.
What’s being assessed: this scenario requires participants to plan ahead for an attack that could come from anywhere. They must identify weaknesses in the systems and decide very quickly how to bolster the company’s defences and security measures.
4. The cloud has been compromised
The key issue: a cloud-based service you use to store data has been hacked, and the passwords and data stored within have been compromised.
An example of the scenario you could present: a news story reports that a third-party cloud storage service you use has been hacked. The extent of the breach isn’t yet known, but it’s revealed that some of the data stored within has been exposed.
What’s being assessed: participants will be tested on their incident response, how they plan to get on top of the issue and whether they believe their company should still be held accountable for the breach, despite it coming from a third-party provider.
5. A financial mix-up
The key issue: data from the payroll system has been tampered with/deleted and this was flagged after employees didn’t receive their pay.
An example of the scenario you could present: despite allegedly being added to payroll over a month ago, five new members of staff haven’t received their pay and have raised the issue with their managers. After closer inspection, it appears that they were added to the system by someone in finance – but their information seems to have been removed or gone missing.
What’s being assessed: using the scenario, participants must work out what’s gone on and what led to their information going missing. This will test their incident response and if they know who to report to when there’s been a breach in the financial systems.
The benefits of incident response scenarios
How well your teams handle these incidents will indicate how prepared they are for a data breach or whether there are huge gaps in your company’s IRP. These tests can highlight areas of strength and where there’s room for improvement, making Incident Response Scenarios beneficial to both individual staff and the business as a whole.
Contact us today to find out more about our range of cybersecurity solutions, and how they can help you and your business reduce the attack surface against the growing threat of cybercrime. Call 028 8225 2445 or email [email protected].
|
<urn:uuid:d9e3646b-42e6-4200-b40e-eec035082b01>
|
CC-MAIN-2022-40
|
https://loughtec.com/5-cyber-incident-response-scenarios-to-test-on-your-team/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00041.warc.gz
|
en
| 0.965045 | 1,136 | 2.84375 | 3 |
”“”This article shows necessary actions for mounting a network drive with administrator rights.
If you have a local network set up at home or at work between devices, you have access to a shared folder on another computer, and also if you have a network drive (NAS), or a USB drive is connected to the router, then in Windows 10 can be connected as a network drive. In this article we will look at this process on the example of Windows 10. But in other versions of Windows there are almost no differences.
By connecting a network drive, we can quickly access a specific network folder. All network drives are displayed in the explorer (this computer). Of course, to connect a shared folder as a separate drive, our computer must find these shared folders. Simply put, in the explorer, on the Network tab, we should have access to folders on other computers on the network, to a network drive, or a router (if a USB flash drive or disk is connected to it).
If you have devices there, then you can connect their shared folders as a network drive. If they are not, or not only the necessary computers, or drives, then it is possible that you need to configure the local network.
1. Create a New Admin User (Type From the Command Line)
“Net user” + username + “” + password + “/ add” (for example:
net user pcunlocker 123 / add) — create the user “pcunlocker” with the password “123”
“net localgroup” + group_name + “” + username + “/ add” (for example:
net localgroup administrators pcunlocker / add) — the user “pcunlocker” becomes the administrator
2. Create Temporary File tmp.inf
Create a file “C: \ windows \ security \ templates \ tmp.inf”, which should contain:
signature = “$ CHICAGO $”
DriverVer = 07/01 / 2001,5.1.2600.0
MACHINE \ System \ CurrentControlSet \ Control \ Lsa \ ForceGuest = 4.0
3. Import It Into Local Policies
Type the command that will create the file and import it into local policies, this is necessary in order for local policies to allow access as a normal input and not as a guest.
Secedit / configure / db c: \ windows \ security \ database \ tmp.sdb / cfg c: \ windows \ security \ templates \ tmp.inf”
Description of the step by step command:
seceditis a program that applies local policy settings
/ configure— a command that uses settings from the saved “c: \ windows \ security \ templates \ tmp.inf”
/ dbis a database that “secedit” saves and imports into local policies, in this case it is c: \ windows \ security \ database \ tmp.sdb
4. Then Correct the Network Drive
You can also correct the network drive itself, and make it multiplayer
- from the beginning, delete it “net share” + drive letter + “$ / delete”
- create anew, but with the “unlimited” parameter: “net share” + drive-letter + “$ =” + drive-letter + “: \ / UNLIMITED”
When you connect a disk, you must use: “\\ ip-address \ drive-letter $”, and use the created User and password with administrator rights.
5. Connect Shared Folder as Network Drive on Windows 10
Go to “This Computer”. Click on “Computer” – “Map Network Drive”.
Click on the “Browse” button, select the necessary shared folder from the network environment and click “Ok”.
Drive letter can be left by default, or choose any other.
If you need to specify a different username / password to access this folder, check the box “Use other credentials”. But as a rule, it is not necessary to do.
Please note: depending on the sharing settings on the device to which you want to connect, you may need to specify a username and password.
Similarly, you can connect another computer to the local network.
All mapped drives will be displayed in Windows 10 Explorer. On the “This Computer” tab.
To disconnect a drive from a network location, simply right-click on it and select Disconnect.
Also, if necessary, you can create a shortcut.
It will be placed on the desktop. And you will have access to the shared folder on another computer or network drive directly from the desktop.
Consider Using Action1 to Map Network Drive Remotely if:
- You need to perform an action on multiple computers simultaneously.
- You have remote employees with computers not connected to your corporate network.
Action1 is a cloud-based IT management platform for patch management, software deployment, remote desktop, software/hardware inventory, endpoint management and endpoint configuration reporting.
|
<urn:uuid:ccb5744d-0743-4c88-922c-359d594bc698>
|
CC-MAIN-2022-40
|
https://www.action1.com/how-to-map-network-drive-remotely-on-windows-systems/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00041.warc.gz
|
en
| 0.844878 | 1,112 | 2.65625 | 3 |
Introduction to IoT vs M2M
Technology advancement is occurring at a rapid pace and so does technology innovation. “M2M” and “IOT” are 2 innovation terms which are common discussion points of the technical innovation community.
Notable is that M2M may be called the predecessor of IoT. Let’s understand the terms & difference in M2M and IOT in more detail.
M2M or Machine to Machine –
M2M concept refers to two or more machines can communicate with each other and carry out certain functions without human intervention. Some degree of intelligence can be observed in the M2M model. Some of key applications which leverage M2M technology to provide services are –
- Warehouse Management Systems (WMS)
- Supply Chain Management (SCM)
- Harvesting energy like oil and gas
- Customer billing like smart meters
- Traffic control
- Remote monitoring
IoT or Internet of Things –
IoT refers is an ecosystem of connected devices (via the Internet) where the devices have the ability to collect and transfer data over a network automatically without human intervention. IoT helps objects to interact with internal and/or external environment which in turn control the decision making. Some of applications and services of IOT Technology are –
- Smart Home
- Connected cars
- Agriculture and Retail
- Smart cities
- Poultry and Farming
If you’re planning to use this advanced technology for your business, now is the right time. You can create a secure web or mobile application making the processes convenient for your business. If you’re unclear about IoT’s potential, here is a short piece on the internet of things for software development that might help you know how IoT works for any business. Find more information about IoT here https://sumatosoft.com/
IoT vs M2M : The Difference
M2M solutions contain a linear communication channel between various machines that enables them to form a work cycle. It’s more of a cause and effect relation where one action triggers the other machinery into activity. Conversely, in case of IoT, multiple devices communicate with each other through sensors and digital connectivity.
While M2M (machine to machine) is commonly associated with isolated solutions like a solution of Wi-Fi thermostats, a vehicle location system or home automation, IoT (Internet of Things) works by stretching its boundaries and integrating multiple disparate systems into outputs beneficial for Business goals.
Another key differentiator is that M2M’s key focus is on direct point-to-point connectivity across mobile networks or fixed-line while IoT communication involves IP networks and will usually employ cloud or middleware platforms.
When we consider the scalability in M2M vs IoT, clearly IOT wins the race since it works on Cloud-based architecture and as we know Cloud can expand substantially.
Also, IoT makes use of Open APIs for communication across distinct systems while M2M mostly has limited or no open APIs.
Comparison Table: IoT vs M2M
|Abbreviation for||Machine to Machine||Internet of Things
|Philosophy||M2M is Concept where two or more machines can communicate with each other and carry out certain functions without human intervention. Some degree of intelligence can be observed in M2M model.||IOT is an ecosystem of connected devices ( via Internet) where the devices have ability to collect and transfer data over a network automatically without human intervention. IOT helps objects to interact with internal and/or external environment which in turn control the decision making.
|Connection Type||Point to Point||Through IP Network using various Communication types
|Communication protocols||Old proprietary protocols and communication techniques||Internet protocols used commonly
|Focus Area||For monitoring and control of 1 or few infrastructure/assets.||To address everyday needs of humans.
|Sharing of collected data||Data collected is not shared with other applications||Data is shared with other applications (like weather forecasts, social media etc.) improve end user experience
|Device dependency||Devices usually don’t rely over Internet connection||Devices usually rely over Internet connection
|Device in scope||Limited devices in scope||Large number of device sin scope
|Scalability||Less scalable than IOT||More scalable due to cloud based architecture
|Example||Remote monitoring , fleet control ||Smart Cities, smart agriculture etc.
|Business Type||B2B||B2B and B2C
|Technology Integration||Vertical||Vertical and Horizontal
|Open APIs||Not supported||Supported
|Related terms||Sensors , Data and Information||End users, devices, wearables, Cloud and Big Data
Download the difference table IOT vs M2M.
Hope you would have understood the difference between IoT and M2M from the above comparison table.
|
<urn:uuid:57562c72-fa93-4653-bd94-69a63444ec2f>
|
CC-MAIN-2022-40
|
https://ipwithease.com/internet-of-things-vs-machine-to-machine-iot-vs-m2m/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00041.warc.gz
|
en
| 0.878184 | 1,071 | 2.9375 | 3 |
What is RARP Protocol?
RARP is an abbreviation for Reverse Address Resolution Protocol. RARP is a TCP/IP protocol that is responsible for the translation of Physical Address (e.g. – Ethernet address) to be translated into an IP address.
Hosts like diskless workstations only have their hardware interface addresses or MAC address, but not their IP addresses. They must discover their IP addresses from an external source, usually via RARP protocol. RARP is defined in RFC 903.
RARP protocol is described in Internet Engineering Task Force (IETF) publication RFC 903 It has been considered obsolete with inventing to new methodologies like the Bootstrap Protocol (BOOTP) and the DHCP.
Both the new methods support a much greater feature set than RARP protocol.
RARP requires one or more server hosts to maintain a database of mappings of Link Layer addresses to their respective protocol addresses.
Media Access Control (MAC) addresses need to be individually configured on the servers by an administrator.
RARP is limited to serving only IP addresses.
Steps to Achieve the IP Address from RARP Server:
- Source Device “Generates RARP Request Message” – The source device generates a RARP Request message. The Source puts its own data link-layer address as both the Sender Hardware Address and also the Target Hardware Address. It leaves both the Sender Protocol Address and the Target Protocol Address blank.
- Source Device “Broadcasts RARP Request Message” – The source broadcasts the ARP Request message on the local network.
- Local Devices “Process RARP Request Message” – The message is received by each device on the local network and processed. Devices that are not configured to act as RARP servers ignore the message.
- RARP Server Generates RARP Reply Message: Any device on the network that is a RARP server responds to the broadcast from the source device. It generates a RARP Reply and sets the Sender Hardware Address and Sender Protocol Address to its own hardware and IP address of course. It then sets the Target Hardware Address to the hardware address of the original source device. It looks up in a table the hardware address of the source, determines that device’s IP address assignment, and puts it into the Target Protocol Address field.
- RARP Server Sends RARP Reply Message: The RARP server sends the RARP Reply message unicast to the device looking to be configured.
- Source Device Processes RARP Reply Message: The source device processes the reply from the RARP server. It then configures itself using the IP address in the Target Protocol Address supplied by the RARP server.
Related – RARP vs DHCP
To know more about ARP & RARP Protocol watch this video –
|
<urn:uuid:a0e48eef-4798-40ac-91d0-e6ab02b67cdc>
|
CC-MAIN-2022-40
|
https://ipwithease.com/rarp-reverse-address-resolution-protocol/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00041.warc.gz
|
en
| 0.885027 | 578 | 3.390625 | 3 |
Robotic Systems Developed to Find Lost Items
For all those who suffer from the daily chore of finding missing keys, phones and wallets before leaving the house, the world of robotics may have an answer for you. Innovators are creating robotic systems to help users find lost items, with these smart devices designed to rapidly sort through clutter and identify objects buried under piles of other household items.
One example is a fully integrated robotic arm from MIT, known as RFusion, which can identify and retrieve items hidden beneath other objects. The robotic arm combines visual data with radio frequencies (RF) emitted from an antenna attached to the arm’s gripper. These frequencies bounce off of tags a user can attach to objects, and as the signals can travel through most surfaces, RFusion can locate a tagged item even if it’s obscured by other objects, for instance, a pile of mail or laundry. The camera is used to verify the item selected is indeed the correct one.
According to MIT, the system has an accuracy rate of 96% and can find items within a few minutes.
More recently, another MIT team unveiled the algorithm FuseBot to similarly find lost items, the main difference being that the newer system does not require the lost item to be tagged to be found. Instead of sending RF signals, FuseBot uses algorithms that can logically predict the probable location of an item, as well as determine the best way to extract it.
According to the team, FuseBot recovered a lost item with a 95% success rate.
The model, if scaled, also has potential industrial use cases such as warehouse order fulfillment or retail back-of-house, as well as in more high-pressure situations such as disaster zones to search through the rubble.
“As the research evolves, it would be interesting to explore how incorporating more complex models that account for deformability would allow FuseBot to achieve even higher efficiencies,” the FuseBot team wrote in their study results.
“Having robots that are able to search for things under a pile is a growing need in industry today,” said Fadel Adib, an associate professor in MIT’s department of electrical engineering and computer science when talking about RFusion. “Right now, you can think of this as a Roomba on steroids, but in the near term, this could have a lot of applications.”
|
<urn:uuid:132ca065-7772-48fb-801d-f7cc05e70bd6>
|
CC-MAIN-2022-40
|
https://www.iotworldtoday.com/2022/08/09/robotic-systems-developed-to-find-lost-items/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00041.warc.gz
|
en
| 0.953944 | 493 | 3.078125 | 3 |
It seemed like the Paris Agreement might have finally turned the tide of climate change. Enforced back in 2015, it was meant to limit global warming by encouraging companies to adopt carbon neutrality strategies. But where do we find ourselves in 2022?
Today, it seems like the commitments of big tech corporations to solve the climate crisis might not be as grand as initially proposed. If we take a closer look at their climate-related initiatives, we will, however, find quite similar and seemingly ambitious targets.
For example, Facebook (now Meta) is planning to reach net-zero emissions for its value chain and become water positive by 2030.
“Today, we’re building on these efforts and announcing a new goal to be water positive by 2030. This means Facebook will return more water to the environment than we consumed for our global operations,” the company’s statement reads.
Google is setting the same goals. They seek to replenish more water than they consume by 2030 and achieve net-zero emissions, as well as fully run on carbon-free energy.
“To accelerate the transition to a circular economy, we’re working to maximize the reuse of finite resources across our operations, products, and supply chains and to enable others to do the same,” its annual report suggests.
Apple and Microsoft are also planning to reach carbon neutrality and carbon negativity by 2030 respectively. The difference between the two is that carbon neutrality means emitting the same amount of carbon into the atmosphere as removing, while carbon negativity means removing more carbon than emitting.
“We strive to be transparent with our commitments, evidenced by our announcement that Microsoft’s cloud datacenters will be powered by 100 percent renewable energy sources by 2025,” said Microsoft’s statement.
Healthy ambitions or unrealistic goals sold to gullible consumers?
Undoubtedly, such initiatives by big tech make quite an impression, but their implementation is questionable. To some experts, they are just impressive-sounding statements that have little in common with reality.
“Misleading advertisements by companies have real impacts on consumers and policymakers. We’re fooled into believing that these companies are taking sufficient action, when the reality is far from it,” said Gilles Dufrasne from Carbon Market Watch.
A report by New Climate found that 25 of the world’s largest companies (such as Amazon and Google) only commit to reducing their emissions by 40% instead of 100% stated in their carbon neutrality strategies. According to the findings, “only one company’s net-zero pledge was evaluated as having ‘reasonable integrity’; three with ‘moderate,’ ten with ‘low’ and the remaining 12 were rated as having ‘very low’ integrity.” The reasons for such results vary: from unreliable offsetting approaches to hiding critical information and excluding upstream/downstream emissions in their value chain (which account for as much as 90% of all controlled emissions.)
However, it’s not just unrealistic targets that prevent corporate giants from making such an expected change. Their desire to lead any policy-related climate initiative is – to say the least – not very evident.
According to the report by Influence Map, while the big five (Apple, Alphabet, Amazon, Facebook, and Microsoft) have been relatively successful in their own climate programs, the group lags behind when it comes to climate policy engagement. As a result, they fail to use their strong economic stance to support the implementation of the Paris Agreement.
At the same time, the relevant climate-engagement levels of Amazon, Alphabet, and Microsoft had declined in comparison to 2020.
“This report shows how Big Tech has refused to lift a finger to push comprehensive climate action in Congress. That may soon change, and the best place to start is ensuring that Big Tech’s trade associations are powerful advocates for ambitious climate action,” Senator Sheldon Whitehouse commented on the findings.
This view is echoed in the words of Bill Weihl, who stated that big tech is refusing to lead the climate change initiative and is significantly falling behind “despite their pro-climate philosophy.”
Who’s to blame?
Since the implementation of the Paris Agreement, many positive developments have been pushed forward while others are stalling. In 2017, former US President Donald Trump announced the withdrawal from the agreement, which could, according to him, “put the country at a permanent disadvantage.” The decision was later revoked by President Joe Biden, showing that political commitments are of utmost importance to climate development.
Either way, the enforcement of the agreement on its own is not likely to prevent climate change. In the beginning, it was supposed to be only the first step towards more actions taken on the national and corporate levels. But where do big tech companies belong in this equation?
The tech industry accounts for 2-3% of global emissions while big tech contributes around 0.3%. It may not seem like much, but it amounts to millions metric tons of carbon dioxide produced annually.
In 2021, a European Green Digital Coalition was formed to support the sustainability of the tech sector. Its members include Microsoft, IBM, and Vodafone Group. Going forward, it’s essential to have a broader cross-national consensus on the climate change approach, both within big tech organizations and countries globally.
More from Cybernews:
Subscribe to our newsletter
|
<urn:uuid:12951f22-27f6-4115-a253-0409f2763e3c>
|
CC-MAIN-2022-40
|
https://cybernews.com/editorial/will-big-tech-aggravate-or-solve-the-climate-change-problem/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00041.warc.gz
|
en
| 0.959104 | 1,118 | 2.65625 | 3 |
Harald Ott at Massachusetts General Hospital and his colleagues have now taken a step toward the dream of building a biolimb.
Losing an arm can feel like losing a family member. Though prosthetic limbs have been improving, they are far from ideal. And while the improving field of limb transplants gives hope, the patient has to take immunosuppressant drugs all their life to prevent the body rejecting the new addition because it once belonged to someone else.
Harald Ott at Massachusetts General Hospital and his colleagues have now taken a step towards the dream of building a biolimb—made with cells from the recipient so that it looks and feels like the natural part of the body. In a study published in Biomaterials, they show the reconstruction of the first rat biolimb.
The idea is simple. First, they take an arm from a dead rat and put it through a process of decellularization using detergents. This leaves behind a white scaffold. The scaffold is key because no artificial reconstructions come close to replicating the intricacies of a natural one.
Next, they seed the arm with human endothelial cells, which recolonize the surfaces of the blood vessels and make them more robust than rat endothelial cells would. Finally, scientists inject mice cells, such as myoblasts that grow into muscles. After two to three weeks, these cells use the scaffold to regrow the arm, on which rat skin is then grafted. Among the 100 rat forelimbs, he succeeded in recellularizing at least half.
When Ott applies electrical pulses to activate the muscles, he is able to make the rat paw clench and unclench. He has also attached the biolimbs to anaesthetized rats, and saw that blood vessels functioned as he had hoped.
Next, Ott has to show that the arms will develop a nervous system, which will allow the arm to be controlled by the recipient rat.
This has been shown to work in hand transplants, but remains to be seen in biolimbs. Also, Ott needs to prove that, as the theory suggests, these limbs will indeed not be rejected by the rat without the use of drugs.
If the above is successful, the next target would be a primate’s arm. Ott has already shown that they can be decellularized.
(Image via angellodeco/ Shutterstock.com)
NEXT STORY: The Best Soundtrack for Productivity
|
<urn:uuid:2edec358-aaa7-4eb1-b9c1-93b961d70d1d>
|
CC-MAIN-2022-40
|
https://www.nextgov.com/cxo-briefing/2015/06/worlds-first-biolimb-scientists-are-growing-rat-arms-petri-dishes/114709/?oref=ng-next-story
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00041.warc.gz
|
en
| 0.956106 | 516 | 3.046875 | 3 |
The Virtual File System (VFS) allows you to make files and folders available to EFT users through the granting of permissions. The VFS allows you to create physical folders and virtual folders.
Physical folders are folders you create on the local hard drive from EFT.
Virtual folders refer (point) to existing folders on your computer or another system, similar to a Windows shortcut. Because a virtual folder name is only an alias for the real folder, when you create a virtual folder, you do not have to give it the same name as the folder it references.
On the VFS tab of the administration interface, you can specify which files and folders are available to users, and then specify Group and user permissions for the folders. You make the files, physical folders, and virtual folders available to users by granting permissions based on their Group membership. VFS permissions are constructed to allow users the least restrictive access to folders.
For example, suppose a user is a member of a Group that has read, upload, download, and delete permissions to a folder. Even if the user is a member of another Group that has only download permissions to the same folder, the user will be able to read, upload, download, and delete files from that folder.
User permissions are given priority.
In the folder that the user wants to access, if EFT finds user-specific permissions that are not those from Groups, EFT does not look for any Group permissions. EFT gives priority to individually configured permissions. For example, suppose there is a user with the user name Bob. Bob is a member of two permission Groups that have only download and list permissions for Folder1. However, you have decided to give Bob full permissions for Folder1 without creating a new permission Group. Because EFT looks for these individual user permissions first, then Bob will have full permissions for Folder1 no matter how his Group membership is configured. This same rule implies that if Bob has individual permissions that only allow him to download files from that particular folder, it does not matter if he is a member of two Groups that have full permissions for the folder. Bob will only have permission to download files.
If a user does not have individual permissions for a folder and is a member of more than one Group, EFT gives the user the least restrictive access for the folder.
From their Groups, users receive all the permissions available for the folder. For example, suppose a user with the user name Jan is a member of two Groups, Group1 and Group2, that both have permissions for a particular folder, Folder2. If Group1 has download permission and Group2 has upload permission then Jan will have both upload and download permissions for Folder2.
The All Users Group is the same as any other Group except that it can't be removed from the root folder permissions list.
You can use the All Users Group to determine inherited permissions from the parent folder. If you change any inherited permissions for the All Users group, EFT displays a confirmation message to make sure you want to change the inherited permissions.
EFT supports multiple concurrent administration for most setting changes but not for changes made to the VFS. This means that last committed changes will overwrite changes made by other administrators when both administrators are working from the same version of the configuration.
|
<urn:uuid:6b8d123f-a2e9-41ee-81e3-54a8b1930ef6>
|
CC-MAIN-2022-40
|
https://hstechdocs.helpsystems.com/manuals/globalscape/eft8-0-7/content/mergedprojects/admin/introduction_to_the_virtual_file_system_vfs.htm
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00041.warc.gz
|
en
| 0.943085 | 673 | 2.703125 | 3 |
Russia has been making the news in recent years for bold incursions into the utility systems of other nations, including a much-publicized hack of the United States utility grid in 2018. The U.S. government claimed that this cyber attack on the power grid involved more than just espionage and probing; the Russian hackers allegedly left behind the virtual tools needed to later disrupt the grid by shutting off vital systems.
This time, the US is on the offensive. A recent New York Times article cited anonymous current and former Trump officials in reporting that the country’s intelligence agencies have been doing much the same thing in recent months. Both countries have been regularly probing each other’s grid defenses since at least 2012, but this is the first known occurrence of the Americans planting malicious code in the Russian systems. The code is believed to be able to compromise the Russian power grid in the event of a conflict between the two countries.
The move is in keeping with a more aggressive US Cyber Command national security approach that began with a 2018 executive order that gives government agencies more freedom to conduct their own offensive operations without presidential approval. Russia appears to have been a central consideration in this policy shift, given that the country has shown little hesitation or fear in testing US defenses and deploying internet-based psychological operations within their rival’s borders.
Cyber attacks on power grids and modern cyber warfare
Thus far, these reports of cyber attacks on power grids have not been backed up by any concrete action. While Russia has been directly responsible for (or at least suspected in) a number of utility grid attacks around the world, most notably the December 2015 attack on Ukraine, the country has not shut off the power in the United States. Likewise, the United States has been known to deploy active cyber measures to disrupt enemy utilities and industry (such as the 2015 Stuxnet attack on Iran), but has not done so to Russia as of yet.
There is some concern that Russia may try to make active use of their cyber capabilities during the 2020 election, however. Given the country’s propensity to meddle in US elections using disinformation and hacking, some observers fear that Russian hackers will try to initiate selective blackouts in key areas with cyber attacks on the power grid as voters head to the polls.
While American retaliation in kind might seem to be a natural development, the planting of these bits of malware in Russian power grids also contributes to the legitimacy of such attacks in an emerging area of warfare that does not yet have clear rules and protocols.
Do these cyber attacks on power grids represent an escalation of cyber war conditions? The Russian response from spokesman Dmitry Peskov was certainly heated, describing the act exactly that way. On the American front, President Trump went in the other direction and condemned the New York Times for what he feels is “virtual treason” for reporting the story. There is also some legitimate basis for questioning the purpose of these anonymous disclosures given the Trump administration’s seeming inclinations toward Russian interests and willingness to share sensitive cyber information with the country. It makes little sense to announce covert capabilities such as this unless one wants the enemy to begin scouring their networks for them. At least two of the administration officials that served as sources for the articles believed the president had not been briefed on the action.
Cyber attacks on an electricity grid are generally seen as out-of-bounds during peacetime. Principles of responsible cyber behavior drafted by both the UN and the G7 specifically condemn cyber-based infrastructure attacks. Planting malware in infrastructure without triggering it appears to be running right up to the border of what is currently considered acceptable behavior by the international community, but not crossing it. Causing actual damage would definitively cross that line, and would have a significant possibility of sparking actual war.
Protecting the grid
A natural question springs to mind when cyber attacks on power grids are mentioned. Can’t the grids just be air-gapped from the internet, as they were in the days before there was one? And why aren’t they?
Public utilities security researcher Theodore Kury of the University of Florida explored this issue in a 2018 article. To summarize, one of the obstacles is the competing interest in limiting wasteful government spending. If infrastructure replacements and changes translate into rate hikes, state and federal regulations require the utility companies to disclose in detail what they are spending the money on. When it comes to cybersecurity, disclosing in detail is often not possible without compromising the security measures.
The actions of “non-malicious insiders” are the greatest problem, however. Employees clicking on malware links and spoofed sites were responsible for opening the door to most of the documented cyber attacks on power grids. In some cases, the Russian hackers compromised actual sites that employees were known to frequent on their work computers – a practice known as a “watering hole” attack.
Modern energy industry systems have a number of legitimate needs for internet accessibility, such that simply air-gapping everything that touches the grid operation is probably not going to be possible. There is some separation of internet-connected servers from offline functions, but an ongoing problem appears to be systems that were implemented two or three decades ago with greater levels of internet integration that are now very expensive to replace.
Even with computer systems in place that are properly firewalled from making changes to the critical infrastructure, there remains the risk of “non-malicious insiders” at both the power plants and their various vendors toting in and hooking up personal equipment to the control systems for the sake of convenience.
As is the case with all types of organizations both large and small, it would appear that defeating cyber attacks on the power grid ultimately comes down to across-the-board employee training as well as resiliency measures to limit or prevent downtime.
|
<urn:uuid:ecfea5c1-8724-436d-a439-73a0e0aafcb5>
|
CC-MAIN-2022-40
|
https://www.cpomagazine.com/cyber-security/possible-u-s-cyber-attack-on-power-grid-in-russia-is-cyber-warfare-on-the-table/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00041.warc.gz
|
en
| 0.964171 | 1,181 | 2.765625 | 3 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.