Snapshot is a feature used to create a logical, usable image and/or an independent, fully usable copy of the data residing on a virtual disk at a specific point in time (the snapshot point). At the moment a snapshot is created from a virtual disk, the data contained on the "source" virtual disk is logically or physically copied to another virtual disk, referred to as the "snapshot." Snapshots can be served to hosts.

The difference between the Continuous Data Protection (CDP) feature and the Snapshot feature is that a snapshot preserves the contents of the virtual disk at a single point in time, while a CDP-enabled virtual disk allows a rollback to be created at any point in time preserved in the history log. A snapshot can be created at a time when the data is in a known good state, which is the best practice; a rollback image is created regardless of the state of the data.

Benefits of Snapshot

Snapshots provide a foundation to solve the following types of problems:
- Disaster Protection – Snapshots provide an efficient way to identify and return to specific virtual disk data.
- Application Testing – Snapshots provide a way to perform test activities on a consistent and re-usable copy of operational data while the source data remains unaltered. Exposure to data corruption and system failure can be minimized by testing against a persistent image of real data before bringing new application changes online.
- Fast Data Cloning – Snapshots provide a way to duplicate data quickly without disruption of data availability.
- Error Protection – Snapshots provide a way to perform faster and more frequent backups, translating to faster recovery from data corruption problems.
- Alternative to File-level Data Sharing – Snapshots provide a way to present multiple copies of the same virtual disk to multiple hosts. In other words, multiple users can have individual copies of the same data.
Use snapshots to provide file-level data sharing in order to:
- Off-load machine cycles related to sharing data between machines
- Reduce data traffic in the SAN/LAN
- Increase performance and data availability
- Allow better scalability

There are two snapshot types:
- Differential Migration – The "snapshot" is an image of the "source" virtual disk at the time of the snapshot (snapshot point). The image is logical and dependent on the source. Deleting the source virtual disk also deletes any differential snapshots due to their dependency on the source. Differential snapshots are not protected against unexpected restarts of the DataCore Server.
- Full Migration – The "snapshot" is a tangible clone copy of the "source" virtual disk at the time of the snapshot (snapshot point). After migration is successfully completed, the snapshot is fully usable and can exist and operate independently of the source from which it was created.

After creating snapshots, additional operations can be performed; see Performing Snapshot Operations. Snapshot operations can also be performed on virtual disk groups; see Virtual Disk Groups for more information.

The mapstore is a dedicated storage location within a disk pool that is used internally to hold state and delta map information for all snapshots on the source DataCore Server. The mapstore ensures that snapshots remain valid when a server is stopped or restarted; it does not protect against unexpected computer restarts. The first time a snapshot is created on a server, the disk pool where the mapstore will reside can be selected. The mapstore can add up to 256 GB of virtual disk allocated space in the pool. The mapstore is hidden, but the disk pool containing the mapstore can be displayed and changed in the DataCore Server Details page > Settings tab.

The delta map is a record of the differences between the source virtual disk and the snapshot. The delta map is displayed as a whole-number percentage.
The percentage is displayed as 0% until 1% is reached. If the snapshot is updated, the delta map is reset to 0%.

The migration map is a record of the amount of data that has been migrated to the snapshot. It is also displayed as a whole-number percentage, shown as 0% until 1% is reached. The migration map percentage can be viewed in the Snapshot Details page.

A pool can be designated as the preferred pool to use when creating snapshots for a virtual disk. The preferred snapshot pool is set per virtual disk in the Virtual Disk Details page > Settings tab. When set, all snapshots for that virtual disk will be created from the designated pool. See Snapshot Operations for more information.
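The copy-on-write bookkeeping behind a differential snapshot and its delta map can be sketched as a toy model. This is illustrative only: the block granularity, class names, and map layout are our assumptions, not DataCore's implementation.

```python
# Toy copy-on-write snapshot: preserved pre-change blocks stand in for the
# delta map; the percentage is the share of changed blocks, reported as a
# whole number (so it reads 0% until at least 1% of blocks have changed).

class DifferentialSnapshot:
    def __init__(self, source_blocks):
        self.source = source_blocks          # the live source virtual disk
        self.frozen = {}                     # blocks preserved at first write
        self.total = len(source_blocks)

    def write_source(self, index, data):
        # The first write to a block after the snapshot point preserves the
        # original contents, so the snapshot image stays consistent.
        if index not in self.frozen:
            self.frozen[index] = self.source[index]
        self.source[index] = data

    def read_snapshot(self, index):
        # Snapshot view: preserved copy if the block changed, else the source.
        return self.frozen.get(index, self.source[index])

    def delta_map_percent(self):
        return int(100 * len(self.frozen) / self.total)

snap = DifferentialSnapshot([b"a"] * 200)
snap.write_source(0, b"x")
print(snap.delta_map_percent())    # 0  (1/200 changed is under 1%)
print(snap.read_snapshot(0))       # b'a'  (snapshot-point contents survive)
for i in range(1, 10):
    snap.write_source(i, b"x")
print(snap.delta_map_percent())    # 5  (10/200 changed)
```

Note how the snapshot here is purely logical: delete `snap.source` and the image is unrecoverable, which mirrors why deleting a source virtual disk also deletes its differential snapshots.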
Data modeling is the process of defining datapoints and structures at a detailed or abstract level to communicate information about the data's shape, content, and relationships to target audiences. Data models can be focused on a very specific universe of discourse or an entire enterprise's informational concerns. The final product of a data modeling exercise varies from a list of critical subject areas, to an entity-relationship diagram (ERD) with or without details about attributes, to a data definition language (DDL) script containing all the SQL commands to build a set of physical structures within some chosen database management system (DBMS).

Use of ERDs

Target audiences for these deliverables may be solution architects, solution developers, business users, executives, or database administrators (DBAs). ERDs express entities and entity-to-entity relationships. Entities are the data objects in focus, where examples may include Customer, Order, Invoice, Address, Phone, and so forth, and attributes are the descriptors of the entity. As an example, a Customer entity may contain attributes such as Customer First Name, Customer Last Name, Customer Credit Rating, and so on. Entities and attributes are considered the logical idea, so the table and column become the physical equivalents. Because of their focus, ERDs are a pervasive tool used in the data modeling process. ERDs are assembled based on sets of data structural approaches, including normalized, dimensionalized, data vault, or something else; each structural approach groups the attributes in differing ways. Similarly, ERD tools may use a variety of notations, since the data modeling industry has not achieved standardization; these notations include Crow's Foot, IDEF1X, Barker, and others.

Levels of Data Models

There are three generally agreed-upon levels of data models that may be attempted: conceptual, logical, and physical.
However, opinions on what constitutes each of these levels, and who their targeted audiences are, vary based on schools of thought.

Conceptual data models communicate a high-level perspective on the idea of the data under discussion. Because of this high-level nature, conceptual data models often lack much of the detail found in other data model types. For an enterprise perspective, the data model may be a list of data subject areas. A conceptual data model for a focused solution may result in a draft of an ERD, minus the attributes, or even a diagram not following any standard but conveying the idea. In some cases, an unattributed ERD may be looked at as the logical data model. At the other end of the continuum, a logical data model will have every piece of detail fleshing out the entities: all attributes, data types, optionality, and definitions, stopping only at things that are part of the implementation on the chosen DBMS.

Physical data models, in their own quiet way, are the most controversial data modeling component. Every DBMS operates in a unique fashion, and consequently many details are unique to a given DBMS. ERD tools offer an agreed-upon fiction in representing a "physical ERD." Each tool may offer unique ways of presenting some physical-only characteristics. One may be able to flag entities/tables or attributes/columns as logical-only or physical-only. But ultimately, many necessary facets are settings buried within the tool, not represented in the diagram. These hidden elements are only seen as clauses and keywords inside the DDL script that is to be executed by the DBAs implementing the solution. Because of this, that final DDL script can be viewed as the actual physical model. Shortcomings are natural, as the diagram is called an "entity-relationship diagram," not a "table-foreign key diagram."

Data Modeling Is Still Young

Data modeling is still quite a young practice.
Standard data modeling practices are only “standard” within a single organization, or even a single team within an organization. With the wide variety of possibilities, our data modeling youth still shows.
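The logical-to-physical handoff described above — entities and attributes becoming tables and columns via a DDL script — can be sketched in a few lines. The Customer entity comes from the column's own example; the type mapping and naming convention are illustrative assumptions, and real DDL would carry DBMS-specific clauses (storage options, tablespaces, and so on).

```python
# A tiny logical model: entity name -> list of (attribute, logical type).
logical_model = {
    "Customer": [
        ("Customer First Name", "text"),
        ("Customer Last Name", "text"),
        ("Customer Credit Rating", "number"),
    ],
}

# One possible physical mapping for a hypothetical DBMS.
TYPE_MAP = {"text": "VARCHAR(100)", "number": "INTEGER"}

def to_ddl(model):
    """Render the logical model as a DDL script: entity -> table, attribute -> column."""
    statements = []
    for entity, attributes in model.items():
        columns = ",\n".join(
            f"    {attr.lower().replace(' ', '_')} {TYPE_MAP[logical_type]}"
            for attr, logical_type in attributes
        )
        statements.append(f"CREATE TABLE {entity.lower()} (\n{columns}\n);")
    return "\n\n".join(statements)

print(to_ddl(logical_model))
```

Everything this generator cannot express — optionality, constraints, physical storage settings — is exactly the detail that, as the column argues, ends up living only in the final hand-tuned DDL script.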
Wikipedia defines Robocalls as follows: A “robocall” is a phone call that uses a computerized autodialer to deliver a pre-recorded message, as if from a robot. Robocalls are often associated with political and telemarketing phone campaigns, but can also be used for public-service or emergency announcements. Some robocalls use personalized audio messages to simulate an actual personal phone call. This is the good news. Unfortunately, robocalls are all too often blatantly geared to relentless selling of a product or service that is not wanted by the person receiving such calls.
Six Sigma is a collection of techniques used to design, improve and deliver high-quality processes and business outcomes. It derives its name from the statistical concept of a process whose specification limits sit six standard deviations (6 sigma) from the process mean, a level generally understood to represent 3.4 defects per million opportunities (a figure that assumes the conventional 1.5-sigma long-term drift in the process mean). Standard process steps are followed to minimize and control variability, as well as eliminate defects. For the design of a new process, the common process steps are DMADV (i.e., Define needs, Measure critical-to-quality items, Analyze processes, Design the product or service, and Verify need alignment). For process improvement, the common process steps are DMAIC (i.e., Define the opportunity, Measure performance, Analyze the opportunity, Improve performance, and Control performance).
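The 3.4 defects-per-million figure can be reproduced directly: with the conventional 1.5-sigma shift, the nearest specification limit is effectively only 4.5 sigma from the shifted mean, and the upper-tail probability of a standard normal distribution at 4.5 sigma is about 3.4 × 10⁻⁶. A quick check using only the standard library:

```python
import math

def normal_upper_tail(z):
    # P(Z > z) for a standard normal, via the complementary error function.
    return 0.5 * math.erfc(z / math.sqrt(2))

sigma_level = 6.0
long_term_shift = 1.5   # conventional Six Sigma assumption about process drift
z = sigma_level - long_term_shift

dpmo = normal_upper_tail(z) * 1_000_000   # defects per million opportunities
print(round(dpmo, 1))                     # 3.4
```

Without the 1.5-sigma shift, a true 6-sigma tail would imply roughly 1 defect per billion opportunities, which is why the shift assumption matters to the headline number.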
What is the Difference Between a Cyber Security Career and an IT Security Career?
- October 22, 2018
- Posted by: Juan van Niekerk
- Category: Security

Trying to decide on a career is not an easy task, especially when you’re faced with options that are very similar. Cyber security and IT security are two of the most popular careers today as more emphasis is placed on security within organisations. Security plays a massive role in business today, with breaches costing companies millions and sometimes even billions. However, even though cyber security and IT security are similar, as career paths they are slightly different. To help you decide which career is best for you, I’ve outlined the main differences between cyber security and IT security careers. First we’ll start with the difference between the two fields.

IT Security vs Cyber Security

Cyber security and IT security share a significant number of similarities and, when combined, ensure maximum protection for organisations. However, they are often considered to be the same. In fact, they are two different fields that tend to overlap from time to time. To better understand the difference in careers, let’s first unpack the difference between IT security and cyber security.

IT security refers to: “…the information security which is applied to technology and computer systems. It focuses on protecting computers, networks, programs and data from unauthorised access or damage.” – technojobs

IT security is a broad topic that incorporates different forms of technology such as physical computer hardware, networks, mobile technology and more. It involves the safeguarding of various kinds of data, which can be in electronic form or even on paper. IT security ensures the protection and safety of information in all its different forms, both physical and electronic.
The overall consensus is therefore that IT security covers the protection of both physical and cloud-based data, and the prevention of it being hacked. Cyber security covers similar aspects to IT security; however, there is more focus on electronic data.

“Cyber security comprises technologies, processes and controls that are designed to protect systems, networks and data from cyber-attacks. Effective cyber security reduces the risk of cyber-attacks and protects organisations and individuals from the unauthorised exploitation of systems, networks and technologies.” – IT Governance

Cyber security serves the purpose of ensuring data is protected from unauthorised access through cyberspace or online platforms, where unauthorised access may result in cyber-attacks. Cyber security is therefore the field which focuses on the protection against, and prevention of, cloud-based network attacks.

In a nutshell, IT security and cyber security go hand in hand. IT security gravitates more towards the protection of physical information, which also includes electronic data, while cyber security generally refers to the protection of online or cyber information.

Cyber Security Career vs IT Security Career

In terms of careers, cyber security and IT security are equally great choices, with both offering excellent opportunities and competitive salaries. The difference between a cyber security career and an IT security career lies in the different fields from which these careers originate, and the career progression is slightly different as a result. However, at the end of the day, both careers deal with the value of data in its many different forms.

IT Security Career

According to IT Jobs Watch, IT Security Managers command salaries of £62,500 on average in the UK. This impressive salary shows no sign of decreasing, as IT security is becoming more and more important in society today. As we place more and more value on information, so organisations place more importance on security.
A career in IT security often begins in general IT and then branches out into a specialised field. Many IT security professionals started in general IT positions such as an IT Technician, Field Engineer or even Help-desk role.

IT security as a career involves making sure an organisation’s data is kept secure. As an IT security professional, you could be responsible for the front-line defence of networks, protecting information from unauthorised access and violations. You will be expected to prepare technical reports, run tests and monitor for abnormal activity. As an IT Security Professional you can expect to be the gatekeeper of all organisational information.

IT Security Career Path

As an IT Security Professional, careers you can pursue include, but are not limited to:
- IT Security Administrator
- IT Security Analyst
- IT Security Officer
- IT Security Architect
- Head of IT Security

IT Security Training

In order to move up the salary scale, gaining industry-recognised certifications is an excellent step. If you’re new to IT and are considering heading into the field of IT security, the IT Placement Program is an excellent first step. As stated earlier, many IT security professionals started their careers in IT Technician or Field Engineer roles. The IT Placement Program from ITonlinelearning offers students a job opportunity and study package geared to IT beginners. This course prepares students for a general IT career while laying the foundation for those who wish to move in the direction of IT security. For more information on this program click here, or alternatively call a Course and Career Advisor on 0800 160 1161 (UK business hours).

Cyber Security Career

According to IT Jobs Watch, Cyber Security Managers command salaries of £72,500 on average in the UK. Of all the IT careers today, cyber security is one that is definitely trending, with a top-notch salary to match!
Cyber security professionals are some of the most in-demand professionals today, as the UK is experiencing a major skill shortage in the field. As a Cyber Security Professional, you will have a range of career options. However, it is important to note that this is a specialised field, which means you’re unlikely to begin your career in cyber security; you will often come from a general IT security position first.

As a Cyber Security Professional, you will be expected to prevent, detect and manage cyber threats. You’ll be expected to stay up to date with the latest security trends, plan for disaster recovery and use advanced analytic tools. You will be tasked with investigating security alerts and providing incident response. Many Cyber Security Professionals started their careers in a general IT position such as an IT Technician or Field Engineer role.

Cyber Security Career Path

As a Cyber Security Professional, careers you can pursue include, but are not limited to:
- Cyber Security Specialist
- Cyber Security Analyst
- Cyber Security Engineer
- Cyber Security Architect
- Cyber Security Professional

Cyber Security Training

There are many different avenues through which one can receive training for a rewarding cyber security career. We recommend online learning through a trusted and recognised training provider. What better way to study the intricate world of cyber than through the readily accessible platform of online learning? If you’re an IT beginner looking for a package that is more focused on a cyber security career, look no further. ITonlinelearning’s New To Cyber Security program has been tailored to those interested in making a career out of cyber security. With online courses like CompTIA Security+, CompTIA CySA+ and a job opportunity, this is where Cyber Security Professionals start their careers!
For more information on this programme, chat to a Course and Career Advisor today on 0800 160 1161, or alternatively click here for more information.

To conclude, these roles do not, and cannot, exist in isolation of each other. Cyber security is as much a part of IT security as IT security is a part of cyber security. In terms of careers, each is concerned with the preservation of data, but each from a different angle. IT security tends to span both the physical and virtual realms, where cyber security focuses more on the virtual; the two complement one another. As the end goal is the preservation of data, we require both fields to ensure optimum security. Whether you decide to head into the realm of IT security or cyber security, both have various advantages and career paths.
You don’t want to lose points for silly mistakes in your papers, assignments, or essays. You might think that a little formatting mistake here and there doesn’t matter. But that might not be the case if the person grading your essay is particularly strict. Even established academics need to brush up on their formatting skills sometimes. This is especially true if you’re new to using Google Docs, or if it’s the first time you’re using Google Docs for a formal piece of work.

Now, there are various style guides for academic work. So, the first thing to do is check, double-check and triple-check what your institution, department, publisher, etc. requires. If the format is MLA and you’re creating your work in a Google doc, we’ve got you covered.

What Is MLA Format?

MLA style, or MLA format, is a set of guidelines for formatting academic or research papers. It was originally introduced by the Modern Language Association and used in the fields of literature and language. Its purpose is to ensure the consistency and uniformity of submitted works. Nowadays, students and scholars of various disciplines use the MLA style.

Here are the basic MLA standards you need to adhere to:
- One-inch margins on the top, bottom and sides
- Indent the first word in every paragraph by one half inch
- Times New Roman font, size 12
- Indent block quotations by one inch
- Double-space the entire paper
- Your last name and the page number in the top-right of the header of every page
- Your full name, instructor’s name, course name and due date at the top of the first page
- The title centered on the first page
- A Works Cited page at the end of the doc (with sources correctly formatted)

There are quite a few items to check off the list here, but we’ll guide you through.

Step 1: Apply One-Inch Margins

Your Google doc should be set to one-inch margins by default, but you may wish to check to be on the safe side.

1. Go to File > Page setup.
2. Ensure all margins are set to 1 and that this is applied to the whole document.

Step 2: Change the Font and Size

1. Choose Times New Roman from the dropdown font menu.
2. Click on the font size and select 12.

Step 3: Insert a Header

1. Go to Insert > Headers & footers > Header.
2. Click the right align button.
3. Enter your last name and hit space.
4. Go to Insert > Page numbers and click the first option.

If your font defaults back to the original, go ahead and highlight your last name and the page number and change them to Times New Roman, size 12.

Step 4: Change the Line Spacing

1. Click the line spacing button in the toolbar.
2. Select Double.

Step 5: Enter Your Details and the Title on the First Page

1. Type in the following on separate lines:
- Your name
- Instructor’s name
- Course name
- Due date
2. Hit the return key, then click the center align button.
3. Type in the title of your paper.

Step 6: Add Indentations

1. Press the tab key to indent the first word of every paragraph.

Indent Block Quotations

1. Highlight the text.
2. Go to Format > Align & indent > Indentation options.
3. Next to Left, type in 1 and click Apply.

Step 7: Add the Works Cited Page

The Works Cited page has some unique formatting details. Therefore, there are a few steps you need to complete to format it correctly.

Add a Page Break

After the final paragraph of your paper, you must add a page break to ensure your Works Cited page appears on a separate, new page.

1. Go to Insert > Break > Page break.

Add the Title

As with the main title, the title for this page needs to be centrally aligned.

1. Click the center align button.
2. Type in Works Cited.

Add Your List of Sources

There are a few points from the MLA guidelines you need to incorporate here. First, your sources must be listed alphabetically. Ensure you cite your sources in the proper format, for example:

Pinker, Steven. The Sense of Style. Penguin Random House, 2014.

And finally, each source must have a hanging indentation.
Here’s how to do it:

1. Go to Format > Align & indent > Indentation options.
2. Click the dropdown menu under Special indent and select Hanging.
3. Set the indent to 0.5 and click Apply.

How Do You Cite in Google Docs?

Google Docs has a citations tool that will help you cite your sources in the proper format as you go along. You can also use this tool to automatically create a Works Cited page when you’ve finished your paper. Naturally, this is super useful as it will help you get the formatting correct every time and save you some time and energy.

1. Go to Tools > Citations.
2. Select MLA from the dropdown menu in the sidebar. (Other options are APA and Chicago.)
3. Click + Add citation source.
4. From the dropdown menu, select a source type, e.g. book or journal, then select how you accessed the source, e.g. print or website.
5. Enter the details of your source into the form. (Click + Contributor if there are multiple contributors.)
6. Click Add citation source. This will now be added to your list of sources in the citations tool.
7. Place your cursor where you want to cite the source, then click Cite next to the source in the sidebar. You’ll now see an in-text citation in MLA format. Change the # to the appropriate page number.
8. When you’ve finished your paper, create a new page by adding a page break as mentioned above. Go to the citations tool and click Insert bibliography. You’ll now see that Google Docs has automatically generated a list of sources in the correct MLA formatting based on the information you entered. You will need to change the title from Bibliography to Works Cited.

You may wish to double-check the formatting of the page for yourself. But, apart from the title, everything should be formatted correctly. Docs even adds in the hanging indentation which, of course, you need for your Works Cited page in MLA format.
How to Use the MLA Template in Google Docs

By far the simplest and quickest way to format your paper is to use a pre-formatted template. Google Docs has a number of templates in different academic styles, including MLA. The template comes with some text as a placeholder; you just need to replace it with your own details and writing. Here’s how to find and open the template:

1. Open a new Google doc.
2. Go to File > New > From template.
3. Under the header Education, you’ll see a template entitled Report with MLA written underneath it.
4. Simply click on this template and alter it as necessary.

Note that you may still need to go to the checklist above and double-check that everything is formatted correctly. For example, there isn’t a placeholder for your last name in the header of the template, so you may forget little things like this if you don’t check through the doc.

More Essay Writing Tips for Google Docs

1. Try Voice Typing

The voice typing tool is useful for students with accessibility needs. Furthermore, ideas sometimes flow quicker than you can get them down on the page, and it might help to use voice typing on such occasions. You can go back to the text later to work it into nice, neat sentences for your paper. To access voice typing, go to Tools > Voice typing. Click on the microphone in the small window that appears to start dictating your text.

2. Speed Up Editing

Got a paper due tomorrow and need to make edits fast? It happens. Thankfully, there are ways to speed up the editing process in Google Docs. First, if you want to reorganize your draft, you don’t need to copy and paste sections or paragraphs; you can simply highlight the text and drag it to its new location in your essay. Moreover, when formatting your essay, you don’t need to do everything manually. Rather, you can use the paint format button to copy your formatting quickly.
This would be useful, for instance, if you have a lot of block quotations in an MLA style essay and you don’t want to manually indent the text every single time. Similarly, if you realize you’ve made a recurring mistake in your document, you don’t have to go through your paper and correct the mistake every time. For example, you may have referenced the wrong author or failed to capitalize a word. Fix it with the Find and replace tool: go to Edit > Find and replace, enter the correction and click Replace all.

3. Make Your Doc Available Offline

There are many instances where offline editing may come in handy. For example, if the wi-fi in your dorm goes down, it doesn’t mean you have to stop working on your essay; or you may need to make a few quick changes to your doc while you’re on the train. To turn on offline editing, go to File > Make available offline. Any changes you make offline will be saved locally on the device you’re using, then synced to Google Drive the next time you go online.

4. Use Bookmarks

Bookmarks are a useful way to draw attention to a part of your essay. You can create bookmarks for yourself, perhaps for a section you want to come back to later and expand. You may also want to leave bookmarks for an advisor, say if they’re checking your first draft before you submit the final piece. To add a bookmark in Google Docs, first place your cursor where you want the bookmark to appear, then go to Insert > Bookmark.

5. Consider Add-Ons

You can do a lot in Google Docs. However, if there’s a tool you need that Google doesn’t offer yet, there’s likely an add-on you can use instead. You may want to add a plagiarism checker or thesaurus to your arsenal, for example. To browse add-ons, simply go to the Add-ons tab and select Get add-ons.

6. Share or Save Your Doc in the Right Format

There are tons of ways you can share, save or submit an essay using Google Docs.
The first thing to do is check with the course advisor how they would prefer you to submit your essay. To send it directly to them, click the Share button in the top-right corner and enter their email address. Note that you can add a message alongside your submission, and you can change the permissions of the recipient using the dropdown menu next to their email. Alternatively, you may wish to create a shareable link, which you can also do via the Share button; to adjust the permissions here, click where it says Change to anyone with the link.

If you don’t want to send the doc but rather save a copy, go to File > Download. Here you have the option of saving your document in various formats, including .docx and PDF. This is useful if your advisor wants you to send a copy in a certain format as a file attachment.

Correctly formatting a paper or essay in Google Docs may feel like hard work at first, but soon enough it will become second nature. This guide will help you get the fundamentals of the MLA style guide right. It’s up to you whether you wish to use the MLA template for your paper, but we’d recommend manually formatting your work in the MLA style if you have the time; that way you know for sure there are no formatting mistakes. Finally, don’t forget that Google Docs is rich in useful features that could help you during the writing, editing and formatting processes. Go explore what Google Docs can do.
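If you format many papers, the margin and spacing settings above can also be applied programmatically through the Google Docs API. The sketch below only constructs a `batchUpdate` request body for the one-inch margins and double spacing; the request and field names follow our reading of the Docs API reference and are assumptions here, and actually sending the request would additionally require the `google-api-python-client` library, OAuth credentials, and a real document ID.

```python
# Build (but do not send) a Docs API batchUpdate payload applying the MLA
# basics: one-inch margins and double line spacing. Field names are assumed
# from the Docs API reference, not verified by a live call.

ONE_INCH_PT = 72  # the Docs API expresses dimensions in points

def mla_batch_update_requests():
    margin = {"magnitude": ONE_INCH_PT, "unit": "PT"}
    return [
        {
            "updateDocumentStyle": {
                "documentStyle": {
                    "marginTop": margin,
                    "marginBottom": margin,
                    "marginLeft": margin,
                    "marginRight": margin,
                },
                "fields": "marginTop,marginBottom,marginLeft,marginRight",
            }
        },
        {
            "updateParagraphStyle": {
                # Placeholder range: a real script would cover the body text.
                "range": {"startIndex": 1, "endIndex": 2},
                "paragraphStyle": {"lineSpacing": 200},  # 200% = double-spaced
                "fields": "lineSpacing",
            }
        },
    ]

body = {"requests": mla_batch_update_requests()}
print(len(body["requests"]))
```

For a one-off essay the menus are quicker; a payload like this only pays off when the same formatting has to be stamped onto many documents.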
The COVID-19 pandemic has conveyed a strong message to leverage technology to its full potential, not just for convenience but to remain safe. Although QR Codes are the new normal and help us follow COVID-19 safety regulations, bad actors exploit the vulnerabilities associated with this technology. As per a survey, 18.8% of consumers in the US and UK strongly agreed that their use of QR Codes has increased since the outbreak of COVID-19. A recent research report on consumers revealed that 34% of respondents have zero privacy, security, financial, or other concerns while using QR Codes. Since any kind of malware or phishing link in a QR Code poses significant security risks for both enterprises and consumers, stringent security measures should be considered to mitigate the risk. Let's learn how cyber-attackers exploit QR Codes and how businesses and users can mitigate the risk, especially in a world where contactless transactions are the new normal.

Cybersecurity Risks Associated with QR Codes

Since a QR Code cannot be deciphered by humans, many cases of QR Code manipulation have been reported across the globe, which increases the risk of using these Codes for processing payments. Cybercriminals could easily embed a malicious or phishing URL in a QR Code to exploit consumer identity or for monetary benefit. The pixelated dots can be modified through numerous free tools that are widely available on the internet. These modified QR Codes look identical to the average user, but the malicious one redirects the user to another website or payment portal.

Is there anything else attackers can do with QR Code tampering? Yes, absolutely! Cybercriminals may also sneak into a user's personal and confidential details, which can further be exploited.
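Because the payload of a scanned Code is usually just a URL, a practical first line of defence is to inspect the decoded URL before opening it. A minimal sketch of such a check — the allow-listed domain and the lookalike URL below are hypothetical examples:

```python
from urllib.parse import urlparse

# Hypothetical allow-list of payment domains the user actually trusts.
TRUSTED_HOSTS = {"pay.example-store.com"}

def check_decoded_url(url):
    """Return a list of red flags found in a URL decoded from a QR Code."""
    flags = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        flags.append("not using HTTPS")
    host = parsed.hostname or ""
    if host not in TRUSTED_HOSTS:
        flags.append(f"unrecognised domain: {host!r}")
    if any(ord(ch) > 127 for ch in host):
        flags.append("non-ASCII characters in domain (possible homograph)")
    return flags

print(check_decoded_url("https://pay.example-store.com/checkout"))  # []
print(check_decoded_url("http://pay.examp1e-store.com/checkout"))
# flags both the plain-HTTP scheme and the digit-1-for-letter-l lookalike domain
```

A real scanner app would go further (checking URL shorteners, certificate details, and known-phishing feeds), but even this level of scrutiny catches the cloned-domain tricks described below.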
Many businesses using QR Codes have reported instances of consumer data and privacy breaches over the past couple of years. Shockingly, the number of breaches surged during the uncertain times of the COVID-19 pandemic as more and more people started using QR Codes in the new contactless era. Here are some actions attackers can take by exploiting QR Codes:

1. Redirect a payment

One of the most common ways hackers exploit QR Codes is to divert payments to their own bank accounts. This trick works when fraudsters replace the genuine QR Code in grocery stores or other places where consumers scan a Code to pay. Online shoppers, meanwhile, may receive a phishing email claiming that a previous payment was canceled and urgently asking them to pay again for a purchase by scanning a QR Code. Many cyber-attackers also cunningly replace the landing URL with one that resembles the real address; the webpage looks authentic, builds trust, and the user processes the payment.

Users need to be aware of altered QR Codes and carefully examine the preview link before clicking on it. Checking for spelling errors or subtle alterations that make a domain resemble the original one can be very helpful in spotting a cloned URL. One should also avoid scanning a QR Code embedded in an email from an unknown source to avoid being phished. Email authentication protocols such as DMARC, DKIM, BIMI, and SPF add an extra protective layer that helps prevent phishing attacks and keeps a domain's reputation intact.

2. Reveal a user's PII

Another common way attackers exploit QR Codes is to get their hands on a user's personally identifiable information (PII).
These attackers can use the PII in multiple ways and for various personal gains, including, but not limited to, financial benefit, online shopping, and other activities. Once a user scans a QR Code found in a store or on the internet, a malicious software program is installed on the device, which quickly reveals sensitive information about the user. Cases of duplicate contact tracing by cybercriminals have been reported in Australia, where hackers exploited consumers' identities for monetary gain. According to the ACCC (Australian Competition and Consumer Commission), more than 28 scams involving QR Codes have been reported, with damages of over AU$100,000. The most common attack through malicious software installed via an altered QR Code aims to capture personal details, including passport numbers, contact numbers, or even one-time passwords used for payment processing.

3. Reveal a user's current location

While the scope for exploiting QR Codes is enormous, many attackers focus on a user's real-time location. Hackers may alter the original QR Code so that it links to malicious software that installs automatically as soon as someone opens the link after scanning. This software can then access the device's location, contact list, and even its data. Victims may not even be aware of it, but cybercriminals may be continuously tracking their location and watching their behavior.

How to Mitigate the Risk Associated with QR Exploits: A User's Guide

Let's quickly go through the measures that can help you stay safe while using QR Codes:

1.
Scan only from trusted entities

It's crucial to stick to QR Codes shared by trusted vendors; users shouldn't randomly scan any QR Code they come across. This ensures adequate safety from malicious and phishing attacks. Before proceeding with a transaction on a website reached by scanning a QR Code, check the site and its security aspects, including its SSL (Secure Sockets Layer) certificate. Businesses should customize their QR Codes by including the brand's logo, changing the shape of the eyes and patterns, and even adding a gradient and a CTA, making the Code harder for hackers to duplicate. In addition, using a branded domain lets users easily identify the source of the QR Code and avoid being phished.

An SSL certificate ensures a secure connection and secure transactions. If a website does not present a valid SSL certificate, be alert and verify the source before proceeding with any payment or granting any permission.

2. Use a QR Code scanner that first displays the link

Many people open the link right after scanning a QR Code without even checking it, which is quite risky for privacy and security. Most devices have a built-in QR scanner in the camera application, which is generally secure, while others rely on third-party QR scanners. It is best to use the built-in scanner (if available) and check the preview of the link. If anything about the link looks suspicious, verify the source before opening it in your browser.

3. Pay close attention to details

Users need to pay close attention to even small details when making payments or proceeding with transactions through a QR Code, and it is best to do so in a familiar and secure environment. Cybercriminals can easily replace public QR Codes, such as those at fuel stations or kiosks, and collect the money whenever a user pays by scanning the Code.
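The "preview the link before opening it" advice above can be sketched in a few lines of Python. This is a minimal illustration, not a complete phishing detector: the `check_url` function, the allow-list domains, and the lookalike heuristic are all hypothetical assumptions, and decoding the QR image itself is out of scope.

```python
# Minimal sketch: after a QR code has been decoded to a URL, run a few cheap
# checks before visiting it. Heuristics and domain names are illustrative.

from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-bank.com", "example-shop.com"}  # hypothetical allow-list

def check_url(url: str) -> list[str]:
    """Return a list of human-readable warnings for a decoded QR URL."""
    warnings = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        warnings.append("not served over HTTPS")
    host = parsed.hostname or ""
    if host not in TRUSTED_DOMAINS:
        warnings.append(f"domain {host!r} is not on the trusted list")
    if any(ch.isdigit() for ch in host.split(".")[0]):
        # crude lookalike check: digits often substitute for letters (examp1e)
        warnings.append("domain contains digits, possible lookalike")
    return warnings

# An allow-listed HTTPS URL returns no warnings; "http://examp1e-bank.com/pay"
# trips all three checks.
```

In a real scanner app this list of warnings would be shown to the user as the link preview, letting them abort before the browser ever opens the page.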
If something seems wrong with a QR Code, or it appears tampered with, it's best to avoid it and find another mode of transaction to stay on the safe side.

4. Update your device's security and overall defense system

Installing and regularly updating your device's security software goes a long way toward preventing a security breach. QR Codes and the overall mechanism are considered secure, but your device's first layer of defense still shouldn't be outdated. Installing regular security updates not only ensures maximum protection from malicious activity but also makes you immediately aware of any unnecessary or unauthorized access to your device's data.

What Should Enterprises Do?

QR Codes give us a secure contactless payment option at a time when we must limit the spread of the novel coronavirus, and both individuals and enterprises can put their best foot forward to minimize the associated cybersecurity risks by having adequate measures in place. Here are some efficient ways to reduce the risks for consumers:
- Using multi-factor authentication
- Having a mobile defense system in place that blocks unauthorized downloads, phishing attempts, and repetitive login requests
- Enabling risk-based authentication
- Improving enterprise password security

With the rise in QR Code exploits, both users and enterprises offering contactless payment options need to act. Users should stay aware of the latest QR frauds, which can lead not only to financial losses but ultimately to threats against an individual's privacy and sensitive data. Enterprises, in turn, must have best security practices in place that help them secure sensitive information and prevent transaction fraud. They should design their websites with this in mind, and expert web development companies can help implement a robust security architecture.
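One of the measures listed above, risk-based authentication, can be sketched as a simple scoring policy: each login attempt is scored on a few signals, and riskier attempts are escalated to multi-factor authentication. The signals, thresholds, and function names below are illustrative assumptions, not a production policy.

```python
# Toy risk-based authentication policy. Signals and thresholds are assumed
# for illustration; a real system would use many more signals.

def risk_score(known_device: bool, usual_country: bool, recent_failures: int) -> int:
    score = 0
    if not known_device:
        score += 2                        # unrecognized device raises risk
    if not usual_country:
        score += 2                        # unusual location raises risk
    score += min(recent_failures, 3)      # repeated failed logins raise risk
    return score

def decide(score: int) -> str:
    """Map a risk score to an authentication decision."""
    if score >= 5:
        return "block"                    # too risky: refuse and alert
    if score >= 2:
        return "require_mfa"              # escalate to multi-factor auth
    return "allow"                        # low risk: password alone suffices

# A familiar device from the usual country with no failures is allowed;
# a new device abroad with repeated failures is blocked.
```

The point of the design is that multi-factor authentication is triggered only when the cheap signals justify the extra friction, rather than on every login.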
The aforementioned aspects can be quite helpful in minimizing the risks for individuals and organizations that are striving to protect consumer identities and data. Adequate device security measures like mobile threat defense systems can also be a game-changer for mitigating security threats associated with QR Code exploits. Originally published at Beaconstac
Q: What's the SIDHistory Active Directory (AD) attribute, and how can a malicious user exploit it to mount elevation-of-privilege attacks against AD?

A: In Windows 2000, Microsoft added the SIDHistory attribute to AD user account objects. SIDHistory facilitates resource access in inter-domain account migration and intra-forest account-move scenarios. For example, when you migrate a user account from a Windows NT 4.0 domain to a Win2K domain, Windows can populate the SIDHistory attribute of the newly created user account in the Win2K domain with the SID of the corresponding user account in the NT 4.0 domain. At logon, when the user's authorization data (e.g., group memberships) is gathered in the Win2K domain, the domain controller (DC) also adds the user's old SID and the old SIDs of the groups the user belonged to, which are stored in the SIDHistory attribute, to the authorization data. Therefore, you don't need to update the ACLs of the resources in the old domain with the new SIDs: users can continue to access the resources located in the old domain with their new account.

The SIDHistory exploit is based on the idea that a malicious AD administrator might modify the SIDHistory attribute of a user account object to elevate its privileges. Given a trust relationship between two domains, for example, the administrator of the trusted domain could add an administrator account SID from the trusting domain to the SIDHistory attribute of a user account in the trusted domain. That user account of the trusted domain would then get administrator access to the trusting domain. In the first Win2K releases, the DCs of the trusting domain didn't check the authorization data included with incoming resource access requests from the trusted domain; they automatically assumed the requests contained only SIDs for which the DCs of the trusted domain were authoritative.
Tools are available to help malicious administrators populate the SIDHistory attribute of AD user accounts. A good example is the SHEdit tool that you can download from http://www.tbiro.com/projects/SHEdit/index.htm. Although modifying the SIDHistory attribute isn't easy (you can modify it only if AD is in offline mode), it is possible. A malicious administrator could carry out this attack in any kind of Windows domain trust setup: between the domains of a single forest, and between domains that are linked using an external or forest trust relationship. In a single-forest setup, for example, rogue child-domain administrators or any rogue user with physical access to a DC can attempt to leverage the SIDHistory exploit to elevate themselves to Enterprise Administrators.

To mitigate the risks related to the SIDHistory attribute, you must first make sure that Enterprise Administrators and Domain Administrators are highly trusted individuals. You must also ensure a high level of physical security on your DCs to prevent rogue users from taking DCs offline and exploiting this attack. With trust relationships set up between forests and trust relationships set up with external domains (in this context, external means "not in the proper forest"), you can use the SID filtering feature to quarantine domains. When SID filtering is enabled, the DCs of the trusting domain will check whether incoming authorization data is related to the trusted domain, and will automatically remove SIDs that aren't. Because this operation also removes SIDs that were added to the authorization data because of the values in the SIDHistory attribute, SIDHistory and SID filtering are mutually exclusive features. SID filtering is available only with Win2K Service Pack 2 (SP2) and later.
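Conceptually, the filtering step described above can be modeled in a few lines: the trusting DC keeps only those SIDs whose domain portion matches the trusted domain's SID, so a forged SID smuggled in via SIDHistory is silently dropped. This is a toy model for illustration, not Windows code; the SID values and function names are invented.

```python
# Toy model of SID filtering on a trust relationship. SIDs are strings like
# "S-1-5-21-<domain>-<rid>"; the domain portion is everything before the
# final hyphen-separated RID.

def domain_of(sid: str) -> str:
    """Return the domain portion of a SID (strip the trailing RID)."""
    return sid.rsplit("-", 1)[0]

def filter_sids(authorization_sids: list[str], trusted_domain_sid: str) -> list[str]:
    """Keep only SIDs the trusted domain is authoritative for.

    Any SID from another domain, e.g. one injected via SIDHistory to gain
    administrator access in the trusting domain, is silently dropped.
    """
    return [sid for sid in authorization_sids
            if domain_of(sid) == trusted_domain_sid]

trusted = "S-1-5-21-1111111111-2222222222-3333333333"   # trusted domain SID
incoming = [
    "S-1-5-21-1111111111-2222222222-3333333333-1105",   # legitimate user SID
    "S-1-5-21-9999999999-8888888888-7777777777-512",    # forged foreign admin SID
]
# Only the first SID survives filtering; the forged SID is removed.
```

This also makes clear why SID filtering and SIDHistory are mutually exclusive: legitimate old-domain SIDs stored in SIDHistory fail the same domain check as forged ones.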
You can turn the feature on or off using the Netdom command-line utility: use the trust and /filtersids switches as described in the Microsoft article "MS02-001: Forged SID could result in elevated privileges in Windows 2000" (http://support.microsoft.com/?kbid=289243). For Windows Server 2003 domains, use the trust and /quarantine netdom.exe switches. SID filtering is turned on by default on Windows Server 2003 external and forest trust relationships. Don't use SID filtering on trust relationships between domains in the same forest; doing so breaks AD replication and transitive trust relationships. If you want to quarantine a domain, you should put it in a separate forest.
They went mainstream only a few years ago, but drones are already making a big splash in the market. Thanks to the ability to buy them off the shelf, drones are being adapted for commercial use. There is only so much human controllers can do, though, and future drone models may not face this limitation thanks to the use of AI. Freelance blogger Jake Carter tells us more.

Artificial Intelligence, or AI, has existed for some time now. If we could develop an AI that could operate drones without humans, what could this mean? What sort of new opportunities could come from the melding of AI with drone technology? Moreover, what could go wrong?

Thanks to the eye-in-the-sky nature their cameras afford them, drones are an excellent way to get a birds-eye view of the land. Drone enthusiasts already use this to capture fantastic views of landscapes, which means drones can be used for tasks such as land surveillance and mapping, but they can go beyond such uses into realms like construction.

In March 2018, Chinese manufacturer DJI announced its largest shipment of commercial drones to date: one thousand drones designed by U.S. drone company Skycatch were sold to Japanese construction company Komatsu. Each of these drones contains software designed by Skycatch that allows them to run with minimal human interaction.

The drones in question are the model Skycatch Explore1. Equipped with Skycatch's software, the drone can create maps accurate to within five centimetres. What's more, the base station, Edge1, can process images without the need for Wi-Fi.
The most impressive feature, though, is its own set of algorithms that can identify basic materials on a construction site and take stock of how much is left. This means construction projects become more efficient. In a video demo of the Explore1, the drone identified a potential design flaw in a construction site, saving the site time and money that would otherwise have gone into correcting the defect. The Komatsu order is only one example of Skycatch's increased popularity; its drones have already been used on sites around the world. Skycatch CEO Christian Sanz says, though, that the massive order shows the industry is growing: "Automation in construction is no longer something to look out for, four or five years in the future. When you go to a job site you should expect to see robots on the ground."[…]
How Advanced Technology Trends are Reshaping the Healthcare Sector

Diagnostics, therapy management, and other aspects of healthcare are being transformed by AI advancements.

Fremont, CA: Artificial intelligence has carved out a strong presence in human civilization and our daily lives over the last two decades. Most of us are unaware that AI powers everything from our social media feeds to our online shopping experiences. Healthcare, likewise, is a major industry that is highly dependent upon technology. Artificial intelligence is transforming diagnostics, treatment management, medication development and manufacturing, procedures, and other fields. AI sub-technologies such as machine learning, natural language processing (NLP), and data science also help identify healthcare needs and solutions faster and more accurately. On the other side, some worry that these capabilities will outstrip human competence and render us worthless; although this may be the case in the far future, for now the technology is keeping humans on their toes. Let's look at key technological advancements in the healthcare sector.

· Automatic Stroke Detection

When a person is suffering a stroke, every minute counts. Obtaining professional stroke care takes time, which can affect the patient's long-term health. Beyond that, determining the precise location of the clot or bleed remains a difficult challenge for medical personnel. Automation and artificial intelligence (AI) techniques can help medical staff diagnose and treat strokes: with an automated stroke detection system, doctors can close gaps in high-quality neuroimaging that identifies the type of stroke and the location of the clot or bleed.

· Digital Nurses

The Covid-19 medical tragedy taught people the value of medical institutions worldwide.
In addition, the pandemic revealed that there were not enough medical personnel, including physicians and nurses, to handle the influx of patients. However, digital nurses are coming to change the face of patient care. They can help monitor patients' conditions between medical appointments; Molly, for example, is a digital nurse built by Sense.ly to monitor patient symptoms.

· Personalized Medical Care

Giving the same therapy to 100 people with the same condition is not a smart idea: everyone's body type, metabolism, and medical characteristics may differ. As a result, healthcare institutions are developing personalized treatment facilities that deliver specialized patient care through rigorous analysis of each patient's health data. This will lower the cost of comprehensive healthcare while increasing its efficacy. Healthcare facilities use machine learning technology to match patient data with the most effective treatments.
PCIe definition: a PCIe SSD is a solid state drive (SSD) that connects to a computer system using a PCIe interface. PCIe stands for Peripheral Component Interconnect Express, which is also known as PCI Express or PCI-E. Modern computer systems are equipped with PCIe "slots" or sockets, and these sockets can be used to connect many different types of devices, such as graphics cards and PCIe SSDs. Clearly, PCIe flash-based storage is a key element in today's data storage landscape.

What is PCIe?

PCIe is a much newer interface standard than the original PCI, and it has gone through increasingly fast iterations, including PCIe 2.0 and PCIe 3.0, the current spec used for SSD connectivity. (In fact, the first NVMe SSD that supports PCIe 4.0 was released in July 2018, but PCIe 4.0 SSDs are not commonly available yet.)

PCIe slots come in different lengths because they may include varying numbers of lanes over which data can travel. The smallest PCIe slot contains one lane and is known as a PCIe x1 slot. Other slots include PCIe x4, PCIe x8, and PCIe x16, which has 16 lanes and is currently the fastest PCIe interface commonly available. A PCIe 3.0 x16 interface offers a total bandwidth of 16 GBps, while PCIe 2.0 x16 offers 8 GBps; in contrast, a PCIe 1.1 x16 manages 4 GBps. All are vastly faster than the PCI interface's maximum bandwidth of 532 MBps.

Aside from its enhanced speed compared to PCI, PCIe also offers other benefits, including advanced error detection and reporting, and the ability to hot-swap devices so that they can be inserted and detected without rebooting the system. The Samsung 970 EVO is an example of a high-performance PCIe SSD.

PCI vs PCIe

PCIe should not be confused with plain PCI, a much older interface standard first proposed by Intel as far back as 1990 and implemented widely in computer systems five years later.
The internal architecture of PCIe resembles a local area network, with each link connected to a central switch in the computer. PCI differs in that all devices share the same parallel bus. PCI slots are generally longer than PCIe slots, but the key difference is that the older PCI technology runs at a much slower speed than is attainable by PCIe SSDs. The standard 32-bit PCI slot has a maximum throughput of 133 MBps, while a 64-bit PCI slot can run at up to 532 MBps.

In practice, most SSD buyers will choose between a PCIe SSD, which uses a PCIe interface, and a SATA SSD, which uses the common modern alternative: the Serial ATA or SATA interface.

PCIe SSD vs. SATA SSD

PCIe SSD Benefits

The key benefit of PCIe SSDs is the huge performance boost they provide compared to SATA drives, as described above. In fact, this performance gain is not the whole story, because as well as plain vanilla PCIe SSDs it is also possible to buy PCIe NVMe SSDs, which are faster still.

SATA SSD benefits

- Compatibility: SATA is a much older interface for SSDs than PCIe, dating back to 2003. As a result, SATA interfaces are much more common, which in turn means SATA SSDs are likely to be compatible with many more older systems than PCIe SSDs.
- Efficiency: SATA is also a less sophisticated interface than the newer PCIe interface, which means the energy consumption of SATA SSDs is likely to be lower than that of more modern PCIe SSDs. This may not matter for desktop users, but it can have important implications for laptop battery life. It may also matter in large data centers, where power and cooling costs can be considerable.
- AHCI: SATA SSDs can be hot-swapped in and out of a system as long as the system is using AHCI (Advanced Host Controller Interface), which can usually be activated in the system's BIOS.
AHCI also enables systems to use a technique called native command queuing (NCQ), which can further enhance the performance of SATA SSDs. (PCIe SSDs can be hot-swapped by default.)
- Cost: SATA SSDs tend to cost significantly less than PCIe SSDs. This may be important for some purchasers, but the cost differential may not be a true comparison, because SATA SSDs tend to offer poorer performance than PCIe SSDs. For example, a SATA 3.0 SSD may offer an effective data rate of about 560 MBps, while a PCIe 3.0 SSD may offer performance 3 to 6 times higher. With each new generation, PCIe pulls further ahead of SATA/SAS.

What is PCIe NVMe?

NVMe stands for Non-Volatile Memory Express, a specification for accessing SSDs connected over a PCIe interface (in a similar way that AHCI is used with the SATA interface). NVMe delivers a performance boost by using the parallelism of the flash storage medium to reduce latency and increase throughput. In fact, NVMe is not restricted to flash storage: some of the fastest SSDs use NVMe to control Intel's 3D XPoint non-volatile storage medium, an alternative to regular NAND (flash) memory.

PCIe types

PCIe 2.0 vs 3.0

PCIe 2.0 dates back to 2007, offering a transfer rate of 5 GT/s (and throughput per lane of 500 MBps). That means a 32-lane connector (i.e., PCIe x32) can support a throughput of up to 16 GBps. PCIe 3.0 (sometimes called PCIe gen3) was released three years later, in 2010, and in theory offers about double the performance of PCIe 2.0.

PCIe SSD form factors

M.2 PCIe SSD

One SSD form factor worth mentioning here is the M.2 PCIe SSD "gumstick" form factor, which is popular in laptops and other systems where space is at a premium. M.2 PCIe SSDs support PCIe 3.0 (as well as SATA 3.0 and USB 3.0) interfaces, and they can also benefit from the NVMe protocol, which offers significant performance benefits when running over PCIe.
U.2 PCIe SSD

Another important form factor is the U.2 PCIe SSD. U.2 is actually a connector, which uses up to four PCIe 3.0 lanes. U.2 PCIe SSDs tend to be designed for the enterprise market, especially solid state storage systems, and are often produced as slim 2.5-inch devices or as thin cards similar to M.2 PCIe SSDs.

PCIe SSD Speed

So what kind of performance can be expected from today's PCIe SSDs? The fastest PCIe SSDs are impressive. The most common benchmarking tools used to produce accurate figures are CrystalDiskMark and AS SSD. Some of the fastest PCIe SSDs currently available use Intel's 3D XPoint storage media, such as the Intel Optane SSD 900P series. It uses a PCIe 3.0 x4 interface and NVMe, offering:
- Sequential read speed: up to 2,500 MBps
- Sequential write speed: up to 2,000 MBps
- Random read speed: 550,000 IOPS
- Random write speed: 500,000 IOPS

More typical PCIe 3.0 x4 performance figures are:
- Sequential read speed: up to 2,300 – 3,300 MBps
- Sequential write speed: up to 1,300 – 2,000 MBps
- Random read speed: 140,000 – 400,000 IOPS
- Random write speed: 150,000 – 350,000 IOPS

Most modern PCIe SSDs are designed for PCIe 3.0 slots, but since PCIe is backwards compatible they can still be used in older systems with PCIe 2.0 slots. The resulting performance will be about half of what the SSD would achieve in a PCIe 3.0 slot, so typical PCIe 2.0 figures would be:
- Sequential read speed: about 1,500 MBps
- Sequential write speed: about 1,400 MBps
- Random read speed: about 200,000 IOPS
- Random write speed: about 180,000 IOPS

| Link Speed | 3 Gbps | 6 Gbps | 8 Gbps (x2) |
|---|---|---|---|
| Effective Data Rate | ~275 MBps | ~560 MBps | ~780 MBps |

The speed of PCI Express made a huge leap from 2.0 to 3.0, increasing its speed advantage over Serial ATA.
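The bandwidth figures quoted above follow directly from each generation's raw transfer rate and line-encoding overhead (8b/10b for PCIe 1.x/2.0, 128b/130b for 3.0 and later). The sketch below reproduces them; the function name and table layout are mine, but the per-generation rates are the standard published values.

```python
# Effective PCIe throughput from transfer rate and encoding overhead.
# (transfer rate in GT/s, usable fraction after line encoding)
ENCODING = {
    1: (2.5, 8 / 10),     # PCIe 1.x: 2.5 GT/s, 8b/10b encoding
    2: (5.0, 8 / 10),     # PCIe 2.0: 5 GT/s, 8b/10b encoding
    3: (8.0, 128 / 130),  # PCIe 3.0: 8 GT/s, 128b/130b encoding
    4: (16.0, 128 / 130), # PCIe 4.0: 16 GT/s, 128b/130b encoding
}

def throughput_gbps(gen: int, lanes: int) -> float:
    """Usable bandwidth in GB/s for a given PCIe generation and lane count."""
    gt_per_s, efficiency = ENCODING[gen]
    usable_gigabits = gt_per_s * efficiency   # usable Gb/s per lane
    return usable_gigabits / 8 * lanes        # convert to GB/s, scale by lanes
```

PCIe 2.0 x16 works out to exactly 8 GB/s and PCIe 3.0 x16 to about 15.75 GB/s, which is why the article (like most sources) rounds the latter to 16 GBps.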
What is ROM?

As its name indicates, ROM stands for Read-Only Memory, i.e., memory that can only be read, without offering modifications. ROM is a non-volatile storage medium that stores information permanently, even when the component loses power. The primary aim of ROM is to store the crucial instructions necessary to boot (start) the system. The contents of ROM cannot be re-programmed, re-written, tampered with, altered, or erased. ROM is inexpensive, reliable, and doesn't require frequent refreshing. The different types of read-only memory are described below.

What is a DVD-ROM?

DVD stands for Digital Versatile Disc, commonly known as the Digital Video Disc. It is a digital optical disc storage mechanism that acts as a reservoir for high-capacity data. A DVD digitally stores text, audio, and video data, allowing users to download and disseminate data from their computers to other devices. A DVD-ROM (Digital Versatile Disc Read-Only Memory) is an optical drive used to read the contents of a DVD; it doesn't allow writing over the stored data. A DVD-ROM can play all formats of DVD, including single-sided, single-layered, double-sided, and double-layered discs.

The downfall of DVD-ROM

The sale of DVDs has been in decline for over a decade. According to CNBC, DVD sales account for less than 10% of the total market, and the global DVD market is projected to reach USD 31 million by 2026, at a negative CAGR of -27.8% during the forecast period of 2022-2026. The decline began with the rise of digital downloads and streaming services. The DVD-ROM yields far fewer benefits than modern storage solutions, rendering it obsolete: its drawbacks include greater space requirements, limited storage capacity, lack of security, larger size, cost-ineffectiveness, poor portability, compatibility issues, and the need to physically transfer data.
The final nail in the coffin of DVD-ROM landed in 2018, when car manufacturers declared that there would be no DVD players in future cars.

The emergence of ROM USB

Today's data storage technology and faster transmission mechanisms are entirely digital and miniaturized, in the sense that tiny pieces of hardware can handle the amount of information that would once have needed dozens of individual DVDs. In this fast-paced era, people prefer devices that are time-saving and less labor-intensive; one such device is the ROM USB, which gives consumers a futuristic digital storage experience. The global ROM USB market was valued at USD 7.12 billion in 2021 and is projected to reach USD 10.75 billion by 2028, growing at a compound annual growth rate (CAGR) of 7.1% over the forecast period (2022-2028).

A ROM USB retains information without power and can be reprogrammed without being removed from the computer. In addition, ROM USB offers the supplementary benefits stated beneath:
- High security: a password is required to enable/disable write access
- Enhanced storage capacity: supports capacities of 8 GB-32 GB
- High durability
- Easy integration
- No software installation required for read access
- Fast access time
- Faster transfer rate
- Low energy consumption

FLEXXON PUTS FORWARD A SOLUTION FOR INVIOLABLE DATA STORAGE: ROM USB

Flexxon offers a top-grade, robust, tamper-proof storage alternative, the ROM USB, to safeguard crucial information. With its compact design, Flexxon's ROM USB is one of the most convenient and reliable ways to move data between systems. The ROM USB ensures the authenticity of stored data while providing an advanced feature that permits controlled alterations to the data. Its state-of-the-art Read-Only mode allows a flexible workflow: users can enable or disable the mode to prohibit or permit modifications to the data as required.
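The password-gated Read-Only mode just described can be modeled in host software as a small toy class. This is purely an illustration of the workflow (the class, method names, and password scheme are invented for this sketch); the actual device enforces the mode in firmware, not in Python.

```python
# Toy model of a password-gated Read-Only mode, as described for the ROM USB.

from typing import Dict, Optional

class RomUsbModel:
    def __init__(self, password: str):
        self._password = password
        self._read_only = True                # starts locked: data can't be altered
        self._data: Dict[str, bytes] = {}

    def set_read_only(self, enabled: bool, password: str) -> bool:
        """Toggle Read-Only mode; requires the correct password."""
        if password != self._password:
            return False                      # wrong password: mode is unchanged
        self._read_only = enabled
        return True

    def write(self, name: str, blob: bytes) -> bool:
        """Writes succeed only while Read-Only mode is disabled."""
        if self._read_only:
            return False
        self._data[name] = blob
        return True

    def read(self, name: str) -> Optional[bytes]:
        return self._data.get(name)           # reads require no password
```

The point of the design is asymmetry: anyone can read, but only a password holder can unlock the device for writing, which is what protects stored records from tampering.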
The ROM USB is a multi-purpose device that can either keep the data safe from edit access or authorize the desired edits to the data. In addition, the ROM USB uses its security function as a defense mechanism, ensuring the integrity of sensitive data with encrypted access.

Significant features of ROM USB

The security function, along with the Read-Only mode, lets the ROM USB play a pivotal role in applications where security, reliability, and authenticity are paramount. Business organizations, financial institutions, the cybersecurity industry, and medical and healthcare facilities face the daily challenge of securing valuable information. Flexxon understands the severity of ensuring data integrity, so it proposes the ROM USB as an ingenious storage solution to cater to the demands of these industries. The ROM USB extends a flexible, rewritable or non-rewritable storage solution to accommodate the varying uses and demands of numerous industries. With the ROM USB the data remains secure, and its key features include the following advantages:
- Read-Only Mode: once Read-Only mode is switched on, the ROM USB prohibits altering or deleting the stored data, keeping it safe and sound.
- Safeguard Option: this feature guards the stored data by enabling or disabling the Read-Only option, permitting modification of the data only when the mode is disabled.
- Security Function: the password-protection feature of the ROM USB validates the integrity of stored data.

Applications of ROM USB in Medical and Healthcare Facilities

Medical and healthcare facilities are obligated to secure critical patient records and maintain their confidentiality. However, the use of DVDs to keep sensitive medical reports is on its way out in the medical industry, primarily because patients don't have a DVD-ROM drive at their end to read those records.
Flexxon’s ROM USB emanated as a quick fix to all the DVD compatibility, data integrity, and storage-related problems faced by medical and healthcare facilities. Owing to its reliability, security, capacity, portability, compatibility, and flexibility, ROM USB minimizes the risk factors and finds numerous applications in medical and healthcare facilities. Software updates for Diagnostic Machines: The most common and widely used wired protocol to communicate between the data acquisition devices and the hub is USB. USB has become so prevalent because it is the first wired communication protocol to become a standard of the Continua Health Alliance, a consortium of nearly 240 healthcare providers, communications, medical, and fitness device companies. Medical facilities employ ROM USB to hoard the software updates for Medical devices. Medical diagnostic machines such as medical imaging or X-ray machines necessitate continuous software updates, which are available in Flexxon’s ROM USB. The authorized person can use the password-protected ROM USB with software updates to update the device. Unauthorized users who don’t have the password cannot use the ROM USB for software updates. Digital robotics, with its potential for accuracy and reduced workload, is transforming the medical industry’s landscape across the globe and is being employed in telemedicine, surgery, rehabilitation, radiation treatment, healthcare, and infection control. A USB establishes the connection between the computer and the robot, which is used to send the stored programs to the robot. Flexxon’s ROM USB plays a vital role in digital robotics by establishing a connection between the robot and a computer. ROM USB is used to share programs, instructions, medical records, patient history, online data, and configuration files between the computer and the medical sector robot. 
The medical and healthcare system can uninterruptedly and accurately monitor the human body’s heart rate, blood pressure, pulse, body temperature, physiological information, and other vital sign parameters. Flexxon’s ROM USB is a portable storage device for the patient’s medical records and is maneuvered to transfer the data to the doctor or healthcare monitoring system. As a result, ROM USB has surfaced as a coherent storage device and has eradicated the difficulty of moving medical data from a monitoring system. ROM USB replaced the hackneyed DVDs. Flexxon’s high-tech ROM USB is an intelligent, super-robust, and fast-speed storage solution that safeguards the essential information while allowing alterations. Furthermore, the groundbreaking technology of Flexxon’s ROM USB defends stored confidential information by extending password-protected access and is a promising next-generation storage solution embedded in various fields.
<urn:uuid:93d32853-b3ed-4772-a76d-5d3593552abc>
CC-MAIN-2022-40
https://www.flexxon.com/rom-usb-applications-medical-and-healthcare/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00433.warc.gz
en
0.908178
1,779
2.6875
3
The X Factor: Form Follows Function November 6, 2006 Timothy Prickett Morgan In nature, the shape of an animal or a plant is the result of the interplay between the organism and its environment–the latter being the sum result of the forces at play, the competitive pressures between competing life forms, and the materials at hand with which to build and sustain the life form. In the data center, similar competitive pressures are at work on computer designs, and instead of working at periodic timescales, evolution happens over a human generation or less. But sometimes evolution is stalled by greed. While there has been plenty of evolution under the skins of servers in the data center, there has been less in the skins themselves. Rack-mounted server form factors that are decades old persist, and the blade server form factors that should have easily replaced them have seen a slower uptake than many would have predicted. (Having said that, blade servers are seeing very large revenue and shipment growth–in the double digits each quarter–but the growth is slowing each year.) Mounting electronics gear in racks that are a standard 19-inches in width has been a customary practice in the electronics industry for decades, and the reason why the height of a unit of measure in a rack is 1.75 inches is a bit of a mystery. (When people say 1U, 2U, or 4U, this is a multiple of that rack unit.) Somewhat humorously, the vershok is a standard unit of measure that Russia used prior to adopting the metric system in 1924. So we could blame the Russian scientific and military community for picking such a bizarre and non-round unit of measure for the height of a piece of rack-mounted equipment. 44.45 millimeters is a very precise unit of measure, but it is somewhat silly. Then again, the width of 482.6 millimeters of rack-mounted equipment is not exactly round, either. Racks usually come in 42U-high versions, and sometimes in 20U and 25U variants. 
In any event, Compaq and Sun Microsystems usually get credit for using standard racks first in the server business with pizza box servers in the 1990s; IBM‘s AS/400 and 9370 minicomputer chassis from the 1980s were all rack-mounted gear, and used the 19-inch form factor standard. But the rack-mounting of server gear started in earnest as air-cooled computing became the norm in data centers and as companies installed RISC/Unix and X86 servers by the dozens, hundreds, and thousands to support new kinds of infrastructure workloads–application, e-mail, Web, file, print serving being the common ones. The move from host-based, mainframe-style computing to distributed, n-tier computing saved companies a lot of money, but with tower-based PC servers stacked up all over the place, computing was sprawled out all over the place and took up a lot of very expensive space in the data center. And so, the industry embraced rack-mounted, pizza box servers. Now, X86-style servers could be packed 21 or 42 to a rack, which meant X86 servers could be packed into data centers with two, three, or four times the density. In the early 2000s, the industry went nuts over the idea of blade servers, which flipped servers and their chasses on their sides, put the servers on cards that resembled fat peripheral cards more than they did whole servers, and integrated networking functions, and mounted a blade chassis inside of a standard rack. By moving to blades, the compute density within a rack could be doubled or tripled again. The blade servers had an integrated system management backplane that all machines plugged into, and internalized switches to outside networks and storage, all of which cut down substantially on wiring. All of which saves money on system administration and real estate. And by having an integrated backplane, the blade server chassis allows something not available with rack-based servers–account control. 
And that is why there is still not a standard for form factors for commercial blade servers, and why customers should demand one. In fact, the time has come to offer a unified blade server standard that spans both the telecom and service provider world and enterprises. No computer maker can afford to make both enterprise and AdvancedTCA blades, the latter being the latest in a long line of blade standards for the telecom industry. To its credit, Hewlett-Packard‘s “Powerbar” blade server, which was killed off in the wake of the Compaq merger so HP could sell the “QuickBlade” ProLiant blade servers instead, adopted the predecessor to the ACTA telecom blade server standard. Sun has also been an aggressive supporter of the telecom blade form factors. And these and other companies who make ACTA blades did so because their telecom customers, who pay a premium for DC-based ACTA blades, gave them no choice. This is the power of a true standard. It levels the playing field, unlike IBM’s Blade.org pseudo-standard, announced in conjunction with Intel, which seeks to make IBM’s BladeCenter chassis the standard other vendors have to adhere to. The density that blade servers allow are important to data centers today, since they are running out of space. Blade servers have shared peripherals and shared power supplies, too, which means that they are inherently more efficient than standalone, rack-mounted servers. But there are other issues that are related to server form factors that need to be standardized. First, power distribution should be built into the rack, whether a customer is opting for rack-mounted or blade servers. Power supplies are wickedly inefficient and often over powered compared to the loads that are typically in the machine; moreover, they generate heat inside the box, which only makes the box that more difficult to cool. Putting a power supply into each server makes little sense in a server world where shared resources is becoming the rule. 
As long as the power supplies are redundant. Rather than have AC power go into a server and then converted into DC, racks should come with DC power modules that can scale up as server loads require. Conversion from AC to DC should be done in the rack. And all blade server chassis and rack-mounted servers should be able to plug into this power standard. No server of any kind should have an AC power supply. This is an idea that has been commercialized by Rackable Systems within its own racks, but now it is time to take it to the industry at large. The other thing that needs to be standardized is the blade server itself. Just like peripheral cards adhere to standards, a blade server’s shape and the way it plugs into a blade server chassis needs to be standardized so customers can mix and match blades from different vendors within a chassis and across racks. The way that chasses interconnect should also be standardized, so they can share power and extend the systems management backplane beyond a single chassis and across racks if necessary. Switches, storage blades, and other devices should also be standardized so they work within this blade server standard. Finally, the rack that holds blade chasses and rack-servers should have integrated cooling features, too. As little heat as possible should leave a rack, and if that means integrating water blocks onto processors and other components inside servers (as PC gamers do today) and putting water chillers on the outside of racks (as many supercomputer centers are starting to do), then so be it. Data centers cost millions to hundreds of millions of dollars to build, and the goal should be to use the density afforded by blades without melting all of the computers. Cooling with moving air does not work. Data centers develop hot spots, and moving huge volumes of conditioned air around is very inefficient. These cooling features should be standardized, just like the blades and rack servers themselves. 
The form factors of servers are supposed to serve the needs of customers, not those of vendors.
<urn:uuid:965a8ce7-4133-4d9e-9c5b-4b8bda900805>
CC-MAIN-2022-40
https://www.itjungle.com/2006/11/06/tfh110606-story04/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00633.warc.gz
en
0.95373
1,642
2.59375
3
In this blog post Industry 4.0 is a name given to the current trend of automation and data exchange in manufacturing technologies. It includes cyber-physical systems, the Internet of things, cloud computing and cognitive computing. Industry 4.0 is commonly referred to as the fourth industrial revolution. Industry 4.0 is the paving the path for digitization of the manufacturing sector, where artificial intelligence (AI) and machine-learning based systems are not only changing the ways we interact with information and computers but also revolutionizing it. Compelling reasons for most companies to shift towards Industry 4.0 and automate manufacturing include - Increase productivity - Minimize human / manual errors - Optimize production costs - Focus human efforts on non-repetitive tasks to improve efficiency Manufacturing is now being driven by effective data management and AI that will decide its future. The more data sets computers are fed, the more they can observe trends, learn and make decisions that benefit the manufacturing organization. This automation will help to predict failures more accurately, predict workloads, detect and anticipate problems to achieve Zero Incidence. GAVS proprietary AI led predictive analytics solution – GAVel can successfully integrate AI and machine learning into the workflow allowing manufacturers to build robust technology foundations. This means creating a purpose-built, big data architecture that can aggregate data from disparate systems, such as enterprise resource planning (ERP), manufacturing execution systems and quality management software. To maximize the many opportunities presented by Industry 4.0, manufacturers need to build a system with the entire production process in mind as it requires collaboration across the entire supply chain cycle. 
Top ways in which GAVS expertise in AI and ML are revolutionizing manufacturing sector: - Asset management, supply chain management and inventory management are the dominant areas of artificial intelligence, machine learning and IoT adoption in manufacturing today. Combining these emerging technologies, they can improve asset tracking accuracy, supply chain visibility, and inventory optimization. - Improve predictive maintenance through better adoption of ML techniques like analytics, Machine Intelligence driven processes and quality optimization. - Reduce supply chain forecasting errors and reduce lost sales to increase better product availability. - Real time monitoring of the operational loads on the production floor helps in providing insights into the production schedule performances. - Achieve significant reduction in test and calibration time via accurate prediction of calibration and test results using machine learning. - Combining ML and Overall Equipment Effectiveness (OEE), manufacturers can improve yield rates, preventative maintenance accuracy and workloads by the assets. OEE is a universally used metric in manufacturing as it combines availability, performance, and quality, defining production effectiveness. - Improving the accuracy of detecting costs of performance degradation across multiple manufacturing scenarios that reduces costs by 50% or more. Direct benefits of Machine Learning and AI for Manufacturing The introduction of AI and Machine Learning to industry 4.0 represents a big change for manufacturing companies that can open new business opportunities and result in advantages like efficiency improvements among others. - Cost reduction through Predictive Maintenance that leads to less maintenance activity, which means lower labor costs, reduced inventory and materials wastage. - Predicting Remaining Useful Life (RUL). 
Keeping tabs on the behavior of machines and equipment leads to creating conditions that improve performance while maintaining machine health. By predicting RUL, it reduces the scenarios which causes unplanned downtime. - Improved supply chain management through efficient inventory management and a well monitored and synchronized production flow. - Autonomous equipment and vehicles: Use of autonomous cranes and trucks to streamline operations as they accept containers from transport vehicles, ships, trucks etc. - Better Quality Control with actionable insights to constantly raise product quality. - Improved human-machine collaboration while improving employee safety conditions and boosting overall efficiency. - Consumer-focused manufacturing – being able to respond quickly to changes in the market demand. Touch base with GAVS AI experts here: https://www.gavstech.com/reaching-us/ and see how we can help you drive your manufacturing operation towards Industry 4.0.
<urn:uuid:03ec591b-adc7-4c5d-ba2a-e4f5c42e34e9>
CC-MAIN-2022-40
https://www.gavstech.com/pivotal-role-of-ai-and-machine-learning-in-industry-4-0-and-manufacturing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00633.warc.gz
en
0.913724
829
2.515625
3
We live in so-called ‘exponential times’, where companies and their businesses are evolving at an ever-increasing speed. A method for these companies to stay competitive is the continuous investigation of new approaches, technologies, and materials that enable the introduction of new products or services. However, these new products and services usually tend to be complex in themselves, as well as in regard to their particular functions. Thus, mastering processes for new products or service developments and understanding how they can be successfully managed and adapted for end-users have become keys elements of achieving competitiveness in a modern enterprise. The above statements cover just some of the many reasons why business process management (commonly abbreviated as ‘BPM’) is becoming so important. BPM represents a disciplined approach to ‘working’ with automated and non-automated business processes in order to achieve consistent and targeted results that are aligned with an organization’s strategic goals. Working with processes means ‘taking care’ of processes, which commonly consists of the following activities: identifying, designing, executing, automating, documenting, monitoring, controlling, and measuring business processes. This is commonly known as a ‘Plan-Do-Check-Act’ process improvement cycle. By utilizing these activities, the BPM approach allows organizations to become more efficient, more effective, and more capable of change when compared to companies following traditional functionally-focused and hierarchical management approaches. Unified Modeling Language (UML) and Business Process Model and Notation (BPMN) BPM’s success depends on having transparent, constantly improving business processes, which mostly results from business process modeling. Business process modeling is abbreviated by some as ‘BPM’, though this clashes with the abbreviation for ‘business process management’. 
In order to avoid this confusion, it is better to abbreviate business process modeling as ‘BPMo’. BPMo is concerned with the representation of organizational processes so that current processes may be analyzed and improved in the future. BPMo is not just a requirement for many ISO 9000 quality programs, but also plays an important role in the implementation of work-flow management and enterprise resource planning systems. In order to be understood by teams and become interoperable between IT tools, BPMo must be based on standardized notations (or languages) that are usually symbol-based or graphical. Stepping away from vendor-based and non-standardized notations, two standardized graphical notations for business process modeling exist: Unified Modeling Language (UML) and Business Process Model and Notation (BPMN 2.0). The main difference between the two is that UML is object-oriented, where BPMN takes a process-oriented approach, which is more suitable within a business process domain. This is why BPMN is becoming the global leader and de-facto standard for BPMo and BPM. If we model using BPMN, we are graphically representing a business process in the form of a business process diagram (BPD). BPDs are commonly used to represent, analyze, and implement current (‘as is’) and improved (‘to be’) processes. So, it is of great importance that BPDs reflect real-world processes accurately and precisely. BPDs are commonly equated with business process models, though ‘model’ is more generic than the business process terms you would typically use. A business process model can also be a non-visual model (e.g., a program code or XML file) suited for being deployed on a business process engine. It can be argued that a well-designed BPD, which is based on a standardized notation, such as BPMN, can positively affect most BPM activities and improve both intra- and inter-organizational communication, not to mention collaboration. 
The result is a well-established and flexible BPM practice that can be continuously improved and adapted to suit business requirements – a precondition for staying competitive in the turbulent corporate environment of today.
<urn:uuid:1d99435a-d50c-42da-9110-f8d0c53d3867>
CC-MAIN-2022-40
https://blog.goodelearning.com/subject-areas/bpmn/bpm-bpmn-bpd-bpmo-an-explanation-of-business-process-related-terms/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00633.warc.gz
en
0.958075
849
2.625
3
10 Gbps passive optical network (10G-PON) is a next-generation solution following the current-generation gigabit passive optical network (GPON) (ITU-T G.984) and Ethernet passive optical network (EPON) (IEEE 802.3ah) solutions, basically offering higher bandwidth and additional features. Like its predecessors, 10G-PON will allow multiple users to share the capacity over a passive fiber-optic “tree” infrastructure, where the fibers to individual users branch out from a single fiber running to a network node. In September 2009, the Institute of Electrical and Electronics Engineers (IEEE) approved 802.3av as a 10G-EPON standard, including both 10/1 Gbps and symmetrical 10 Gbps implementations. In October 2010, the International Telecommunication Union (ITU) approved the ITU-T G.987.1 and G.987.2 10G-PON standards with asymmetrical and symmetrical implementations.Asymmetrical 10G-PON (specified by the Full Service Access Network [FSAN] as XG-PON1) provides 10 Gbps downstream and 2.5 Gbps upstream. Symmetrical 10G-PON (specified by FSAN as XG-PON2) provides 10 Gbps both ways.10G-PON uses different downstream and upstream wavelengths (1,577 nanometers [nm] and 1,270nm respectively) to those used by GPON, so that both systems can coexist on the same fiber architecture. This allows communications service providers (CSPs) to supply GPON services to the majority of their subscribers, while providing higher-bandwidth 10G-PON to premium subscribers, such as enterprises, or for the deployment of broadband to high-density multidwelling units.
<urn:uuid:0bf7fb5e-167a-4016-86e3-d0cbd7cdb76e>
CC-MAIN-2022-40
https://www.gartner.com/en/information-technology/glossary/10g-pon
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00633.warc.gz
en
0.878072
380
2.875
3
Ensuring uptime is vital for data center operations. With huge volumes of data being processed through data centers worldwide, the banking system, businesses, and governments are relying on data center uptime to function. Hotspots cause downtime and thermal cameras can be used to quickly identify problem areas. A high-density data center will have multiple layers of IT infrastructure. The installed hardware facilitates the transfer of data through the system. Servers process data and store it as needed. However, despite regular maintenance and monitoring, potential issues can be missed. As complex as IT systems are threats may arise under the radar. Unable to detect such risks can be detrimental to the whole operation. Surveillance with Thermal Imaging Cameras Thermal camera surveys are part of operational compliance for many organizations. They have become common in the post covid world to check employees and the public when entering malls, sports events, and the workplace. Thermal cameras are also valuable tools that any data center management can use. Also referred to as infrared (IR) cameras, thermal cameras capture videos and images in the Infrared spectrum not visible to the naked eye. Like a snake who sees in the dark using Infrared, the cameras show the reflected heat signature of objects. Infrared cameras can capture images of the following: - A person entering a building - Substation transformers - LV switchboards - Critical power sources - Batteries and other power distribution units - HVAC, including chillers and individual cooling units What Are The Benefits Of Thermal Camera Surveys? Thermal camera surveys are either a standalone function or complemented with existing monitoring processes. The direct impact is visible when camera surveys are done alongside preventive maintenance procedures. Detection of Hotspots Within a server rack, high-temperature accumulation can be a problem. These areas are referred to as hotspots. 
Hotspots are the results of poor airflow management. Also, it can result from impractical server layouts. Identification of hotspots is easy through a thermal camera. Images are captured and stored in a cloud. Such a storage option enables the easy relay of camera images. These hotspot images are also an essential account requirement for thermal audit records. Added to that, it presents temperature change within a given period specific to IT hardware. The image accounts are a quick reference to evaluate if an investigation needs to be done. It also qualified evidence to perform any maintenance procedure. It will serve as an indicator of the need to upgrade the IT facility. More in-depth than detection of hotspots, image records serve as a gauge to analyze surface temperatures. Some thermal cameras can also yield two-dimensional images, which is helpful to compare and contrast temperature levels. Analysis of these image records yields better identification of anomalies and threats. It provides a concrete report on the temperature status of an IT component. Image records are also preventive variables to tackle potential risks. If not responded immediately, these risks will become huge issues that may result in data center failure in the end. Checklist For A Thermal Survey In A Data Center There is a standard guideline in administering thermal camera surveys. The primary prerequisite is for a qualified engineer to conduct the thermal survey. It also has to follow a set procedure. In the case of data centers and IT infrastructure, a comprehensive survey is necessary. A thorough survey will single out potential and ongoing failures that may have been missed out from regular maintenance checks. Usually, surveys are carried out from the incoming point. It will then trail into the critical power and cooling route. This will lead into the server room. An essential aspect of the survey procedure for data centers is to recognize the timing of its conduct. 
It is a priority to survey during peak hours. This is to take into account the peak operation and workload run of IT components. As such, the probability of getting a comprehensive heat image capture is high at this particular period. There are varied types of transformers. Some transformers are installed just outside a data center building. A few of them are owned by local electric providers. There are also plant registered transformers. Despite the nature of these transformers, checking them during thermal surveys is imperative. Checking transformers is rounding up the changes in thermal temperature in general. Monitoring will take into account windings and lug connections that are contributory to temperature fluctuations. Switchgears will always require maintenance. Switchboards, on the other hand, need checking after a long period. Capacitors in active harmonic filters require a change every 7-8 years. The survey will prevent temperature rise in switchboards. Consequently, these temperature increases will fasten the aging of a component. Thermal surveys are methods to determine if switchboards will require swap out. Electrical Wirings And Sub-Distribution Panels Power is transmitted on sub-distribution panels. Electrical cables connect these panels through circuit breakers. Proper cable sizes are critical to gauge the correct voltage and current flow. When thermal temperatures in these electrical wirings are higher than the regular reading, overheating is possible. Thermal surveys will track such temperature levels. It can also check for fault paths that can result in short circuits. Backup Power, AMF Panels, and Static Transfer Switches (STS) The thermal image captured during the survey will record the standby and power-on temperature of backup generators. It is crucial to ensure that temperature does not hike beyond thresholds at rest and during run-time. Other than backup power, a similar method is necessary to check static transfer switches. 
UPS Systems And Batteries Load conditions of UPS and batteries need to be measured during the survey. This is to avoid a change in power intensity during use. It is part of the primary power protection plan to ensure an uninterruptible power supply. Energy Storage Systems An energy storage system functions the same as that of a UPS system. They are of lithium-ion battery in nature. Lithium batteries especially need annual checking as they are more complex than lead-acid batteries. A thermal camera inspection is also a means to yield income as energy demands come in. Power Distribution Units (PDU) In a server rack, PDUs serve as the last PowerPoint of connection of the server. Because server racks are prone to hotspots, this can detrimental to existing PDU. Thermal overloads are the effects of these hotspots. When overload happens, it can hamper power reliability. The thermal survey can detect occurrences of these overloads. It can also pick out wiring faults that are also aggravating thermal loads. Just like power sources, a cooling system is an integral survey checkpoint. Surveying cooling units should comprehensively include cooling sub-components. It needs to check external chillers and heat exchangers as well. The survey can track failure points in the cooling system as a result. Server Racks And Containment A quick and accurate assessment is the byline of thermal imaging. These assessments are efficiency gauges on how cooling is done within racks and containment systems. One primary consideration in doing a thermal survey is the air intake and exhaust areas. These are attributed to power densities and heat generation. Server rack surveys should also identify the operational capacity of the rack’s airflow. This will provide determinable proof of whether the cold and hot aisle is mixing. Doing thermal surveys along with airflow checks can thoroughly monitor airflow in server areas. 
Raised Floors And Ceiling Voids The data center layout is also prone to hotspots and varying temperature loads. Raised floors and ceiling voids are design ideas to facilitate better cooling and airflow. However, as the operational demand rises, the layout can only do so much. This is where thermal images help identify thermal issues. These issues can derive from many factors such as cable damage, connection faults, and even wrong area layout. Integrating Thermal Camera Solutions In Data Center Operations Thermal camera monitoring has adapted to many demands in data center make-up. As such, particular specifications are intended to customize a design for specific usage. AKCP remote camera monitoring technology has the same capacity to survey all possible checkpoints in the data center. AKCP Cameras can connect through SecurityProbe base units. This can enable remote site monitoring. The base units harness the capacity of cameras to do surveillance and complement it with sensors’ ability to gauge temperature levels. AKCP cameras have access control that guarantees video security. To get the most comprehensive thermal survey, SecurityProbe is an advanced environmental sensor monitoring device. When connected with the AKCPro server, users have overall access to many monitoring parameters. This includes a video playback window, sensor events display, and standard image capture. Detect Hotspots In Your Cabinet with Thermal Map Sensor Datacenter monitoring with thermal map sensors helps identify and eliminate hotspots in your cabinets by identifying areas where temperature differential between front and rear are too high. Thermal maps consist of a string of 6 temperature sensors and an optional 2 humidity sensors. Pre-wired to be easily installed in your cabinet, they are placed at the top, middle, and bottom – front and rear of the cabinet. 
This configuration of sensors monitors the air intake and exhaust temperatures of your cabinet, as well as the temperature differential from front to rear. Integrating these solutions in data center operations addresses the increasingly complex demand for environmental monitoring. As data centers evolve into high-density facilities, an increase in thermal loads is inevitable. Preventive maintenance will only be a successful endeavor with proper solutions that supplement it. The use of thermal cameras in thermal audits and surveys is a vital addition to ensuring data center sustainability. It proves that despite being low-cost, their comprehensive monitoring coverage and in-depth image capture are an indispensable aid to improving data center operations.
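The front-to-rear differential check that a thermal map string performs can be sketched in a few lines. This is an illustrative sketch only: the sensor layout follows the description above (top, middle, bottom, front and rear), but the differential threshold and the readings are hypothetical values, not AKCP specifications.

```python
# Sketch of the thermal-map hotspot check: three front (intake) and three
# rear (exhaust) temperature sensors per cabinet. The 20 degC threshold is
# an illustrative assumption, not a vendor figure.

def find_hotspots(front, rear, max_delta=20.0):
    """Compare front and rear readings at each level and return the
    levels whose front-to-rear differential exceeds max_delta."""
    levels = ("top", "middle", "bottom")
    hotspots = []
    for level, f, r in zip(levels, front, rear):
        delta = r - f
        if delta > max_delta:
            hotspots.append((level, delta))
    return hotspots

# Example: the rear-top sensor runs 24 degC hotter than its intake.
print(find_hotspots(front=[22.0, 21.5, 21.0], rear=[46.0, 38.0, 30.0]))
# [('top', 24.0)]
```

In a real deployment the readings would come from the monitoring server rather than literals, but the comparison logic is the same.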
When considering the impact converged infrastructure and hyperconvergence are having on cabling, it's important to first understand how data center topologies are evolving. Over the past several years, the emergence of software-defined networking (SDN) has pushed data center designs to move from three-layer topologies (Figure 1) to leaf-spine topologies (Figure 2). Three-layer design is where the bottom (or access) layer connects hosts to the network. The middle layer is the distribution or aggregation layer. The core layer provides routing services to other parts of the data center as well as services outside of the data center space, such as internet access and connectivity to other data center locations. An example of this topology would be using Cisco Nexus 7000 Series as the core switch, Cisco Nexus 5000 Series as the aggregation switch and Cisco Nexus 2000 Series as the access switches. While this design is simple, it has limitations in scalability. It can be subject to bottlenecks if uplinks between layers are oversubscribed. Bottlenecks can come from latency incurred as traffic flows through each layer and from the blocking of redundant links by protocols like spanning tree. Leaf-spine is an alternative design where leaf switches form the access layer. These leaf switches are fully mesh-connected to all the spine switches. The mesh ensures that each leaf switch is no more than one connection away from any other leaf switch. This topology is easily scalable. The links between the leaf and spine layer can be either routed or switched. All links are forwarding, which means that none of the links are blocked in a path. For example, you could use technology like Transparent Interconnection of Lots of Links (TRILL) or Shortest Path Bridging (SPB) software. Converged infrastructure works by grouping multiple technology components into a single computing package.
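The scalability contrast can be made concrete with a little arithmetic: in a leaf-spine fabric every leaf connects to every spine, and the ratio of host-facing to spine-facing bandwidth on one leaf gives its oversubscription. The port counts and speeds below are illustrative assumptions, not tied to any vendor's switch.

```python
# Back-of-the-envelope sizing for a leaf-spine fabric (hypothetical numbers).

def fabric_links(leaves, spines):
    """Full mesh: each leaf has one uplink to each spine."""
    return leaves * spines

def oversubscription(host_ports, host_speed_gbps, spines, uplink_speed_gbps):
    """Ratio of southbound (host-facing) to northbound (spine-facing)
    bandwidth on a single leaf; 1.0 means non-blocking."""
    south = host_ports * host_speed_gbps
    north = spines * uplink_speed_gbps
    return south / north

print(fabric_links(leaves=8, spines=4))                      # 32 uplinks
print(oversubscription(48, 10, spines=4, uplink_speed_gbps=40))  # 3.0, i.e. 3:1
```

A 3:1 ratio like this is a common design target; driving it toward 1:1 means more or faster uplinks, which is exactly where cabling choices start to matter.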
Some of the components of a converged infrastructure may include servers, data storage devices, network hardware and software for IT management. Many customers are using converged infrastructure to build Apache Hadoop clusters. Apache Hadoop is an open-source software framework written in Java for distributed storage and distributed processing of very large data sets. Figure 3 shows a typical Hadoop cluster using Arista equipment. In the far-left core cabinet is an Arista 7500R chassis with 12-port MXP line cards. Connecting out of the MXP line cards are 24-fiber MTP® to MTP® trunks going up into MTP® coupler panels. Each MXP line card has 12 ports and each MTP® coupler panel has 12 couplers for one-to-one line card mirroring. On the backside of the MTP® coupler panels are 48-fiber MTP® to MTP® trunks with (2) 24 MTP® connectors used as backbone cabling running into the back of 24-port MTP®-LC cassette modules. Each 24-port cassette module replicates one core switch to connect a row of compute cabinets. The center cabinet, or end-of-row (EoR) or middle-of-row (MoR) cabinet, replicates four network core switches with two as main data connections and two as management ports. The cabinet on the right is the compute cabinet and has two top-of-rack (ToR) switches. This configuration uses one ToR for data at 10G or 25G and one for management at 1G or 10G. The ToR switches are connected back to the EoR cabinet using 8-fiber/4-port LC to LC cable assemblies. Below the ToR switches in the compute cabinets are servers and disk arrays. Converged infrastructure can have all the components of a data center (servers, data storage devices, networking equipment and software) contained in a group of cabinets or one cabinet as shown in Figure 4. As data center owners need more computing power, they can add infrastructure either one cabinet at a time or multiple cabinets at a time.
When end-users install these systems, in-cabinet connectivity can become a challenge due to the different media types available. These media types include Twinax Direct Attach Copper (DAC), Active Optical Cables (AOCs) and traditional transceivers and optics with patch cords and jumpers. Note that DAC can come as either passive or active. In-cabinet connectivity for Figure 4 uses Twinax DAC cabling. The DAC cables connect the servers and software appliances to the network switch or uplink switch. Generally, DAC cables (as shown in Figure 5) that are shorter than five meters are passive and do not consume power. They do nothing to the signal, only acting as a pass-through transmission medium. Prior to the signal entering the passive DAC cable, the switch handles signal conversion, conditioning, amplification, and equalization to manage skew. Properly utilizing passive DAC cables requires switches that have signal processing chipsets to maintain acceptable skew. Typically, DAC cables longer than 5 meters are active and draw power from each end, but this may vary from vendor to vendor. Active cables cost three times more than passive cables on average. SFP+ DAC is a popular choice for 10G Ethernet. It reaches up to 10 meters and offers low latency at lower cost. One of the challenges of using DAC comes with cable management. They come in standard lengths and breakouts, which can leave long service loops and cause cable congestion. Generally, DAC cables can support 1G to 1G, 10G to 10G, 40G to 40G and 100G to 100G connections. They can also break out from 40G to (4) 10G and 100G to (4) 25G. AOCs are made of fiber optic glass with optics attached to each end (Figure 6). They are more expensive than DAC but can run longer distances and higher speeds up to 100G. AOCs can support 10G to 10G, 40G to 40G and 100G to 100G connections. They can also break out from 40G to (4) 10G and 100G to (4) 25G.
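The media trade-offs above reduce, to a first approximation, to a distance-driven decision. The sketch below encodes that heuristic; the cut-offs loosely follow the article (passive DAC under 5 m, active DAC to roughly 15 m), while the 100 m AOC limit is an illustrative assumption since real limits vary by vendor and speed.

```python
# Simplified in-cabinet/in-row media selection heuristic (illustrative
# distance cut-offs, not a standards table).

def pick_media(distance_m):
    """Suggest a cable media type for a given link length in meters."""
    if distance_m < 5:
        return "passive DAC"       # no power draw, pass-through copper
    if distance_m <= 15:
        return "active DAC"        # powered ends, costlier than passive
    if distance_m <= 100:
        return "AOC"               # fiber with fixed optics on each end
    return "transceiver + fiber"   # structured cabling, forward-compatible

print(pick_media(3))    # passive DAC
print(pick_media(10))   # active DAC
print(pick_media(50))   # AOC
```

In practice, cost, upgrade plans and cable management would weigh in alongside distance, as the following section on jumpers and harnesses explains.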
A limitation that comes with using AOCs is that they only support one transmission speed and vendor type. When your next equipment upgrade occurs, the AOC cables will most likely need to be replaced. Although less expensive than individual optics and jumpers, they are not scalable to higher speeds. When using transceivers mounted in devices, copper patch cords can be used effectively to run speeds from 1G to 10G. Category 6 copper cabling (CAT6) is the current industry leader for these connections. Recently released mini CAT6 (Figure 7) cabling has reduced the diameter of standard CAT6 by 50%. Mini CAT6 also has a more flexible copper core to better route the cable in-cabinet. It can be bundled to break out and stagger on each end. One end can stagger into a ToR switch and the other end into any server or appliance ports needed. If the application uses patch panels or a ToR switch with required optical ports, fiber optic jumpers and harnesses (Figure 8) can be used effectively. One advantage of using pre-terminated fiber is that the lengths can be precise to make the required connection. Each connector can be pre-labeled to port destination for ease of installation. This greatly reduces the service loops in-cabinet and helps with managing all other connections, such as power cables and monitoring equipment. Another advantage is a longer potential lifespan. Unlike DAC cables that feature a shorter length and lower power capabilities, fiber jumpers and harnesses will be forward-compatible as speeds increase. Custom fiber optic harnesses (Figure 9) can also be made to specific lengths and breakouts and pre-labeled for ease of installation and repeatability. Hyperconvergence uses a software architecture to integrate compute, storage, networking, virtualization, and other technologies in a single hardware box. 
As data centers started to use converged infrastructure to bring compute systems into several cabinets (or just one cabinet), hyperconvergence has now reduced that footprint to 1U or 2U of rack space. This new technology has relied on the advances of SDN. The software can communicate with all of the required components in the compute cycle, not just in one location but in many locations. Along with SDN, Network Functions Virtualization (NFV) has become a predominant enterprise data center model. The implications of these models are far-reaching. As an example, an organization may have its main data center located at its headquarters environment. This institution could have a disaster recovery site in another state that is housed in a colocation facility. It could also have its email and other business applications in the cloud, using a large cloud vendor like Amazon. Each of these three locations can run the same SDN on their equipment and NFV will allow them to appear and act as the same compute system to the user. When deploying hyperconvergence, the devices used are 1U and 2U in rack space and many can be installed in a single cabinet (Figure 10). Each of these devices would typically require two or four connections to an uplink switch. This uplink switch can be mounted at the top of the cabinet or in the center of the cabinet to reduce the length of the in-cabinet connectivity. Generally, hyperconvergence has more in-cabinet connections than both three-layer topologies and converged infrastructure because of the number of converged machines used per cabinet. Understanding the connections necessary by speed and optimizing media type usage will reduce costs and best support the components in the cabinet. As data centers deploy hyperconvergence, proper planning can help prepare for in-cabinet connections. For instance, determining the best lengths for DAC cables, breakouts, harnesses, and power cords is highly recommended. 
Once power cord lengths are determined, they can be color-coded for both power strips. In addition, selecting the necessary optics per cabinet ahead of time will help with quick and efficient deployment. Optics can be programmed to work with several different vendors’ equipment to reduce the amount of part numbers to stock and manage. A single cabling supplier box can include all the needed connectivity and optics for one cabinet. Data center topologies continue to migrate from three-layer to leaf-spine. SDN is helping the evolution to converged infrastructure and hyperconvergence. As the individual components for computing are coming in smaller packages, such as 1U and 2U rack unit machines, the cabling is being compacted. The vast majority of the connections are now being made in-cabinet and accomplished with Twinax DAC cabling and/or AOCs. Planning ahead and standardizing on cabinet equipment rack positions helps define the best cable for the connection. Acquiring the cables at proper length, breakout, color and labeling will ease installation, cable congestion, cooling and repeatability. This, in turn, will reduce downtime related to connectivity. Have questions? Need help with a project? Contact us and let us know!
Endangered Species - Top 10 Endangered Animals

Here you can learn about the top 10 endangered animals from around the world. Currently, 1,556 known species in the world have been identified as endangered, or near extinction, and are under protection by government law. Estimated numbers of remaining endangered animals are given in the list below.

What are Endangered Animals?

An endangered species is a population of organisms which is at risk of becoming extinct because it is either few in numbers, or threatened by changing environmental or predation parameters. Many nations have laws offering protection to conservation reliant species: for example, forbidding hunting, restricting land development or creating preserves. Only a few of the many species at risk of extinction actually make it to the lists and obtain legal protection. Many more species become extinct, or potentially will become extinct, without gaining public notice. Being listed as an endangered species can have a negative effect since it could make a species more desirable for collectors and poachers. This effect is potentially reducible, such as in China where commercially farmed turtles may be reducing some of the pressure to poach endangered species.

- Black Rhino: less than 60 left in the wild
- Mountain Gorilla: 720 in the wild
- South China Tiger: no recent sightings in the wild
- Sumatran Orangutan: 3,700 remaining
- Giant Panda: 3,000 to 5,000 remaining
- Blue Whale: 3,000 to 5,000 remaining
- Loggerhead Sea Turtle: 60,000 remaining
- The Bonobo: 5,000 to 60,000 remaining
- Polar Bear: 20,000 to 25,000 remaining
- African Elephant: 10,000 remaining
The Human Reality of Cyber Security

Cyberspace. We live in it, we work in it, we transact in it, we exist in it. We spend enormous amounts of money on it, to make it better, to improve our lives and our work. While we strive to make it better, it remains one of the most unsafe places. It is rife with threats. Threats that we can't see, that we can't touch. Threats that are caused by adversaries thousands of kilometres away. At the click of a mouse or a stroke on a keyboard these adversaries can assume our identities and steal our information and our money. Cyber security professionals are fighting a never-ending battle. Cyber criminals seem to always be one step ahead. Statistics show that security spending has been growing around 15% year on year since 2014, as cyber security became more of a priority for many organisations. So why are we not winning this battle? The answer may be simpler than you think. As security professionals, we tend to think that the answer lies primarily in technology. This is where the problem starts. Traditionally, Information Technology (IT), and with it IT security, is technology centric. We develop and implement frameworks, standards and architectures that primarily centre around technology. We understand the risks and threats that the technology faces, but we tend not to think about the business as a whole. Furthermore, IT security is seen as being IT's job, so what happens? IT does what they know best: they protect the technology. In essence this is not wrong, as our information normally resides on technology platforms. But we forget the user, the human behind the technology. Statistics indicate that around 80% of security breaches are aided by humans, either knowingly or unknowingly. Tactics such as social engineering and phishing are by far the most widely used and are also the most successful. They exploit human vulnerabilities, not technology vulnerabilities.
Our response is to throw more technology at the problem. We spend a significant amount of money on technology to fight cyber-crime, but we don't see a decrease in cyber-crime figures. This means that there is still a problem somewhere. Is it the technology, or is it how the technology was implemented? The problem, in most cases, is not with the technology; it is in the way that we approach cyber security. To effectively protect against cyber-crime, our approach to cyber security must evolve. The conversation must change from the notion of protecting technology to a notion of protecting the organisation as a whole, which includes its technology, people and processes. To assume that you are protected simply because your technology is protected is a false reality. Technology is not the solution; it is only part of the solution. Humans must become part of our defence strategy; in fact, humans are critical to our defence strategy! Technology has not yet evolved to the level where it can actively monitor human behaviour. Sure, human actions on technology systems can be monitored and analysed, but the reality is that technology can only monitor other technology systems. If you have paper-based or manual processes in your organisation that can be targeted by cyber criminals (e.g. invoice processing and payment), then technology cannot be your first line of defence. In this instance, humans have to be your first line of defence; they must become your organisation's firewall. This is becoming more critical as we have seen a significant rise in supply chain compromise, where adversaries interfere with supply chain processes to get falsified invoices paid to fake beneficiaries. Building human firewalls is not a simple task; there are no firmware updates or security patches that can be applied to humans. It requires the organisational culture to change.
You must create a culture where all your users, from the cleaning staff to the CEO and the Board, share and own the responsibility for IT security. You must create a culture where IT security becomes a practice that is embedded across the organisation, in all the technology systems and in all the business processes and practices, whether digital or manual. Creating this culture is a journey, one that never ends. It requires continuous awareness and education, strategy, vision, innovation, leadership, commitment, a passionate IT security team, buy-in from the organisation, and most importantly, it must make your users feel empowered. The key component in building this culture is understanding the weaknesses in the organisational defences. Conducting practical exercises using penetration testing and social engineering tactics that simulate actual and plausible cyber-attacks will highlight the real gaps in your defences. So why are we not winning the battle against cybercrime? The answer is simple. We are focussing our efforts on technology while spending little or not enough effort on building human defences. We know that humans are the weakest link, we know that 80% of breaches exploit the human factor, and yet we don't spend 80% of our efforts on the humans. Think of an analogy with a Formula 1 race: imagine technology is the car, your users the driver, and your IT and security teams the pit crew. At the end of the day, the race is won with a reliable and well set-up car, a competent and practiced driver, and a good pit crew. A car can only do so much; it is the driver that decides when to turn, when to accelerate and when to brake, and it is the pit crew that maintains the car and keeps the driver informed of their and the car's performance. How good are your human drivers? Does your pit crew give your drivers enough support? Or are you relying too much on your car to win the race?
Members of underserved populations are less likely to know whether they have even been victimized by a cyber attack, and they have lower awareness of cybersecurity risks. Partly as a result, they are also less likely to access vital online services, such as banking, health services, educational programs, and other resources, which could lead to them falling behind economically, according to a survey of more than 150 San Franciscans at diverse community-based organizations across San Francisco, as well as a survey of 142 people from a comparison group. The paper, “Improving Cybersecurity Awareness in Underserved Populations,” was released by the University of California, Berkeley Center for Long-Term Cybersecurity (CLTC), and was authored by Ahmad Sultan, a recent graduate of UC Berkeley’s Goldman School of Public Policy, who partnered with officials from the City and County of San Francisco to study the cybersecurity awareness of underserved citizens. “This cybersecurity gap is a new ‘digital divide’ that needs to be addressed—with urgency—by the public and private sectors alike,” Sultan wrote. “The report is intended to help city leaders understand how they could better understand this issue in their own cities, and how they might forge public-private partnerships to address cybersecurity concerns at the system level.” Among the key findings outlined in the report: - When underserved residents were asked about their knowledge of core cybersecurity concepts, 20 percent did not know about online crime, 21 percent didn’t know about email spam, 26 percent didn’t know about computer or phone “viruses,” and 31 percent did not know about anti-virus software. Underserved residents generally suffer from low levels of confidence in their ability to protect themselves online and have low trust in technology companies to secure their data. 
As a result, they are deterred from using online services, such as banking or social services, that can bring important economic and social benefits. - A significant percentage of underserved residents have likely been victims of a cyber scam, and many may have been scammed multiple times. - Underserved residents often possess an incomplete understanding or distorted view of the online security landscape. A large number of respondents were unable to comment on cybercrime impact because they did not understand basic cybersecurity concepts. - Respondents who said they were confident in their ability to protect themselves online are often not taking basic security precautions that could justify some of that confidence. - Underserved citizens whose primary language is not English often struggle to find resources on cybersecurity in their own language, and many do not know what resources to trust. - Residents often turn to friends or relatives and receive partially accurate information at best. - Respondents generally have a poor understanding of basic cybersecurity concepts such as online scams and viruses. They also have low skill levels and motivation to follow best practices as gauged by cyber-hygiene questions. These include setting a complex password for online accounts and employing preventative methods when reading and interacting with the contents of an email. The report encourages city leaders to study their own populations' cybersecurity awareness, and to provide targeted trainings. The paper also suggests that city leaders develop resources, such as advice websites and public awareness campaigns, and it encourages them to participate in state and federal programs focused on boosting cybersecurity awareness, while also partnering with private-sector partners to help improve the practices and behaviors of underserved populations.
“While the field of cybersecurity impact evaluation is young, experiences in the field of public health can serve as a helpful guide for city leaders hoping to chip away and define the future of cybersecurity for underserved residents,” Sultan’s report advises. “Cities have opportunities to work together to develop joint cybersecurity initiatives, including digital literacy trainings to improve cybersecurity outcomes, while also creating strong, sustainable, and actionable partnerships with private-technology firms to address system-level cybersecurity concerns.”
Passwords (even strong ones) can sometimes fall into the wrong hands. To minimize the risk of granting access to an impersonator who might have managed to obtain someone else's username and password, you might need to employ what is known as two factor authentication (2FA). What is it?

What two factor authentication is not

Two factor authentication or 2FA is a combination of two different methods of authentication. Password authentication, for example, is one method. So if you add another method to that, then you already have 2-step authentication? Not really. You see, password authentication is a knowledge-based method. It requires something the user knows, i.e., his password. If the second method of authentication is still knowledge based, say a secret question like "What is your mother's maiden name?", then the combination wouldn't qualify as two factor authentication. Combining two passwords likewise does not qualify as 2-step authentication. Again, that's because it authenticates a person based on what the person knows. No matter how many secret questions you ask the user, the security of your authentication wouldn't increase that much. That's because there are now many ways for an attacker to obtain the information only the user is supposed to know. In fact, that's why hackers were still able to get past the IRS' multi-step Get Transcript authentication. They first aggregated the needed information from other sources (like social media sites). Once they had the information they needed, passing through the question-based authentication process became a walk in the park.

Factors of authentication

There are currently three commonly used factors of authentication:

Knowledge factors - This is the factor we were discussing earlier. It authenticates based on something the user knows. Most of the time, that something is a password. It can also be a personal identification number (PIN) or the answer to a secret question.
Possession factors - As its name implies, a possession factor of authentication authenticates based on something the user has. Examples of this "something" include: a private key, a client digital certificate, a smart card, or an ATM card.

Inherence factors - Finally, an inherence factor of authentication authenticates based on something inherent to the user. The biometric methods that we see in movies, like retina scans, voice recognition, and fingerprint reads, are examples of this type of authentication.

It is when you combine any two of these three factors that you arrive at 2FA. For example, all these combinations are considered 2FA:

- password and retina scan;
- password and thumbprint read;
- private key and password;
- card and retina scan

More specific examples of two factor authentication

Technically speaking, an ATM transaction already exemplifies two factor authentication. The magnetic stripe at the back of the card contains the card owner's name and account number. As soon as the card is inserted into the ATM machine, the machine will automatically recognize the card's owner. Ideally, that card should only be in the possession of the card owner. So, as you can see, this part of the ATM authentication process is based on a possession factor. At this point, it's still just single factor authentication. However, after the user enters his/her PIN, which is a knowledge factor of authentication, the entire process now qualifies as two factor authentication.

Another example is mobile phone two factor authentication. You're probably familiar with the ones used by Microsoft, Google and Apple, wherein you're sent a one-time code to verify your identity. Another variety of mobile 2FA is the one used by JSCAPE MFT Server, which requires the user to enter his/her username and password upon login and then reply personally to a phone call that confirms whether the login was legit.
The advantage of using two factor authentication

If it still isn't obvious at this point, the advantage of using 2FA is that it's more difficult to deceive. If we recall the IRS breach (see link above), the attack compromised no less than 330,000 accounts. Because the authentication process was purely knowledge-based, all the attackers had to do was obtain the needed information. In this day and age, where almost every bit of information has been digitized and made accessible through networks, that's no longer so hard to do. In fact, many usernames and passwords, obtained from previous hacks, are already shared or sold in hacking forums and other dark corners of the web. The answers to those secret questions, on the other hand, can likewise be mined from social media sites. The hackers would have had a harder time if, instead of those secret questions, the IRS had reinforced the password authentication with a possession factor like phone authentication, or perhaps a private key or digital certificate. Perhaps difficult to implement. But also difficult to hack. Your choice.
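The "one-time code" possession factor mentioned above is most often implemented as a time-based one-time password (TOTP, RFC 6238): server and client derive the same short-lived code from a shared secret and the current time. Here is a minimal standard-library sketch of that derivation; real deployments should use a vetted library rather than hand-rolled crypto.

```python
# Minimal TOTP (RFC 6238) sketch: HMAC-SHA1 over a 30-second time counter,
# dynamically truncated to a short decimal code.
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """Derive a one-time code from a shared secret and a Unix timestamp."""
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", time 59 -> "287082"
print(totp(b"12345678901234567890", for_time=59))  # 287082
```

The code only proves possession of the secret, which is why it is paired with a password (knowledge factor) rather than used alone.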
A Quick View of 10-Gigabit Ethernet

The improvements in 10GbE technology are extending its reach beyond enterprise-grade users to the broader market to replace 1 Gigabit Ethernet. This article introduces the types of 10GbE and its market requirements.

What Is 10 Gigabit Ethernet?

10 Gigabit Ethernet is a telecommunications technology that transmits data packets over Ethernet at a rate of 10 billion bits per second. 10GbE standards were first defined by IEEE 802.3ae in 2002. To implement different 10GbE physical layer standards, PHY modules including XENPAK (and the related X2 and XPAK), XFP, and SFP+ were released, specified by multi-source agreements (MSAs). The sizes of these five modules have become smaller and smaller. SFP+ is the newest module standard and has become the most popular socket on 10GbE systems.

Types of 10 Gigabit Ethernet

There are two types of 10GbE network, which differ by the type of cable used to connect devices.

Fiber-based 10 Gigabit Ethernet

There are two basic types of optical fiber used for 10 Gigabit networks: single-mode (SMF) and multi-mode (MMF). In SMF, light follows a single path through the fiber, while in MMF it takes multiple paths, resulting in differential mode delay (DMD). SMF is used for long-distance communication and MMF is used for distances of less than 300m (for more information about SMF and MMF, see Fiber Optic Cable Types: Single Mode vs Multimode Fiber Cable). 10 Gigabit Ethernet can also run over active optical cables (AOC).
| Standard | Specification | Wavelength | Fiber Type | Connector | Max Reach |
| --- | --- | --- | --- | --- | --- |
| 10GBASE-SR/SW | IEEE 802.3ae-2002 | 850nm | MMF | Duplex LC/SC | 300m over OM3; 400m over OM4 |
| 10GBASE-LRM | IEEE 802.3aq-2006 | 1310nm | MMF/SMF | Duplex LC/SC | 220m over OM3 MMF; 300m over SMF |
| 10GBASE-LR/LW | IEEE 802.3ae-2002 | 1310nm | SMF | Duplex LC/SC | 10km |
| 10GBASE-ER/EW | IEEE 802.3ae-2002 | 1550nm | SMF | Duplex LC/SC | 40km |
| 10GBASE-ZR/ZW | proprietary (non-IEEE) | 1550nm | SMF | Duplex LC/SC | 80km |
| 10GBASE-LX4 | IEEE 802.3ae-2002 | 1310nm | MMF/SMF | Duplex LC/SC | 300m/10km |
| 10GBASE-PR | IEEE 802.3av-2009 | TX: 1270nm; RX: 1577nm | SMF | SC | 20km |

Fiber-based 10GbE can also be divided into three types according to application scenario: 10 Gigabit LAN Ethernet, 10 Gigabit WAN Ethernet, and 10 Gigabit PON over optical fiber. 10GBASE-SR, 10GBASE-LR, 10GBASE-LRM, 10GBASE-ER, 10GBASE-ZR, and 10GBASE-LX4 are used in the Local Area Network (LAN). 10GBASE-SW, 10GBASE-LW, 10GBASE-EW, and 10GBASE-ZW are applied to the Wide Area Network (WAN); they are set to work in OC-192/STM-64 SDH/SONET. And 10GBASE-PR is a 10 Gigabit Ethernet PHY for passive optical networks.

Copper-based 10 Gigabit Ethernet

10 Gigabit Ethernet can run over twinaxial cabling, twisted pair cabling, and backplanes. Copper-based 10GbE includes 10GBASE-CX4, 10GBASE-T, 10GBASE-KX4, 10GBASE-KR, and SFP+ DAC.

| Standard | Specification | Media | Configuration | Max Reach |
| --- | --- | --- | --- | --- |
| 10GBASE-CX4 | IEEE 802.3ak-2004 | Twinax copper | 4 lanes | 15m |
| 10GBASE-T | IEEE 802.3an-2006 | CAT6A or 7 UTP | Twisted pair | 100m |
| 10GBASE-KX4 | IEEE 802.3ap-2007 | Improved FR-4 | 4 lanes | 1m |
| 10GBASE-KR | IEEE 802.3ap-2007 | Improved FR-4 | Serial | 1m |
| 10GBASE-CR | SFF-8431-2006 | Twinax cable | Twisted pair | 15m |
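The reach figures in the tables above lend themselves to a quick shortlisting step when planning a link. The sketch below uses the nominal maxima quoted in the article for a few common fiber PHYs; actual reach depends on fiber grade, connectors and vendor, so treat it as a starting point only.

```python
# Shortlist 10GbE fiber PHYs whose nominal reach covers a given link length.
# Reach values are the article's nominal maxima, not guaranteed figures.

REACH_M = {
    "10GBASE-SR (OM3)": 300,
    "10GBASE-SR (OM4)": 400,
    "10GBASE-LR": 10_000,
    "10GBASE-ER": 40_000,
    "10GBASE-ZR": 80_000,
}

def candidates(distance_m):
    """Return standards whose nominal reach covers the link, shortest reach first."""
    fits = [(reach, name) for name, reach in REACH_M.items() if reach >= distance_m]
    return [name for reach, name in sorted(fits)]

print(candidates(350))
# ['10GBASE-SR (OM4)', '10GBASE-LR', '10GBASE-ER', '10GBASE-ZR']
```

The shortest-reach candidate is usually also the cheapest optic, which is why listing by reach is a reasonable first sort order.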
10 Gigabit Ethernet in the LAN

In the LAN markets, applications typically include in-building computer servers, building-to-building clusters, and data centers. In this case, the distance requirement is usually between 100m and 300m. In the medium-haul markets, applications usually include campus backbones, enterprise backbones, and storage area networks. In this case, the distance requirement is moderate, usually between 2km and 20km.

Figure 1: Example of 10 Gigabit Ethernet in an Expanded LAN

10 Gigabit Ethernet in the MAN/WAN

The WAN markets typically include Internet service providers and Internet backbone facilities. A Point of Presence (PoP) is typically considered the node that links a long-distance network to a serving area, giving a service provider or enterprise a presence in the area and giving area users an economical way to access the provider's services. The demand for WAN-compatible 10GbE in service provider PoPs already exists, particularly as eCommerce/eBusiness applications and high-speed Ethernet-based residential Internet access markets accelerate. Most of the access points for long-distance transport networks require the OC-192c data rate.

Figure 2: Example of 10 Gigabit Ethernet in a MAN

As the boundaries of LAN, MAN, and WAN continue to blur, 10 Gigabit Ethernet provides high performance, high efficiency, and low cost at all scales, delivering faster speeds with less congestion and latency. Therefore, 10GbE will be a low-cost solution for high-speed and reliable data networking, and it will dominate the LAN, MAN, and WAN markets in the near future.
Researchers from MIT, Harvard University, and Seoul National University have developed an autonomous robot that mimics the movement of earthworms, MIT News reports. The soft autonomous robot, called Meshworm, crawls across surfaces by contracting segments of its body. MIT Mechanical Engineering Esther and Harold E. Edgerton Assistant Professor Sangbae Kim said the Meshworm is designed to navigate rough terrain and squeeze into tight spaces. Kim added that the Meshworm can remain unscathed even when stepped on or struck with a hammer. Jennifer Chu wrote that the Defense Advanced Research Projects Agency-sponsored invention stretches and contracts with heat thanks to its artificial muscle, made of nickel and titanium wires. Kellar Autumn, a biology professor at Lewis and Clark College, said the Meshworm's design points toward future endoscopes, implants, and prosthetics. Autumn added that the technology could be used in the next decade in mobile phones, computers, and automobiles. Other researchers who worked on the Meshworm project include MIT graduate student Sangok Seok, postdoc Cagdas Denizel Onal, Harvard Assistant Professor Robert J. Wood, Seoul National University assistant professor Kyu-Jin Cho, and MIT Computer Science and Artificial Intelligence Laboratory director Daniela Rus. Credit: MIT News
An Overview of DWDM Technology and DWDM System Components

Telecommunications makes wide use of optical techniques in which the carrier wave belongs to the classical optical domain. The wave modulation allows transmission of analog or digital signals up to a few gigahertz (GHz) or gigabits per second (Gbps) on a carrier of very high frequency, typically 186 to 196 THz. In fact, the bitrate can be increased further, using several carrier waves that propagate without significant interaction on a single fiber. It is obvious that each frequency corresponds to a different wavelength. Dense Wavelength Division Multiplexing (DWDM) is reserved for very close frequency spacing. This blog covers an introduction to DWDM technology and DWDM system components. The operation of each component is discussed individually, and the whole structure of a fundamental DWDM system is shown at the end of this blog.

Introduction to DWDM Technology

DWDM technology is an extension of optical networking. DWDM devices (multiplexer, or Mux for short) combine the output from several optical transmitters for transmission across a single optical fiber. At the receiving end, another DWDM device (demultiplexer, or Demux for short) separates the combined optical signals and passes each channel to an optical receiver. Only one optical fiber is used between DWDM devices (per transmission direction). Instead of requiring one optical fiber per transmitter and receiver pair, DWDM allows several optical channels to occupy a single fiber optic cable. As shown below, by adopting high-quality AAWG Gaussian technology, FS DWDM Mux/Demux provides low insertion loss (3.5dB typical) and high reliability. With the upgraded structure, these DWDM multiplexers and demultiplexers also offer easier installation. A key advantage of DWDM is that it is protocol- and bitrate-independent. DWDM-based networks can transmit data in IP, ATM, SONET, SDH, and Ethernet formats.
Therefore, DWDM-based networks can carry different types of traffic at different speeds over an optical channel. Voice transmission, email, video, and multimedia data are just some examples of services that can be simultaneously transmitted in DWDM systems. DWDM systems space channels 0.4nm or 0.8nm apart in wavelength. DWDM is a type of Frequency Division Multiplexing (FDM). A fundamental property of light states that individual light waves of different wavelengths may coexist independently within a medium. Lasers are capable of creating pulses of light with a very precise wavelength. Each individual wavelength of light can represent a different channel of information. By combining light pulses of different wavelengths, many channels can be transmitted across a single fiber simultaneously. Fiber optic systems use light signals within the infrared band (1mm to 750nm wavelength) of the electromagnetic spectrum. Frequencies of light in the optical range of the electromagnetic spectrum are usually identified by their wavelength (lambda), although frequency provides a more precise identification.

DWDM System Components

A DWDM system generally consists of five components: optical transmitters/receivers, DWDM Mux/Demux filters, optical add/drop multiplexers (OADMs), optical amplifiers, and transponders (wavelength converters).

Optical Transmitters/Receivers

Transmitters are described as DWDM components since they provide the source signals which are then multiplexed. The characteristics of optical transmitters used in DWDM systems are highly important to system design. Multiple optical transmitters are used as the light sources in a DWDM system. Incoming electrical data bits (0 or 1) trigger the modulation of a light stream (e.g., a flash of light = 1, the absence of light = 0). Lasers create pulses of light. Each light pulse has an exact wavelength (lambda) expressed in nanometers (nm).
In an optical-carrier-based system, a stream of digital information is sent to a physical layer device, whose output is a light source (an LED or a laser) that interfaces with a fiber optic cable. This device converts the incoming digital signal from electrical (electrons) to optical (photons) form (electrical-to-optical conversion, E-O). Electrical ones and zeroes trigger a light source that flashes light into the core of an optical fiber (e.g., light = 1, little or no light = 0). E-O conversion is non-traffic-affecting: the format of the underlying digital signal is unchanged. Pulses of light propagate across the optical fiber by way of total internal reflection. At the receiving end, an optical sensor (photodiode) detects light pulses and converts the incoming optical signal back to electrical form. A pair of fibers usually connects any two devices (one transmit fiber, one receive fiber). DWDM systems require very precise wavelengths of light to operate without interchannel distortion or crosstalk. Several individual lasers are typically used to create the individual channels of a DWDM system. Each laser operates at a slightly different wavelength. Modern systems operate with 200, 100, and 50-GHz spacing. Newer systems that support 25-GHz and 12.5-GHz spacing are being investigated. Generally, DWDM transceivers (DWDM SFP, DWDM SFP+, DWDM XFP, etc.) operating at 100-GHz and 50-GHz spacing can be found on the market nowadays.

DWDM Mux/Demux Filters

Multiple wavelengths (all within the 1550nm band) created by multiple transmitters and operating on different fibers are combined onto one fiber by way of an optical filter (Mux filter). The output signal of an optical multiplexer is referred to as a composite signal. At the receiving end, an optical drop filter (Demux filter) separates all of the individual wavelengths of the composite signal out to individual fibers. The individual fibers pass the demultiplexed wavelengths to as many optical receivers.
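The relation between frequency spacing and wavelength spacing quoted earlier (100 GHz corresponding to roughly 0.8nm near 1550nm) can be checked with λ = c/f. A minimal sketch, assuming the standard ITU grid anchor of 193.1 THz (the function name and channel numbering are my own):

```python
C = 299_792_458.0  # speed of light in m/s

def channel_wavelength_nm(n, spacing_ghz=100.0, anchor_thz=193.1):
    """Wavelength of DWDM grid channel n (n = 0 is the 193.1 THz anchor)."""
    f_hz = anchor_thz * 1e12 + n * spacing_ghz * 1e9
    return C / f_hz * 1e9

lam0 = channel_wavelength_nm(0)
lam1 = channel_wavelength_nm(1)
print(round(lam0, 2))         # about 1552.52 nm
print(round(lam0 - lam1, 2))  # adjacent 100-GHz channels sit about 0.8 nm apart
```

Halving the spacing to 50 GHz halves the wavelength spacing to about 0.4nm, matching the 0.4nm/0.8nm figures quoted above.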
Typically, Mux and Demux (transmit and receive) components are contained in a single enclosure. Optical Mux/Demux devices can be passive. Component signals are multiplexed and demultiplexed optically, not electronically, therefore no external power source is required. The figure below shows bidirectional DWDM operation. N light pulses of N different wavelengths carried by N different fibers are combined by a DWDM Mux. The N signals are multiplexed onto a pair of optical fibers. A DWDM Demux receives the composite signal and separates each of the N component signals, passing each to a fiber. The transmit and receive signal arrows represent client-side equipment. This requires the use of a pair of optical fibers; one for transmit, one for receive.

Optical Add/Drop Multiplexers

Optical add/drop multiplexers (OADMs) perform an add/drop function, in contrast to Mux/Demux filters. Here is a figure that shows the operation of a 1-channel DWDM OADM. This OADM is designed to add or drop only optical signals with a particular wavelength. From left to right, an incoming composite signal is broken into two components, drop and pass-through. The OADM drops only the red optical signal stream. The dropped signal stream is passed to the receiver of a client device. The remaining optical signals that pass through the OADM are multiplexed with a new add signal stream. The OADM adds a new red optical signal stream, which operates at the same wavelength as the dropped signal. The new optical signal stream is combined with the pass-through signals to form a new composite signal. OADMs designed to operate at DWDM wavelengths are called DWDM OADMs, while those operating at CWDM wavelengths are called CWDM OADMs. Both of them can be found on the market now.

Optical Amplifiers

Optical amplifiers boost the amplitude of, or add gain to, optical signals passing on a fiber by directly stimulating the photons of the signal with extra energy. They are "in-fiber" devices.
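The 1-channel add/drop operation just described can be sketched in a few lines, modeling the composite signal as a mapping from wavelength to bitstream. This is an illustrative model only, not any vendor API:

```python
def oadm(composite, drop_wavelength, add_stream):
    """Drop one wavelength from the composite signal and add a new stream on it."""
    passthrough = dict(composite)                     # pass-through channels
    dropped = passthrough.pop(drop_wavelength, None)  # handed to the client receiver
    passthrough[drop_wavelength] = add_stream         # re-inserted on the same lambda
    return passthrough, dropped

composite = {1550.12: "A", 1550.92: "B", 1551.72: "C"}
out, dropped = oadm(composite, 1550.92, "B2")
print(dropped)       # B
print(out[1550.92])  # B2
```

Note that the channels not addressed by the OADM pass through unchanged, exactly as in the figure: only the one wavelength is terminated and re-originated.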
Optical amplifiers amplify optical signals across a broad range of wavelengths. This is very important for DWDM system applications. Erbium-Doped Fiber Amplifiers (EDFAs) are the most commonly used type of in-fiber optical amplifier. EDFAs used in DWDM systems are sometimes called DWDM EDFAs, as distinguished from those used in CATV or SDH systems. To extend the transmission distance of your DWDM system, you can choose from different types of optical amplifiers, including DWDM EDFA, CATV EDFA, SDH EDFA, EYDFA, and Raman amplifiers. Here is a figure that shows the operation of a DWDM EDFA.

Transponders (Wavelength Converters)/OEO

Transponders convert optical signals from one incoming wavelength to another outgoing wavelength suitable for DWDM applications. Transponders are Optical-Electrical-Optical (O-E-O) wavelength converters; because a transponder performs an O-E-O operation to convert wavelengths of light, some people call them "OEOs" for short. Within the DWDM system, a transponder converts the client optical signal back to an electrical signal (O-E) and then performs either 2R (Reamplify, Reshape) or 3R (Reamplify, Reshape, and Retime) functions. The figure below shows bidirectional transponder operation. A WDM transponder is located between a client device and a DWDM system. From left to right, the transponder receives an optical bit stream operating at one particular wavelength (1310nm). The transponder converts the operating wavelength of the incoming bitstream to an ITU-compliant wavelength and transmits its output into a DWDM system. On the receive side (right to left), the process is reversed. The transponder receives an ITU-compliant bitstream and converts the signals back to the wavelength used by the client device. Transponders are generally used in WDM systems (2.5 to 40 Gbps), including not only DWDM systems but also CWDM systems. WDM transponders (OEO converters) come with different module ports (SFP to SFP, SFP+ to SFP+, XFP to XFP, etc.).
How DWDM System Components Work Together with DWDM Technology

Given that a DWDM system is composed of these five components, how do they work together? The following steps give the answer (the whole structure of a fundamental DWDM system is shown in the figure below):

1. The transponder accepts input in the form of a standard single-mode or multimode laser pulse. The input can come from different physical media and different protocols and traffic types.
2. The wavelength of the transponder input signal is mapped to a DWDM wavelength.
3. DWDM wavelengths from the transponder are multiplexed with signals from the direct interface to form a composite optical signal which is launched into the fiber.
4. A post-amplifier (booster amplifier) boosts the strength of the optical signal as it leaves the multiplexer.
5. An OADM is used at a remote location to drop and add bitstreams of a specific wavelength.
6. Additional optical amplifiers can be used along the fiber span (in-line amplifiers) as needed.
7. A pre-amplifier boosts the signal before it enters the demultiplexer.
8. The incoming signal is demultiplexed into individual DWDM wavelengths.
9. The individual DWDM lambdas are either mapped to the required output type through the transponder or passed directly to client-side equipment.

Using DWDM technology, DWDM systems provide the bandwidth for large amounts of data. In fact, the capacity of DWDM systems is growing as technologies advance that allow closer spacing, and therefore higher numbers, of wavelengths. But DWDM is also moving beyond transport to become the basis of all-optical networking with wavelength provisioning and mesh-based protection. Switching at the photonic layer will enable this evolution, as will the routing protocols that allow light paths to traverse the network in much the same way as virtual circuits do today. With the development of technologies, DWDM systems may need more advanced components to exert greater advantages.
By Kathryn M. Farrish, CISSP

Common Controls are security controls whose implementation results in a security capability that is inheritable by multiple information systems (IS). For example, the information systems hosted in a data center will typically inherit numerous security controls from the hosting provider, such as:
- Physical and environmental security controls
- Network boundary defense security controls

Other inheritance scenarios include agency- or departmental-level policies or procedures that can be leveraged by all IS within the organization, organization-wide security monitoring capabilities, public key infrastructures (PKI), etc. Organizations implementing common controls are referred to as Common Control Providers. The obvious benefit of common controls is to eliminate the need for redundant development and operation of security controls by multiple system owners. Additionally, common controls provide for a uniformity that would just not be possible if each system owner implemented its own controls independently.

In order for an IS to inherit a particular security control, the following should be true:
- The control is implemented and managed outside the system boundary of the inheriting IS
- The Common Control Provider has designated the particular control as inheritable
- The Common Control Provider has an Authorization to Operate (ATO) or equivalent evidence that the control is in fact in place

It is possible for an IS to inherit just part of a control from a Common Control Provider, with the remainder of the control provided within the system boundary. This is referred to as a hybrid control. Also, it is possible for an IS to inherit a control from two or more Common Control Providers. For example, an IS whose system boundary spans multiple sites (i.e., a primary site and an alternate processing site) will most likely inherit physical and environmental security controls from the data center providers at both sites.

IT Dojo offers a comprehensive course on the transition from DIACAP to RMF.
Please take a look at our RMF training courses here.
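The inheritance conditions described above can be expressed as a small predicate. This toy model is my own illustration (class, field, and control names are hypothetical), not any real RMF tooling:

```python
from dataclasses import dataclass, field

@dataclass
class CommonControlProvider:
    name: str
    has_ato: bool  # ATO or equivalent evidence the controls are in place
    inheritable: set = field(default_factory=set)  # controls designated inheritable

def can_inherit(control, provider):
    """An IS may inherit a control only if the provider (which manages it
    outside the inheriting system's boundary) has designated it inheritable
    and holds an ATO backing it."""
    return provider.has_ato and control in provider.inheritable

dc = CommonControlProvider("Data center", has_ato=True,
                           inheritable={"PE-3 physical access",
                                        "SC-7 boundary protection"})
print(can_inherit("PE-3 physical access", dc))  # True
print(can_inherit("CP-9 system backup", dc))    # False: not designated inheritable
```

A hybrid control would then be one where this predicate covers only part of the control, with the remainder implemented inside the inheriting system's boundary.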
Addresses in Europe and North America are based on streets: the streets with names and numbers are the grid, and the blocks are the unnamed spaces between them. But in Japan the streets have no names. Addresses are defined by numbered blocks. Streets are the narrow nameless spaces between the blocks. You first find the neighborhood, then the block, and then the building on that block.

The good news is that maps and assistance are available. Look for a kōban, a small police post, like the one seen here. It will have a map. This map shows Kaminarimon, within Taito City, within Tōkyō. It's an area four blocks north to south and about 7 blocks east to west if we just count the larger streets, 13 blocks of varying length if we walk along the central east-west street and count every side street. If you zoom in a step or two you will begin to see block numbers. But there are almost no street names! Here is the map on the side of the Sugabashi Police Box. The next police box and map is just 200 meters to the north.

Tōkyō is divided into 23 wards or ku. One example is Taitō-ku. Within Taitō ward there is Asakusa machi. The machi or districts are then divided into chōme or neighborhoods. Blocks are numbered in a somewhat organized order. Maybe a clockwise spiral, or maybe scanning back and forth. Buildings on each block are then ordered by age. The first one built is #1, then the second oldest is #2, even if it's on the opposite side of the block. Sometimes blocks are renumbered to put the building numbers in increasing order as you walk clockwise around the block. In that case they may skip some numbers for later use if it seems likely or at least possible that new buildings might be inserted. Detailed maps go down to block numbering, possibly to building numbers. Here's a sign with added rōmaji, next to a kappa statue. We're in Taitō city or Taitō-ku, in the Higashi-Ueno machi, 6-chōme, at a corner of block 30.
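The hierarchy just described — ward, district, chōme, block, building — maps naturally onto a small data structure. A sketch (the field names are my own; the short form produced is the conventional chōme-block-building notation):

```python
from dataclasses import dataclass

@dataclass
class JapaneseAddress:
    ku: str        # ward, e.g. "Taitō-ku"
    machi: str     # district, e.g. "Higashi-Ueno"
    chome: int     # neighborhood within the district
    block: int     # numbered block -- the addressable unit, not a street
    building: int  # building number on that block, assigned by age or position

    def __str__(self):
        # Short form: district, then chōme-block-building, then ward
        return f"{self.machi} {self.chome}-{self.block}-{self.building}, {self.ku}"

addr = JapaneseAddress("Taitō-ku", "Higashi-Ueno", 6, 30, 1)
print(addr)  # Higashi-Ueno 6-30-1, Taitō-ku
```

Note that nothing in the structure names a street: you locate the block, then search its perimeter for the building number.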
And above, another, in Taitō-ku, Nishi-Asakusa machi, 3-chōme, at the corner of block #5. Zooming in long after I was there, I feel a little less stupid when I see that the sign includes hiragana phonetically spelling out the kanji.

This kōban is close to Tōkyō Station, the city's main train station. Below is the Kuramae police box, like a tiny castle with a green roof. The Toei Kuramae train and subway station is nearby, off to our left in this view. The Tōkyō Skytree is just across the Sumida river. To its right and closer are the Asahi Beer Hall and the Flamme d'Or, colloquially referred to as kin no unko or 金 の うんこ, the "Golden Poo".

I had a navigational misadventure here. I had a guidebook that mentioned a bar somewhere in this area. It listed its name, explaining that it was small and somewhat obscure, and located upstairs in a building. Then it listed its address. However, the address was given in terms of the name of the large street I'm looking across in the above picture, plus what purported to be a street number. I went up and down the street a few blocks each way, and of course the building numbers were chaotic. Each listed the building's rank by age for its block. Cross a side street and you're on a new block with a new chaotic number sequence. I went to the kōban to ask directions. I had a photocopy of the relevant pages of the guide book. I had it open to the page with the business name highlighted, and I also had a map out. I was obviously a person looking for something. The policeman outside tried to help, experimentally sounding out the rōmaji name of the place I was looking for. "Hai, hai! Bar-u!", I said. "Yes, yes! A bar!" But then I pointed out the address within the text. He made a puzzled face and sucked his teeth. He took me inside to ask his partner, who was similarly stumped by the mysterious request.
They got out a neighborhood directory, a book that looked up to the task of listing every resident of Tōkyō and their telephone number, and started going through it. The directory was the size of the Chicago telephone directory, back when they printed such things, made from similarly thin pulp pages. Even if it covered all of Taitō ward, it would be adequate to list every single business, every resident, every resident's cat, and who knows how much more. After 15 minutes of research and the use of some older copies of the directory, they determined that the location had been horribly misrepresented. But it was only about 50 meters to the north. They wrote the actual block and building number in my guidebook. I thanked them profusely, and went to investigate. I went up and down the staircase in the 8-story building. Several of the floors housed a restaurant or a bar, but none were the one I was looking for. I had a four-year-old guidebook, and the place no longer existed.

It's the cyclic rebirth of Buddhism and the impermanence of Shintō. All 125 of the shrines at the Grand Shrine complex at Ise are reconstructed every 20 years. The new shrine is built next to the current one, the kami or spirit transferred to the new shrine, and the old one disassembled.

Visiting the Asakusa District in Tōkyō

Here is a koban next to the Kaminari-mon or the Thunder Gate, the great outer gate of the Sensō-ji temple complex in the Asakusa district of Tōkyō. This is the second most visited religious facility in the world. Plenty of people, both Japanese and foreign, will have questions for the police officers in this koban. I was up early, still jet-lagged, and took this picture early in the morning before the crowds arrived. You come across maps of small areas within neighborhoods, like these on the fence around a parking lot. The maps can name businesses and organizations by building, and as with the two to the right, serve as advertising.
A modern society requires infrastructure — water, electricity, liquid waste disposal, data both analog and digital — and these are generally carried by pipes and wires and optical fibres run through underground conduits. Manhole covers provide maintenance access to these essential arteries. At the very least they should be round to ensure they can't fall into the hole. In Japan the manufacturers soon realized that manhole covers should be tapered, so they don't bang and rattle when a vehicle drives over them. It's vital for a manhole cover to have a textured pattern or design. In wet weather, a smooth steel manhole cover would be extremely hazardous for pedestrians and two-wheeled vehicles such as bicycles, scooters, and motorcycles. There should be lines in multiple directions to prevent slips. So — why not give them interesting designs? Below is a manhole cover in Ueno park in Tōkyō showing a tree with cherry blossoms.

Custom Covers Appear

Some of Japan's larger cities had developed their own original cover designs by the late 1950s. There was the "Tōkyō design" and the "Nagoya design", developed within those cities and used elsewhere. In the early to mid 1980s, only 60% of Japanese households were connected to municipal sewer systems. The construction ministry was trying to build up public support for the expensive public projects of expanding sewer systems. Yasutake Kameda was a high-ranking bureaucrat of the construction ministry. He came up with the idea of locally customized manhole covers, popularizing the subterranean and under-appreciated infrastructure. The Japan Ground Manhole Association, a Tōkyō-based alliance of 32 companies making manhole covers, reports that Kameda's original idea has led to nearly 95% of the 1,780 municipalities in Japan now having their own custom manhole cover designs. Trees, flowers, animals, and local sites of natural beauty and touristic interest dominate.
Asakusa District in Tōkyō

Asakusa is a district within Taito Ward of Tōkyō. Asakusa is home to the Sensō-ji Buddhist temple, the second most visited religious site in the world. The manhole covers there have the classic "Tōkyō design". These feature two trees common in Japan. First, the larger part with five petals is the sakura or the somei yoshino cherry blossom. The second tree is the ginkgo biloba with its triangular or fan-shaped leaves spaced between the cherry blossom petals. The ginkgo is very similar to fossils from the Middle Jurassic period approximately 170 million years ago. Then, around the exterior are 13 vaguely shown birds. These represent the black-headed gull, a favorite subject of poetry and painting in Tōkyō. There are 13 in this one; on the lid in the second picture, the center one is replaced by "T-25", indicating the standard 600 mm diameter. The "Tōkyō design" with the cherry blossom first appeared in the 1950s, and these covers were installed throughout Japan. The second one, in the Kappabashi district just west of the Sensō-ji temple complex, is slightly newer. These newer covers have a central strip with four octagons with alphanumeric codes. This 4-octagon code system began with a manhole-lid-laying ceremony on March 28, 2001. This one says:

The octagons are colored, although the paint has mostly faded on this one. The first one is still faintly yellow, indicating that it covers a sewage line. Blue indicates a rainwater drain. The first octagon, 35 here, is the number of this cover on this line through this ward. The second and third octagons, 61 on this one, were originally painted green and indicate a unique code in the control chart. There are about 470,000 manhole lids in the 23 wards of Tōkyō, and four alphanumeric characters give 36 × 36 × 36 × 36 = 1,679,616 possible codes. Someone has attempted to make a manhole lid database in which you could look up code numbers and find location and other information.
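The capacity claim is easy to check with a line of arithmetic, and the code splits naturally into fields. A sketch (the function and its field names are my own; the year octagon's decoding rule, yellow for the 1900s and blue for the 2000s, is described in the next paragraph):

```python
def decode_lid_code(lid_no, chart_code, year_2digit, century_blue=False):
    """Decode a Tōkyō four-octagon lid code: lid number on the line,
    control-chart code, and install year (yellow = 1900s, blue = 2000s)."""
    year = (2000 if century_blue else 1900) + year_2digit
    return {"lid_number": lid_no, "chart_code": chart_code, "installed": year}

print(decode_lid_code("35", "61", 27))  # the Kappabashi lid: a pipe laid in 1927

# Four alphanumeric characters allow 36**4 distinct codes -- comfortably more
# than the roughly 470,000 lids in Tōkyō's 23 wards.
print(36 ** 4)  # 1679616
```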
The fourth octagon, reading 27 here, indicates the year the pipe was installed, painted yellow for the 1900s and blue for the 2000s. 27 means that this pipe was laid in 1927! Tōkyō was devastated by the incendiary bombing raids of 1945, especially Operation Meetinghouse, a raid on the night of 9-10 March, 1945. About 16 square miles of the city was destroyed; about 100,000 people were killed, a million injured, and a million left homeless. It was the single deadliest air raid of World War II, larger than the incendiary raids on Dresden and Hamburg, larger than the nuclear bombings of Hiroshima and Nagasaki. Almost everything above ground in the Asakusa district was destroyed, but this underground pipe survived and was put back into use. It is still used today. Here's a rather plain cover near the Imperial Palace. A waste line installed in 2010. The poster below is at the Bureau of Sewerage headquarters in Tōkyō, along the Sumida river between Asakusa and Akihabara.

Ise is home to Shintō's most sacred shrine. Japan's earliest "histories", which are really collections of myths and legends, describe how early deities created Japan and the rest of the universe. The universe was created here, and the deities live here. Or so the stories go. Of course everyone wants to see the Outer Shrine, near the center of town, and the Inner Shrine. The manhole covers in Ise commemorate the many visitors. Naikū, the main shrine at the Inner Shrine complex, is believed to be inhabited by Amaterasu, the Sun Goddess. The shrine is believed to house the Yata no Kagami, a mirror that is the most precious of the Three Sacred Treasures, the Imperial Regalia of Japan. Elite priests present the three sacred objects to the new Emperor during the enthronement ceremony. This confirms the Emperor's status as a descendant of Amaterasu and his legitimacy as ultimate ruler of Japan. This manhole cover at the Inner Shrine complex has been painted.
The hiragana at the bottom says:

Meoto Iwa is close to Ise. It's known as the "Married Couple Rocks". They're a pair of small rocky stacks along the coast just east of Ise. They're joined by a heavy rice straw rope, which is replaced several times a year in special ceremonies. The rocks represent Izanagi and Izanami, the creator deities of Shintō cosmology. Brother and sister and also husband and wife, they gave birth to the islands of Japan and to many of Shintō's deities or spirits, the kami in Japanese. From them came Amaterasu, the sun goddess, ancestor of Jimmu, the first Emperor of Japan. The manhole covers at Meoto Iwa show the two rocks and the sacred rope, with the sun rising in the background, and blossoms and leaves all around. The words at the bottom aren't profound.

Kyōto was the imperial capital for centuries. I can't decide if this is meant to be decorative, or if it's simply how they make cast-iron manhole covers. Osaka is famous for its enormous castle, which is depicted along with cherry blossoms on this custom manhole cover. An alternative Osaka design features shipping along with the castle. Osaka has a Sewerage Science Museum with its own line of custom manhole covers. Or at least Osaka used to have such a museum. It had closed shortly before my visit. The port of Kobe welcomes English-speaking visitors. Takamatsu is on the island of Shikoku, the fourth-largest island of the Japanese archipelago, some 225 km long and 50 to 150 km wide. The manhole covers in Takamatsu show Nasu no Yoichi, a Minamoto samurai, at the nearby battle of Yashima in 1185. He had pursued Taira fighters on horseback. They were slipping away on a ship, and were waving a fan on a stick to taunt him. He shot a perfectly aimed arrow that pierced the fan. The punctured fan washed ashore on a small island a few days later, and the island was named Ogi-jima or Fan Island to commemorate the event.
The hiragana reads: Fire hydrants go under distinctive covers. Here is Takamatsu's. Naoshima is an art-focused island in the Inland Sea, a short ferry ride north from Takamatsu. Here is their custom manhole cover design. The hiragana label at bottom says: Hiroshima has a boldly colored custom design. The nearby Itsukushima shrine has cherry blossoms on its small valve covers. Fukuoka is on Kyūshū, the southwesternmost of Japan's four large Home Islands. It's just across the strait from the southwestern end of Honshū, the largest island. Some of Fukuoka's custom covers are pretty standard, more blossoms. Others are more abstract. The custom covers in Nagasaki are pretty standard: three blossoms with leaves. Some Are Plain But Well Made Two friends of mine had insisted that I needed to look for custom manhole covers. OK, I'll keep an eye out. The thing is, I didn't see any interesting ones at all early in the trip. I had gotten as far as Kyōto and I was still seeing nothing but sturdy and fairly standard-looking manhole covers. Like this one. "Hokusei", it says, not Hokusai the famous print artist. Here's a cover for NTT, Nippon Telegraph and Telephone. There will be data and signal lines under this one. The above manhole cover and the valve or meter cover below seem to belong to the same network or company. Not Just Japan Cities in other countries do customized manhole covers too. Here are two in West Lafayette, where I live. I don't even have to cross a street to find this one; it's on the block where I live.
When it comes to wifi, we often lump together a wireless router and a wireless access point, but in reality, they are not the same thing. They do share similarities, however. Let's break down what they are and how they are different. What is a Wireless Router? Nearly everyone who has home or business internet will have a router. A router is the device that connects the network to the building, supplying both wireless and wired internet service. A router often acts as an access point, but an access point is not a router itself. A router will give many devices around a home or business access to the internet simultaneously. This means various phones, printers, and computers can all get their wireless internet from a single router. The router is the means of getting the internet itself, and usually comes with built-in wireless capability. What is a Wireless Access Point? Wireless access points, or APs, have been a constantly changing piece of home and business internet. Routers have not always come with built-in wifi as standard. In those cases, an AP was added to the network in order to have wifi. So although many of us now have smartphones, in those days we could not connect to the internet without having an AP set up. With modern technology, APs are not the same as they used to be. Since routers usually come with built-in wifi, the router will act as the AP, making a dedicated AP unnecessary in most cases. APs are still around, however, since wifi is not perfect. There are still many dead spots out there, and wifi with short range, making APs necessary to fill in the coverage gaps. Instead of being the sole source of wifi in a building, APs now extend the wifi to include areas that may not have had it otherwise. Why Would Anyone Want a Wireless Access Point? It may seem pointless to purchase a wireless access point if you have already installed your router, since the router does act as an AP. But even routers have their limits.
If you are not receiving strong or reliable wifi at your home or business, an AP can be added to provide secure and reliable internet coverage. This is especially important in a business that relies upon the internet for daily tasks. If there are dead spots in a place of business, the result could be lost revenue or customers. Additionally, homes or businesses that are computer-based and run services may want the additional support and power supplied by a dedicated AP. It acts as a back-up system to keep everything running smoothly, supplying added peace of mind. Most homes will not require a wireless access point separate from the router, since AP functionality is built into the routers themselves. If your home internet has been reliable and you are satisfied with your coverage, you most likely will not need an additional AP.
Rapid technological advancements in the manufacturing industry have allowed companies to supercharge their production lines, reduce unplanned downtime and manage their IT assets with increasing precision. The ongoing automation boom (widely referred to as Industry 4.0 or SMART manufacturing) has only accelerated this transformation, helping manufacturers consolidate their legacy systems and analog processes into one intelligent, centralized IT management framework. The push toward modernization positively impacts the industry, but it also comes with more than a few risks. Like most commercial industries, manufacturers of all sizes have had to invest in a range of cybersecurity software, tools, and services to protect their production equipment and data from digital exploitation. One 2018 study from Gartner estimated that global spending on information security would exceed $124 billion by the end of 2019, in part due to the growing number of high-profile data breaches. Manufacturers invest in a host of defensive capabilities to help mitigate these and other types of cybercrime, from identity and access management to data loss prevention. But there are some threats that automated cybersecurity systems cannot completely negate, such as ransomware attacks. To get a clearer picture of how ransomware infections can impact manufacturing operations, let's dive a bit deeper into the details. A Brief Overview of Ransomware Ransomware is a specialized form of malware that infects computers and data stores, encrypts important files, and locks down computer terminals until a ransom is paid, according to the U.S. Department of Homeland Security. While many different types of ransomware are circulating the web, nearly all can quickly spread across connected systems, shared storage drives, and private networks. Once the ransomware has identified critical drives on an infected computer, the malicious code starts encrypting every file it can access.
Users are locked out of their devices until the ransom is paid or the malware is wiped from the data stores. Research from the cybersecurity firm Coveware found that the average amount spent per ransomware incident in the first quarter of 2019 stood at around $12,762, nearly double the average from the end of 2018, ZDNet reported. Law enforcement agencies like the DHS and FBI advise against paying the ransom, as it only encourages cybercriminals to continue developing new ransomware families with enhanced capabilities. In terms of specific ransomware variants, anti-virus developer Malwarebytes sorts strains into three categories based on severity:
- Scareware: The least severe type of ransomware, scareware is often relatively easy to detect and remove. This form of ransomware infection typically creates persistent pop-up messages that claim malware was discovered on a user's computer and that a "support fee" must be paid to remove it. These tech support scams often target less tech-savvy users and rarely have a lasting impact on files and data stores.
- Screen lockers: Unlike scareware, this mid-tier category of ransomware can completely freeze users out of their workstations, even after a restart has been performed. Once infected, a computer terminal will permanently display a locked window until the ransom is paid, preventing users from accessing files and performing basic administrative tasks. While screen lockers can be highly disruptive in the short term, you can usually clear them without fear of data loss.
- Encrypting ransomware: This form of ransomware is undoubtedly the most severe. It can be impossible to fully restore the encrypted data without paying the ransom, even with advanced cybersecurity software. However, giving in to cybercriminals' demands is no guarantee that the hijacked data and files will be usable. Recovering from this type of ransomware infection often requires a complete wipe of all drives and a full reinstallation from safe backups.
According to Kaspersky Lab's research, the total number of users who encountered ransomware decreased by almost 30% between 2017 and 2018. Despite that overall decline, manufacturers have seen a notable uptick in cyber attacks over the past year. A recent study by Deloitte discovered that close to 40% of manufacturing companies encountered at least one cyber attack between 2018 and 2019, suggesting a real need for continued improvement. But how can manufacturers protect their data and workstations from ransomware before and after an attack has occurred? Ransomware Protection and Response Generally speaking, manufacturing firms are highly susceptible to ransomware due to the large volume of mission-critical production data involved in their day-to-day operations. A single encrypting ransomware attack can lock down everything from production schedules and work orders to component schematics and more. Manufacturing environments that heavily rely on automation and internet of things technologies, in particular, can suffer significant outages and prolonged downtime while the ransomware is being removed, leading to costly operational delays and missed business opportunities. That's where ransomware protection can help, but only if the right cybersecurity tools and IT policies are in place. "40% of manufacturing companies encountered at least one cyber attack between 2018 and 2019." First, it's important to note that the vast majority of ransomware attacks are orchestrated through phishing emails or drive-by downloads, according to the DHS. In most cases, end users are the most vulnerable access point that cybercriminals can exploit. That is why cybersecurity training and IT governance policies are crucial to any ransomware protection plan. While there are plenty of anti-ransomware applications on the market, few can decrypt all the different ransomware families, making prevention the most effective approach.
To that end, a robust anti-ransomware training program should teach employees how to spot phishing emails and cover the dos and don'ts of on-the-job internet use. Vulnerability assessment is another critical practice in ransomware protection, as new delivery methods are constantly under development. Cybercriminals favor ransomware because it offers an immediate return on their activities, unlike identity theft, which typically requires a buyer for the stolen data. However, in both scenarios, hackers capitalize on the sensitive nature of an organization's data, which makes proactive backup operations essential to long-term security. Backing up business-critical systems and files to an offsite location can significantly reduce the leverage ransomware attackers have, while also ensuring IT administrators can restore essential data stores without paying the ransom. Even under the most favorable conditions, ransomware attacks can still penetrate a manufacturer's network perimeter due to user error, poor controls, and negligence. In these scenarios, it's crucial to contact law enforcement as soon as possible and resist the urge to pay the attacker. If you cannot remove the malware or ransomware, the next best option is to completely wipe all drives and data stores and reinstall from clean backups. Keep in mind that some ransomware variants seek to infect your backups to render them useless. This point of leverage means a simple re-imaging of systems followed by backup restores may not be enough to recover your operation. If your organization wants to improve its security posture and prevent costly ransomware attacks, reach out today. As a proud supporter of American manufacturing, Certitude Security is working diligently to inform leaders and facilitate essential asset protection priorities for supply chain businesses throughout the United States.
If you are interested in learning about the services that Certitude Security can offer, visit our website or schedule a time to speak with a team member today.
One of the most valuable concepts in the world of data and analytics is making data "readable" by both machines and humans. Data that is only human-readable is great for cartoons, and endless lines of code are perfect for making software work. But the real magic happens when computers and humans get on the same page... One of the easiest ways to understand XSLT (extensible stylesheet language transformations) is that it's simply a template for displaying specific information from more complex XML data sets. The first step is getting data into XML format. Then, a tool that performs XSLT transformations can create documents with accurate data laid out in any given format or structure. The information in data sets is now understandable to humans and machines. Here's an Example of How XML is Transformed with XSLT: Step 1) Let's start with an XML document:

<?xml version="1.0" encoding="UTF-8"?>
<superhero>
  <name>Superman</name>
  <born>New York City</born>
  <alias>Clark Kent, Kal-El</alias>
</superhero>

Step 2) And here is an XSLT stylesheet that is a template for how to transform and display the XML document's information:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <table>
      <tr>
        <th style="text-align:left">Super Name</th>
        <th style="text-align:left">Born in</th>
      </tr>
      <tr>
        <td><xsl:value-of select="superhero/name"/></td>
        <td><xsl:value-of select="superhero/born"/></td>
      </tr>
    </table>
  </xsl:template>
</xsl:stylesheet>

Step 3) Finally, after running the documents through the XSLT processor, here is what the XML file now looks like on our computer display / monitor: How XSLT Works Here's another way to understand what XSLT is: it's similar to what happens behind the scenes to make websites readable. If you're curious, hit the F12 key on your keyboard to see all the code behind this article. So if we can do the same thing as a website and take a whole lot of hard-to-understand data and present it in a way that's easy to understand, we will turn more data into actionable and valuable information.
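Python's standard library has no XSLT engine, but the three-step transformation above can be sketched with the built-in ElementTree module: pull values out of the XML, then pour them into a presentation template. This is only an illustration of what an XSLT processor automates; the root element name and the tiny HTML template here are invented for the sketch.

```python
import xml.etree.ElementTree as ET

# The XML data: content only, no presentation information
xml_doc = """
<superhero>
  <born>New York City</born>
  <alias>Clark Kent, Kal-El</alias>
</superhero>
"""
root = ET.fromstring(xml_doc)

# A "stylesheet" stand-in: a template describing how to display the data
template = (
    '<table>'
    '<tr><th style="text-align:left">Alias</th>'
    '<th style="text-align:left">Born in</th></tr>'
    '<tr><td>{alias}</td><td>{born}</td></tr>'
    '</table>'
)

# The "processor" step: merge the data into the template
html = template.format(
    alias=root.findtext("alias"),
    born=root.findtext("born"),
)
print(html)
```

A real XSLT processor performs exactly this merge, but driven by rules declared in the stylesheet rather than hard-coded string formatting, which is what lets one stylesheet reformat any document with the same structure.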
The code that makes websites work is called HTML, and the code behind many electronic documents (including Microsoft Office) is called XML. XML separates information from the presentation layer. Basically, it is a language that defines what each piece of data is. This is very different from HTML. In HTML a header tag (such as <h2>) is used to format text, but it doesn't say what that text is. XML, on the other hand, does specify what the text is. For example, <productCategory title="OCR Software for Oil and Gas"> defines exactly what the text represents. Just like there are standards with website code, there are also standards for the way XML is formatted. There are specially defined characters, markup rules, tags, elements, attributes, etc. that all help to bring data to life. You can learn more about XML here. Essentially, XML formats data so there can be no question about the information it represents. Once data is in an XML format it's no longer locked inside a proprietary computer language. Here's where XSLT comes into play. Because XML is designed to separate data from the presentation of the information, it needs a bit of processing to become more transparent. Take a look at some XML code for a document below: Exactly What is XSLT and What is its Use? XSLT is a programming language used to transform that code into beautiful PDF documents, web pages, plain text, or even printer-friendly PostScript format. And the great thing is that the information can be arranged in virtually any desired layout. With XSLT, information from multiple XML "documents" can even be combined into a single final document. This aids in the creation of dashboards and reports which must pull data from more than one source. Because XML is so precisely formatted, it acts as a sort of "single version of the truth." The data contained in XML can be used to match and integrate information from external databases or document repositories. Looking for an even deeper dive than 'What is XSLT'? Take a look at this article or check out our data science solutions that work with XSLT files every day:
The reason for this blog post was a lecture I had at university where the lecturer talked about ERP systems (enterprise resource planning), and a question came up from one of the other students about ERP in the cloud and how cloud computing is defined. I am not really happy with the answer he gave, because it was totally focused on Software as a Service hosted by a service provider and accessible over the internet. Well, this is a part of cloud computing, but it does not really cover the full definition. I know I will maybe get a lot of comments on this post, because there is no official definition of "Cloud Computing" and every company may think differently about it, depending on its product range. As someone who has worked in the hosting business and now works as a consultant, mostly building private or hosted private clouds, I see the definition quite differently. One important statement first: virtualization is not cloud computing. Virtualization is a great enhancement for cloud computing and also an important enabler of it, because without virtualization cloud computing would be really hard to do. In my opinion, cloud computing is not a technology; cloud computing is a concept you can use to provide access to resources. There are three different scenarios in cloud computing. Image Source: blogs.technet.com
- Infrastructure-as-a-Service – IaaS basically allows customers to use compute, storage and networking resources and deploy, for example, virtual machines with full access to the operating system. (Example: Windows Azure, Amazon,…)
- Platform-as-a-Service – PaaS provides customers with a platform for their application, for example Windows Server with IIS, where customers can deploy their application but don't have to think about the server itself. (Example: Windows Azure, webhosting providers,…)
- Software-as-a-Service – SaaS allows customers to use just the software without caring about the installation or the platform itself.
For example, hosted mailservers or CRMs. (Example: Office 365, Microsoft Dynamics Online, Xbox Live, Outlook.com,…) Another common mistake is to think the cloud is always hosted on the internet. Since cloud computing is a concept for delivering services, companies can also do this internally, which is mostly known as a Private Cloud. The Private Cloud can of course also be IaaS, PaaS or SaaS, and could be accessible from the internet, but it could also be available only company-internally.
- Public Cloud – The Public Cloud is maybe the cloud people mostly think of when they are talking about cloud computing. This is mostly shared services hosted by a service provider and accessible from the internet.
- Private Cloud – The Private Cloud is a cloud made for just one customer or company; for example, this could be an on-premises cloud hosted in my own datacenter. In some cases the Private Cloud could also be hosted by a service provider.
- Hybrid Cloud – The Hybrid Cloud model will be the model a lot of companies will go for, or already have, even without knowing about it. The Hybrid Cloud is a scenario where I have a Private Cloud hosted on premises in my datacenter, but I also extend it to the Public Cloud by connecting cloud services such as Windows Azure or Office 365 to my Private Cloud.
I have already written about 500 words, but I still have not really answered the question of what cloud computing is, so let's have a look at Wikipedia: Cloud computing – correctly: a Computing Cloud – is a colloquial expression used to describe a variety of different computing concepts that involve a large number of computers that are connected through a real-time communication network (typically the Internet). Cloud computing is a jargon term without a commonly accepted non-ambiguous scientific or technical definition.
In science, cloud computing is a synonym for distributed computing over a network and means the ability to run a program on many connected computers at the same time. The popularity of the term cloud computing can be attributed to its use in marketing to sell hosted services, in the sense of Application Service Provisioning, that run client-server software in a remote location. With this definition, there are five common properties every cloud has, no matter whether it's IaaS, PaaS or SaaS based, or hosted in the Private or Public Cloud.
- Elastic and Scalable – I think this is one of the essential properties of a cloud. It's important to be very flexible and able to get new resources if your business grows over time or has special peaks where you need more resources. Resources could be more compute power, more virtual machines, more users, or more mailboxes.
- Pooled Compute Resources – From a cloud provider perspective, I want to pool my compute, storage and network resources and share them across different customers or services.
- Provides Self-Service Provisioning – New resources (virtual machines, mailboxes or whatever) are requested over a self-service portal which automatically kicks off the specific tasks.
- Highly Automated Management – Because we want to use self-service provisioning, and do this at large scale, it's important that the environment is highly automated. Think of a simple example: a new employee starts at your company and you want to create a new mailbox for him, so you create it over a self-service portal. The creation of the mailbox has to be automated in the background, because you don't want to wait for someone to create the mailbox manually, maybe two days later.
- Usage-Based Chargeback – Through the pooled resources you want to be able to do chargeback based on consumed resources. Even if you use another billing system, you still want to know how many resources customers have used.
This could be how many mailboxes I used last month, how many minutes my virtual machines were running this month, or how much disk space I used. I think these five things cover the properties of cloud computing in basically all the common scenarios. There are a lot of things I did not cover in this blog post, but it should help people who are new to cloud computing understand the different scenarios.
Tags: Amazon, Azure, Cloud, Cloud Computing, Definition of Cloud Computing, Hybrid Cloud, IaaS, MAS, MASBEM, Microsoft, Office 365, PaaS, Private Cloud, Public Cloud, SaaS, University, Virtualization, Windows Azure
Last modified: January 7, 2019
WHAT IS DEVOPS SECURITY? DevOps security refers to the practice of safeguarding an organization’s entire development/operations environment through the use of coordinated policies, processes, and technology. DevOps gives information security (infosec) groups the opportunity to integrate security earlier in the software development process, building best practices into all parts of the DevOps lifecycle. From inception, design, build, test, release, support, maintenance, and beyond, DevOps teams can use DevOps security tools such as automated security monitoring and automated pen testing to deliver security-as-code that: - Empowers developers to proactively solve security problems - Makes application security elastic - Automates security into the pipeline - Monitors attacks the same way performance is monitored
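As a toy illustration of the security-as-code idea, a check like the following could run automatically on every build, failing the pipeline when an obvious hardcoded credential slips into source. The pattern and the sample input are invented for this sketch; real pipelines rely on dedicated secret scanners with far more sophisticated rules.

```python
import re

# Deliberately simplistic pattern for assignments that look like credentials
SECRET_PATTERN = re.compile(
    r'(password|api[_-]?key|secret)\s*=\s*["\'][^"\']+["\']',
    re.IGNORECASE,
)

def scan_source(text: str) -> list:
    """Return the source lines that look like hardcoded credentials."""
    return [line for line in text.splitlines() if SECRET_PATTERN.search(line)]

# In a pipeline, this would run over every committed file;
# a non-empty result would fail the build.
sample = 'db_host = "localhost"\npassword = "hunter2"\n'
findings = scan_source(sample)
print(findings)
```

Because the check is ordinary code kept in version control, it is reviewed, versioned, and executed exactly like the application it protects, which is the point of building security into the pipeline rather than bolting it on afterward.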
What is Cloud Encryption? Cloud encryption is the process of transforming data from its original plain text format to an unreadable format, such as ciphertext, before it is transferred to and stored in the cloud. As with any form of data encryption, cloud encryption renders the information indecipherable and therefore useless without the encryption keys. This applies even if the data is lost, stolen or shared with an unauthorized user. Encryption is regarded as one of the most effective components within the organization's cybersecurity strategy. In addition to protecting the data itself from misuse, cloud encryption also addresses other important security issues, including: - Compliance with regulatory standards regarding data privacy and protection - Enhanced protection against unauthorized data access from other public cloud tenants - In select cases, absolving the organization of the need to disclose breaches or other security events How Does Cloud Encryption Work? Encryption leverages advanced algorithms to encode the data, making it meaningless to any user who does not have the key. Authorized users leverage the key to decode the data, transforming the concealed information back into a readable format. Keys are generated and shared only with trusted parties whose identity is established and verified through some form of multi-factor authentication. Cloud encryption is meant to protect data as it moves to and from cloud-based applications, as well as when it is stored on the cloud network. This is known as data in transit and data at rest, respectively.
Encrypting data in transit A significant portion of data in motion is encrypted automatically through the HTTPS protocol, which adds a secure sockets layer (SSL) to the standard IP protocol. The SSL encodes all activity, ensuring that only authorized users can access the session details. As such, if an unauthorized user intercepts data transmitted during the session, the content would be meaningless. Decoding is completed at the user level through a digital key. Encrypting data at rest Data encryption for information stored on the cloud network ensures that even if the data is lost, stolen or mistakenly shared, the contents are virtually useless without the encryption key. Again, keys are only made available to authorized users. Similar to data in transit, encryption/decryption for data at rest is managed by the software application. There are two basic encryption algorithms for cloud-based data: Symmetric encryption: The encryption and decryption keys are the same. This method is most commonly used for bulk data encryption. While implementation is generally simpler and faster than the asymmetric option, it is somewhat less secure in that anyone with access to the encryption key can decode the data. Asymmetric encryption: Leverages two keys, a public and a private authentication token, to encode or decode data. While the keys are linked, they are not the same. This method provides enhanced security in that the data cannot be accessed unless users have both a public, sharable key and a personal token. Which Cloud Platforms are Encrypted? Every reputable cloud service provider (CSP), the business or entity that owns and operates the cloud, offers basic security, including encryption. However, cloud users should implement additional measures to ensure data security.
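As a minimal sketch of the symmetric model described above, where one shared key both encodes and decodes, consider the toy cipher below. It is purely illustrative and offers no real security; production systems use vetted algorithms such as AES, typically through an established cryptography library.

```python
# Toy symmetric "cipher": XOR with a repeating key.
# Illustrative only - NOT secure, never use for real data.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XORing twice with the same key returns the original bytes,
    # which is why one shared key both encrypts and decrypts
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-16-byte-k"  # in practice: randomly generated, e.g. secrets.token_bytes(16)
plaintext = b"customer record #1001"

ciphertext = xor_cipher(plaintext, key)  # meaningless without the key
recovered = xor_cipher(ciphertext, key)  # the same key restores the data

print(recovered == plaintext)  # prints True
```

The asymmetric model differs in that the encoding key can be published while the decoding key stays private, so no shared secret ever has to travel between the parties.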
Cloud security often follows what is known as the "shared responsibility model." This means that the cloud provider must monitor and respond to security threats related to the cloud's underlying infrastructure. However, end users, including individuals and companies, are responsible for protecting the data and other assets they store in the cloud environment. For organizations that use a cloud-based model or are beginning the shift to the cloud, it is important to develop and deploy a comprehensive data security strategy that is specifically designed to protect and defend cloud-based assets. Encryption is one of the key elements of an effective cybersecurity strategy. Other components include: - Multi-factor authentication: Confirming the user's identity through two or more pieces of evidence - Microsegmentation: Dividing the cloud network into small zones to maintain separate access to every part of the network and minimize damage in the event of a breach - Real-time, advanced monitoring, detection and response capabilities: Leverage data, analytics, artificial intelligence (AI) and machine learning (ML) to generate a more precise view of network activity, better detect anomalies and respond to threats more quickly The benefits of cloud encryption Encryption is one of the primary measures organizations can take to secure their data, intellectual property (IP) and other sensitive information, as well as their customers' data. It also serves to address privacy and protection standards and regulations.
Benefits of cloud encryption include: - Security: Encryption offers end-to-end protection of sensitive information, including customer data, while it is in motion or at rest across any device or between users - Compliance: Data privacy and protection regulations and standards such as FIPS (Federal Information Processing Standards) and HIPAA (Health Insurance Portability and Accountability Act of 1996) require organizations to encrypt all sensitive customer data - Integrity: While encrypted data can be altered or manipulated by malicious actors, such activity is relatively easy for authorized users to detect - Reduced risk: In select cases, organizations may be exempt from disclosing a data breach if the data was encrypted, which significantly reduces the risk of both reputational harm and lawsuits or other legal action associated with a security event Cloud encryption challenges Cloud encryption is a relatively simple but highly effective security technique. Unfortunately, many organizations overlook this aspect of their cybersecurity strategy, likely because they are unaware of the shared responsibility model associated with the public cloud. As discussed above, while the cloud provider must maintain security within the cloud infrastructure, private users are responsible for securing the data and assets stored in the cloud and ensuring their safe transmission to and from the cloud. Additional challenges may include: Time and cost: Encryption is an added step, and therefore an added cost, for organizations. Users that wish to encrypt their data must not only purchase an encryption tool, but also ensure that their existing assets, such as computers and servers, can manage the added processing power of encryption. Encryption can take time, and the organization might therefore experience increased latency. Data loss: Encrypted data is virtually useless without the key. If the organization loses or destroys the access key, the data may not be recoverable.
Key management: No cloud security measure is foolproof, and encryption is no exception. It is possible for advanced adversaries to crack an encryption key, particularly if the program allows the key to be chosen by the user. This is why it’s important to require two or more keys to access sensitive content.

Should I encrypt my cloud storage?

Cloud encryption is one of the most practical steps organizations can take to protect their data, as well as sensitive customer information. Organizations should consult their cybersecurity partner to select an optimal third-party encryption tool and integrate it within the existing security tech stack.

Topics to discuss with your cybersecurity partner about cloud storage encryption may include:

- How to identify data that requires encryption, either due to its sensitive nature or as a matter of compliance with regulatory standards
- When and where data will be encrypted and the process it will follow
- How to supplement the cloud provider or CSP’s existing cloud security protocols
- How access keys will be generated and shared to reduce the risk associated with weak passwords
- Who will oversee key management and storage (the CSP or the organization)
- How and where encrypted data will be backed up in the event there is a breach with the CSP
- How a cloud access security broker (CASB) can coordinate data access throughout the organization and improve visibility
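The mechanics behind several of these points can be sketched in a few lines of Python. This is a didactic toy only: the hash-counter keystream below is not production cryptography, and real deployments should use a vetted library (such as an AES-GCM implementation) with keys held in a managed key service. The sketch simply shows why encrypted data is useless without the key (the "data loss" challenge) and how an integrity tag detects tampering or a wrong key (the "integrity" benefit):

```python
# Didactic sketch only -- NOT production crypto. Illustrates symmetric
# encryption plus an integrity check before data is handed to a cloud store.
import hashlib, hmac, os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from key + nonce + counter."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)  # unique per message
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # detects tampering
    return nonce + ct + tag  # this blob is what would be uploaded

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("integrity check failed: wrong key or tampered data")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)  # in practice, held in a key management service, not beside the data
blob = encrypt(key, b"sensitive customer record")
assert decrypt(key, blob) == b"sensitive customer record"
```

Losing `key` makes `blob` unrecoverable, and decrypting with any other key raises the integrity error rather than returning garbage, which is exactly the trade-off the challenges above describe.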
A period of rapid broad-based increase in international commodity prices, also known as a commodity supercycle, started in the early 2000s. The economies in Latin America — largely dependent on the export of commodities, especially in the mining sector — benefited from these price increases with an outstanding growth in the region’s gross domestic product (GDP). As a result, the need for large-scale infrastructure projects to support the commodity boom led to an influx of international construction-related companies into the region. From owners to contractors and other construction industry players, Latin America became a hot spot for multicultural projects. But as the supercycle slowed down, starting around 2011, so did GDP growth in Latin America. This slowdown, combined with other factors such as corruption allegations, increased political uncertainty, and slow recovery of global oil prices, had a direct impact on the construction industry in the region. However, as shown in the following graphic, after a few years of contraction, the industry is forecasted to start rebounding in 2018 based on projections of global oil price increases, dissipation of political uncertainties, and the predicted support by both stronger government spending and greater private investment in the construction industry. Moreover, the use of the public-private partnership (PPP) structure has become more popular in the region, requiring an adequate regulatory framework, as well as teams with extensive experience in design, construction, operations, and project financing. These projects involve not only the mining projects to directly support the commodities market, but also the infrastructure projects to support the economic growth and social progress of the region. As a result, the influx of international construction industry players into Latin America is expected to continue for both private and public projects. 
In the following sections, and based on our experience, we describe several challenges that international construction industry players face in developing and managing projects in Latin America. With that said, we cannot forget that, while there is a tendency to view Latin America as a homogeneous region, it is really the amalgamation of more than 20 countries with a common language, but different laws, codes, regulatory environments, and cultures.

Applicable Codes And Standards

Due to the nature of the work that requires the participation of international players in Latin America, most of these projects are contracted under engineering, procurement, and construction structures. In addition, a significant number of these projects are procured under PPP agreements, which include operations and, usually, financing. The applicable codes and standards play a key role, especially in the engineering and operational phases of these projects. It is in this area where the diversity of Latin America becomes evident. For example, several of these countries are in highly seismic areas, requiring special attention to the local seismic requirements.

Another typical issue is the use of international standards. In some instances, contracts require the use of United States standards, some of which may be unknown to some of the project participants, while others recognize several international standards. Some of these risks are mitigated by engaging local design consultants who have insight into obtaining proper permits (a potentially time-consuming effort) and who fully understand local requirements. In addition, companies often involve experts in international standards, although it requires certain coordination efforts among their design teams to obtain the desired compliance results. Environmental regulations, important from an operational standpoint, are closely linked to project design.
There has been a push in recent years for projects to be environmentally friendly not only during the construction phase, but also, and more importantly, during their operations. As a result, projects have to be designed to comply with these local regulations, equipment should be procured in order to meet the necessary requirements, and operations must be vigilant in maintaining the proper levels of service.

Equipment And Materials

The project participants have to recognize early on what equipment and materials will need to be imported. A similar assessment has to be done for equipment that will be used to construct the project. Once that assessment is done, the analysis switches to the regulatory structure and prohibitions the host country has in place related to importing materials and equipment, and obtaining clearance through customs. This is more than reading the regulations; a true understanding of how the process works and potential pitfalls, costs, and delays should be taken into account during negotiations and before entering into the contract. Concessions from the host government may be required in order for the equipment and materials to be brought in.

It is important to engage a highly qualified and reputable in-country consultant to assist in the customs clearance process to increase the likelihood of timely delivery. This includes careful review and correction of the necessary paperwork long before the shipments arrive in port. It also includes practical advice as to packing and identifying the contents in order to avoid inspection delays once the equipment or materials arrive. In addition to impacts to the project, customs delays often result in increased direct costs in the form of port and handling charges. If problems occur, the contractor must react quickly to identify the root cause and implement measures to avoid repeated mistakes. This may include increasing qualified staff and oversight at the point of origin of the equipment and materials.
With respect to construction equipment, the contractor must understand the regulations and costs associated with removing (or abandoning) the equipment when the project is done. From a contractual standpoint, the parties should identify who bears the risk of delays in the various phases from the point of origin through delivery to the site, including customs clearance delays. This risk is usually borne by the contractor, as it is in the best position to ensure compliance with regulations and practices. The issue becomes more complicated, however, if the process is affected by issues such as changes in customs regulations and/or staffing of the customs offices, strikes, increases in port traffic, and transportation restrictions. Whether these issues would allow for adjustment in contract price or time would depend on the contract terms. Failure to consider these issues up front will likely lead to a meaningful dispute.

The proper assessment of how the project can and will be staffed will, in a large part, dictate the success or failure of the project. The first issue is the availability of qualified in-country labor. The assessment must include whether other major projects will be competing for labor when the project is scheduled to go forward and whether that changes if the project is delayed. The next issue is what it will take to attract labor to the project. In addition to compensation, issues include accommodations at or near the project site, time off, and travel to/from hometowns for travelers. Local labor requirements and potential union issues must be taken into account. This includes any required ratios of foreign labor to local labor. The contractor will also want to bring in its own people to manage and supervise the project, as well as consider bringing in certain skilled labor that may not otherwise be readily available in the country.
The contractor must understand the regulatory and practical difficulties that it may face in bringing people into the country and how long they may be allowed to stay. Coordination and contingencies are required, as the visa/immigration process rarely goes as smoothly as planned. Labor risk is generally borne by the contractor. Impacts caused by changes in immigration laws or regulations after execution of the contract may be a basis for a change/variation or force majeure event; however, it would still be incumbent upon the contractor to prove there was a change and that any immigration issues did not result from its own failures. Understanding the cost and schedule ramifications of not obtaining the quantity and/or quality of labor needed to perform the work, and taking measures up front to mitigate that risk, is paramount. If unanticipated events occur, notice and documentation of the problems should be undertaken immediately.

In addition to the issues already discussed, it is important to keep in mind that local regulations vary from country to country. In this regard, there are two specific issues that appear to affect a significant number of projects in many of the countries in this region: (1) archaeological findings; and (2) social and community issues.

With respect to archaeology, many countries in Latin America have a rich history of pre-Columbian societies, and the remnants of those societies are widespread throughout the region. As construction projects are initiated further away from cities, and more in undeveloped areas, the possibility of finding ancient artifacts increases. This is an issue that must be considered in the early stages of the project at various levels. The involved parties should be aware of the local regulations regarding these types of findings, starting with the required initial explorations, and discussions and clearances from local enforcement agencies.
Furthermore, attention should be paid to the process required in case an archaeological finding is made during the project, as this process may be time consuming and significantly delay certain areas of work or the overall project. Issues related to contractual responsibility for obtaining permits and clearances, performing remedial work, potential force majeure declarations, time extensions, and cost considerations should be taken into account as well. In addition, some jurisdictions will require that a contractor hire an archaeologist as a full-time project employee. However, and more importantly, any found artifacts are typically considered to be national treasures and, as such, the local governments and communities expect the involved parties to manage them with the proper level of respect and professionalism. To the extent archaeology is not given the same amount of priority in the country of origin of the foreign entity, this could be an unexpected cultural and project progress shock.

With respect to social issues, and again considering that a significant number of projects are being built in undeveloped areas, it is important to recognize that local communities will be affected. As a result, it is not uncommon to have consultations with these communities prior to the start of the project to obtain their consent, as well as to understand their concerns and needs. To mitigate these kinds of issues, projects often include not only the main construction work for the development of the project, but also significant social and community improvements, which are, in many cases, a win for everyone involved. Despite efforts to coordinate with local communities, there has been an increase in those communities filing legal injunctions opposing projects, often resulting in both schedule and cost impacts. Project developers and contractors should be aware of and prepared to address these potential issues.
Furthermore, some of these injunctions have been presented not necessarily by the legal owners of the land where construction is taking place, but by people who claim an interest in that land. The developer and contractor should be aware of local regulations regarding land ownership in each specific country. Thus, it is important to understand the contractual ramifications of these issues, from responsibility to obtain the required permits and consent, to time extensions and potential additional compensation rights. These issues are key for one of the initial steps in the construction process: site access. Without the proper release of the site property or right-of-way, the progress of the work could be affected significantly. As such, it is vital to understand, early on, who bears the risk in obtaining the land for the project, and all the intricacies involved in the process of obtaining access to the site.

It is no secret that security is a key concern in Latin America. From potential robberies and stolen materials and equipment to the presence of drug cartels in areas where projects are developed, security can present some complex issues for all parties involved. Companies typically hire a security consultant to perform assessments for the protection of the site, personnel, materials, and equipment, as well as work in place. It is important that this consultant has local experience and contacts, as they will provide the best source of information and intelligence during the course of the project. Protection of project personnel is crucial and it should be comprehensive, including protection at the site, accommodations (be it at a project camp or private residence), and travel to and from the project site. Materials and equipment must be protected while in transit to the site, while stored, and after they have been installed.
The latter shall also be part of a comprehensive security plan that includes gates at the site entrance (including random inspections), as well as protection of the perimeter and access roads, which are often unpaved, secluded, and unprotected. Security during construction should not be the only protection from criminal elements. For instance, if an owner decides to hire an international contractor to perform the fabrication, transportation, and installation of certain pieces of equipment, the owner should take into account the security of these parts throughout the life cycle of the project. One key consideration is ownership and protection of the equipment, at various stages of fabrication and installation, if the contract for the work is terminated. Securing those assets based on the owner’s contractual rights should be part of the comprehensive security plan, as some of these pieces of equipment are fabricated in other countries, which may increase the difficulty of securing them.

It is well known that bribery and corruption are pervasive throughout the construction industry, to the point that the losses related to these issues, in addition to mismanagement and inefficiency, are estimated to reach $6 trillion by the year 2030. Combined with the corruption issues that affect Latin America, some of which are making recent news headlines, this can become a recipe for failure if not properly analyzed and mitigated. Therefore, it is important for international players to understand the local regulations and perform the necessary due diligence with respect to potential local or regional partners, subcontractors, suppliers, and government authorities. The incorporation and enforcement of guidelines such as the Foreign Corrupt Practices Act in contractual documents is suggested to minimize and mitigate the risks associated with potential corruption and bribery issues.
On all international construction projects, participants must consider who bears the risk of fluctuations in exchange or interest rates and the limits, if any, on repatriation of profits. Because the economies in Latin America have a history of being more volatile than those in more developed parts of the world, there is an increased risk that the economics of a project will look vastly different at the end as compared to when it started. “Foreign exchange exposure” refers to the risk of changes in a country’s exchange rate hurting a company. The important issue is recognition of the risk and a plan to manage it. This may be in the form of a contract provision that allows for an adjustment in the contract price if there is a significant change in the value of one currency as compared to another. There are other products in the market that are available to a participant in order to hedge the risk. The different strategies are beyond the scope of this article, but our experience tells us that currency fluctuation is a risk that is often not fully appreciated until it is too late.

With respect to repatriation, participants need to appreciate restrictions that the host country’s government has in place that block cash flow back to the parent company. Are profits taxed at an unfavorable rate? Is a certain percentage of money earned required to be reinvested in the country? The overall economics of the project must include these considerations.

There is expected to be significant opportunity in Latin America for international construction industry players due to stronger government spending and greater private investment in the industry. However, along with that opportunity come risks. The issues addressed here, although not exclusive, are important considerations before entering into any contract. Early planning and attentive contract administration can significantly mitigate or help manage these risks.
Again, we need to remember that Latin America is really the amalgamation of more than 20 countries with different laws, codes, and cultures. Therefore, there are no one-size-fits-all rules of thumb. Careful and independent evaluations should be made for each locale.

In collaboration with Todd Metz, Founding Partner – Varela, Lee, Metz & Guarino

Bertrand Gruss, “After the Boom — Commodity Prices and Economic Growth in Latin America and the Caribbean” (International Monetary Fund working paper, August 2014).
Data extracted from the World Bank’s database (https://data.worldbank.org/region/latin-america-and-caribbean).
BMI Research, Industry Trend Analysis — Latin America Construction Growth Recovery Shifted to 2018, October 2017, http://www.infrastructure-insight.com/industry-trend-analysis-latin-america-construction-growth-recoveryshifted-2018-nov-2017.
Catalina Garcia-Kilroy and Heinz Rudolph, Private Financing of Public Infrastructure Through PPPs in Latin America and the Caribbean, The World Bank, 2017.
The Economist Intelligence Unit, Evaluating the Environment for Public-Private Partnerships in Latin America and the Caribbean: The 2017 Infrascope, 2017.
Peter Matthews, “This Is Why Construction Is So Corrupt,” World Economic Forum, Feb. 4, 2016, https://www.weforum.org/agenda/2016/02/why-is-the-construction-industry-socorrupt-and-what-can-we-do-about-it/.
David Lipton, Alejandro Werner, and Carlos Goncalves, “Corruption in Latin America: Taking Stock,” IMFBlog, Sept. 21, 2017, https://blogs.imf.org/2017/09/21/corruption-in-latinamerica-taking-stock/.

© Copyright 2019. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC, its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.
The IT industry has gotten good at developing computer systems that can easily work at the nanosecond and millisecond scales. Chip makers have developed multiple techniques that have helped drive the creation of nanosecond-scale devices, while primarily software-based solutions have been rolled out for slower millisecond-scale devices. For a long time, that has been enough to address the various needs of high-performance computing environments, where performance is a key metric and issues such as the simplicity of the code and the level of programmer productivity are lesser concerns. Given that, programming at the microsecond level has not been a high priority for the computing industry.

However, that’s changing with the rise of warehouse-size computers running in hyperscale datacenter environments like Google and Amazon – where workloads change constantly, putting a premium on code simplicity and programmer productivity, and where the cost of performance is also of high importance – and the development of new low-latency I/O devices that work best at the microsecond level. Given such trends, vendors need to begin developing software stacks and hardware infrastructure that can take advantage of microsecond-level computing and reduce the growing inefficiencies in modern datacenters, according to a group of researchers from Google and the University of California, Berkeley.

The researchers argue that the “oversight” of focusing on nanosecond and millisecond levels “is quickly becoming a serious problem for programming warehouse-scale computers, where efficient handling of microsecond-scale events is becoming paramount for a new breed of low-latency I/O devices ranging from datacenter networking to emerging memories.”

“Techniques optimized for nanosecond or millisecond time scales do not scale well for this microsecond regime,” they wrote.
“Superscalar out-of-order execution, branch prediction, prefetching, simultaneous multithreading, and other techniques for nanosecond time scales do not scale well to the microsecond regime; system designers do not have enough instruction-level parallelism or hardware-managed thread contexts to hide the longer latencies. Likewise, software techniques to tolerate millisecond-scale latencies (such as software-directed context switching) scale poorly down to microseconds; the overheads in these techniques often equal or exceed the latency of the I/O device itself. … It is quite easy to take fast hardware and throw away its performance with software designed for millisecond-scale devices.”

That hasn’t been much of a problem in the past, the researchers said. HPC organizations traditionally have gotten along with low-latency networks. They have workloads that don’t change nearly as often as those in high-end web-scale environments, and programmers don’t need to handle the code as much. Data structures within the HPC field also tend to be more static and less complex, all of which makes high-speed networks a good fit. In addition, while the HPC field is most concerned with performance, costs and resource utilization play much larger roles in the calculations for companies like Google and Amazon.

“Consequently, [HPC organizations] can keep processors highly underutilized when, say, blocking for MPI-style rendezvous messages,” the authors wrote. “In contrast, a key emphasis in warehouse-scale computing systems is the need to optimize for low latencies while achieving greater utilizations.”

Hardware and software development to this point has targeted minimizing event latencies at the nanosecond and millisecond levels.
On the nanosecond scale are features such as the deep memory hierarchy developed in processors, which includes a simple synchronous programming interface to memory supported by a range of other microarchitectural techniques, such as prefetching, out-of-order execution and branch prediction. At the millisecond level, the innovations have primarily been in software. As an example, the researchers pointed to OS context switching, in which once a system call is made to a disk, the OS puts the I/O operation in motion while performing a software context switch to another thread to use the chip during the disk operation.

“The original thread resumes execution sometime after the I/O completes,” they wrote. “The long overhead of making a disk access (milliseconds) easily outweighs the cost of two context switches (microseconds). Millisecond-scale devices are slow enough that the cost of these software-based mechanisms can be amortized.”

In dealing with nanosecond- and millisecond-scale devices, Google engineers prefer the synchronous model over the asynchronous. Synchronous code is simpler, making it easier to write and debug, and works better at scale, where organizations can take advantage of consistent APIs and idioms to leverage applications that are usually written in multiple languages, such as C, C++, Go and Java, touched by many developers, and released on a weekly basis. All of this makes synchronous coding better for programmer productivity.

Now microsecond-level technologies are hitting the industry that better suit the needs of hyperscale environments. There are new low-latency I/O devices that won’t work well at nanosecond or millisecond time scales. Fast datacenter networks tend to have latencies in the microseconds, while a flash device can have latency in the tens of microseconds.
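The amortization argument above can be made concrete with a back-of-the-envelope sketch. The 2 µs context-switch cost and the device latencies below are illustrative assumptions consistent with the scales quoted in the article, not figures taken from the researchers:

```python
# Rough model of a blocking I/O call: the thread pays for two software
# context switches (one to yield the CPU, one to resume). That cost is
# amortized only when the device latency dwarfs it.
SWITCH_US = 2.0  # assumed cost of one context switch, in microseconds

def switch_overhead_fraction(device_latency_us: float) -> float:
    """Fraction of total wait time spent on the two context switches."""
    overhead_us = 2 * SWITCH_US
    return overhead_us / (device_latency_us + overhead_us)

# A 10 ms disk access: switch overhead is negligible (well under 0.1%)
print(f"{switch_overhead_fraction(10_000):.2%}")
# A 10 us flash or NIC access: overhead rivals the device latency itself
print(f"{switch_overhead_fraction(10):.2%}")
```

The same software mechanism that costs a rounding error against a disk consumes more than a quarter of the total wait against a microsecond-scale device, which is why the techniques "often equal or exceed the latency of the I/O device itself."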
The authors also pointed to new non-volatile memory technologies, such as Intel’s 3D XPoint memory and the Moneta change-based memory system, and in-memory systems that will have latencies in the low microseconds. There also are microsecond latencies when using GPUs and other accelerators. The slowing of Moore’s Law and the end of Dennard scaling have also highlighted the need to use lower-latency storage and communication technologies, they said. Cloud providers, to address the changes in Moore’s Law and Dennard scaling, have grown the number of computers they use for queries from customers, and that trend will increase.

Techniques used for nanosecond and millisecond scales don’t scale well when microseconds are involved, and can result in inefficiencies in both datacenter networking and processors. What computer engineers need to do is create software and hardware that are optimized for the microsecond scale, the researchers wrote. There need to be “microsecond-aware” software stacks and low-level optimizations – such as reduced lock contention and synchronization, efficient resource utilization during spin-polling, and job scheduling – designed for microsecond-level computing.

On the hardware side, “ideas are needed to enable context switching across a large number of threads (tens to hundreds per processor, though finding the sweet spot is an open question) at extremely fast latencies (tens of nanoseconds). … System designers need new hardware optimizations to extend the use of synchronous blocking mechanisms and thread-level parallelism to the microsecond range.” There also need to be hardware features for everything from orchestrating communication with I/O and queue management to task scheduling and new instrumentation for tracking microsecond overheads. In addition, “techniques to enable microsecond-scale devices should not necessarily seek to keep processor pipelines busy,” the researchers wrote.
“One promising solution might instead be to enable a processor to stop consuming power while a microsecond-scale access is outstanding and shift that power to other cores not blocked on accesses.” Given the growing demands from hyperscale datacenters, computer engineers need to begin designing systems that offer support for microsecond-scale I/O, they wrote. “Today’s hardware and system software make an inadequate platform, particularly given support for synchronous programming models is deemed critical for software productivity,” the researchers wrote, adding that “novel microsecond-optimized system stacks are needed. … Such optimized designs at the microsecond scale, and corresponding faster I/O, can in turn enable a virtuous cycle of new applications and programming models that leverage low-latency communication, dramatically increasing the effective computing capabilities of warehouse-scale computers.”
Structured vs unstructured data – it’s a common way of categorising things. But it’s not quite that simple. Although structured data is easy to grasp, the world of unstructured data and its transformation to more easily understandable, usable and analysable semi-structured data, is less simple.

In this article, we look at structured data, unstructured data, and how semi-structured data brings some order from potential chaos. And brings benefits to organisations that want to gain value from often very large stores of documents, images, sound files, video, social media posts, and so on.

Structured data has… structure

Business information is mostly generated by systems or people. Data from systems is most likely to be structured. In its traditional format, this is most typified by data in relational databases that use SQL (structured query language). In these, structure is everything. Columns that represent variables are set up in advance and populated by rows of data in which a value sits at the intersection of each.

It’s something we can all visualise. It’s like what we see in a spreadsheet – though whether spreadsheets are structured data is up for debate – but complex SQL database schemas involve the equivalent of numerous spreadsheets (tables, in database-speak) that relate (whence “relational”) to each other and can be filtered, joined and manipulated in many ways because they have common elements (keys).

Despite the prevalence of unstructured data and the rise of formats that are better described as semi-structured, structured databases are important and won’t go away soon. They are easy to use, by everything from large-scale enterprise applications to machine learning tools, but can be limited in how they are accessed and used and can be relatively onerous to maintain and to change once initially configured.
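A minimal sketch makes the relational picture concrete, using Python’s built-in sqlite3 module (the table and column names are invented for illustration). The schema is declared before any rows arrive – schema-on-write – and the shared key is what lets tables be filtered and joined:

```python
import sqlite3

# Schema is fixed up front, before any data arrives: schema-on-write.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
            "customer_id INTEGER REFERENCES customers(id), total REAL)")

con.execute("INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace')")
con.execute("INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 40.0), (12, 2, 15.0)")

# The common element (customer_id) is what makes the tables "relational":
rows = con.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)  # [('Ada', 65.0), ('Grace', 15.0)]
```

The flip side, noted above, is that changing this structure later (say, adding a column or splitting a table) means migrating the schema and every query that depends on it.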
The mass of unstructured data

Unstructured data is often generated by people – although not solely – and includes media such as images and sound recordings, social media posts, agent notes, websites and emails. Unstructured data holds to no predefined data model, and files and objects come in a wide range of sizes, from a few kilobytes for a social media post, for example, to potentially terabytes for uncompressed video footage.

Estimates often suggest that the vast bulk of data is unstructured – up to 80% or 90% of data held by organisations. If that is the case – and we can safely assume it often is – then this presents huge challenges for organisations. Unstructured data is, to a greater or lesser extent, undefined and opaque to search and classification. That means organisations may not know what is actually there, and that can be a security and compliance risk. At the same time, it means missing out on opportunities to interrogate that data to gain insights and value from it.

No such thing as unstructured data?

But in fact, it is arguable that no data is truly unstructured. The most unstructured data you can think of – image and sound files, for example – comes with metadata headers that provide high-level information on file contents that can be searched and questioned. And it is increasingly possible to examine the contents of such files using artificial intelligence/machine learning techniques to, for example, examine and categorise the contents of sound and video files. YouTube does this to ensure copyright on music is not contravened when you upload a video, for instance. So these types of data can be tagged with new metadata via algorithm-based interrogation, should an organisation wish to throw compute at it.

The semi-structured data revolution

At the same time, there is a growing trend towards more use of semi-structured ways of holding data. Some forms of semi-structured data have been around for some time, such as CSV and XML.
A bit later came JSON. All of these brought with them something like a key:value format for representing variables and values. Later came a wide range of ways of holding and analysing data that were not restricted by predefined structure. Broadly speaking, these can be lumped together as so-called NoSQL databases, but there are a number of types within that catch-all. They include wide-column stores like HBase (part of the Hadoop ecosystem) and Cassandra, document stores like MongoDB and CouchDB, key-value stores like Riak, as well as graph databases, object databases, and so on. The list gets pretty long.

But what links these is the lack of the predefined structure – schema-on-write – by which SQL is defined. So, with these non-SQL formats, potentially any data in any existing format, ie unstructured, can be provided with a structure – schema-on-read – as data is queried. It is even possible to include sound and video files – the ultimate in unstructured-ability – in things that get called databases, such as with MongoDB (although there are limitations).

The big advantage of being able to put unstructured data into some form of semi-structured format is that it enables a range of use cases to emerge, such as analytics to spot consumer behaviour, market trends and sentiment. Arguably, analytics on this kind of data gives deeper insight into users. An SQL database might hold name, date of birth, address, etc, but analysing unstructured data – by making it semi-structured – can get closer to what consumers think. It is also possible to put some structure on the unstructured and make use of it. A photograph of a delivered item would be unstructured data, but metadata from the image file could be combined with geo-tracking information from delivery vehicles in a business intelligence tool.
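The schema-on-read idea can be sketched in a few lines. This is an illustrative example only, not tied to any particular NoSQL product; the document fields are invented:

```python
import json

# Unstructured items (here, delivery photos and free-text notes) are stored as
# free-form JSON documents -- no columns are declared in advance.
documents = [
    json.dumps({"type": "photo", "gps": [53.48, -2.24], "taken": "2022-05-01"}),
    json.dumps({"type": "note", "text": "Customer prefers morning delivery"}),
    json.dumps({"type": "photo", "gps": [51.51, -0.13], "taken": "2022-05-02"}),
]

# Schema-on-read: structure is imposed only at query time, by whichever
# fields the query chooses to look for.
photos_with_location = [
    doc for doc in map(json.loads, documents)
    if doc.get("type") == "photo" and "gps" in doc
]
print(len(photos_with_location))  # 2
```

The same documents could be queried tomorrow against a completely different "schema" (say, all items with a `taken` date) without any migration, which is exactly the flexibility schema-on-write databases lack.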
Sweet potatoes offer great health benefits. They are a good source of B6 vitamins, which are brilliant at breaking down homocysteine, a substance that contributes to the hardening of blood vessels and arteries. For as sweet as they are, sweet potatoes have a low glycemic index: they release sugar slowly into the bloodstream. The Beauregard sweet potato, an orange-skinned variety grown in North Carolina, is similar to a white-skinned variety used in Japan to make a dietary supplement called Caiapo. Furthermore, some studies have suggested that beta-carotene may reduce the risk of breast cancer in premenopausal women and ovarian cancer in postmenopausal women. One sweet potato contains about half of the daily recommended intake of vitamin C, along with vitamins A and E. Moreover, sweet potatoes are a good source of dietary fiber, which helps the body maintain a healthy digestive tract and regulates digestion.
The term info stealer is self-explanatory. This type of malware resides in an infected computer and gathers data in order to send it to the attacker. Typical targets are credentials used in online banking services, social media sites, emails, or FTP accounts. Info stealers may use many methods of data acquisition. The most common are: The age of info stealers started with the release of ZeuS in 2006. It was an advanced Trojan targeting credentials of online banking services. After the code of ZeuS leaked, many derivatives of it started appearing and popularized this type of malware. In December 2008, a social media credential stealer, Koobface, was detected for the first time. It originally targeted users of popular networking websites like Facebook, Skype, Yahoo Messenger, MySpace, Twitter, and email clients such as Gmail, Yahoo Mail, and AOL Mail. Nowadays, most botnet agents have some features of info stealing, even if it is not their main goal. Info stealers are basically a type of Trojan, and they are carried by infection methods typical for Trojans and botnet agents, such as malicious attachments sent by spam campaigns, websites infected by exploit kits, and malvertising. Info stealers are usually associated with other types of malware such as: Early detection is crucial with this type of malware. Any delay in detecting this threat may result in important accounts being compromised. That's why it is very important to have good-quality anti-malware protection that will not let malware be installed. If users suspect their computer is infected by an info stealer, they should do a full scan of the system using automated anti-malware tools. Removing the malware is not enough; it is crucial to change all passwords immediately. Info stealers are dangerous for all users of an infected machine. The consequences are proportional to the importance of the stolen passwords.
Common dangers are: violated privacy, leakage of confidential information, having money stolen from an account, and being impersonated by the attacker. Stolen email accounts can be used to send spam, and a stolen SSH account can be used as a proxy for attacks performed by cybercriminals. Avoidance procedures are the same as for other types of Trojans and botnet agents. First of all, keep up good security habits: be careful about the websites you visit and don't open unknown attachments. However, in some cases this is not enough. Exploit kits can still install the malicious software on a vulnerable machine, even without any interaction. That's why it is important to have quality anti-malware software.
Cyber-attacks are becoming increasingly common, and with high-profile attacks they are being reported widely in the mainstream media. One of the most common threats is malware, and according to a number of studies, as many as 1 in 4 organisations have malware lurking on their network right now!

What is malware?

In simple terms, malware is any type of software written and deployed with the intent of causing harm to data, devices or individuals. The term malware includes many types of cyber threats, such as viruses, Trojans, spyware and ransomware. Each of these operates in a slightly different way, but they all have similar goals. Malware, depending on the type, has the ability to freeze entire systems, corrupt files or simply lurk in the background, gathering information about your online activities and stealing information such as passwords and credit card details.

How can the threat be mitigated?

Each form of malware has its own way of infecting and damaging computers and data, and so each one requires a different malware removal method. The implementation of basic cyber security practices and increased employee awareness will help to mitigate threats. However, for malware specifically, it is recommended to install a reputable anti-virus and anti-malware solution. Unfortunately, sometimes even these solutions aren't enough: due to the fast-changing threat landscape, there is no such thing as 100% secure. Eventura have a team of cyber security experts able to help businesses improve their cyber security practices. The team advocates a model of prevent, detect, respond and recover. How this model could work for you depends on your business and risk appetite. To discuss your current security solutions and how you can plan for the future, please do not hesitate to contact us.
IPv6 migration, the transition to a successor standard to IPv4, is an unavoidable response to IPv4 exhaustion. At the time of the internet’s creation, the IPv4 standard was introduced to allow a unique public IP address to be assigned to each internet-connected computer. Encompassing nearly 4.3 billion different values, IPv4 seemed to be an ample supply at the time, but by the late 1980s, it became apparent that this pool would be depleted sooner rather than later. With IPv4 exhaustion becoming a real and present problem for carriers and subscribers, the industry turned its focus to a long-term solution. IPv6, a successor to IPv4, was published by the IETF (Internet Engineering Task Force) as a draft standard in December 1998, went live in June 2012, and was ratified as an internet standard in July 2017. Lengthening the IP address from 32 bits to 128 bits, IPv6 alleviates the IPv4 exhaustion crisis for the conceivable future. Other IPv6 enhancements include improvements in efficiency, performance, and security. Since its adoption, IPv6 migration has been widespread but uneven. Major web content providers such as Google, Alexa, Facebook, Yahoo, YouTube, and others have achieved full IPv6 migration, while most mobile operators, ISPs, and mobile device manufacturers support both standards concurrently. At the same time, the high cost of changing existing network infrastructure has slowed enterprise IPv6 deployment. Due to the large number of websites, devices, and networks that remain primarily IPv4, most service providers, enterprises, and other organizations need to support connectivity for both IPv4 and IPv6 even if their own networks have achieved full IPv6 migration. Learn about various techniques for IPv6 Migration, IPv4 Preservation and IPv4/IPv6 Translation such as Carrier Grade NAT (CGN/CGNAT). 
The current hybrid environment encompassing both IPv4 and IPv6, as well as the necessarily staged nature of IPv6 migration, has led to the introduction of technologies to ease the transition and extend the life of existing IPv4 investments. Network address translation (NAT) and carrier-grade NAT (CGNAT) solutions enable a single IPv4 address to be shared across multiple connected devices or sites, thus enabling organizations to leverage their existing investment in IPv4 and to avoid purchasing additional, costly IPv4 addresses on the open market. Transition technologies enable translation between IPv4 and IPv6 addresses, or tunneling to allow traffic to pass through an incompatible network, allowing the two standards to coexist. These include NAT64, DNS64, MAP-T, MAP-E, DS-Lite, lw4o6, 6rd and 464XLAT. A10 Networks Thunder® Carrier Grade Networking (CGN) helps service providers, content providers, higher education institutions and enterprises achieve seamless IPv6 migration by supporting both IPv4 preservation and translation and tunneling between IPv4 and IPv6 networks. Simple, cost-efficient CGN with high availability and superior performance extends the life of existing IPv4 investment and provides the full range of IPv6 transition technologies. A10's IPv4 preservation and IPv4-to-IPv6 migration solution is specifically built for processor-intensive, high-volume networking tasks and can scale to hundreds of Gbps of throughput and hundreds of millions of concurrent sessions.
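To make one of these transition mechanisms concrete: NAT64/DNS64 synthesizes an IPv6 address by embedding the 32-bit IPv4 address in the well-known 64:ff9b::/96 prefix defined in RFC 6052. A minimal sketch using Python's standard ipaddress module (the helper function name is ours, not from any vendor's API):

```python
import ipaddress

# Address-space arithmetic behind IPv4 exhaustion: 32-bit vs 128-bit.
assert 2 ** 32 == 4_294_967_296   # ~4.3 billion IPv4 addresses
assert 2 ** 128 > 3.4e38          # IPv6: roughly 3.4 x 10^38 addresses

def synthesize_nat64(ipv4: str, prefix: str = "64:ff9b::/96") -> ipaddress.IPv6Address:
    """Embed an IPv4 address in a NAT64 prefix (RFC 6052 well-known prefix)."""
    net = ipaddress.ip_network(prefix)
    # The IPv4 address occupies the low 32 bits of the synthesized address.
    return ipaddress.IPv6Address(int(net.network_address) | int(ipaddress.IPv4Address(ipv4)))

print(synthesize_nat64("192.0.2.1"))  # 64:ff9b::c000:201
```

A DNS64 resolver performs exactly this synthesis when an IPv6-only client asks for a name that has only an A (IPv4) record, so the client can reach the IPv4 host through the NAT64 gateway.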
Passwords are a pain: they're both a massive security risk and an operational inconvenience for admins and users alike. But when we eliminate passwords, what do we replace them with? There's a lot of buzz around passwordless authentication, but it isn't clear what should replace passwords. The security industry often discusses passwordless authentication in broad strokes, without sharing exactly how to authenticate users securely and easily. If we're going to rethink the entire login process, it had better be with something that's more secure, convenient, and inexpensive. So what should we use instead of passwords? There are alternative authentication methods to choose from. Hypothetically, you could use any of these methods as a first step or second step to log in, yet each of these methods varies on the security and convenience scale. In the past, we just assumed that we were stuck with passwords and the risks that came with them. So we decided to add alternative authentication methods on top of the password, most commonly a mobile push or one-time passcode, in what's also known as multi-factor authentication (legacy MFA). These are added as a bandaid, a second step on top of a password. However, this doesn't get rid of the pains and security concerns of passwords. With legacy MFA, users still use passwords; they still need to create, remember, and change passwords. Since it's so easy to copy, guess, and steal other people's passwords, user accounts still need to be protected from unauthorized access. Instead, what if we went back to the drawing board to rethink how users log in, so that users never need to use a password at all? What's changed so that we can securely and easily use other authentication methods as a first step to log in?
- The proliferation of Trusted Platform Modules (TPMs) and secure enclaves
- The prevalence of built-in biometric readers in modern devices

Biometrics and other sensitive identifiers such as cryptographic keys have been around for a while, but we didn't have a safe place to put them. Biometrics are great because they're something you are, such as your fingerprint or your face. It's hard to steal someone's finger or face. It's also not something that you need to remember; it's something you innately are. Biometrics are also extremely sensitive because they're something you can't change, so it's crucial that they're stored securely. That's where TPMs and secure enclaves come in. TPMs and secure enclaves are cryptographic co-processors on a separate hardware chip on your device. Most modern devices, including mobile devices and computers, have TPMs or secure enclaves. Think of TPMs as a containerized magic box where sensitive data and applications can be run (see Nishank Vaish's description on why enclaves are taking over the security world on Infosecurity). Secure enclaves are also the reason why neither the government nor Apple can access your device without your permission. Granted, it is possible to steal a biometric – such as chopping off your finger – but that's much harder to do than to simply purchase or steal a password off the dark web, capturing a string of characters and numbers to log in. Fingerprint and face readers have also become more prevalent on modern computers and mobile devices, and consumers have become more familiar and comfortable with this technology. It's become second nature for consumers to use a biometric to log in to devices. In fact, most modern devices that don't support a biometric support a local PIN, which is much more secure than a password. This has primed users for passwordless elsewhere in their lives, creating the perfect breeding ground for people to log in securely, easily, and password free.
It’s time we use the momentum from TPMs and built-in biometrics to identify users and authenticate them into applications. It’s time for users to take back control of their digital identity. We should be able to login with “who you are” and “what you have” (your biometric and keys on your device), not “what you know” (a password).
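The "what you have" flow described above boils down to a challenge-response exchange. The sketch below is deliberately simplified: real passwordless systems such as WebAuthn/FIDO2 use an asymmetric key pair whose private half never leaves the TPM or secure enclave; the HMAC shared secret here merely stands in for that device-bound signing key to illustrate that no password ever crosses the wire.

```python
import hashlib
import hmac
import os
import secrets

# At enrollment, a key is created on the device (in practice inside the
# TPM/enclave, unlocked by a biometric) and registered with the server.
device_key = secrets.token_bytes(32)
server_registered_key = device_key

def server_issue_challenge() -> bytes:
    """Server sends a fresh random nonce for each login attempt."""
    return os.urandom(32)

def device_sign(challenge: bytes) -> bytes:
    """Device proves possession of the key by signing the challenge."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, signature: bytes) -> bool:
    expected = hmac.new(server_registered_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = server_issue_challenge()
assert server_verify(challenge, device_sign(challenge))
```

Because each challenge is a one-time random value, a captured response is useless for replay, unlike a stolen password, which works anywhere, forever, until changed.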
Does sugar directly feed cancers, boosting their growth? The answer seems to be 'Yes', at least in mice, according to a study led by researchers at Baylor College of Medicine and Weill Cornell Medicine. Their study, published in Science, showed that consuming a daily modest amount of high-fructose corn syrup – the equivalent of people drinking about 12 ounces of a sugar-sweetened beverage daily – accelerates the growth of intestinal tumors in mouse models of the disease, independently of obesity. The team also discovered the mechanism by which the consumption of sugary drinks can directly feed cancer growth, suggesting potential novel therapeutic strategies. "An increasing number of observational studies have raised awareness of the association between consuming sugary drinks, obesity and the risk of colorectal cancer," said co-corresponding author Dr. Jihye Yun, assistant professor of molecular and human genetics at Baylor. "The current thought is that sugar is harmful to our health mainly because consuming too much can lead to obesity. We know that obesity increases the risk of many types of cancer including colorectal cancer; however, we were uncertain whether a direct and causal link existed between sugar consumption and cancer. Therefore, I decided to address this important question when I was a postdoc in the Dr. Lewis Cantley lab at Weill Cornell Medicine." First, Yun and her colleagues generated a mouse model of early-stage colon cancer in which the APC gene is deleted. "APC is a gatekeeper in colorectal cancer. Deleting this protein is like removing the brakes of a car. Without it, normal intestinal cells neither stop growing nor die, forming early-stage tumors called polyps. More than 90 percent of colorectal cancer patients have this type of APC mutation," Yun said. Using this mouse model of the disease, the team tested the effect of consuming sugar-sweetened water on tumor development.
The sweetened water was 25 percent high-fructose corn syrup, which is the main sweetener of sugary drinks people consume. High-fructose corn syrup consists of glucose and fructose at a 45:55 ratio. When the researchers provided the sugary drink in the water bottle for the APC-model mice to drink at will, the mice rapidly gained weight in a month. To prevent the mice from becoming obese and to mimic humans' daily consumption of one can of soda, the researchers gave the mice a moderate amount of sugary water orally with a special syringe once a day. After two months, the APC-model mice receiving sugary water did not become obese, but developed tumors that were larger and of higher grade than those in model mice treated with regular water. "These results suggest that when the animals have early-stage tumors in the intestines – which can occur in many young adult humans by chance and without notice – consuming even modest amounts of high-fructose corn syrup in liquid form can boost tumor growth and progression independently of obesity," Yun said. "Further research is needed to translate these discoveries to people; however, our findings in animal models suggest that chronic consumption of sugary drinks can shorten the time it takes cancer to develop. In humans, it usually takes 20 to 30 years for colorectal cancer to grow from early-stage benign tumors to aggressive cancers." "This observation in animal models might explain why increased consumption of sweet drinks and other foods with high sugar content over the past 30 years is correlating with an increase in colorectal cancers in 25- to 50-year-olds in the United States," said Cantley, co-corresponding author, former mentor of Yun and professor of cancer biology in medicine and director of the Sandra and Edward Meyer Cancer Center at Weill Cornell Medicine. Dr. Lewis Cantley, Director of the Meyer Cancer Center at Weill Cornell Medicine, describes his new study being published in Science.
Credit: Weill Cornell Medicine The team then investigated the mechanism by which this sugar promoted tumor growth. They discovered that the APC-model mice receiving modest high-fructose corn syrup had high amounts of fructose in their colons. "We observed that sugary drinks increased the levels of fructose and glucose in the colon and blood, respectively, and that tumors could efficiently take up both fructose and glucose via different routes." Using cutting-edge technologies to trace the fate of glucose and fructose in tumor tissues, the team showed that fructose was first chemically changed, and this process then enabled it to efficiently promote the production of fatty acids, which ultimately contribute to tumor growth. "Most previous studies used either glucose or fructose alone to study the effect of sugar in animals or cell lines. We thought that this approach did not reflect how people actually consume sugary drinks, because neither drinks nor foods have only glucose or fructose. They have both glucose and fructose together in similar amounts," Yun said. "Our findings suggest that the role of fructose in tumors is to enhance glucose's role of directing fatty acid synthesis. The resulting abundance of fatty acids can be potentially used by cancer cells to form cellular membranes and signaling molecules, to grow or to influence inflammation." To determine whether fructose metabolism or increased fatty acid production was responsible for sugar-induced tumor growth, the researchers modified APC-model mice to lack genes coding for enzymes involved in either fructose metabolism or fatty acid synthesis. One group of APC-model mice lacked the enzyme KHK, which is involved in fructose metabolism, and another group lacked the enzyme FASN, which participates in fatty acid synthesis. They found that mice lacking either of these genes did not develop larger tumors, unlike APC-model mice, when fed the same modest amounts of high-fructose corn syrup.
"This study revealed the surprising result that colorectal cancers utilize high-fructose corn syrup, the major ingredient in most sugary sodas and many other processed foods, as a fuel to increase rates of tumor growth," Cantley said. "While many studies have correlated increased rates of colorectal cancer with diet, this study shows a direct molecular mechanism for the correlation between consumption of sugar and colorectal cancer." "Our findings also open new possibilities for treatment," Yun said. "Unlike glucose, fructose is not essential for the survival and growth of normal cells, which suggests that therapies targeting fructose metabolism are worth exploring. Alternatively, avoiding consuming sugary drinks as much as possible instead of relying on drugs would significantly reduce the availability of sugar in the colon." While further studies in humans are necessary, Yun and colleagues hope this research will help to raise public awareness about the potentially harmful consequences consuming sugary drinks has on human health and contribute to reducing the risk and mortality of colorectal cancer worldwide. More information: M.D. Goncalves et al., "High-fructose corn syrup enhances intestinal tumor growth in mice," Science (2019). science.sciencemag.org/cgi/doi … 1126/science.aat8515 Provided by Baylor College of Medicine
Establishing an impactful baseline energy reduction Today, many data centre owner-operators, particularly in the wholesale sector, are deploying primary power and cooling strategies that offer substantial energy savings, improvements in Power Usage Effectiveness (PUE), and in some cases, the ability to run on their own "micro-grid" in order to achieve a baseline energy reduction. According to the Natural Resources Defense Council (NRDC), an environmental action organisation, data centres nationwide used 91 billion kilowatt-hours of electrical energy in 2013, and are projected to use 139 billion kilowatt-hours by 2020. That 53% increase in just five short years will mean the equivalent annual output of 50 power plants, costing American businesses $13 billion annually in electricity bills and emitting nearly 100 million metric tons of carbon pollution per year. Photovoltaics (PV) and wind power are among the sustainable energy sources that are becoming more commonplace within data centres to effectively reduce their carbon footprint and save money on electricity consumption, in keeping with their green initiatives. In fact, PV and wind can be incorporated into a 380V high voltage DC primary power system that includes storage and the ability to deliver power directly to computer equipment. In addition to eliminating power conversion requirements, this also provides the benefit of increased reliability due to reduced components. Fuel cells can also offer a primary power alternative, utilising natural gas or clean landfill gas, to provide a reliable and highly sustainable alternative to the grid, and can even connect to the micro-grid.
Baseline reductions can also extend to free cooling and renewable energy, translating into immediate savings for facilities. Take Google, for example. After re-purposing a 60-year-old paper mill into a data centre in Finland, the American multinational tech giant reused the cooling infrastructure that drew water from a nearby bay to cool the facility's equipment. The rise of indirect cooling solutions, such as adiabatic cooling, enables data centre owner-operators to take advantage of environmental conditions without risk of airborne contamination when compared to direct air economisation. Adiabatic cooling is the process of reducing heat through a change in air pressure caused by volume expansion. In data centres, adiabatic processes have enabled free cooling methods, which use freely available natural phenomena to regulate temperature. Highly efficient, pumped refrigerant systems provide both quick deployment and immediate savings. These systems replace aging Direct Expansion (DX) systems and present an alternative to a traditional chilled water system. To get a better grasp of what baseline energy reductions can mean to a data centre's bottom line, let us consider a 500kW critical load and a $0.10/kWh cost. In this scenario, if a facility can reduce its PUE rating from 2.0, a $72,000 monthly cost, to just 1.8, the total utility bill falls to $64,800, a monthly savings of $7,200, or the equivalent of 100kW of freed-up capacity. Establishing an impactful baseline energy reduction makes good fiscal sense, is consistent with the green and sustainable initiatives of our nation's progressive data centres and their customers, and sets the stage for greater environmental responsibility going forward on behalf of the planet. On every level, it's the right thing to do.
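The arithmetic behind that example can be written out directly (assuming a 720-hour month, which is what the figures above imply):

```python
def monthly_utility_cost(it_load_kw: float, pue: float,
                         rate_per_kwh: float = 0.10,
                         hours_per_month: int = 720) -> float:
    """Total facility cost: IT load x PUE gives total draw; kWh x rate gives $."""
    return it_load_kw * pue * hours_per_month * rate_per_kwh

cost_at_2_0 = monthly_utility_cost(500, 2.0)  # $72,000 per month
cost_at_1_8 = monthly_utility_cost(500, 1.8)  # $64,800 per month
print(f"monthly savings: ${cost_at_2_0 - cost_at_1_8:,.0f}")  # monthly savings: $7,200
```

Because PUE multiplies the entire IT load, every 0.1 shaved off it returns the same absolute savings, which is why baseline reductions compound so well as facilities grow.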
What is IAM

As this in-depth article states, "Identity and Access Management (IAM) is a framework of business processes, policies, and technologies that facilitates the management of electronic or digital identities." Our previous post further defines IAM as: Access management gives authorized users access to the right services while preventing access to unauthorized users. It hinges on the concept of providing the right access at the right time. And it may encompass anyone or any system authorized to handle company data, both people and systems. IAM is for employees and third parties such as vendors, contractors, and customers. The following are some of the primary IAM tools, including access management software, with their benefits:

1. Single Sign-On (SSO) + IAM

Single sign-on is identity and access management software that gives an employee the ability to log in once using one set of credentials (username and password) and have access to all authorized systems and applications. SSO increases and simplifies security when properly used because users log in once and only use one set of credentials. SSO also slightly increases employee productivity. It has significant benefits for IT, greatly simplifying issuing and managing credentials and determining who has access to what data. One of the secondary benefits of SSO is that it eliminates password fatigue. And reducing password fatigue increases software adoption rates across the organization. SSO also creates opportunities for analytics that can help an organization demonstrate that it is handling identity and access management appropriately according to specified compliance requirements.

2. Multi-Factor Authentication (MFA) + IAM

MFA is a core component of a strong IAM policy. MFA requires users to provide two or more verification factors to access resources such as applications, online accounts, or VPNs.
MFA ensures users are who they say they are by requiring that they provide at least two pieces of evidence to prove their identity. Each piece of evidence must come from a different category—something they know, something they have, or something they are. MFA is particularly beneficial in reducing the possibility of unauthorized access if an account is compromised. Ease of use is a primary consideration for implementing MFA in an organization. The IAM policy should balance usability and security: making MFA overly restrictive also restricts productivity. Organizations should also plan for situations when employees may not have access to their mobile phones or have disabilities that limit access. Organizations implementing MFA should also take care to institute and foster a culture of compliance with MFA.

3. User Lifecycle Management (ULM) + IAM

ULM refers to a strategic solution that facilitates enterprise administration, replacing multiple online identities with a single, secure, trusted, and efficiently managed credential for each user—one user, one identity, and one infrastructure. To fulfill regulatory requirements, many enterprises implement a ULM strategy with a common infrastructure to launch, centrally configure, manage, and report on the various components of the ULM solution. Consider these ULM features. Smart provisioning systems can automatically assign the right level of access to each employee upon hire. Then, as employees change projects or roles, entitlements and permissions change automatically in real time, eliminating backlogs. A geo-fence can control the locations where users or systems can access sensitive information. ULM facilitates the decommissioning of user accounts when employees or contractors leave or no longer require access. Up to 40% of employees log into their accounts after their termination dates, so decommissioning these accounts, particularly privileged access accounts, is a critical security protocol.
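As a concrete note on MFA mechanics before moving on: the one-time passcodes mentioned above are typically generated with TOTP (RFC 6238), the scheme behind most authenticator apps. A minimal standard-library sketch of the SHA-1 variant (the secret used below is the RFC's published test key, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC over the number of 30-second steps since epoch."""
    key = base64.b32decode(secret_b32)
    counter = (int(time.time()) if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector (SHA-1, key "12345678901234567890"): T=59 -> 94287082
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))  # 94287082
```

Because both sides derive the code from a shared secret and the current time, the server and the user's phone stay in sync without any network exchange, which is what makes TOTP usable offline.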
How to Implement IAM

IAM can be implemented in-house, by a third party, or as a hybrid model. IAM tools are generally implemented within an organization in a particular order, depending on the organization’s IAM maturity level, which is determined by several factors. Gartner’s maturity levels are:

According to Gartner, an organization with no IAM technology in place that proceeds in a decentralized, ad hoc manner is at the initial maturity level. An organization with IAM architecture embedded within its enterprise architecture and optimized IAM governance is at the operational excellence (optimized) maturity level. Most organizations fall somewhere in between.

Unlike other IT systems that require little discovery (e.g., upgrading other types of software or firewalls), IAM is not a standalone application: all other company applications and systems must connect to IAM for it to work as designed and properly secure an organization’s data and access. Thus, the first step for any organization implementing IAM, upgrading it, or moving from an on-premises solution to the cloud is to conduct an IAM audit to determine maturity level and document processes, architecture, and infrastructure design.

The importance of this step cannot be overstated. During one such audit, SecurIT discovered unnecessary provisioning of Office 365 licenses to the tune of over $1M in savings for the organization. On the flip side, large organizations that skip the audit and address issues as they come up during implementation can find timelines drawn out by years and costs expanded by millions. Conducting an audit, assessment, and discovery, preferably by a third party, will ensure an organization’s IAM roadmap is accurate from both a budget and a timeline perspective.
What Is SNMP in Computer Network Science?

In the last decade, we have witnessed an intense spread of computer networks, further accelerated by the introduction of wireless networks. At the same time, this growth has significantly amplified network management issues. In small organizations with no specialized personnel assigned to these tasks, managing such networks is often complex, and breakdowns can have significant impacts on the business.

What Is SNMP in Computer Network Science and How Does It Work?

A possible solution for network management is the adoption of the Simple Network Management Protocol (SNMP). But what is SNMP in computer network science? SNMP is a standard protocol used to exchange network management information. It is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP gives network administrators a tool to manage network performance, find and solve network issues, and plan for network growth.

But just how “simple” is the Simple Network Management Protocol? Since SNMP is a set of protocols for network monitoring and management, it is supported by network devices such as routers, switches, servers, workstations, printers, and other network components and devices. These are all network-attached items that must be monitored to detect conditions that need to be addressed for proper, ongoing network administration. According to Techopedia, SNMP standards include an application layer protocol, a set of data objects, and a methodology for storing, manipulating, and using data objects in a database schema.

Defining SNMP Components

The biggest value of SNMP is realized in larger networks.
With SNMP, a network administrator can manage and monitor all SNMP devices from a single interface. These are the main runtime components in an SNMP-enabled environment:

- The SNMP Agent – This software runs on the hardware or service being monitored by SNMP, collecting data on various metrics like CPU usage, bandwidth usage, or disk space. When requested by the SNMP manager, the agent finds and sends this information back to the SNMP management system.
- Network Devices and Resources – This component represents all the devices and network elements on which an agent runs.
- The SNMP Manager (also known as the SNMP server) – This component functions as a centralized management station running an SNMP management application on any of several operating system environments. The SNMP manager requests agents to send SNMP updates on a regular basis.
- The Management Information Base (MIB) – SNMP agents collect and maintain network device information, which is stored in the MIB database and used to supply the response to a manager request. This data structure is a text file (with a .mib file extension) that describes all data objects used by a particular device that can be queried or controlled using SNMP, including access control.

It’s important to mention that SNMP is among the most widely deployed networking industry protocols and is supported on a variety of hardware—from common network elements like routers, switches, and wireless access points to endpoints such as printers, scanners, and Internet of Things (IoT) devices. In addition to hardware, SNMP can be used to monitor Dynamic Host Configuration Protocol (DHCP) configuration services.
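The manager/agent/MIB relationship can be illustrated with a small sketch. This is a toy model only: a real agent speaks the SNMP wire protocol over UDP (typically port 161) and would use a library. The OIDs shown are standard SNMPv2-MIB identifiers, but the device values are made up.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy SNMP agent: serves values for OIDs from an in-memory MIB."""
    mib: dict = field(default_factory=dict)

    def get(self, oid: str):
        """Answer a manager's GetRequest for a single OID."""
        return self.mib.get(oid)

    def get_next(self, oid: str):
        """Answer a GetNextRequest: the next OID after the given one.
        (String comparison approximates OID order for this toy; real
        agents compare numeric sub-identifiers component by component.)"""
        for candidate in sorted(self.mib):
            if candidate > oid:
                return candidate, self.mib[candidate]
        return None

agent = Agent(mib={
    "1.3.6.1.2.1.1.1.0": "Example router, firmware 1.2",  # sysDescr.0
    "1.3.6.1.2.1.1.3.0": 123456,                          # sysUpTime.0
})
print(agent.get("1.3.6.1.2.1.1.1.0"))  # prints: Example router, firmware 1.2
```

A manager walking a MIB table simply issues repeated get_next calls, which is how tools like snmpwalk enumerate everything a device exposes.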
SNMP Protocol Data Units

When discussing what SNMP is in computer network science, we need to take into consideration Protocol Data Units (PDUs). When commands or messages are sent between an SNMP manager and an SNMP agent, they are transported via User Datagram Protocol (UDP) or Transmission Control Protocol/Internet Protocol (TCP/IP) and are known as protocol data units. The SNMP protocol data units are as follows:

#1. Get Request
This request is sent by the SNMP manager to the managed device. By performing this command, you can retrieve one or more values from the managed device.

#2. Get Next Request
This request retrieves the value of the next Object Identifier (OID) in the MIB tree. An object identifier is used to name and point to an object in the MIB hierarchy. Each network device has its own MIB (including system status, availability, and performance information). Each piece of this information is known as an object and is identified by a specific OID.

#3. Get Bulk Request
In general, this operation is used for retrieving a large amount of data, particularly from large MIB tables.

#4. Set Request
The SNMP SET operation is used by managers to modify or assign the value of the managed device.

#5. Trap
TRAPS are alert messages sent to the SNMP manager by the agent when an event occurs.

#6. Inform Request
This feature allows SNMP agents to send inform requests to SNMP managers. While this sounds similar to SNMP TRAPS, there is no way of knowing whether an SNMP TRAP has been received by the SNMP manager. Inform requests, in contrast, are sent repeatedly until an acknowledgment of receipt is returned by the SNMP manager.
#7. Response
This request is used to carry back the values or signal of actions directed by the SNMP manager.

Wrapping It Up

The purpose of SNMP is to provide a common language for devices to exchange information among network information systems. It is a simple and flexible network protocol that allows network admins to efficiently manage the organization’s network. At the moment, there is no other monitoring protocol standard like SNMP: almost all network devices and data center equipment support it, and as a common standard it has to be supported by any monitoring system today.

I hope that through this quick guide I provided you with everything you needed to know about what SNMP is in computer network science. What are your thoughts on SNMP? Do you think SNMP is still relevant? Let me know in the comments below!
As organizations evolve from private, on-premises communications to unified or cloud-based communications and collaboration, security becomes a critical component of any solution. So many factors are reshaping enterprise communication networks today, from virtualization to mobilization and from SIP integration to video collaboration, that administrators need to consider their encryption requirements for communications.

Communication over the network poses the risk of interception, whether by persons with unauthorized access to the local network or by parties intercepting traffic across the internet to a cloud-based solution. This risk can be reduced by securing the communication data using digital encryption with certificates.

Encryption is used to protect information from being stolen or copied. However, encryption by itself is insufficient. Suppose you have some private information that you want to send to a trusted recipient such as a cloud-based service provider. If you mistakenly send that information to the wrong people, or encrypt it in a way that thieves can read, then you have not protected the information at all. The certificate system provides the basis for trusting that you are sending the data to the intended people and that you have encrypted it in a way that only the intended recipients can read.

Understanding TLS for Network TLC

Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols designed to provide communications security over computer networks. TLS is the recommended security mechanism for the Session Initiation Protocol (SIP): HTTP/HTTPS in SIP environments is based on TLS, and SIP client implementations natively support TLS.

PKI Hierarchy and TLS

The TLS handshake has two specific functions: authentication and verification, and data encryption. The PKI system is built on the use of public key encryption.
With public key encryption, there are two keys. One of those keys is made public, and the other is kept private, known only to the owner of the key. In one typical use of PKI (Public Key Infrastructure), a message is encrypted using the public key; then only the owner of the private key can decrypt it. In an alternative use, a special message is encrypted with the private key and can be decrypted with the public key. In this use, the message is typically a signature, and the arrangement allows only the owner of the private key to create the signature, while anybody can verify that the signature is correct because the public key is available to everybody. The mathematical relationship between the public key and the private key is sufficiently complex that it would be extremely difficult to compute the private key from the known public key.

The digital certificate contains information about whom the certificate was issued to, as well as the certifying authority that issued it. Additionally, some certifying authorities may themselves be certified by a hierarchy of one or more certifying authorities (intermediate CAs), and this information is also part of the certificate chain.

In TLS, servers are configured with an identity certificate issued by a certificate authority. When a client connects to a server, the server presents its identity certificate for the client to validate. The client checks (among other items) whether the server identity certificate was issued by a certificate authority that the client trusts. If everything checks out, the client proceeds and a secure connection is established. A trusted public body, such as a Certificate Authority (CA), will sign certificates for other servers and applications to use. When you trust a certificate, you are implicitly trusting the CA: if verification of the certificate during the handshake succeeds, you can trust that the certificate was signed by a CA you have trusted.
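The chain-walking logic a client performs can be sketched as follows. This is a simplified illustration: real TLS validation also verifies cryptographic signatures, key usage, hostname matching, and revocation status. The certificate names and trust store below are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Cert:
    subject: str    # whom the certificate was issued to
    issuer: str     # the CA that signed it
    not_after: date # expiration date

TRUSTED_ROOTS = {"Example Root CA"}  # hypothetical client trust store

def validate_chain(chain, today):
    """Walk a server certificate's chain up toward a trusted root.
    chain[0] is the server (leaf) cert; each cert must be issued by
    the subject of the next cert in the list."""
    for cert, issuer_cert in zip(chain, chain[1:]):
        if cert.issuer != issuer_cert.subject:
            return False                       # broken link in the chain
    if any(cert.not_after < today for cert in chain):
        return False                           # any expired cert fails
    return chain[-1].issuer in TRUSTED_ROOTS   # must anchor in trust store

chain = [
    Cert("www.example.com", "Example Intermediate CA", date(2026, 1, 1)),
    Cert("Example Intermediate CA", "Example Root CA", date(2027, 1, 1)),
]
print(validate_chain(chain, date(2025, 6, 1)))  # prints: True
```

The sketch captures why an intermediate CA matters: the client never needs the server's cert to be signed directly by a root, only for the chain to terminate at one it already trusts.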
What you can trust, and how trustworthy a CA is, should be decided outside of the certificate system. In most cases, CAs have their certification policies posted on their websites. Many CAs do some checking of the identity of the server (the name of the certificate owner, as shown on the certificate). However, most do not check whether the server runs an honest business, and in cases where the CA does extra checks, its certificate signing process will cost more. The final decision on what your organization trusts is yours.

Digital signatures are composed of two different algorithms: the hashing algorithm (SHA-1, for example) and the signing algorithm (RSA, for example). Over time these algorithms, or the parameters they use, need to be updated to improve security. As computational power increases, hashing algorithms become susceptible to hashing collisions. MD5 was used for a number of years until it was found to have a security flaw in 2004, which set the stage for SHA-1. As a result, Certificate Authorities have aimed to comply with NIST (National Institute of Standards and Technology) recommendations by ensuring all new RSA certificates have keys 2048 bits in length or longer.

As with other forms of identification, certificate technology has progressed over the years, and current standards mandate using newer algorithms in this ever-evolving industry. The CA/Browser Forum, an industry body made up of Certificate Authorities (CAs), web browsers and operating systems, recently passed Ballot 193 to reduce the maximum validity period for SSL/TLS certificates to two years (825 days). Prior to this, the standard validity was three years for Domain Validated (DV) and Organization Validated (OV) certificates; Extended Validation (EV) certificates have always been capped at two years. The change went into effect March 1, 2018, and affects all CAs and all types of SSL/TLS certificates.
Longer certificate validity periods can delay widespread compliance with new guidelines, since changes wouldn’t go fully into effect until all existing certificates (issued before the update) expired. Decreasing the maximum lifetime of certificates from three years to two years helps reduce the presence of older, outdated and possibly vulnerable certificates that were issued before new guidelines were put in place. It also means that the process of replacing and updating certificates will happen more often, raising operating costs because of the time needed to update servers and applications with new certificates.

Because of everything that needs to be kept track of on a certificate, administrators should track where certificates are, what purpose they serve, and what relationships they might have. The documents used in most organizations are forever being updated with additions and changes. They must be treated as living documents in order to capture all the latest information, which means there must be policies in place to keep them that way. This is an excellent reminder of the role certificate management and inventory tools can play in simplifying administration. Most CAs offer these types of services, which help centralize certificate activity so you can monitor where you have certificates and when they need to be renewed. Items that need to be documented include:

• Relationship maps showing where communication and handshakes take place
• Certificate FQDNs, with split-DNS considerations
• Certificate expirations and projected replacement timelines
• Proactive alerts/alarms for certificate expirations

Just as your network evolves, attacks are evolving in their own sophistication and their ability to evade detection. Whether it’s by intercepting traffic on the internet or causing a break in security, cyberattacks are becoming more targeted and have greater financial consequences for your organization.
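A minimal sketch of the proactive expiration tracking described above; the inventory contents and the 60-day alert window are hypothetical choices, and a real tool would pull expiry dates from the certificates themselves.

```python
from datetime import date, timedelta

# Hypothetical certificate inventory: FQDN -> expiration date.
inventory = {
    "sip.example.com": date(2025, 3, 1),
    "mail.example.com": date(2026, 9, 15),
}

def expiring_soon(inventory, today, window_days=60):
    """Return FQDNs whose certificates expire within the alert window
    (already-expired certificates are flagged too)."""
    cutoff = today + timedelta(days=window_days)
    return sorted(fqdn for fqdn, exp in inventory.items() if exp <= cutoff)

print(expiring_soon(inventory, date(2025, 1, 15)))  # prints: ['sip.example.com']
```

Running a check like this on a schedule is the essence of the proactive alert/alarm item in the documentation list above.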
Encryption and digital certificates are important precautions that every organization should consider to help fight cyberattacks and stay safe.
Microsoft Developing State-of-the-Art Algorithm to Accelerate Path for Quantum Computers to Address Climate Change

(Microsoft) Matthias Troyer, Distinguished Scientist, has penned this recent must-read Microsoft Research blog. Troyer explains that a question emerges that is both scientific and philosophical in nature: once a quantum computer scales to handle problems that classical computers cannot, what problems should we solve on it?

Quantum researchers at Microsoft are not only thinking about this question—we are producing tangible results that will shape how large-scale quantum computer applications will accomplish these tasks. We have begun creating quantum computer applications in chemistry, and they could help to address one of the world’s biggest challenges to date: climate change. Microsoft has prioritized making an impact on this global issue, and Microsoft Quantum researchers have teamed up with researchers at ETH Zurich to develop a new quantum algorithm to simulate catalytic processes. In the context of climate change, one goal will be to find an efficient catalyst for carbon fixation—a process that reduces carbon dioxide by turning it into valuable chemicals.

One of our key findings is that the resource requirements to implement our algorithm on a fault-tolerant quantum computer are more than 10 times lower than those of recent state-of-the-art algorithms. These improvements significantly decrease the time it will take a quantum computer to do extremely challenging computations in this area of chemistry. The research presented in this post is evidence that rapid advances in quantum computing are happening now—our algorithm is 10,000 times faster than the one we created just three years ago.
By gaining more insight into how quantum computers can improve computational catalysis, including ways that will help to address climate change while creating other benefits, we hope to spur new ideas and developments on the road to creating some of the first applications for large-scale quantum computers of the future.
The Cyber Kill Chain offers a comprehensive framework as part of the Intelligence Driven Defense model. In this article, we will discuss what the Cyber Kill Chain is and what its steps are.

Cyber intrusions are the worst nightmare of many of us, which is why many cybersecurity professionals and developers offer unique solutions for identifying and preventing cyber intrusion activity. Being one of those developers, Lockheed Martin has brought the Cyber Kill Chain into our lives. In this article, we will explain the Cyber Kill Chain in detail and provide a comprehensive seven-step guide. Keep reading to learn!

The term “kill chain” was first used as a military concept that defines the structure of an attack that covers:

The idea of interrupting the opponent’s kill chain is often employed as a defense. Inspired by the kill chain concept, Lockheed Martin (an aerospace, security, arms, defense and advanced technologies company based in the United States of America) created the Cyber Kill Chain: a cybersecurity framework that offers a method for dealing with intrusions on a computer network.

Since it first emerged, the Cyber Kill Chain has evolved significantly to better anticipate and recognize insider threats and to detect other attack techniques such as advanced ransomware and social engineering. The Cyber Kill Chain consists of seven steps that aim to offer better attack visibility while helping the cybersecurity analyst understand the adversary’s tactics, techniques and procedures. The seven steps illustrate the different phases of a cyberattack, starting from reconnaissance and reaching exfiltration.

The Cyber Kill Chain consists of seven steps: reconnaissance, weaponization, delivery, exploitation, installation, command and control, and finally, actions on objectives. Below you can find detailed information on each.

1. Reconnaissance: In this step, the attacker/intruder chooses their target, then conducts in-depth research on it to identify vulnerabilities that can be exploited.

2. Weaponization: In this step, the intruder creates a malware weapon such as a virus or worm to exploit the vulnerabilities of the target. Depending on the target and the purpose of the attacker, this malware can exploit new, undetected vulnerabilities (also known as zero-day exploits) or focus on a combination of different vulnerabilities.

3. Delivery: This step involves transmitting the weapon to the target. The attacker can employ different methods for this purpose, such as USB drives, e-mail attachments and websites.

4. Exploitation: In this step, the malware starts the action. The program code of the malware is triggered to exploit the target’s vulnerability or vulnerabilities.

5. Installation: In this step, the malware installs an access point for the attacker, also known as a backdoor.

6. Command and Control: The malware gives the attacker access to the network/system.

7. Actions on Objectives: Once the attacker gains persistent access, they finally take action to fulfill their purpose, such as encryption for ransom, data exfiltration or even data destruction.
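The seven phases above form an ordered sequence, which makes the defender's goal of breaking the chain as early as possible easy to express in code. The phase-to-control mapping below is a hypothetical illustration for the sketch, not part of Lockheed Martin's framework.

```python
from enum import IntEnum

class KillChainPhase(IntEnum):
    """The seven Cyber Kill Chain phases, in attack order."""
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7

# Hypothetical examples of one defensive control per phase; real programs
# layer many controls at each phase.
EXAMPLE_DEFENSES = {
    KillChainPhase.RECONNAISSANCE: "limit public footprint",
    KillChainPhase.DELIVERY: "mail filtering / web proxy",
    KillChainPhase.EXPLOITATION: "patching and hardening",
    KillChainPhase.COMMAND_AND_CONTROL: "egress filtering",
}

def earliest_break(detected_phases):
    """Given the phases where an attack was detected, return the earliest
    one: the defender wants to interrupt the chain as soon as possible."""
    return min(detected_phases) if detected_phases else None

detections = [KillChainPhase.EXPLOITATION, KillChainPhase.DELIVERY]
print(earliest_break(detections).name)  # prints: DELIVERY
```

Because the phases are ordered integers, comparing them directly captures the framework's core claim that stopping delivery is strictly better than stopping command and control.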
It’s a promise that has been made by three previous presidents.

Today Donald Trump will sign Space Policy Directive 1, an order to send humans back to the moon and beyond. A draft copy of the order seen by Quartz declares that “the United States will lead the return of humans to the Moon for long-term exploration and utilization, followed by human missions to Mars and other destinations.”

It’s a bold promise, timed to the 45th anniversary of Apollo 17, the final human mission to the moon. It’s also a promise that has been made by three previous presidents, each of whom was defeated by the political and financial challenges of deep space exploration. Trump isn’t having an easy time so far: his nominee to run NASA, Jim Bridenstine, faces opposition from lawmakers. And real questions about the US return to the moon will be answered when NASA’s next budget is written, not today.

The US space agency has not designed a moon landing vehicle or other infrastructure for taking astronauts to the moon, and it will struggle to perform a moon landing during Trump’s term in office. (NASA’s current deep space exploration plan includes a new heavy rocket called the Space Launch System and a space capsule, called Orion, which will fly astronauts around the moon in 2019; it is also considering building a new space station in lunar orbit as a kind of stepping-stone.)

One advantage Trump has over his predecessors is an array of private companies investing in space exploration beyond low-earth orbit. NASA is already working closely with lunar exploration companies like Moon Express, which received regulatory permission for a moon mission last year, and Astrobotic, a Carnegie Mellon spin-off that says it has a $1 billion manifest (pdf) to deliver to the lunar surface.
“A permanent presence on the moon and American boots on the surface of Mars are not impossible,” Gwynne Shotwell, SpaceX’s president, said in October; the company’s founder, Elon Musk, has said his next rocket will be designed around visiting the moon as well as his beloved Mars. “It’s time for America to return to the Moon—this time to stay,” Blue Origin executive Brett Alexander said in September, describing a lunar lander being developed by Jeff Bezos’ space company and promising additional investment if NASA was willing to partner with the firm. Meanwhile, Boeing’s CEO has promised that the first astronauts to visit Mars will get there on one of his company’s rockets.

But why are so many interested in getting back to the moon, anyway?

Water and money

One irony about the Apollo astronauts is that they missed what newer robotic explorers didn’t: there is likely water, and perhaps quite a bit of it, on the moon. The presence of water could enable new activities: cheaper long-term space habitation, thanks to the ability to grow food and create oxygen from water, and cheaper rocket propellant, if engineers can produce hydrogen and oxygen in space rather than bringing them up from earth. This could in turn bring futuristic business plans, like space tourism, asteroid mining, and orbital manufacturing, within reach of entrepreneurs. And there may be other useful chemicals to be extracted from the moon, like helium-3.

George Sowers, who leads the space resources program at the Colorado School of Mines, compares water on the moon to oil in the Persian Gulf, suggesting that there will soon be an international scramble for claims on the moon. Which brings us to a second motivator: China’s ambitious space program has announced that it wants to land humans on the moon by 2036, and the European Space Agency has long argued in favor of a lunar village exploration concept.
The US government doesn’t want to find itself left out of a return to the moon, especially because American companies are likely to be among the first to stretch the current legal framework for space to its breaking point. International space treaties, written in the early days of space exploration, leave much to interpretation and don’t account for commerce in space. Facts on the ground—or the lunar regolith—will matter in future debates over how people cooperate in space. The US military is already talking up its new approach to space as a warfighting environment.

Certainly, space entrepreneurs aren’t hesitant to invoke international conflict. Robert Bigelow, who wants to build hotels on the moon, shared this slideshow during a recent conference to encourage the US to take action:

Exploration and science

There are plenty of people in the space policy world who think that humans should set their sights directly on Mars and not waste time with a return to the moon. Yet lunar missions could enable, rather than hinder, more ambitious journeys into space. Returning to the moon could help researchers understand the health challenges faced by people who spend a long time in space. If ideas about water on the moon prove true, manufacturing propellant there could enable cheaper missions to Mars. Building out scientific infrastructure on the moon could create new opportunities for astronomers to get a clearer picture of the universe and for planetary scientists to learn about the history of the earth. There’s still much to learn about the earth’s most important satellite.
February is Black History Month. To raise awareness and pay tribute, we want to educate and recognize the history and foundations of the rich cultural heritage, triumphs, and adversities that play a large part in our country’s history and the history of fire protection.

The origins of Black History Month date back to the 1920s, when “the father of Black History,” Carter G. Woodson, set out on a mission to designate a time to educate people about Black history and culture through a weeklong celebration coordinated with public schools to teach students. He chose the second week of February because it coincides with the birthdays of Frederick Douglass and Abraham Lincoln, both of whom were prominent and influential leaders during emancipation and the abolitionist movement. By the late 1960s, this weeklong celebration evolved into what we now know as Black History Month. In 1976, fifty years after the first Black history celebrations, Black History Month was officially recognized on a national platform during the nation’s bicentennial celebration, when President Gerald Ford called upon Americans to take the opportunity to honor and learn about the “too-often neglected accomplishments of Black Americans in every area of endeavor throughout our history.”

African Americans and Fire Service

Today in the United States, there are more than 90,000 African American volunteer and career firefighters, equating to 8.4% of the nation’s total number of firefighters. That number has more than tripled in 10 years. The oldest documents that identify African American firefighters date to New Orleans in 1817. In July of 1817, a devastating fire destroyed many acres of land, and in response the governing body in New Orleans organized men of color and slaves to prevent another fire of this magnitude.
One year later, in New York City, African American Molly Williams became the first woman firefighter, serving as a volunteer with the Oceanus Volunteer Fire Company No. 1.

It is important to recognize the accomplishments of African Americans in the fire service industry. Some of the milestones this community has achieved are:

- 1871: Patrick H. Raymond of Cambridge, MA, is appointed the first African American fire chief in the United States.
- 1966: Robert O. Lowery becomes the first African American Fire Commissioner of New York City. He serves in this role until September 1973.
- 1970: In Hartford, CT, the International Association of Black Professional Fire Fighters is formed.
- 1976: Toni McIntosh becomes the first African American woman to serve as a career firefighter, joining the Pittsburgh Bureau of Fire.
- 1984: Cecelia O. Salters becomes the first woman to be assigned to a New York City truck company.
- 1988: Black Women in the Fire Service is established as a subcommittee of the International Association of Black Professional Fire Fighters to address issues affecting African American firefighters. It becomes a standalone committee in 1996.
- 1994: Carrye B. Brown becomes the first African American United States Fire Administrator.
- 2002: Chief Rosemary Cloud becomes the first African American woman appointed fire chief of a career fire department, serving the East Point Fire Department in Georgia.

Black History Month Theme

Each year, the Association for the Study of African American Life and History, founded in 1916 by Carter Woodson, recognizes a theme for Black History Month. Woodson recognized the importance of having a theme to focus the public’s attention. The themes reflect changes in how African Americans have viewed themselves, the influence of social movements on racial ideologies, and the aspirations of the Black community.
Some themes of years past include Negro Labor (1940), Negro History in the Home, School, and the Community (1967), Black Business (1998), and The History of Black Economic Empowerment (2010). This year's theme is Black Health and Wellness, and it is extremely timely given that we are entering our third year of the pandemic. Minority communities have suffered throughout the pandemic, and their less favorable health outcomes have become more prominent given COVID-19. This theme honors medical and health care professionals, as well as first responders, who have been placed under a unique burden throughout the entire length of the pandemic. Black History Month is a time to recognize, remember, and celebrate the African American community and the ancestors who began paving the way hundreds of years ago. There are many activities you can do to honor this history and further educate yourself.
India's QpiAI Tech Inventing AI System Generating Processor to Overcome Shortcoming of Current Quantum Computers (AnalyticsIndia) India's QpiAI Tech is inventing the ASGP (AI System Generating Processor) to overcome the shortcomings of current quantum computers. "Hybrid classical-quantum computers are the solution," shares Nagendra Nagaraja. QpiAI relies on CMOS-based quantum dots to fabricate hybrid chips that work at cryogenic temperatures and use current semiconductor processes. "Our aim is to put 1 million qubits on a chip to enable the solving of AI/ML model generation problems," says Nagaraja. Nagaraja believes that quantum computers in the form of their invention, the ASGP, would revolutionize computing. The ambitious startup has been approached by many VCs for funding and currently has good cash flow from its AI modelling platforms. As Nagaraja shares, funding will be used mainly to scale the business and design a commercial-grade quantum chip.
Researchers have developed an artificial tissue in which human blood stem cells remain functional for a prolonged period of time. Scientists from the University of Basel, University Hospital Basel, and ETH Zurich have reported their findings in the scientific journal PNAS. In the bone marrow, several billion blood cells are formed each day. This constant supply is ensured by blood stem cells located in special niches within the marrow. These stem cells can multiply and mature into red and white blood cells, which then leave the bone marrow and enter the bloodstream. For several years, researchers have been trying to reproduce natural bone marrow in the laboratory in order to better understand the mechanisms of blood formation and to develop new therapies, such as the treatment of leukemia. However, this has proven to be extremely difficult because in conventional in vitro models, the blood stem cells lose their ability to multiply and to differentiate into different types of blood cells. Now, researchers have engineered an artificial bone marrow niche in which the stem and progenitor cells are able to multiply for a period of several days. These findings were reported by researchers working under Professor Ivan Martin from the Department of Biomedicine at the University of Basel and University Hospital Basel and Professor Timm Schroeder from ETH Zurich’s Department of Biosystems Science and Engineering. The researchers have developed an artificial tissue that mimics some of the complex biological properties of natural bone marrow niches. To do this, they combined human mesenchymal stromal cells with a porous, bone-like 3-D scaffold made of a ceramic material in what is known as a perfusion bioreactor, which was used to combine biological and synthetic materials. This gave rise to a structure covered with a stromal extracellular matrix embedding blood cells. 
In this respect, the artificial tissue had a very similar molecular structure to natural bone marrow niches, creating an environment in which the functionality of hematopoietic stem and progenitor cells could be maintained. The new technique could also be used to produce tailor-made bone marrow niches that have specific molecular properties and that allow the selective incorporation or removal of individual proteins. This opens up a whole host of possibilities, including researching factors that influence blood formation in humans, and drug screening with a view to predicting how individual patients will respond to a certain treatment. "We could use bone and bone marrow cells from patients to create an in vitro model of blood diseases such as leukemia, for example. Importantly, we could do this in an environment that consists exclusively of human cells and which incorporates conditions tailored to the specific individual," the authors write.

More information: Paul E. Bourgine et al, In vitro biomimetic engineering of a human hematopoietic niche with functional properties, Proceedings of the National Academy of Sciences (2018). DOI: 10.1073/pnas.1805440115

Journal reference: Proceedings of the National Academy of Sciences

Provided by: University of Basel
ARLINGTON, VA, February 22, 2022 — The U.S. Patent and Trademark Office (USPTO) issued two patents on January 18, 2022, (patents #US11,224,345B1 and US11,224,346B1), to Ideal Innovations, Inc. (I-3), each called Personal Warning Temperature (PWT) system. PWT uses daily monitoring to compute a person’s unique range of normal body temperature and flag abnormalities that indicate infections such as COVID-19. PWT is more accurate than the U.S. Government fever standard of 100.4°F (38°C). The PWT system uses a website to help users monitor their own temperature measurements, typically taken at home. With each new entry, the website determines whether the temperature is above normal. A small, abnormal rise in temperature can warn of COVID-19 up to 24 hours before other symptoms appear, and before conventional testing would confirm the infection. “PWT gives people the earliest possible warning that they might be infected with COVID-19 or any other illness for that matter,” stated Bob Kocher, CEO of I-3. “Temperature shows when the body is fighting an infection. The U.S. Government standard ‘fever temperature’ of 100.4°F, based on data from 150 years ago, is too high for today’s population. Most people can have anomalous temperatures as low as 99°F. Quite simply, PWT is a game-changer in the long-term fight against COVID-19.” Modern medicine defines a fever as a temperature elevation above normal, but each person’s normal temperature range depends on many factors. Elevated temperature can predict the onset of COVID-19 infection if defined in this new, more accurate way. PWT simply helps individuals determine their own personal fever threshold. U.S. regulations imply that persons with a temperature less than 100.4°F cannot transmit disease. Organizations use this flawed threshold for entry point screening or quarantine/testing policies. Studies show that 100.4°F is a faulty standard and that transmission can occur at lower temperatures. 
“PWT is a long-term approach that can be deployed immediately and cheaply,” said Kocher. “As a screening tool, PWT will reduce the need for costly PCR and antigen tests. The repeal of 100.4°F as the one-size-fits-all fever standard temperature will save lives and hasten the end of the COVID-19 pandemic. PWT is ready to be deployed to hundreds of millions of people.” I-3 is an inventions company working primarily with the Department of Defense. Mr. Kocher is a West Point graduate who served 21 years in the military with six years at the Defense Advanced Research Projects Agency (DARPA), where he gained a reputation as a rapid solution innovator. Other I-3 efforts include secure access systems, early identification of a person’s natural ability through neuroscience, and novel implementations for biometrics. For any inquiries regarding PWT, please contact Ideal Innovations, Inc. at [email protected]
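The patents do not disclose the exact computation in this release, but the idea of a personal fever threshold can be sketched in code. The following Python sketch is illustrative only, not I-3's algorithm: it assumes the threshold is the mean of recent daily readings plus three standard deviations, with a minimum margin of 0.5 °F so a very flat history does not produce an oversensitive alarm. All function names and parameters are hypothetical.

```python
from statistics import mean, stdev

def personal_fever_threshold(history, k=3.0, floor=0.5):
    """Estimate a personal 'warning temperature' from daily readings.

    The threshold is the running mean plus k standard deviations,
    but never less than `floor` degrees above the mean (assumed
    heuristic, not the patented method).
    """
    m = mean(history)
    spread = stdev(history) if len(history) > 1 else 0.0
    return m + max(k * spread, floor)

def is_abnormal(reading, history):
    """Flag a reading that exceeds this person's own threshold."""
    return reading > personal_fever_threshold(history)

# A person whose normal range hovers around 97.7 degrees F is flagged
# well below the 100.4 degrees F government standard.
history = [97.6, 97.8, 97.5, 97.9, 97.7, 97.6, 97.8]
print(is_abnormal(99.1, history))   # True: elevated for this person
print(is_abnormal(97.9, history))   # False: within the personal range
```

Under these assumed numbers, the personal threshold works out to roughly 98.2 °F, more than two degrees below the one-size-fits-all standard the article criticizes.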
If you have an unnecessary account on your computer, you can delete or disable it. If you delete it, all of its data will be lost and cannot be restored. If you disable an account, you can enable it later without losing any data. A disabled account disappears from the login screen and from the user-switching list in the Start menu. In today's article we will look at various ways to disable (or enable) an account in Windows 10. To disable user accounts, the user you are logged in as must have administrator rights.

1. Enable or Disable an Account on the Command Line

- Open a command prompt as administrator: one way is to right-click the "Start" menu and select "Command Prompt (Administrator)" from the menu that opens.
- To disable a local account, enter the command Net user "account name" /active:no, replacing the quoted name with your own, and press the Enter key. For example, to disable the Sa account, the command looks like this: Net user "Sa" /active:no
- To disable a domain account, use the command Net user "username" /active:no /domain
- To enable a local account, type the command Net user "account name" /active:yes, replacing the quoted name with your own, and press the Enter key. For example, to enable the Sa account: Net user "Sa" /active:yes
- To enable a domain account, use the command Net user "username" /active:yes /domain
- After the message "The command completed successfully" appears, you can close the command prompt.

2. Enable or Disable an Account in Local Users and Groups

The Local Users and Groups tool is available only in Windows 10 Pro, Enterprise, and Education.

- In the search bar or in the Run dialog (opened with the Win + R keys), type lusrmgr.msc and press Enter.
- Go to "Users" ⇨ right-click the user you want to disable (or enable) and select "Properties."
- On the "General" tab, check the "Account is disabled" box and click "OK." To enable an account, uncheck the "Account is disabled" box and click "OK".

3. Hide an Account in Registry Editor

After completing the instructions below, the hidden account will not appear on the login screen or in the user-switching list of the Start menu, but it will still appear as enabled in Local Users and Groups. Before editing the registry, it is recommended to create a system restore point.

- Open the Registry Editor: in the search bar or in the Run dialog (opened with the Win + R keys), enter the regedit command and press the Enter key.
- In the left column, go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon. Under Winlogon, go to SpecialAccounts and select UserList. If these keys do not exist, right-click Winlogon and select "New" ⇨ Key ⇨ name it SpecialAccounts
- Right-click SpecialAccounts and select "New" ⇨ Key ⇨ name it UserList
- Right-click the UserList key and select "New" ⇨ DWORD (32-bit) Value
- Name the new value after the user you want to hide. For example, to hide the user User1, name the value User1.
- You can now close the Registry Editor; the hidden user will no longer be displayed on the lock screen. To show the account again, delete the value named after the user (in our example, right-click the User1 value in the UserList key, select "Delete", confirm the deletion, and close the Registry Editor).

Consider using Action1 to disable local accounts if:
- You need to perform an action on multiple computers simultaneously.
- You have remote employees whose computers are not connected to your corporate network.
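The command-line steps in section 1 lend themselves to scripting when you have to repeat them. The following Python sketch (hypothetical helper names, not part of Windows or Action1) builds the same `net user` command and can run it with subprocess; the run itself must happen in an elevated session on Windows.

```python
import subprocess

def net_user_command(account, active, domain=False):
    """Build the `net user` invocation that enables or disables an account.

    Mirrors the manual steps above: /active:yes enables, /active:no
    disables, and /domain targets a domain account instead of a local one.
    """
    cmd = ["net", "user", account, f"/active:{'yes' if active else 'no'}"]
    if domain:
        cmd.append("/domain")
    return cmd

def set_account_state(account, active, domain=False):
    # Must be run from an elevated (administrator) session on Windows;
    # raises CalledProcessError if `net user` reports a failure.
    return subprocess.run(net_user_command(account, active, domain),
                          check=True)

# The same example as above: disable the local account "Sa".
print(" ".join(net_user_command("Sa", active=False)))
# net user Sa /active:no
```

Looping `set_account_state` over a list of machine names would still require a remote-execution channel such as WinRM; this sketch only covers the local command.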
Electromechanical relays (EMRs) and solid state relays (SSRs) are designed to provide a common switching function. An EMR provides switching through the use of electromagnetic devices and sets of contacts. An SSR depends on electronic devices such as SCRs and triacs to switch without contacts. In addition, the physical features and operating characteristics of EMRs and SSRs are different. See Figure 1.

Figure 1. An electromechanical relay provides switching using electromagnetic devices. A solid state relay depends on SCRs and triacs to switch without contacts.

An equivalent terminology chart is used as an aid in the comparison of EMRs and SSRs. Because the basic operating principles and physical structures of the devices are so different, it is difficult to find a direct comparison of the two. Differences arise almost immediately, both in the terminology used to describe the devices and in their overall ability to perform certain functions. See Figure 2.

Advantages and Limitations

Electromechanical relays and solid state relays are used in many applications. The relay used depends on the electrical requirements, cost requirements, and life expectancy of the application. Although SSRs have replaced EMRs in many applications, EMRs are still very common. Electromechanical relays offer many advantages that make them cost-effective. However, they have limitations that restrict their use in some applications.

Figure 2. An equivalent terminology chart is used as an aid in the comparison of EMRs and SSRs.
Electromechanical relay advantages include the following:
- normally have multi-pole, multi-throw contact arrangements
- contacts can switch AC or DC
- low initial cost
- very low contact voltage drops, thus no heat sink is required
- very resistant to voltage transients
- no OFF-state leakage current through open contacts

Electromechanical relay limitations include the following:
- contacts wear, thus having a limited life
- short contact life when used for rapid switching applications or high-current loads
- generate electromagnetic noise and interference on the power lines
- poor performance when switching high inrush currents

SSRs provide many advantages such as small size, fast switching, long life, and the ability to handle complex switching requirements. SSRs have some limitations that restrict their use in some applications.

Solid state relay advantages include the following:
- very long life when properly applied
- no contacts to wear
- no contact arcing to generate electromagnetic interference
- resistant to shock and vibration because they have no moving parts
- logic compatible with programmable controllers, digital circuits, and computers
- very fast switching capability
- different switching modes (zero switching, instant-on, etc.)

Solid state relay limitations include the following:
- normally only one contact available per relay
- heat sink required due to the voltage drop across the switch
- can switch only AC or DC
- OFF-state leakage current when the switch is open
- normally limited to switching only a narrow frequency range such as 40 Hz to 70 Hz

The application of voltage to the input coil of an electromagnetic device creates an electromagnet that is capable of pulling in an armature with a set of contacts attached to control a load circuit. It takes more voltage and current to pull in the coil than to hold it in due to the initial air gap between the magnetic coil and the armature.
The specifications used to describe the energizing and de-energizing process of an electromagnetic device are coil voltage, coil current, holding current, and drop-out voltage. A solid state relay has no coil or contacts and requires only minimum values of voltage and current to turn it on and off. The two specifications needed to describe the input signal for an SSR are control voltage and control current. The electronic nature of an SSR and its input circuit allows easy compatibility with digitally controlled logic circuits. Many SSRs are available with minimum control voltages of 3 V and control currents as low as 1 mA, which makes them ideal for a variety of current state-of-the-art logic circuits.

One of the significant advantages of a solid state relay over an electromechanical relay is its response time (its ability to turn on and turn off). An EMR may be able to respond hundreds of times per minute, but an SSR is capable of switching thousands of times per minute with no chattering or bounce. DC switching time for an SSR is in the microsecond range. AC switching time for an SSR, with the use of zero-voltage turn-on, is less than 9 ms. The reason for this advantage is that the SSR may be turned on and turned off electronically much more rapidly than a relay may be electromagnetically pulled in and dropped out. The higher speeds of solid state relays have become increasingly important as industry demands higher productivity from processing equipment. The more rapidly the equipment can process or cycle its output, the greater its productivity.

Voltage and Current Ratings

Electromechanical relays and solid state relays have certain limitations that determine how much voltage and current each device can safely handle. The values vary from device to device and from manufacturer to manufacturer. Datasheets are used to determine whether a given device can safely switch a given load.
The advantages of SSRs are that they have a capacity for arc-less switching, have no moving parts to wear out, and are totally enclosed, thus allowing them to be operated in potentially explosive environments without special enclosures. The advantage of EMRs is that the contacts can be replaced if the device receives an excessive surge current. In an SSR, the complete device must be replaced if there is damage.

When a set of contacts on an electromechanical relay closes, the contact resistance is normally low unless the contacts are pitted or corroded. However, because an SSR is constructed of semiconductor materials, it opens and closes a circuit by increasing or decreasing its ability to conduct. Even at full conduction, a solid state relay presents some residual resistance, which can create a voltage drop of up to approximately 1.5 V in the load circuit. This voltage drop is usually considered insignificant because it is small in relation to the load voltage and in most cases presents no problems. This unique feature may have to be taken into consideration when load voltages are small. A method of removing the heat produced at the switching device must be used when load currents are high.

Insulation and Leakage

The air gap between a set of open contacts provides an almost infinite resistance through which no current flows. Due to their unique construction, solid state relays provide a very high but measurable resistance when turned off. SSRs have a switched-off resistance not found on EMRs. It is possible for small amounts of current (OFF-state leakage) to pass through an SSR because some conductance is still possible even though the SSR is turned off. OFF-state leakage current is not found on EMRs. OFF-state leakage current is the amount of current that leaks through an SSR when the switch is turned off, normally about 2 mA to 10 mA.
The rating of OFF-state leakage current in an SSR is usually determined at 200 VDC across the output and should not normally exceed 200 mA at this voltage.
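The ON-state voltage drop and OFF-state leakage figures quoted above translate into quick sizing arithmetic. The following sketch uses illustrative function names; the ~1.5 V drop comes from the text, and the load values are assumed examples.

```python
def ssr_on_state_dissipation(load_current_a, on_drop_v=1.5):
    """Heat in watts the SSR must shed at full load: P = V_drop * I."""
    return on_drop_v * load_current_a

def leakage_voltage_across_load(leakage_a, load_resistance_ohm):
    """Voltage an OFF-state leakage current develops across the load: V = I * R."""
    return leakage_a * load_resistance_ohm

# A 10 A load against the ~1.5 V residual drop dissipates 15 W in the
# SSR, which is why a heat sink is required at high load currents.
print(ssr_on_state_dissipation(10))              # 15.0
# 5 mA of OFF-state leakage through a 2 kilo-ohm load leaves 10 V
# across a load that is nominally "off".
print(leakage_voltage_across_load(0.005, 2000))  # 10.0
```

The second figure shows why OFF-state leakage matters in practice: a sensitive high-impedance load can be held partially energized even when the SSR is switched off.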
This document reviews known issues with enabling the Cisco IOS® software features of compression and Quality of Service (QoS) on the same router. Cisco IOS software offers many features that optimize wide-area network (WAN) links to ease the WAN bandwidth bottleneck. Compression is an effective optimization method and includes two types: Data Compression - Provides each end with a coding scheme that allows characters to be removed from the frames at the sending side of the link, and then correctly replaces them at the receiving side. Since the condensed frames occupy less bandwidth, greater numbers can be transmitted per unit of time. Examples of data compression schemes include STAC, Microsoft Point-to-Point Compression (MPPC), and Frame Relay Forum 9 (FRF.9). Header Compression - Compresses a header at various layers of the Open System Interconnection (OSI) reference model. Examples include Transmission Control Protocol (TCP) header compression, compressed RTP (cRTP), and compressed Internet Protocol/User Datagram Protocol (IP/UDP). There are no specific requirements for this document. This document is not restricted to specific software and hardware versions. The information presented in this document was created from devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If you are working in a live network, ensure that you understand the potential impact of any command before using it. For more information on document conventions, refer to Cisco Technical Tips Conventions. The basic function of data compression is to reduce the size of a data frame transmitted over a network link. Reducing the size of the frame reduces the time required to transmit the frame across the network. The two most commonly used data compression algorithms on internetworking devices are Stacker and Predictor. 
The following sample configurations show two ways of enabling payload compression on a Frame Relay interface or subinterface.

 ip address 10.0.0.1 255.255.255.0
 no ip directed-broadcast
 encapsulation frame-relay IETF
 frame-relay map ip 10.0.0.2 16 broadcast IETF payload-compression FRF9 stac

interface Serial0/0.105 point-to-point
 ip address 192.168.162.1 255.255.255.0
 no ip directed-broadcast
 frame-relay interface-dlci 105 IETF
 frame-relay payload-compression FRF9 stac

Hardware-assisted data compression achieves the same overall functionality as software-based data compression, but accelerates compression rates by offloading this computationally intensive task from the main CPU. In other words:

Software Compression - Compression is implemented in the Cisco IOS software installed in the router's main processor.

Hardware Compression - Compression is implemented in the compression hardware installed in a system slot.

Hardware compression removes compression and decompression responsibilities from the main processor installed in your router. The following table lists Cisco compression hardware and supported platforms:

| Compression Hardware | Supported Platforms | Notes |
| SA-Comp/1 and SA-Comp/4 service adapters (CSA) | Cisco 7200 Series Routers and the second-generation Versatile Interface Processor (VIP2) in the Cisco 7000 and 7500 Series Routers | Supports the Stacker algorithm over serial interfaces configured with Point-to-Point Protocol (PPP) or Frame Relay encapsulation. |
| | Cisco 3600 Series Routers | Supports the Stacker algorithm over PPP links and Frame Relay links with the FRF.9 compression algorithm. |
| | Cisco 3660 Series Routers only | Supports the Lempel-Ziv Standard (LZS) and MPPC algorithms. |

Configuring compression on a serial interface with a command such as compress stac automatically enables hardware compression if it is available. Otherwise, software compression is enabled. You can use the compress stac software command to force the use of software compression.
This section discusses a resolved issue with the Cisco legacy priority queueing (PQ) feature and compression hardware. Compression hardware originally dequeued packets aggressively from the PQs, effectively removing the benefits of PQ. In other words, PQ worked properly, but queueing functionally moved to the compression hardware's own queues (holdq, hw ring and compQ), which are strictly first-in, first-out (FIFO). The symptoms of this problem are documented in Cisco bug ID CSCdp33759 (marked as a duplicate of CSCdm91180). The resolution modifies the compression hardware's driver. Specifically, it throttles the rate at which the compression hardware dequeues packets by reducing the size of the hardware queues based on the interface's bandwidth. This back pressure mechanism ensures that packets stay in the fancy queues instead of being held in the compression hardware's queues. Refer to the following bug IDs for more information: Note: More information on these bug IDs can be found by using the Bug Toolkit (registered customers only) . CSCdm91180 - Applies to Frame Relay encapsulation and the Compression Service Adapter (CSA). CSCdp33759 (and CSCdr18251) - Applies to PPP encapsulation and the CSA. CSCdr18251 - Applies to PPP encapsulation and the asynchronous interface module-compression (AIM-COMPR). The hardware-level queues of the Cisco 3660 compression can be seen in the following sample output of the show pas caim stats command. If the hardware compression queues are storing many packets, a packet dequeued from the PQ waits at the tail end of this queue, and thus experiences delay. 
Router> show pas caim stats 0
422074 uncomp paks in      -->  422076 comp paks out
422071 comp paks in        -->  422075 uncomp paks out
633912308 uncomp bytes in  -->  22791798 comp bytes out
27433911 comp bytes in     -->  633911762 uncomp bytes out
974 uncomp paks/sec in     -->  974 comp paks/sec out
974 comp paks/sec in       -->  974 uncomp paks/sec out
11739116 uncomp bits/sec in  -->  422070 comp bits/sec out
508035 comp bits/sec in      -->  11739106 uncomp bits/sec out
433 seconds since last clear
holdq: 0  hw_enable: 1  src_limited: 0  num cnxts: 4
no data: 0  drops: 0  nobuffers: 0  enc adj errs: 0
fallbacks: 0  no Replace: 0  num seq errs: 0  num desc errs: 0
cmds complete: 844151  Bad reqs: 0  Dead cnxts: 0  No Paks: 0
enq errs: 0  rx pkt drops: 0  tx pkt drops: 0  dequeues: 0
requeues: 0  drops disabled: 0  clears: 0  ints: 844314
purges: 0  no cnxts: 0  bad algos: 0  no crams: 0
bad paks: 0  # opens: 0  # closes: 0  # hangs: 0

Note: CSCdr86700 removes the changes implemented in CSCdm91180 from platforms not supporting a CSA.

In addition, while troubleshooting this problem, packet-expansion issues with small packets (around 4 bytes) and particular repetitive patterns, such as Cisco pings with a pattern of 0xABCDABCD, were resolved with bug ID CSCdm11401. Small packets are less likely to be related to other packets in the stream, and attempting to compress them may result in expanded packets, or cause dictionary resets. The root cause is a problem with the chip used on the CSA. Cisco bug ID CSCdp64837 resolves this problem by changing the FRF.9 compression code to avoid compressing packets having less than 60 bytes of payload.

In contrast to hardware compression, software compression combined with fancy queueing (custom, priority, and weighted fair queueing) is not supported on interfaces configured with PPP encapsulation. This limitation is documented in bug IDs CSCdj45401 and CSCdk86833.
The reason for the limitation is that PPP compression is not stateless and maintains a compression history over the data stream to optimize the compression ratios. The compressed packets must be kept in order to maintain compression history. If packets are compressed before queueing, they must be put in a single queue. Putting them in different queues, as custom and priority queueing do, may lead to the packets arriving out of sequence, which breaks compression. Alternative solutions are sub-optimal and have not been implemented. Such alternatives include compressing packets as they are dequeued (unacceptable for performance reasons), maintaining a separate compression history for each queue (unsupported and involving significant overhead), and resetting the compression history for every packet (substantially impacts compression ratios). As a workaround, you can configure high-level data link control (HDLC) encapsulation, but this configuration may affect system performance and is not recommended. Instead, use hardware compression. RFC 1889 specifies the RTP, which manages the audio path transport for Voice over IP (VoIP). RTP provides such services as sequencing to identify lost packets and 32-bit values to identify and distinguish between multiple senders in a multicast stream. Importantly, it does not provide or ensure QoS. VoIP packets are composed of one or more speech codec samples or frames encapsulated in 40 bytes of IP/UDP/RTP headers. 40 bytes is a relatively large amount of overhead for the typical 20-byte VoIP payloads, particularly over low-speed links. RFC 2508 specifies compressed RTP (cRTP), which is designed to reduce the IP/UDP/RTP headers to two bytes for most packets in the case where no UDP checksums are being sent, or four bytes with checksums. The compression algorithm defined in this document draws heavily upon the design of TCP/IP header compression as described in RFC 1144 . 
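The header sizes quoted above translate directly into per-call bandwidth. The following sketch works through the arithmetic, assuming a G.729-style stream of 20-byte payloads at 50 packets per second (assumed typical values, not stated in this document) and ignoring layer-2 overhead.

```python
def voip_bandwidth_bps(payload_bytes, header_bytes, packets_per_sec):
    """IP-layer bandwidth in bits/sec for one direction of a VoIP stream."""
    return (payload_bytes + header_bytes) * 8 * packets_per_sec

PAYLOAD = 20   # one codec sample block per packet (assumption)
PPS = 50       # 20 ms packetization interval (assumption)

# Full 40-byte IP/UDP/RTP headers vs. the 2-byte cRTP header
# (no UDP checksums) described above.
uncompressed = voip_bandwidth_bps(PAYLOAD, 40, PPS)
compressed = voip_bandwidth_bps(PAYLOAD, 2, PPS)
print(uncompressed)  # 24000
print(compressed)    # 8800
print(f"savings: {100 * (1 - compressed / uncompressed):.0f}%")  # savings: 63%
```

Under these assumptions cRTP cuts the stream from 24 kbps to under 9 kbps per direction, which is why it matters most on low-speed links where the 40-byte header dwarfs the 20-byte payload.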
RFC 2508 actually specifies two formats of cRTP:

Compressed RTP (CR) - Used when the IP, UDP, and RTP headers remain consistent. All three headers are compressed.

Compressed UDP (CU) - Used when there is a large change in the RTP timestamp or when the RTP payload type changes. The IP and UDP headers are compressed, but the RTP header is not.

Cisco IOS software release 12.1(5)T introduced several improvements for compression over Frame Relay permanent virtual circuits (PVCs) on the Cisco 2600, 3600, and 7200 Series Routers. These improvements include the following:

| Before Cisco IOS Release 12.1(5)T | Cisco IOS Releases 12.1(5)T and 12.2 |
| Slow-speed WAN edge fragmentation methods needed to ensure voice quality did not work on interfaces with hardware compression. These fragmentation methods, which include MLPPP/LFI, FRF.11 Annex C, and FRF.12, did work with software-based compression. | Fragmentation (FRF.12 or Link Fragmentation and Interleaving (LFI)) is supported together with hardware compression. In addition, FRF.12 and FRF.11 Annex C fragmentation are supported with FRF.9 hardware compression on the same PVC. Voice packets from the priority queue with low latency queueing (LLQ) bypass the FRF.9 compressor engine. Data packets are compressed. |
| FRF.9 compression is supported only on IETF-encapsulated PVCs. | cRTP and FRF.9 compression are supported on the same PVC. FRF.9 compression is supported on PVCs configured with Cisco and Internet Engineering Task Force (IETF) encapsulation. |
| cRTP is supported on Frame Relay PVCs configured with Cisco encapsulation only. | cRTP continues to be supported only on Cisco-encapsulated PVCs. |

The following table lists known issues with cRTP and Cisco IOS QoS features. This list is accurate at the time of publishing. Also refer to the Release Notes for your version of Cisco IOS software for more information.
| Bug ID | Description |
| --- | --- |
|  | When a hierarchical QoS policy, using the commands of the modular QoS CLI, is applied to an outbound interface and specifies a two-level policer, the conformed traffic rate may be less than expected. The problem occurs when the action taken on the packet at one level differs from the action at the other level; for example, conform at the first level and exceed at the second level. An example policy is `police 10000 1500 1500 conform-action transmit exceed-action transmit` nested under `police 20000 1500 1500 conform-action transmit exceed-action transmit`. |
|  | Unexpected packet drops may be seen when using low latency queueing (LLQ) over Frame Relay. The problem was caused by the queueing system not taking the bandwidth gains of cRTP into account. |
|  | Originally, cRTP happened after queueing. The result was that queueing (potentially) saw a much larger packet than what was actually transmitted on the wire. This behavior is changed with this bug fix: queueing now considers compressed packets. With this change, you can configure bandwidth statements with CBWFQ based on compressed data rates. |
Most of us are familiar with this scenario: “Hey Siri” (or insert your favorite ‘wake’ word for your digital voice assistant), “turn on the hallway light.” Instantly and magically, that hallway light turns on, and life is good. But then reality kicks in, and all of a sudden your internet service provider is experiencing an outage in your area! Now you curse your way to finding the darn switch to turn the light on yourself. Do you wonder why such a simple thing cannot happen the way it’s supposed to without an internet connection? As with most artificial intelligence tasks, the majority of the workload is performed in the cloud, also known as massive data centers. In the above example, other than the ‘wake’ word for your digital voice assistant, all of your babbling is transmitted to Amazon, Google, or Apple’s data centers. At these data centers, your voice gets inferred, or in other words, processed into something the computer can understand. In this case, the AI realizes this is a command for your home automation system and sends the command back to your home controller, which then fulfills the action by sending a “turn it on” command to that hallway light. Typically, there is a delay of a few milliseconds to a few seconds before the command completes. Did you know your voice command is stored in these data centers for training and retraining? While I want my lights immediately turned on when I finish my command, I’m also expecting privacy. I don’t want this information stored anywhere, much less anyone to know that I turned my lights on or how silly or smart I sounded giving that command! Now, imagine the same requirements if you are running an automated manufacturing plant, driving your car through one of the national parks, or operating an oil and gas platform offshore. You want to utilize the maximum benefit that AI has to offer, but not be held hostage by bad network connectivity, data privacy concerns, or constrained environments.
Here’s another predicament we’re facing: data storage scarcity. It’s predicted that by 2020 we will be generating 44 trillion gigabytes of data annually. However, the world’s digital storage can only accommodate a small fraction (15%) of that data! Do you remember looking at your fridge after your Costco run? Do we really need to collect all this data? Can’t we process this data where it originates and derive the inferences there? At Latent AI, these issues are fundamental to our existence; our mission is “Enabling Adaptive AI for a Smarter Edge.” In this context, we continually ask ourselves: how do we help solve these issues to enable a better quality of service for a consumer or an enterprise? Founded on years of research done at SRI International (and we are thrilled to follow in the footsteps of Nuance Communications, Siri, and Abundant Robotics, among other successful SRI Ventures), Latent AI is backed by leading investors, and we just closed our seed round, led by Steve Jurvetson at Future Ventures, followed by Perot Jain, Gravity Ranch, and super angels such as Frank Blake (Chairman of Delta Airlines, Board Member at Macy’s, P&G), Dave Rosenberg (Co-Founder, Mulesoft), and Bruce Graham (Chairman of Cellink Circuits). At Latent AI, we provide tools that integrate with your existing workflow and tool flow to help AI engineers deliver neural net models that are optimized and compressed. We enable those models to be executed efficiently on any chosen platform and edge device. If you or your organization identifies with these problems, we would love to talk to you and evaluate the opportunity to solve your current issues! Please feel free to reach out to me directly at [email protected] Also, as any startup CEO would emphatically say, “Yes! We’re hiring!” Please check out our open positions at https://latentai.wpengine.com/careers.

By Jags Kandasamy, Co-Founder and CEO, Latent AI, Inc.

Photo credits: Adobe Stock
Climate change is one of the most critical problems humanity faces today, with carbon dioxide emissions being the most significant factor contributing to rising global temperatures. The 2021 COP26 (UN Climate Change Conference) in the UK will urge all economies to commit to ambitious 2030 emissions-reduction targets on the path to net zero, in line with the Paris Agreement and the UN Framework Convention on Climate Change. While we regard a move to the cloud as a step in the right direction for businesses and the environment, it's not completely clean. The Shift Project, a French think tank, estimated that the data centers powering cloud computing and cloud-native applications would produce 4% of global greenhouse gases in 2020, a pre-pandemic forecast.

The toll exacted by traditional data centers on the environment

A single Google search generates up to seven grams of carbon dioxide emissions, enough to power a lightbulb for a few seconds. Using a smartphone for an hour a day throughout the year results in 1,250 kilograms of carbon dioxide emissions. Considering the billions of people who run hundreds of Google searches and stream countless videos on their smartphones daily, the internet's carbon footprint becomes alarming. A commonly used analogy to explain the impact is the case of the music video Despacito: it reached five billion views on YouTube in 2018, consuming as much energy as 40,000 US homes use in a year. As the consumption of data soars and cloud computing becomes the norm, the reliance on the ICT industry will continue to rise. Without optimizing efficiency for cloud-native applications, the sector could consume 20% of all electricity and emit up to 5.5% of the world's carbon emissions by 2025. At this rate, the ICT industry's electricity consumption would exceed that of every country except the US, China, and India, according to a peer-reviewed Swedish study forecasting total worldwide power consumption.
How the cloud has helped enterprises reduce their carbon footprint

Switching to the cloud is equivalent to carpooling or taking public transportation instead of owning a personal vehicle (i.e., on-premises servers). On-premises servers require tons of hardware and massive facilities equipped with round-the-clock power supply and cooling to keep a business operational. By "carpooling" with cloud service providers (Amazon, Microsoft, Google), businesses not only save resources but also operate more efficiently. A move to cloud computing comes with cost and environmental benefits, as cloud-native applications consume less infrastructure, physical space, and energy per user. Moreover, as more organizations adopt cloud computing, they're better equipped to support remote work, which is here to stay. The move also reduces the need for large office spaces, which significantly cuts each enterprise's carbon dioxide emissions. The cloud can enable a greater degree of efficiency in the use of power, heating, and resources. For that to happen, however, IT enterprises must reimagine traditional data centers for zero waste and greater efficiency. Large cloud service providers are best equipped to lead the change in building a carbon-neutral cloud with their abundant resources (financial, technological, and human capital).

How big tech is leading the fight against climate change

The primary reason traditional data centers aren't environmentally sustainable is their reliance on dirty energy. Merely switching to renewable or low-carbon energy sources will solve a sizeable portion of the problem. Redesigning data centers to ramp up their energy efficiency and building green facilities (OLED lighting, green cooling, renewable power) can go a long way toward achieving a near-zero carbon footprint.
The IT industry, especially tech giants such as Microsoft, Google, Amazon, Facebook, and IBM, has taken note and started reimagining its infrastructure to incorporate sustainable technologies and lead the fight against climate change. According to The Guardian, these companies are the biggest buyers of renewable energy to sustain their data centers. For instance, Google claims to be the first organization of its size and scale to operate on 100% renewable energy. Powered by wind farms and solar panels, Google's data centers are seven times more energy efficient today. Google is also tapping into AI and machine learning to optimize its data centers: if there's a temperature change, the amount of energy used to cool the servers gets adjusted accordingly. Microsoft has set a target of becoming carbon negative by 2030. By 2050, the corporation aims to remove more carbon from the environment than it has emitted since its establishment in 1975. Besides using renewable energy sources, it's also investing in carbon reduction and removal technologies to become carbon negative. Microsoft is also experimenting with an underwater data center facility off the coast of Scotland to keep its servers cool without draining massive amounts of electricity. AWS has pledged to reach net-zero carbon across all its businesses by 2040. Its data centers in Virginia, dubbed "Data Center Alley," account for 70% of the world's internet traffic. According to Greenpeace, these data centers consume energy equivalent to the electricity powering 1.4 million US homes for a year. The pledge is a critical step toward building a greener, more sustainable cloud. Global technology enterprises like HCLTech share the responsibility of supporting their customers' journeys to a greener cloud by choosing the right ecosystems, providing the right skills, leveraging sustainable technologies, and driving cloud transformation initiatives in traditional organizations.
For instance, HCLTech has helped 70% of its customers in the EU and the UK move to a cloud or a data center powered by renewable energy sources.

Reducing carbon footprint with green data centers: The next moonshot for cloud service providers

The EU expects European data centers to be carbon neutral by 2030. With the right investments in innovation and technology, IT has the power to enable sustainable business operations. A greener cloud is also good news for cloud service providers, as it reduces costs, improves efficiency, builds a better reputation for their brands, and future-proofs operations. That's why championing more efforts to make data centers greener should be a mandate for all tech corporations. Investing in cloud service providers that prioritize sustainability is the first step toward building environmentally sustainable operations. Additionally, all organizations dedicated to running carbon-neutral operations should move to the cloud and launch cloud transformation initiatives right away. The way forward is to adopt renewable energy sources, source energy-efficient hardware and software for data centers, incorporate energy-efficient lighting and power supply within facilities, and design intelligent cooling systems to reduce wastage. IT leaders worldwide should take up the responsibility of making greener choices when dealing with suppliers; they should push their suppliers to offer assurance of more sustainable technologies that reduce carbon emissions, verified through relevant KPIs.
A lot of fuss has recently been made about the abuse of personal data that users share via social media, but seemingly half the planet has forgotten that we all still use email on a daily basis, and that email is far from secure. 3.7bn people have an email address, and between them they send 269bn emails every day. So why do we fail to pay attention to the security risks and the privacy dangers? Is it only because we are moved by fashions, and when we become used to something we stop worrying about the dangers? There are many good reasons to love email. It is free, commonplace, and it can be relied upon to work, allowing anybody to send a message, and attachments, to anybody else. There are 3.7bn people you can contact just by knowing their address, which makes email an incredibly powerful tool, as spammers and phishers appreciate. The ubiquity of email prevails even though email was never intended to be secure. When it comes to preventing an unauthorized user from intercepting or altering a message, the words of an email might as well be written on a postcard. Worse still, many are addicted to sending supposedly important documents and other files as attachments, irrespective of the ease with which they may be copied by anyone with access to the communication chain. One of the great strengths of email is also a weakness; there is reluctance to make changes that would affect the widespread interoperability of email between different systems and users. A new academic paper entitled “Securing Email” by Jeremy Clark, P.C. van Oorschot, Scott Ruoti, Kent Seamons and Daniel Zappala discusses how to improve the protections around email. Broadly speaking, the authors recognize the tensions between solutions that rely on centralized authorities to oversee security and those which seek to encrypt messages from end to end. 
Reliance on authority reduces the burden on users, but raises questions about who is trusted to be the authority and the decisions that the authority will make, especially as governments have an interest in snooping on emails too. Encrypting messages from sender to recipient requires no central authority, but the story of PGP, the best-known encryption protocol, demonstrates that it will never be widely adopted because of the burden placed on users. “Securing Email” is a longish read but rewarding, giving a clear and comprehensive account of the history of each major attempt to improve the security of email, analysis of why those attempts failed to gain more extensive support, and synthesis of potential solutions that draw on the lessons learned. The paper should be essential reading for anyone wanting a decent understanding of how to secure electronic communications and the barriers that get in our way. The authors cogently explain how different users have different security and privacy objectives, and why this leads to clashes when anyone proposes a universal enhancement to email. Put simply, there is no one-size-fits-all solution for email security. There are competing objectives instead, and gaining an appreciation of the tensions would help anyone to understand conflicts when making decisions about comms security and privacy in other contexts too. It is worth ruminating on the authors’ observation that conflicting human priorities pose more of a security challenge than technological mastery. As they put it:

Those affirming the view that all secure email products should allow warranted law enforcement access by design should never be expected to agree with the traditional supporters of end-to-end encryption. Email service providers and organizations placing high value on malware and spam filtering methods that require plaintext access are also implicitly opposing (current) end-to-end encryption technologies, whether or not they are philosophically opposed.
Ironically, even those favoring end-to-end encryption might put up road-blocks due to their own internal divisions; e.g., on whether S/MIME is better than PGP or vice versa. “Securing Email” can be found on the Cornell University Library website.
Crack the Code: Getting Started in Computer Programming, Part 1

"Learn to code." This three-word sentence has become something like Batman villain Two-Face's coin in recent years. On one side of the coin, "Learn to code" is a snarky comment from zero-empathy online bullies, usually aimed at someone who writes about how they are struggling to find gainful employment in their chosen profession. On the other side of the coin, "Learn to code" is meant as a genuine piece of advice, recognition of the fact that computer programming remains a prime source of job opportunities for people looking to either move on from their current IT industry role, or to redirect their career path entirely. In this instance, "Learn to code" is not some snide remark — it is a genuine, well-intended kick in the pants. The U.S. Bureau of Labor Statistics reports that the median pay for computer programmers in 2018 was $84,200 per year, or roughly $40 per hour. This kind of earning potential has led many working professionals to consider programming as a desirable career option. So, how does someone get started in computer programming? The best place to begin is to develop an understanding of what different types of programmers do in their day-to-day jobs. One of the best descriptions of what computer programmers do also comes from the U.S. Bureau of Labor Statistics (BLS). To wit: "Computer programmers write and test code that allows computer applications and software programs to function properly. They turn the program designs created by software developers and engineers into instructions that a computer can follow. In addition, programmers test newly created applications and programs to ensure that they produce the expected results." This paragraph, while informative, illustrates a problem that typically comes up when discussing computer programming: mix-and-match job titles.
The description above includes "computer programmers," "software developers," and "software engineers" as three distinct roles, yet all three job titles are commonly used to describe the function of writing software. A few searches of any large employment site will turn up plenty of jobs with similar work duty descriptions, but the actual job title will vary between the three we've mentioned and these:

- Web developer
- Systems programmer
- Mobile application developer
- Programmer analyst
- Firmware developer
- Database programmer

You can't tell your coding jobs without a program, folks! Let's simplify things a bit. A computer programmer (we'll stick with that term from now on) uses one or more programming languages to write the source code that goes into the creation of software. Computer programmers are commonly split into two categories: application programmers and system programmers. Application programmers create software applications that run on top of various operating systems. Every game, word processor, and web browser you've used on a mobile phone, tablet, or laptop was created by one or more application programmers. System programmers write software that controls computing hardware and information systems. Examples of software written by system programmers include operating systems (e.g., Windows, iOS), database management systems (used to power cloud computing and Big Data solutions), and firmware (the code that gets embedded into devices to control and manage their functions). The more entry-level the position, the less input a computer programmer will have on the design and functionality of the software being created. A junior programmer is often employed as a figurative assembly-line worker, helping to build a product that was envisioned and designed by a more experienced software expert. Alternatively, senior programmers can be actively involved in the design of a new software product and the product's evolution throughout future versions.
Computer programmers are usually responsible for finding and fixing the software bugs discovered by product testers, also known as quality assurance (QA) experts. QA workers are often skilled programmers in their own right, but that's a topic for another time.

Choose Your Own (Programming) Adventure

Like many careers in IT, the computer programming field can be split into several specialties. When choosing a specialty, you should consider the type of work you will enjoy the most, as well as the job market in your area. Deciding on a specialty can also help you make better, more focused choices when planning your training and certification efforts. Here are the most popular computer programming roles you'll find in the industry. Again, keep in mind that the terms "developer," "programmer," and "engineer" are often used interchangeably in posted job listings. Web developers create ecommerce websites for businesses, online portfolios for designers and other artists, information portals for government departments, and much more. It is very common for businesses to farm out web development to contractors, which makes this job category ideal for those who want to set themselves up as freelancers. (One web development specialty worth its own mention is WordPress. WordPress is a content management system with tremendous online presence; it is used on over one-third of the top 10 million websites. Web developers fluent in PHP and skilled with MySQL databases — the two main technologies powering WordPress — may find they have an advantage when looking for job opportunities.) Database developers design and create the databases used by applications and websites. These specialists may also perform basic or advanced analysis of database records, sometimes known as data mining. Structured Query Language (SQL) is the most common database programming language; other languages like Ruby, C#, and Java are also used.
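To give a concrete flavor of the SQL work described above, here is a minimal sketch using Python's built-in sqlite3 module; the table and data are invented for illustration:

```python
import sqlite3

# Hypothetical example: create an in-memory database, load a few rows,
# and run a simple SQL aggregate query -- the bread and butter of
# database development.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 120.0), ("bob", 75.5), ("alice", 30.0)],
)

# Total spend per customer, largest first.
rows = conn.execute(
    "SELECT customer, SUM(amount) AS total "
    "FROM orders GROUP BY customer ORDER BY total DESC"
).fetchall()

for customer, total in rows:
    print(f"{customer}: {total:.2f}")
conn.close()
```

The same SELECT/GROUP BY pattern carries over to MySQL, PostgreSQL, and the other database engines a database developer is likely to meet on the job.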
Mobile app developers have seen demand for their services grow significantly during the last decade of the mobile computing boom. It is hard to imagine any organization that doesn't offer some sort of mobile app to the public. Mobile apps for Apple's iOS devices are written in the Swift programming language, while Android apps are typically written in the Java or Kotlin programming languages. Software application developers might be the largest category of computer programmer job roles. Some of the largest tech companies in the world — Microsoft, SAP, IBM, VMware, and others — are still very much dedicated to the creation and ongoing support of software applications. Java is the dominant programming language found in software application development, along with Python, C++, and C#. In Part Two of our look at computer programming, we'll look at what training options are available for those looking to become developers, as well as which certifications are most relevant to computer programming.
“Trust no one.” This quote calls to mind multiple action thriller blockbusters that featured star-powered casts and a renegade fugitive running for his life. But in the world of IT, it characterizes a new cybersecurity strategy that’s gaining a lot of traction among enterprises: Zero Trust Network Access (ZTNA). In their mission to combat the growing threat of cybercrime, cybersecurity solution providers are helping customers implement the zero-trust model to provide stronger cyber threat defense and a better end-user customer experience. In this post, we’ll explain the concept of ZTNA, how it works, and why device recognition technology is the basis of a zero-trust architecture. A Zero Trust architecture guards against unauthorized access by enforcing access policies based on the context of the device or user attempting access. The approach is a paradigm shift from older perimeter-based network architectures that rely on approved IP addresses, ports, and protocols to establish access controls and validate trusted entities, where anyone connecting over a VPN is considered trusted. The problem with these legacy approaches is that VPNs enable remote and unprotected user devices to connect to the network, and a bad actor who gets their hands on leaked credentials can easily break in and launch an attack via spyware or ransomware. By contrast, Zero Trust looks at the user’s role and location, the device being used, and the information they’re requesting, and assumes the user is guilty until proven innocent. Each user, machine, and application has its own perimeter security, and access is controlled based on users having “just-enough” and “just-in-time” access according to their identity, role, and company policy. Zero Trust is applied not only to users, but to devices and applications – whether on-premises, remote, or in the cloud – and assumes no device or person can be trusted.
It doesn’t make a difference if someone has accessed the network previously – their identity is considered potentially malicious until verification is complete. A Zero Trust architecture combines three key technologies that work together to reduce the risk of unauthorized access, and thereby mitigate the increasing risk of cybercrime. Device identification and recognition create a solid foundation for implementing zero-trust network access. Why? Because the Zero Trust model requires the authentication and authorization of every device and person before any access to data is granted. To achieve this, you must be able to identify and recognize the devices used to make the network connection. Zero-trust policies constantly look for signals of a potential threat – such as a user attempting to access the network using an unknown device, or a device logging on from an unknown location. If the device or the user exhibits unfamiliar behavior, access is denied. It’s therefore critical to understand the organization’s “protect surface” – the users, devices, data, and applications that comprise the corporate infrastructure, and where all of those resources are located. Having a full inventory of all of the devices on the network enables IT teams to map out where zero-trust security policies should be enforced. Lansweeper Embedded Technologies delivers device recognition and identification capabilities to provide complete visibility across the growing and distributed technology infrastructure. By embedding our Device Recognition Technology into your cybersecurity solution, you can offer an essential service to your clients to help them build out their zero-trust infrastructure while differentiating your cybersecurity products from your competitors. Lansweeper quickly and automatically scans and identifies all devices on a network.
It analyzes common protocols to identify billions of wireless and wired devices, revealing their make, model, category, and OS with limited input data. Lansweeper generates a unique device fingerprint for each device, then encrypts and stores it in our vast and growing database. Cybersecurity providers can quickly and easily integrate Lansweeper’s Device Recognition Technology into their products using our SDKs and Cloud API. We also offer offline database and on-premise solutions to meet special requirements, for example in government or other sensitive environments. With the ability to identify connected devices in real-time, implementing zero-trust network access policies to protect your organization from malware, ransomware and other forms of cybercrime is a goal that’s within reach. Learn more about how Lansweeper Embedded Technologies can help you level up your products and services.
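As a thought experiment, the default-deny evaluation described in this post can be sketched in a few lines. Every name below (the device registry, location list, and role grants) is a hypothetical placeholder, not part of any Lansweeper API:

```python
from dataclasses import dataclass

# Hypothetical sketch of a zero-trust access decision: every request is
# denied unless the device is recognized AND the context matches policy.
# None of these names come from a real product API.

@dataclass
class AccessRequest:
    user: str
    device_fingerprint: str
    location: str
    resource: str

KNOWN_DEVICES = {"fp-1a2b", "fp-3c4d"}                 # registered device fingerprints
ALLOWED_LOCATIONS = {"office", "home-vpn"}             # locations policy permits
ROLE_GRANTS = {"alice": {"payroll"}, "bob": {"wiki"}}  # "just-enough" access per user

def authorize(req: AccessRequest) -> bool:
    """Default deny: all signals must check out before access is granted."""
    if req.device_fingerprint not in KNOWN_DEVICES:
        return False  # unknown device -> threat signal, deny
    if req.location not in ALLOWED_LOCATIONS:
        return False  # unfamiliar location -> deny
    return req.resource in ROLE_GRANTS.get(req.user, set())

print(authorize(AccessRequest("alice", "fp-1a2b", "office", "payroll")))  # True
print(authorize(AccessRequest("alice", "fp-9z9z", "office", "payroll")))  # False
```

The point of the sketch is the order of the checks: device identity is evaluated first, which is why an accurate, continuously updated device inventory is the foundation the rest of the policy stands on.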
Matthew Leising (Bloomberg) -- Users and developers of the world’s most-used blockchain have been wrangling with its carbon-footprint problem for as long as it’s been around. Now, they say, several recent breakthroughs will finally enable them to drastically cut energy use in a year or less. Ethereum and better-known-rival Bitcoin both operate using a proof-of-work system that requires a global network of computers running around the clock. Software developers at Ethereum have been working for years to transition the blockchain to what’s known as a proof-of-stake system — which uses a totally different approach to secure the network that also eliminates the carbon emissions issue. The change — delayed time and again by complicated technical setbacks — couldn’t come soon enough for the cryptocurrency world, which weathered one of its biggest bouts of volatility ever this month after Elon Musk announced that Tesla Inc. would stop accepting Bitcoin as payment for cars because of the surging energy use. Bitcoin’s network currently uses more power per year than Pakistan or the United Arab Emirates, according to the Cambridge Bitcoin Electricity Consumption Index. The compilers of the index don’t measure Ethereum energy use. “Switching to proof of stake has become more urgent for us because of how crypto and Ethereum have grown over the last year,” Vitalik Buterin, the inventor of Ethereum, said in an interview. He’s hoping the change is made by year end, while others say it will be in place by the first half of 2022. That’s about a year earlier than was expected in December. “I’m definitely very happy that one of the biggest problems of blockchain will go away when proof of stake is complete,” said Buterin, who has been advocating for the shift since the blockchain was launched in 2015. 
“It’s amazing.” The change could help boost the price of the cryptocurrency Ether, which is necessary to use Ethereum, as investors who are environmentally conscious take note of its vastly smaller carbon footprint. Much of the criticism of proof of work has come from millennials and investors who value positive environmental, social and governance, or ESG, standards. “It’s hard to ignore that the ESG narrative is going to be big,” said Wilson Withiam, an analyst at Messari who specializes in blockchain protocols. “If you’re looking at Ether as an investment, it doesn’t have that looming over it.” Pantera Capital, an early Bitcoin investment firm, agreed. “Ethereum has a massive ecosystem of decentralized finance use cases with rapidly growing adoption,” Dan Morehead, founder of Pantera, wrote in a May 10 note to investors. “Combine these two dynamics and we think Ethereum will keep gaining market share relative to Bitcoin.” The transition Ethereum developers are making is a huge undertaking. They have to create, test and implement an entirely new way of securing their network while maintaining the existing blockchain. Then when the time is right, they’ll merge the existing blockchain into the new architecture that uses proof of stake to verify transactions. The shift will also radically increase the speed of transactions that Ethereum can process, making it more competitive with established payment networks like Visa or Mastercard. Proof of work uses the capital costs of buying and maintaining computer hardware as well as the electricity to run them as the economic investments that must be paid by the people who are securing the network, known as miners. In return, the first miner to verify the latest batch of Bitcoin or Ethereum transactions is rewarded with free Bitcoin or Ether. 
That system has come under fierce criticism for years, most recently by Musk, who called recent consumption trends “insane.” In proof of stake, the cryptocurrency Ether replaces hardware and electricity as the capital cost. A minimum of 32 Ether is required for a user to stake on the new network. The more Ether a user stakes, the better chance they have of being chosen to secure the next batch of transactions, which will be rewarded with a free, albeit smaller, amount of Ether just as in proof of work. So far, more than 4.6 million Ether have been staked in what’s called the beacon chain, worth about $11.5 billion at an Ether price of $2,503. That means once proof of stake is in place, the only electricity cost will come from the servers that host Ethereum nodes, similar to any company that uses cloud-based computing. “Nobody talks about Netflix’s environmental footprint because they’re only running servers,” said Tim Beiko, who coordinates the developer work on the new network for the Ethereum Foundation, set up to fund and oversee development of the Ethereum protocol. Danny Ryan, a researcher at the foundation, said Ethereum’s proof of work uses 45,000 gigawatt hours per year. With proof of stake, “you can verify a blockchain with a consumer laptop,” he said. “My estimate is that you’d see 1/10,000th of the energy of the current Ethereum network.” One of the first breakthroughs came when developers created a system where contracts on Ethereum can be executed off the main chain, what’s known as roll ups. That takes an enormous amount of pressure and demand off of the main underlying network, and also means fewer changes to the network need to be made. The next leap was linked to roll ups. The move to a new Ethereum, known as ETH 2.0, has always envisioned the network being broken into 64 geographic regions in what’s called sharding. 
Transactions on one shard would then be reconciled with the main network that’s linked to all the other shards, making the overall network much faster. Yet it was complicated, raised tricky security questions, and was slowing down progress. Once roll ups could be used for transactions, that meant the shards only needed to house data, Beiko said. In the prior model, the sharding system would’ve had to be up and running before Ethereum could move to proof of stake. That’s no longer the case, he said. “Sharding goes from being very complicated to not too complicated,” Beiko said. “It’s not a blocker in the road map any more.” Roll ups are limited by how much data linked to the blockchain they can contain, Buterin said. This was a problem before developers realized shards could hold the data. “If you can publish data on-chain, which you can do with shards, then the scaling goes up by a lot,” Buterin said. The progress on proof of stake was shown recently by a test net where transactions on the existing Ethereum blockchain were successfully merged onto the proof of stake system, Beiko said. “I’m more confident than I was a month ago,” he said. “There’s a bunch of non-trivial issues to figure out, but the fundamental architecture is set and pretty promising.”
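The stake-weighted selection the article describes — the more Ether staked, the better the odds of being picked to secure the next batch of transactions — amounts to a weighted random draw. The sketch below is an illustrative simplification, not Ethereum's actual validator-selection algorithm; the validator names and stake amounts are made up.

```python
import random

def pick_validator(stakes, rng=random):
    """Pick one validator, with probability proportional to staked Ether.

    `stakes` maps validator id -> staked Ether (each at least 32 here,
    mirroring the 32-Ether minimum mentioned in the article).
    """
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

stakes = {"alice": 32, "bob": 64, "carol": 320}
# Over many draws, carol (10x alice's stake) is chosen roughly 10x as often.
counts = {v: 0 for v in stakes}
for _ in range(10_000):
    counts[pick_validator(stakes)] += 1
```

With stakes of 32, 64 and 320, carol should win about 77% of draws, bob about 15%, and alice about 8% — which is the whole economic point: the reward probability tracks the capital at risk.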
Why Multi-Factor Authentication Is Vital to Securing Your Enterprise In a world where cybersecurity threats, hacks, and breaches are exponentially increasing in frequency and sophistication, it’s more important than ever to take actionable steps to mitigate the risk these attacks have on your enterprise. One surefire method to further protect your network and assets is to implement multi-factor authentication policies, known as MFA security or 2FA for short; utilizing this technology is proven to radically reduce attacks levied against your business. In fact, a 2019 report from Microsoft concluded that multifactor authentication effectively blocks 99.9% of automated attacks. Since they fight more than 300 million fraudulent log-ins every day, it’s safe to say that Microsoft is the perfect example of how well this technology works. Let’s dive into what MFA security is and how it can bolster your cybersecurity posture. What Is Multi-Factor Authentication? Multi-factor authentication is an authentication method that requires a user to verify their identity with at least two forms of identification. What constitutes a form of identification can vary and usually involves a combination of the following: - Something you know: a password or PIN - Something you are: a fingerprint or other form of biometric identification - Something you have: a trusted smartphone or device that can validate your identity via confirmation codes Typically, basic systems will only require usernames and passwords in order to access whatever information a user is trying to get into; with multifactor authentication, the standard log-in information will be used in conjunction with another layer of verification, such as entering a one-time temporary passcode that is sent to a trusted device. Overall, 2FA ensures that an account can’t be accessed without double-checking the identity of the user with one or more additional forms of credentials. 
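The one-time passcode step described above boils down to: the service generates a short-lived random code, sends it to the trusted device, and compares what the user types back. This is a hypothetical minimal sketch (the function names are invented); real deployments typically use standardized schemes such as TOTP and add code expiry and retry limits.

```python
import hmac
import secrets

def issue_code():
    """Generate a 6-digit one-time code to send to the trusted device."""
    return f"{secrets.randbelow(10**6):06d}"

def verify_code(expected, submitted):
    """Compare in constant time so timing doesn't leak matching digits."""
    return hmac.compare_digest(expected, submitted)

code = issue_code()              # e.g. pushed or texted to the user's phone
assert verify_code(code, code)   # user re-types the code correctly
```

Note the use of a constant-time comparison rather than `==`: even this second factor should not leak information through response timing.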
Why Passwords Alone Aren't Enough: Let’s start with some troubling statistics about passwords: - The password “123456” is used by 23 million account holders. - An analysis of more than 15 billion passwords reveals the average password has eight characters or less. - A single password is used to access five accounts on average. Passwords are essential for nearly everything we need access to, especially in this age of the Internet and the profound integration of smart devices and the Internet of Things. However, as these statistics illustrate, passwords alone are simply not secure enough to provide ample security. When millions of accounts can be accessed with the same, simple six-digit password and that one password might be able to access five separate accounts, it’s only a matter of time before a person’s entire digital presence is compromised. Essentially, password practices are still far too basic for them to be effective on their own; that’s why MFA security is necessary for better cybersecurity protection. The Importance of MFA Security: Multifactor security is vital to securing your enterprise because it ensures that malicious actors can’t access your network or assets due to the additional layer—or layers—of security that MFA practices offer. With MFA cybersecurity practices in place, cybercriminals would have to know a person’s username, password, and have access to the person’s phone, for example. With the ubiquity of smartphones and how essential they are to our lives in the 21st-century, a missing phone would be immediately noticed and reported. Following this example of MFA security necessitating a smartphone, the device itself would presumably be locked, which is yet another layer of security for criminals to try and hack into. Basically, it would trigger a long line of obstacles for virtual thieves and hackers to access your network or assets with 2FA policies established and followed. 
Benefits of Utilizing Multiple Access Controls If it wasn’t already clear, there are many advantages to utilizing multiple access controls in your enterprise, including: MFA’s Flexibility in the Evolving Work Environment: With the pandemic continuing to force businesses to shift and adapt to different operations, like remote operations, it’s essential to keep your company’s remote workers protected from a cybersecurity perspective. MFA security enables your employees to access all of their usual required sites and accounts but with added protection. This means that both your employee accounts and company assets are further secured against malicious activity, such as fraudulent log-ins. Multifactor Authentication Doesn’t Disrupt the User Experience: As we learned earlier, passwords are often one and the same—so it’s likely that your employees’ passwords aren’t exactly Fort Knox, and necessitating lengthy passwords might result in poor security practices like writing down passwords because they’re too difficult to remember. With multifactor authentication, you can gain peace of mind knowing that even if your accounts don’t have the strongest passwords, they’re fortified against cybercriminals since multiple forms of identification are needed. This reduces the amount of work that your internal IT team needs to spend addressing employee access issues like password resets and empowers them to focus on more strategic tasks. MFA Security Significantly Reduces Risk: A security breach caused by anything, but especially if caused by a flimsy user password, would have significant consequences for your company and your clients; after all, the average security breach cost rose to more than $4 million in 2021. Passwords and general bad practices regarding passwords are obviously a huge risk for enterprise security; that’s why implementing MFA practices would significantly reduce cybersecurity risk at your organization. 
Partner with Compuquip to Manage Your Enterprise’s Cybersecurity Posture! For more than four decades, Compuquip has been entrusted to secure and strengthen cybersecurity practices across dozens of enterprises. We provide a wide range of products and services, including Managed Security Services, firewall automation, virtual CISO services, and more. Contact us to learn how we can fortify your organization’s cybersecurity posture today!
How to Be Environmentally Friendly When Printing You’ve probably heard the term “going green” in relation to business, but what does that mean? Green is often used as a way to describe an environment or organization that is environmentally friendly. This means it uses less energy, generates less waste and uses renewable resources. Printing can be a big cost for your business, so you can reduce this cost by going green and choosing an eco-friendly printer. You’ll also be helping out the environment at the same time! Here are four green printing tips from Century Business Products that you can implement at home or in the workplace: Opt for Recycled Paper: Paper made from 100% recycled content has been used for printing since the 1990s, and it’s better for the environment than virgin wood pulp. The U.S. Environmental Protection Agency (EPA) estimates that each ton of recycled paper saves 17 trees, 7,000 gallons of water and almost 2,000 kilowatt-hours of electricity. Recycled paper is also beneficial in terms of energy savings: It uses up to 25% less energy during production than virgin fiber papers do, which means you can feel good about saving more energy by switching over to recycled paper. Choose Eco-Friendly Inks and Toner: Take a look at the printer’s specifications. Many printers have eco-friendly options, so you can choose to use them if available. If you don’t readily see available options, contact CBP today to learn what options are available for your specific printer model. Kyocera printers from Century Business Products use high-quality, soy-based inks and toners made from renewable resources and biodegradable materials. These products offer exceptional print quality with lower environmental impact than other inks and toners. Use Both Sides of the Paper: If you need to print a lot and want to conserve resources, including ink, paper and electricity, consider printing on both sides of your document. This is called Smart Duplex printing. 
This method of printing can be found in your print settings. Use an Environmentally Friendly Printer: To avoid the environmental impact of printing multiple documents, it is best to use a Kyocera printer that is Energy Star-certified. This label means that certain Kyocera products were tested and found to use less energy than other comparable printers. A CBP representative can recommend the best Energy Star-certified printer or copier for your needs. How are Kyocera printers environmentally friendly? Kyocera printers are designed to minimize environmental impact. The key features that make Kyocera printers so environmentally friendly include: - A minimal amount of ink used. These printers are designed to use a very small amount of ink, reducing the consumer’s waste and cost - Minimal paper usage. Kyocera printers print on both sides of the paper instead of just one side, which means you use less paper overall and can be more cost-effective with your printing habits - Very low electricity consumption compared to other brands, which means fewer greenhouse gases created by power plants to produce electricity Kyocera printers also make it easy for people who want to recycle their used printer cartridges after they’ve run out of ink or toner. Kyocera has an Eco Program where consumers can return their empty cartridges to Century Business Products for recycling. These recycled products will be remanufactured into new products that help reduce demand in manufacturing processes while improving resource recovery efficiency. Printing is an essential and inevitable aspect of many jobs. It’s important that we make environmentally-friendly choices when we do print in order to protect the planet for future generations. Kyocera printers are designed with eco-friendly technology to use less energy and reduce waste. Contact Century Business Products for more information.
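The arithmetic behind the duplex advice above is simple: printing on both sides roughly halves the sheets a job consumes. A small illustrative sketch, not tied to any particular printer driver:

```python
def sheets_needed(pages, duplex=False):
    """Sheets of paper a print job consumes."""
    if duplex:
        return (pages + 1) // 2  # two pages per sheet; odd counts round up
    return pages

# A 10-page report uses 10 sheets one-sided but only 5 duplexed.
assert sheets_needed(10) == 10
assert sheets_needed(10, duplex=True) == 5
assert sheets_needed(11, duplex=True) == 6
```

Across an office printing thousands of pages a month, that near-50% reduction compounds directly into paper, cost, and energy savings.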
Carbon dioxide can be released in many ways, such as through transport, land clearance, and the production and consumption of food, fuels, manufactured goods, materials, wood, roads, buildings, and services. Once the size of a carbon footprint is known, a plan can be formulated to reduce it using methods such as technological developments, better process and product management, consumption strategies, and alternative projects, such as carbon offsetting, which includes solar or wind energy or reforestation. Carbon footprints are affected by the size of a population and its economic output. Individuals and businesses look at these main factors when trying to decrease their carbon footprint. Researchers advise that the most useful way to reduce a carbon footprint is to cut the amount of energy required for production or to decrease the dependence on carbon-producing fuels.
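In practice, sizing a footprint as described above usually means multiplying activity data (kWh used, kilometers driven, fuel burned) by published emission factors and summing. A minimal sketch — the activity amounts and factors below are illustrative placeholders, not authoritative values:

```python
# activity: (amount, emission factor in kg CO2e per unit) -- example values only
ACTIVITIES = {
    "electricity_kwh": (12_000, 0.4),  # kWh purchased
    "road_freight_km": (3_500, 0.1),   # km driven
    "natural_gas_m3":  (800, 2.0),     # cubic meters burned
}

def footprint_kg(activities):
    """Total footprint: sum of amount * factor over all activities."""
    return sum(amount * factor for amount, factor in activities.values())

total = footprint_kg(ACTIVITIES)  # 12000*0.4 + 3500*0.1 + 800*2.0 = 6750 kg CO2e
```

Once the total is broken out per activity like this, the reduction plan the passage mentions falls out naturally: the largest terms are the first candidates for efficiency measures or offsetting.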
If you read the title of this post and think I’m crazy, you’re probably right. On the other hand, most people seem to be saying this by their actions. How many times have you been in an office and seen passwords attached to monitors on sticky notes? How about people who use the password “password”? We’ve all read stories about using strong passwords and how easy it is to guess people’s passwords. The fatal flaw in the system is that we need something that isn’t obvious, but something that we can remember. One of the simplest methods of creating a more complex password is to use upper and lower case alphanumerics plus a symbol. There is a great site that can help you understand this. Go to http://howsecureismypassword.net/ and type in combinations of letters, numbers and symbols to see what it tells you. This is not a foolproof method of choosing a password, but it will give you a good idea of what is secure and what’s not. Here are a few examples. If I use “password”, a person or program will crack my password and access my information in seconds. If I add some symbols into it and use “pa$$word”, it would take a desktop PC about 6 days to crack it using a brute force attack. If I add a capital letter to make it “Pa$$word”, it would take a desktop PC about a year to crack. And if I use “eDo(ument$c!ence$”, it will take more time to crack than the history of the universe. You can see that by adding some simple variety, the job of stealing your password becomes harder. Here are a few easy to remember tips for passwords: - Don’t use a simple word or phrase, like password or 123456 - Use at least 8 characters, but preferably 10 or more - Use upper & lower case letters, numbers and symbols in your password - Use something that you can remember, so you aren’t tempted to write it down - Don’t write your password on a sticky note and put it on your monitor There are many systems, such as biometrics and smart cards, that are more sophisticated than using passwords. 
Unfortunately these aren’t ubiquitous across computer systems and websites. I frequently use OpenID, where it’s supported, which is a bit more sophisticated, but still uses a password construct. Until the computer industry comes up with another authentication system as simple as the password, we are stuck with them. Make sure you use a little common sense when choosing yours. Here are some more tips on choosing a strong password.
Information security threats take many forms, including software attacks, theft of intellectual property, identity theft, theft of equipment or information, sabotage, and information extortion. What are the ethical issues in the security of MIS? The major ethical, social, and political issues raised by information systems include the following moral dimensions: - Information rights and obligations. … - Property rights and obligations. … - Accountability and control. … - System quality. … - Quality of life. What is MIS security? Security of an Information System: Information system security refers to the way the system is defended against unauthorized access, use, disclosure, disruption, modification, perusal, inspection, recording or destruction. What are the security issues with information technology? MIS security refers to measures put in place to protect information system resources from unauthorized access or being compromised. Security vulnerabilities are weaknesses in a computer system, software, or hardware that can be exploited by the attacker to gain unauthorized access or compromise a system. What are the four ethical issues in MIS? PAPA: privacy, accuracy, property and accessibility. These are the four major issues of information ethics for the information age. What is security information system in MIS? Information systems security, more commonly referred to as INFOSEC, refers to the processes and methodologies involved with keeping information confidential, available, and assuring its integrity. It also refers to: Access controls, which prevent unauthorized personnel from entering or accessing a system. What are the 3 components of information security? The CIA triad refers to an information security model made up of the three main components: confidentiality, integrity and availability. What are examples of information security? Information security is the area of information technology that focuses on the protection of information. 
… As examples, pass cards or codes for access to buildings, user IDs and passwords for network login, and fingerprint or retinal scanners when security must be state-of-the-art. What are the major security problems? To avoid IT security issues in your organization, the best way to stay safe is to arm yourself with adequate information and resources on how to prevent the most prevalent IT security issues today. What are the top 10 security threats? Trending Cybersecurity Threats to Watch - Ransomware and as-a-service attacks. - Enterprise security tool sprawl. - Misconfigured security applications at scale. - Sophisticated spear phishing strategies. - Increased frequency of credential theft. - Mobile device and OS vulnerabilities left unchecked. What are security problems? Some security problems are management and personnel issues, not problems pertaining to operating systems. At the operating-system level, the system must protect itself from accidental or purposeful security breaches. A runaway process could constitute an accidental denial-of-service attack. What is ethics in security? Ethics can be defined as a moral code by which a person lives. … In computer security, cyber-ethics is what separates security personnel from the hackers. It’s the knowledge of right and wrong, and the ability to adhere to ethical principles while on the job. Information systems raise new ethical questions for both individuals and societies because they create opportunities for intense social change. … These issues have five moral dimensions: information rights and obligations, property rights and obligations, system quality, quality of life, and accountability and control. What are the four ethical issues in business? Ethical Issues in Business - Harassment and Discrimination in the Workplace. … - Health and Safety in the Workplace. … - Whistleblowing or Social Media Rants. … - Ethics in Accounting Practices. … - Nondisclosure and Corporate Espionage. … - Technology and Privacy Practices.
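The access controls mentioned above ("which prevent unauthorized personnel from entering or accessing a system") can be illustrated with a tiny role-based check. The roles and permissions below are invented for illustration; real systems use much richer policy models:

```python
# Hypothetical role -> permission mapping for illustration only.
ROLE_PERMISSIONS = {
    "admin":   {"read", "write", "delete"},
    "analyst": {"read", "write"},
    "viewer":  {"read"},
}

def is_authorized(role, action):
    """Deny by default: unknown roles and unlisted actions get nothing."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("analyst", "read")
assert not is_authorized("viewer", "delete")
assert not is_authorized("guest", "read")  # unknown role -> denied
```

The deny-by-default posture is the key design choice: anything not explicitly granted is refused, which is the same principle behind the physical pass cards and login controls listed in the passage.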
Parental controls are tools that allow parents to manage their child’s internet use. Parental controls come in a lot of shapes and sizes, but can include features like website and content filtering, screen time management and more. Some of your child’s favorite devices, apps and websites have parental controls built right in, but it’s important to learn everything they can do to see if you need a more robust solution. But first, it’s important to think about your child’s age and maturity level to help you decide what they should and should not access. When it comes to online safety for kids, you may be asking yourself if it’s okay to let your child have internet access at all. But the stats are undeniable: according to Pew Research Center, kids from ages eight to 12 spend about six hours online every day, while teens spend an average of 9 hours online each day. And even if you restrict access at home, children are sure to spend time online at school or friends’ houses. With so many ways for your child to access the internet, it’s important to ask yourself the following questions to help your kiddo safely navigate this brave new world. Before going any further, determine if your child is ready to go online. There’s no magic number to determine at what age your child will be ready for the internet, but according to Child Trends, 41% of children age 3 to 5 and 57% of children age 6 to 11 use the internet at home. And even if you do restrict access at home, by the time they reach kindergarten, they’ll likely start accessing it at school. But just because your child can access the internet, doesn’t mean they need to access the entire internet. It’s important to research age-appropriate apps and sites to understand when your child may be ready to use them and to assess if your child is mature enough to do so. Luckily, many sites and apps, especially social media sites, have age restrictions that can help guide you. 
Facebook, Instagram, TikTok and YouTube all require users to be at least 13 years old. But these are just guidelines: it’s up to you as a parent to determine if your 13-year-old is ready to access these apps and sites. If you do decide your child is ready to set up accounts and apps, or if they’re ready for a phone or device of their own, there are a few ways to help keep them safe. Sometimes, even with the best intentions, children can stumble upon dangerous sites and content without meaning to, so it’s great to set up online safety for kids with a few free tools on some of your kid’s favorite sites, apps, and devices. iPhone Parental Controls: Restrict certain content and apps and set screen time limits. If your kid has their own iPhone, add your kid's device to your “family” with the phone’s Screen Time settings. From there, even if you share a device, you can set a variety of controls. Google Family Link (for Android devices): Set controls and screen time limits remotely, from your device. To get started, search for Google Family Link on Google Play. YouTube Safety Mode: Blocks mature content. On any YouTube page, find the footer that lists your settings. Click the “Restricted Mode” button to turn this feature on or off. Google SafeSearch: Filter sexually explicit content from Google search results. Check out your search settings to set it up, or turn it on for all users under 13 through the Google Family Link app. Social Network Privacy Settings: Keep your child’s activity and information restricted to just their friends and control who can follow them. Check out your child’s social network account settings to change who can find them. These tools won’t block every malicious site out there, so you may want to consider finding an additional parental control solution to improve your child’s online safety. If you’re asking yourself this question, it’s important to know you’re not alone. 
According to a 2019 study by Pew Research Center, 52% of parents use parental controls to restrict access to certain sites. And there’s a good reason for that: when used in partnership with parenting and guidance, parental controls can help encourage healthy online habits in your kid and help protect your home network and devices. Parental controls can not only limit screen time and restrict explicit content, but also help prevent cybercrime and data theft by turning your child away from risky online behavior that can lead to a breach. That’s why it’s a great idea to consider bundling your parental control and cybersecurity software, so they can work together to keep you and your family safe online. McAfee® Total Protection comes with McAfee® Safe Family, which gives you many of the parental control features we’ve talked about here, as well as award-winning antivirus to help protect your home devices and network. But even with bulking up your protection at home, at some point, your child will be going online on a different device or network, maybe with one of their friends, that doesn’t have your parental controls set. That’s where education comes in. The key to success for parenting in the digital age is to keep an open conversation with your child and to take the time to teach them about key internet safety tips, like the tried-and-true rules below: Don’t give out any personal information online before talking to your parents, including your name, address, phone number and more. Don’t share your passwords with anyone, even friends. Don’t say anything online you wouldn’t say in person, and if you receive messages or comments that are mean, tell your parents. Don’t upload any photos or download any files without talking to your parents. Don’t talk to anyone you don’t know online, and don’t meet anyone in person you’ve already met online. If you come across anything online that makes you uncomfortable, don’t hide it. 
Talk to your parents about how you came across it, how you can avoid it in the future, and ask any questions you may have about the content. In some cases, your kids may know more about going online than you, so keeping open lines of communication is important. The more open you are, the more ready they’ll be to talk about their activity online, allowing you to guide them toward good habits.
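Under the hood, the screen-time limits these tools enforce reduce to bookkeeping: accumulate usage and compare it against a daily allowance. A toy sketch — the two-hour limit is an arbitrary example, not any vendor's default:

```python
from datetime import timedelta

DAILY_LIMIT = timedelta(hours=2)  # example allowance, chosen per child

def remaining(used_today):
    """Time left before the limit kicks in (never negative)."""
    left = DAILY_LIMIT - used_today
    return max(left, timedelta(0))

def should_block(used_today):
    """True once today's usage has exhausted the allowance."""
    return remaining(used_today) == timedelta(0)

assert not should_block(timedelta(hours=1, minutes=30))
assert should_block(timedelta(hours=2, minutes=5))
```

Tools like Screen Time and Family Link layer scheduling, per-app limits and remote overrides on top of exactly this kind of counter, but the core decision is the same comparison.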
The Ukraine-Russia War is Impacting Global Sustainability Initiatives and Derailing Progress in Meeting SDG Goals The Ukraine-Russia War has hindered the progress of nations and businesses toward achieving global sustainability goals. Along with its humanitarian and economic consequences, the crisis has altered investment in energy, defense, and autocratic states. Can the enthusiasm the world felt just seven years ago about reaching Sustainable Development Goals (SDGs) be recaptured, and what does the future hold for sustainability enablement service providers? Read on to find out. The optimism around achieving SDGs, also known as the Global Goals, has waned since their adoption by the United Nations in 2015 with the promise of improving people’s lives and preserving natural resources. Global sustainability initiatives have been impacted by the Ukraine-Russia War, the pandemic, and supply chain issues. According to the UN, income for about 60% of the global workforce declined during the pandemic. Supply chain issues further exacerbated the economic contraction and humanitarian losses by inflating food and fuel prices. The war is impacting progress in accomplishing SDGs, directly through its humanitarian and economic consequences, and indirectly through its effect on Environmental, Social, and Governance (ESG) investments. The following three major challenges have emerged due to changing perceptions about ESG investments in light of this crisis: - The war has ramifications for the global energy transition: The Ukraine-Russia war has slowed down the global energy transition to renewables in two ways: Increased metal and gas prices slowing renewable technology investment – The region is a leading supplier of “energy transition metals” like nickel, palladium, copper, and lithium. Russia accounts for 7% of the world’s mined nickel and 33% of the world’s mined palladium, which are used in electric vehicle batteries and to reduce automobile emissions, respectively. 
Ukraine is the largest supplier of noble gases like krypton, which is used in renewable technologies. The war has reduced the already sluggish rate of renewable technology investment by increasing the prices of these metals and gases. Ramped up coal production and fossil fuel investment – Russia accounts for 17% of the world’s natural gas supply, which is perceived as a transition fuel globally. Before countries develop sustained sources of renewable energy, natural gas is replacing fossil fuels due to its lower carbon emissions. The issue is more pronounced in Europe, as about 80% of Russia’s natural gas is exported to Europe, fulfilling about 40% of Europe’s gas demand. The war has inflated gas prices. Although the US has agreed to supply more gas to the region, this raises the question of sustained gas supply and puts pressure on European governments to accelerate their net-zero strategies. The market is optimistic that Europe will transition to clean energy faster than expected because it needs to become energy self-reliant. Already slow investment in renewable energy has dipped further since 2018. While renewable energy requires patient and risk-tolerant investors, fossil-fuel investment generates considerable returns quickly due to the massive existing hydrocarbon infrastructure. In the war’s wake, fossil fuels are seeing an investment frenzy, with Canada, the US, Norway, Italy, and Japan increasing production. Many countries across Europe are again ramping up coal production to avoid depending on Russian gas. In the short run, it seems that the world has taken steps back on global warming. - Investment in defense is being reclassified as sustainable: Before the war, steering away from investing in arms and ammunition was considered prudent and ESG-conforming. However, the war has brought back fears of traditional warfare. 
Now, many nations have started taking a U-turn from this narrative by categorizing defense investment as sustainable for national security and global alliances. Many global defense suppliers' share prices spiked on the first day Russia invaded Ukraine. Many European nations, including Germany, Poland, and Sweden, have announced increases in their defense budgets. SEB Investment Management, a leading asset-management firm in the Nordics, has revised its sustainability policy to allow some of its equities and corporate bonds to be invested in the defense sector. With fears of traditional warfare restored, investors and governments are bound to pump more money into arms and other defense products. - Investors are steering away from autocratic states Investors are facing heightened reputational risks for associating with authoritarian regimes. The boundary between investing in government bonds of an autocratic state and investing in companies conducting business in/with the autocratic states is now blurred for investors. Western investors are striking Russia off their investment list, especially if the investment is ESG-compliant. This can dampen investments in other autocratic states and the businesses associated with them. How does the war impact sustainability enablement service providers? The war has temporarily derailed the uptake of renewable energy investments. To start, this will impact enterprises' Scope 2 emissions reduction goals. Scope 2 emissions are generated from purchased electricity, and reducing these emissions requires enterprises to turn towards renewable electricity sources. The sustainability enablement technology industry also will experience a short-term supply crunch of semiconductor chips, which are an important input in producing sustainability technologies.
To deal with these choppy waters, organizations will need help from consulting and technology providers to shift their sustainability mix and adapt their net-zero strategies to still achieve their committed targets for global sustainability initiatives. Moreover, as the sustainability ecosystem matures, forward-looking investments in scaling undertakings such as enhancing trust in data and reporting (avoiding greenwashing claims), scaling operations to accelerate net-zero targets, and creating persistent governance systems will continue to create momentum. You can read more about the impacts of Russia's military action in Ukraine on services jobs and global sourcing in our blog, "Will Ukraine's Invasion Have a Domino Effect on Other Geopolitical Equations?"
Have your students ever begged you to start the lesson early? Or spent hours upon hours of their free time, invested – not by force! – in educational games? They will now! Making learning fun, removing the pressure and adding engaging elements to bring learning to life has a name – it's called gamification and it's the way we've encouraged thousands of students to discover their hidden talent and passion for cyber security. Can education keep up with today's technology? The world today is geared up for smarter living – from smart appliances and home hubs to security systems, every aspect of our lives is now enhanced by technology. As we embrace change and welcome a new era of connected living in our personal lives, have we also thought about the people who will keep those systems safe and protected from cyber criminals? The short answer is no. Not only is there a huge shortage of cyber security practitioners, there are also very few cyber security resources for students to discover this career path and develop their skills in an affordable and accessible way. How to teach cybersecurity to the next generation of digital defenders The cyber security industry is notorious as a tough field to crack. Not only do you need the time and patience to persevere with studying such an in-depth, constantly evolving topic, but there is also a high barrier to entry that requires expensive training courses and certificates. With plenty of hurdles already in place, traditional text books and learning can make the subject matter seem dry and unengaging, encouraging fewer students to even consider it at school. And this is where the issue lies. Cyber security is one of the most hands-on, fascinating industries and has roles to suit students with varying personality traits and interests, but the key is breaking down stereotypes and inspiring them to have a go.
In this TED talk, CyberStart founder James Lyne discusses in more detail the issue of not training enough cyber security experts for the future. Gamification has the unique and powerful ability to change people's opinion on learning, particularly with topics such as cyber security where you can have a go at real-world simulations, imagine yourself in these cool cyber settings, and experience the thrill of problem solving. When it doesn't feel like a chore, the task at hand becomes so much more entertaining and addictive, just like their favourite video game. Our founder James Lyne first discovered this when he was part of a summer camp that trained 600 kids to learn cyber security in a traditional way. You can guess what happened – the kids were reluctant to go, they weren't interested in the cyber security resources provided and the atmosphere lacked any sense of excitement. So, he did some research and decided to try and gamify cyber security. He took all his passion for cyber security and created a new vision, teaching young adults how to hack into things and how to make software more secure through fun, immersive challenges. Over 3 days, enthusiasm grew until the kids were turning up EARLY for camp, begging for the cybersecurity lessons to begin. That's the dramatic difference gamification can make. With cyber security, it takes a complex, technical subject and transforms it into a bite-sized, accessible format, meaning kids of all ages can start right at the beginning and pace themselves to develop skills they didn't even know existed. An educator and parent's role in the future Cyber security is essential to protect tomorrow's digitally connected world and now more than ever, we need to inform, encourage and inspire every single student who has the passion and potential to thrive. You don't need to be an expert to teach cyber security.
You simply need to provide the opportunity, break down the barriers and let them explore an entirely new career path and interest. Got a class full of eager students, who you think could crack codes and analyse puzzles? Or maybe your children love creative problem solving? Get started right now with CyberStart Game - the most proven cyber security learning tool for young adults!
In this article we will explain what email headers are and how to check them. What is an email header? An email header is a fragment of code in an HTML email that comprises information about the sender, receiver, email's route to the inbox, and authentication details. What does an email header look like? Main points of email headers 1. Received: from Received: from reveals which address the email was sent from. It also reveals the sender's IP address. Timestamps and the destination email address are also included. The ESMTP ID is the Extended Simple Mail Transfer Protocol ID; ESMTP is the protocol that computers connected to the Internet use to send emails. It is also the protocol that servers use to transfer email between them. SMTP transactions typically have 4 parts: - HELO (EHLO): Extended Simple Mail Transfer Protocol greeting used by email servers that communicate with one another - MAIL FROM: Command that starts a mail transfer - RCPT TO: Command that identifies the recipient - DATA: Message and message headers, including From: and To:. Many spam filters run after HELO, MAIL FROM and RCPT TO but before DATA. That is because once you accept the DATA, you cannot bounce the message. This is why the filters that run on the message content, including the filters you set up yourself, cannot bounce mail. 2. Message-ID of Email Header Message-ID reveals a unique message identification number. MIME (Multipurpose Internet Mail Extensions) extends the format of an email by supporting text and non-text attachments like application files, audio and video files, images and message bodies with multiple parts. Rules that are set up to identify spam. Used by Spamcop, SpamAssassin and similar services. CSA (Certified Senders Alliance) provides a whitelist for bulk email senders. An encryption method that allows viewing older emails using new technology. 7.
Email signatures The DKIM (DomainKeys Identified Mail) signature is included in email messages to reveal information about the sender, the message and the public key location. DKIM is required by such mailbox providers as AOL, Google Mail, Outlook and Yahoo Mail to verify the sender's identity and prevent email spoofing. The DKIM signature can include these values: - v – version of the DKIM standard that is used - a – cryptographic algorithm used to create the hash - c – identifies whether changes to the email like line wrapping or adding whitespace are allowed (canonicalization) - s – reveals the selector record name to query the correct public key from the d value - d – the domain that signed the message - h – the SMTP headers that are included in the cryptographic hash - i – the identity of the signer in email address format - b – the cryptographic signature that is encoded in Base64 How can I see the headers of a message? - Gmail: In the top right corner of the message, click the down arrow next to the Reply button. Select the option to display the original. - Yahoo!: Select the ellipses (…) in the toolbar at the top of the message and choose View Raw Message. - Outlook: In a new window, open the email. Select Properties from the File tab. In the Internet Headers box, look for email headers. - Mac Mail app: Click View, then Message and All Headers. As an alternative, you can use shortcut keys: Shift–Command–H. Why do so many headers start with X-? Computers that handle messages append their own headers. By convention, custom headers start with X-, which helps ensure they do not clash with standard, defined headers. What is an envelope sender? An email has both an envelope sender address and a From: address. The envelope sender address shows where the email originated. The From: address shows where to respond. In most cases, they match, but not in all cases. Spammers and scammers often abuse the mismatch of addresses.
They can change the From: address part to something that recipients are likely to recognize. However, the envelope sender stays in their control. What to do if you received a spam email with Heficed IP address? Extract the header from your email and send an abuse report. Our Abuse team will handle the issue as soon as possible. Was this article helpful? If you need any further help, don't hesitate to send a support request to our support team.
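The header fields described above can also be inspected programmatically. Below is a minimal sketch using Python's standard email module; the raw message, its addresses, and the DKIM values are invented purely for illustration.

```python
from email import message_from_string

# An invented, abbreviated raw message; real headers are much longer.
raw = """\
Received: from mail.example.org (mail.example.org [203.0.113.5])
Message-ID: <[email protected]>
DKIM-Signature: v=1; a=rsa-sha256; d=example.org; s=selector1
From: Alice <[email protected]>
To: Bob <[email protected]>
Subject: Hello

Body text.
"""

msg = message_from_string(raw)

# Header lookup is case-insensitive.
print(msg["Message-ID"])  # <[email protected]>

# The Received chain records the route to the inbox; the topmost
# entry is the most recent hop.
received = msg.get_all("Received")

# A DKIM-Signature is just another header; split its tag=value pairs.
dkim = dict(
    part.strip().split("=", 1)
    for part in msg["DKIM-Signature"].split(";")
)
print(dkim["d"])  # signing domain: example.org
```

The same approach works on a real header block pasted from any of the mail clients listed above.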
Five benefits of network virtualization New challenges and technologies such as virtualization are disrupting networks everywhere, with many businesses struggling to remain agile. What is network virtualization? Network virtualization is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity. Virtual networks are no longer something to consider in the future. According to research company IDC, close to 65% of all server tasks on earth are run by virtual servers. Virtualization is growing in all industries, especially manufacturing. The global storage virtualization market will enjoy a compound annual growth rate of more than 24 percent from 2015 to 2019, according to Technavio's market research. Types of network virtualization There are 2 types of network virtualization: - External network virtualization - Internal network virtualization What are the benefits of virtualization? - Reduces the number of physical devices needed - Makes it easy to segment networks - Permits rapid change / scalability and agile deployment - Provides security from destruction of physical devices - Allows failover mode – a defective disk simply switches to a backup on the fly, and the failed component can be repaired while the system continues to run A little background on the benefits of virtualization, from a hardware perspective – virtualization allows more applications to be run on the same hardware, which translates into cost savings. If you buy fewer servers, you will incur lower capital expenditures and maintenance costs. Organizing your virtual network can be relatively easy and can immediately increase network efficiency. Find out what the five network virtualization challenges are and how to deal with them. Examples of Network Virtualization You can design your network so that your Local Area Networks (LANs) are subdivided into virtual networks and VLANs.
Doing so will dramatically improve load balancing. You can also improve security by segmenting your network and establishing role and location-based permissions and procedures. Doing this in a virtual environment enables you to be agile and adapt your network architecture as needed to manage changing and increasing network loading and demand. More about Securing Network Infrastructure Why use network virtualization? Greater visibility into networks is invaluable and allows for considerable CAPEX and OPEX savings and reduced downtime. Right now, OT networks tend to be a lot smaller than IT networks, but that is in the process of changing, especially with the looming shift to the Industrial Internet of Things. When that happens, and there are industry pundits saying it will happen en masse sooner rather than later, the number of network-connected devices is absolutely going to mushroom. So, most likely in two years, network monitoring will be as important in this industry as it already is in IT. It seems pretty simple: a boost in visibility will allow everyone from engineers to C-level leaders to make informed decisions in the new era of manufacturing. The article was written by Frank Williams, CEO of Statseeker, a global provider of innovative network monitoring solutions for the IT enterprise and OT industrial market space. Frank holds a BSEE, augmented by many post graduate courses in management, leadership and technology.
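The subdivide-the-LAN idea described in the examples above can be sketched in code. This illustrative Python snippet (vendor-neutral, not tied to any particular VLAN tooling; the VLAN names are invented) uses the standard ipaddress module to carve one address block into per-VLAN subnets and to express a location-based membership check:

```python
import ipaddress

# One physical LAN's address block, to be divided among virtual networks.
lan = ipaddress.ip_network("10.0.0.0/22")

# Carve the /22 into four /24 subnets, one per hypothetical VLAN.
vlans = ["engineering", "finance", "guest-wifi", "iot-sensors"]
subnets = dict(zip(vlans, lan.subnets(new_prefix=24)))

for name, subnet in subnets.items():
    print(f"VLAN {name}: {subnet} ({subnet.num_addresses} addresses)")

# Role/location-based permission checks reduce to membership tests.
host = ipaddress.ip_address("10.0.1.77")
print(host in subnets["finance"])  # True: 10.0.1.0/24 is the second subnet
```

Planning the addressing this way up front makes it straightforward to map each subnet onto a VLAN on the physical switches later.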
<urn:uuid:1f1894af-1632-4f80-a663-63b9e950d4b9>
CC-MAIN-2022-40
https://www.iiot-world.com/industrial-iot/connected-industry/five-benefits-of-network-virtualization/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00634.warc.gz
en
0.928755
633
2.59375
3
Online Videos May Be Conduits for Viruses Online videos aren't just for bloopers and rants -- some might also be conduits for malicious code that can infect your computer. As anti-spam technology improves, hackers are finding new vehicles to deliver their malicious code. And some could be embedded in online video players, according to a report on Internet threats released Tuesday by the Georgia Tech Information Security Center as it holds its annual summit. The summit is gathering more than 300 scholars and security experts to discuss emerging threats for 2008 -- and their countermeasures. Among their biggest foes are the ever-changing vehicles that hackers use to deliver "malware," which can silently install viruses, probe for confidential info or even hijack a computer. "Just as we see an evolution in messaging, we also see an evolution in threats," said Chris Rouland, the chief technology officer for IBM Corp.'s Internet Security Systems unit and a member of the group that helped draft the report. "As companies have gotten better blocking e-mails, we see people move to more creative techniques." With computer users getting wiser to e-mail scams, malicious hackers are looking for sneakier ways to spread the codes. Over the past few years, hackers have moved from sending their spam in text-based messages to more devious means, embedding them in images or disguised as Portable Document Format, or PDF, files. "The next logical step seems to be the media players," Rouland said. There have only been a few cases of video-related hacking so far. One worm discovered in November 2006 launches a corrupt Web site without prompting after a user opens a media file in a player. Another program silently installs spyware when a video file is opened. Attackers have also tried to spread fake video links via postings on YouTube. That reflects the lowered guard many computer users would have on such popular forums. 
"People are accustomed to not clicking on messages from banks, but they all want to see videos from YouTube," Rouland said. Another soft spot involves social networking sites, blogs and wikis. These community-focused sites, which are driving the next generation of Web applications, are also becoming one of the juiciest targets for malicious hackers. Computers surfing the sites silently communicate with a Web application in the background, but hackers sometimes secretly embed malicious code when they edit the open sites, and a Web browser will unknowingly execute the code. These chinks in the armor could let hackers steal private data, hijack Web transactions or spy on users. Tuesday's forum gathers experts from around the globe to "try to get ahead of emerging threats rather than having to chase them," said Mustaque Ahamad, director of the Georgia Tech center. They are expected to discuss new countermeasures, including tighter validation standards and programs that analyze malicious code. Ahamad also hopes the summit will be a launching pad of sorts for an informal network of security-minded programmers.
What is 2FA? "2FA" refers to Multi-factor Authentication. In short, this is a process of adding an additional layer of security beyond simply a username & password. We often are asked "where can I actually use 2FA?". This is a pretty vague and often troubling question for customers. Multi-factor Authentication has several uses. - Protecting Windows desktop logins. - Adding 2FA to a VPN or other appliance that supports RADIUS. - Adding 2FA to your RMM (Remote Management) tool. - Adding 2FA to popular PSA (Professional Services Automation) tools. - Adding your own custom integration using our publicly available API. Whether you are using a Windows desktop, laptop, workstation or server, we can add a Windows Credential Provider to add a 2FA requirement to the login. Note: This agent does not replace the existing Windows login requirement of username/password. The agent will only add a third requirement for a Passly passcode. Want to know how to install a Windows Credential Provider? Please see this section. VPN's and other appliances that support RADIUS If you are working with an appliance or device that has support for RADIUS: - What is RADIUS? You might want to check out this article from Wikipedia. - How should I set up RADIUS? - How can I add a RADIUS agent? - How can I find documentation on setting up my VPN or appliance to use RADIUS? - First consult with your OEM (Original Equipment Manufacturer) for the appliance or device you are considering using RADIUS with. Each appliance/device and version has different settings that can be used. Your OEM is your best resource. Adding 2FA to RMM (Remote Management) tools - Kaseya VSA (Virtual System Administrator) supports both SAML and 2FA logins. Please check out this Kaseya / ID Agent Integrations section for more information. - Labtech, Solarwinds N-able, and Continuum all have integrations that support 2FA that they maintain. Please check out this section for more information.
Adding 2FA to PSA (Professional Services Automation) tools. - You can add 2FA to both Autotask and Connectwise. Both companies maintain their own integration that is designed to work with Passly. Please check out this section for more information on the configuration.
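The one-time passcodes that integrations like these rely on are typically generated with the TOTP algorithm (RFC 6238). The sketch below shows the general mechanism using only Python's standard library; it is a generic illustration of how such passcodes work, not Passly's actual implementation, and the secret shown is the RFC's published test key.

```python
import hmac
import struct
import time
from hashlib import sha1

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """Generate a time-based one-time passcode (RFC 6238, SHA-1 variant)."""
    counter = struct.pack(">Q", for_time // step)   # 30-second time window
    digest = hmac.new(secret, counter, sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 -> 94287082
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082

# A server verifies by recomputing the code for the current time window
# and comparing it to what the user typed.
current = totp(b"12345678901234567890", int(time.time()))
```

Because both sides derive the code from a shared secret and the current time, a stolen password alone is not enough to log in, which is the whole point of the second factor.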
The Hidden Superhero in Your Kitchen There's little question that food waste has become a massive global issue—we waste as much as one-third of the food produced worldwide each year. And the foods that we send to landfills produce methane, a potent greenhouse gas shown to have a global warming potential 21 times that of carbon dioxide. Talk about a one-two environmental punch. But for the average person, the question of how to solve such a big problem feels out of reach. Of course, we could get more disciplined about eating leftovers, but food waste (onion peels, apple cores, fish bones) is an inevitable part of consuming food. Enter the most unexpected of kitchen superheroes—the garbage disposal. This mighty kitchen food waste fighter turns the phrase "down the drain" on its head and potentially offers significant sustainability cred. Emerson's InSinkErator® garbage disposals are able to efficiently break down even the toughest foods—corn cobs, orange and banana peels—into tiny pieces. From here, the food waste is sent via your home's wastewater plumbing to treatment facilities equipped to handle the small particles. At capable treatment plants, that food waste is even repurposed as energy. WORLD-CHANGING TECHNOLOGY InSinkErator is not just the world's largest manufacturer of garbage disposals: We invented them in 1927. Using InSinkErator garbage disposals helps keep food waste out of landfills, potentially reducing methane gases and leachate, an acidic liquid residue that can seep into and contaminate ground water. Many people have realized the benefit of InSinkErator: There are more InSinkErator garbage disposals in homes around the world than all other brands combined. "Food waste may feel like a big issue to tackle, but every family that chooses to keep their food waste out of landfills can make a difference," said Chad Severson, president of Emerson's InSinkErator. "We're grateful that so many families are committed to doing their part to help the environment.
And we're even more grateful that we can do our part too, by continuing to put our InSinkErator disposals to work across the globe." YOU CAN GRIND THAT? Garbage disposals have long been a sustainable option for families, but InSinkErator advances in recent years have made them an even better option. Those same advances have turned some conventional don't-grind-that garbage disposal "wisdom" into kitchen urban legends.
Ymasumac Marañón Davis is an educational consultant, intuitive life coach and author. This blog is the third in a series around access. All thoughts are her own. I am often invited to visit schools that have received awards because of high ratings in their use of educational technology and as innovative learning centers. This always excites me, a place where creative learning takes place for kids? Yes, sign me up! And from an initial glance they are definitely different learning environments. New flexible furniture, colorful walls and decor, excited adults – awesome! It is apparent, that there is an effort to change the learning environment; whether or not it’s actually happening at the core level of values and belief systems remains to be seen. What eventually unfolds is something we are accustomed to: some children are eager to share their projects with you, while the majority sit back and quietly share when prompted. What is most striking, is the projects are all vastly similar, if not starkly the same. Where then is the innovation and more importantly, what was the professional development like? Innovation is a big buzzword right now in education. We need our kids to innovate – we want them to be creative thinkers – they need to think outside the box. The question then becomes, how do we do this? In an effort to be innovative, schools often latch on to big ideas, like “maker spaces” (classrooms with Legos and art materials whose intention is to allow students to be creative and innovative in their thinking). Many schools are looking towards creating robotics or Lego clubs. All of these intentions are laudable. Unfortunately, these efforts alone do not impact learning for everyone. To do this, we must consider our learning environment. Bringing in new furniture or programs without shifting the learning culture results in reverting to familiar classroom arrangements and teaching new programs with traditional, teacher driven pedagogy. 
When exploring questions around access in education, we have to consider the learning culture of the classroom and school. Although many of the aforementioned efforts may intend to impact the learning culture, they often generate excitement among a small minority of teachers and students. How then, do we impact all students to truly think in creative and innovative ways? Our starting point most likely is misplaced. Rather than look at students, we need to start with adults. Students are born ready for change and innovation, innately curious about the world around them. By middle school, this inherent drive to learn is minimized so drastically, it is troubling. That this happens in adolescence, when students’ brains once again have become as active as when they were toddlers undergoing an incredible transformative process, tells us that the learning culture surrounding them, rather than any innate characteristics, is what impedes innovative learning. We adults need to be willing to reflect on whether our learning culture truly allows all students to learn. The good news is that there are adults who are very willing to take these risks and try something new, fail miserably, reflect, and try again! Every campus has at least one of these teachers who is, ready to create change and try new things. It is these teachers who create magic in our classrooms and from whom we can learn to do the same. How can we scale what they have mastered? First, start with the willing and then have them coach peer to peer. These risk-taking teachers are often more than willing to share. They have not just latched onto the tools but also the learning culture that is required to go with the new tools and programs. Being willing to reimagine our learning culture requires us to examine the skills we hope our students achieve, and to assess, whether the characteristics and qualities of the environments support these skills. 
If we desire to cultivate the skill of curiosity in our students, then we need to ask ourselves what corresponding change in the learning culture is required. A quality that complements curiosity is risk-taking. However, many react to the prospect of risk-taking with fear: What if our students fail? What if our school doesn’t do well in state testing? These fears often stifle the risk-taking that enables innovation. My son a freshman in college, recently described to me the frustration he sees from his college teachers, who want their students to speak up and take risks in the classroom through problem-solving. He said none of his peers ever volunteered or spoke up, even though his teacher encouraged them to try, even if it results in mistakes. “Why?”, I asked. His response: “Because they’re afraid to fail.” “Where does this come from?” I asked. “It starts in middle school and solidifies in high school. If we make mistakes it affects our grades, and if our grades aren’t good, we know it’ll affect our college prospects or even passing a class. So we don’t like making mistakes, because it means we aren’t doing well and the consequences are too severe.” In one swift response to the question, “Why don’t students take risks?” My son summarized our educational system’s culture that ties student performance to their grades, which are tied to school ratings, college entrance, prestige, etc. I realize this asks us to reexamine our entire system, including our grading practices. Many schools are doing just this. Hampshire College in Massachusetts has dropped standardized testing as a requirement for admission. 
According to the school's president Jonathan Lash, in an article published by The Independent, "Our applicants collectively were more motivated, mature, disciplined and consistent in their high school years than past applicants." Although many of us cannot make sweeping decisions like this, we can begin by examining the very area where students will spend a good part of their day – our classrooms. How will students feel when they walk in? Will they, for that brief period, feel encouraged to stretch their limits and take risks? Will they know this is a space where they can tackle tough questions? Learning asks all of us to be present – not just our intellect, but our full selves, which includes our emotions and spirit. What will drive students through problem-solving, if not the inner spirit to know, the gnashing of emotions to pull through the unknowns of questioning? So when it comes to teacher buy-in and scaling up, start small. Good learning practices can catch like wildfire. Professional development should not just consist of learning new programs and using new tools and furniture. It should also be a space where educators have an opportunity for deep reflection on their own learning practices. Asking big questions. Educators need time to wrestle with these questions, and then the freedom to begin cultivating a new culture in their classrooms. To do this once is not enough. Professional development is more effective if it models coaching. Ultimately, good professional development should open up more questions and offer an opportunity to continue honing in on these questions throughout the year. In the end, we have to ask ourselves, what drives us, what keeps us moving through? And then, with an honest lens, open up to the risks that enable innovation – creating magic in the classroom.
Between 9/11, the 2004 Madrid train attacks, and countless other attacks in recent years, mass transit has become a significant target for terrorist attacks. All over the world, there have been strong pushes to enforce new kinds of rules and regulations in an attempt to curb the possibility of terrorist attacks. Many of these efforts involve new screening measures, whether through additional human personnel or terrorist screening technology. Let us look at a few of the measures set in place to make mass transit safer for everyone involved.

An Increased Amount of Security Forces

The first and perhaps most obvious method for reducing possible terrorist attacks is to have more security personnel on deck, whether security guards or more personnel for flights. These security forces would then be able to screen passengers and their bags, although this does come at a price. Hiring more people carries a higher price tag, and checkpoints can reduce the perceived convenience of transportation, create vulnerabilities at the checkpoints themselves, and foster a general feeling of fear.

Video Surveillance Systems and Analytics

Video-based security systems are becoming much more intelligent. These days, such systems are capable of features like facial recognition to determine whether someone is authorized to be in a particular area. This can be an excellent feature for restricting free movement in places where only transit employees are allowed to be. One could imagine that setting up a driver camera inside an airport’s employee parking lot can help prevent terrorists or other shady figures from gaining key access.
Systems Designed Especially for Mass Transit

As more and more security measures are set up to address the concern for terrorist screening in mass transit, we will see more and more systems designed with mass transit in mind. Take Gatekeeper’s Automatic Train Undercarriage Inspection System, for example. While it can scan both cargo trains and passenger trains, the system is intended to provide high-resolution scans of every car so you can make sure that there are no foreign objects or potentially dangerous modifications made to the cars.

Groundbreaking Technologies with Gatekeeper

Gatekeeper Security’s suite of intelligent optical technologies provides security personnel with the tools to detect today’s threats. Our systems help those in the energy, transportation, commercial, and government sectors protect their people and their valuables by detecting threats in time to act. From automatic under-vehicle inspection systems and automatic license plate reader systems to an on-the-move automatic vehicle occupant identifier, we offer full 360-degree vehicle scanning to ensure any threat is found. Throughout 37 countries around the globe, Gatekeeper Security’s technology is trusted to help protect critical infrastructure. Follow us on Facebook and LinkedIn for updates about our technology and company.
Behind any enterprise network environment lies a strong foundation of robust hardware. It is all too easy to look past the importance of network hardware in favour of concentrating on more glamorous software solutions. However, the hardware you use and the way it's connected has a massive effect on the end quality of your service. The world of network hardware is made up of a rich tapestry of devices. The average admin has much to consider before their service even goes live. Selecting the right network hardware will define your user experience and how well your network performs in line with the expectations of your business. Overlooking your network hardware can result in poor performance.

All enterprise networks operate as either a local area network (LAN), a wide area network (WAN), or a campus area network (CAN). LANs are used within individual offices, typically over Ethernet; WANs are large networks that travel across jurisdictional boundaries; and CANs connect multiple LANs together in a close area. Most companies use LANs, as the latter two are extremely expensive to build and maintain.

In this network hardware guide, we take a look at some key network components and help to give you the fundamentals to start building your network from scratch. The hardware elements of your network will need to be controlled by appropriate software, but those considerations are outside the scope of this guide. After reading this guide you will be able to decide which types of hardware you will need to deploy in order to satisfy the needs of the business. We’ll be covering:
- WLAN Controller
- Network Interface Card
- Hubs and Switches
- Cable Modem
- Analog Modem
- T1 Line

However, before we get to these specifically, it’s important to cover what network hardware actually is.

What is Network Hardware?

Network hardware is the name given to the physical elements of your network.
At a basic level, the term encompasses elements like computers, laptops, tablets, servers, scanners, and printers. At a more sophisticated level, it includes WLAN controllers, Network Interface Cards (NICs), hubs, and switches. Picking the right hardware is important because it determines what your network is capable of and your overall efficiency.

WLAN Controller

When it comes to managing an enterprise-grade network, WLAN controllers are essential. They are used to configure access points both on a remote and local basis. Many administrators use WLAN controllers to manage wireless networks centrally. A WLAN controller sends out messages to access points throughout the network. In the event that a connection to the controller fails, the access points operate on a standalone basis to keep the network up and running. WLAN controllers have been employed by many organizations to maintain network uptime as much as possible. Larger organizations struggle to manage multiple access points if they don’t have a WLAN controller in place to centralize the process. Generally speaking, a WLAN controller will support up to 250 users and 25 access points.

Managing all your access points through one tool also has the advantage of eliminating the need for lots of manual oversight. Rather than going to individual access points and wasting time, an admin can view them all from one location. Likewise, it also makes it easier to respond in the event that a service fails. For example, if an individual access point goes down, the controller can reroute wireless traffic to a new one.

Network Adapter

A network adapter is an internal component of a computer which is used to communicate with another computer via a network. Network adapters allow computers to connect to each other through a LAN connection. Most network adapters are located on the circuit board and connected straight to the motherboard.
There are a number of different types of network adapters:
- Computers with wireless network adapters onboard
- USB network adapters that plug into a USB port
- PCI adapters (NICs), cards that can be added inside the computer

Network Interface Card

Your network interface card (NIC) is what your computer uses to communicate with other computers throughout your local network. You need a network interface card in order to be able to reach out to other computers on your network. This is one of the main pillars of your local connectivity. Each network interface card has a number used to identify the device in question. One of the main challenges raised by network interface cards is picking the right one.

One of the main factors you want to consider when choosing a network interface card is bandwidth. The speed of the cables in your local environment is only as good as your network interface card. If your cables support a higher speed than your network interface card, you’re not going to be able to keep up the pace.

The next thing you need to consider is that network interface cards don’t all work on the same type of media. For example, you won’t be able to use a wireless network interface card on a wired Ethernet segment. As a result, you need to choose a network interface card that is compatible with your existing media. There are many network interface cards that combine Ethernet and wireless as well, so these are worth considering when setting up your network.

Finally, you need to make sure that your network interface card is compatible with all other devices on your network. Everything from your general topology to your hardware devices will need to be compatible with your network interface card. As such, you’ll want to have a thorough understanding of your network architecture before committing to any particular network interface card.

Hubs and Switches

Hubs and switches are used in enterprise networks for a number of reasons.
They help to split a single signal into multiple signals and to repeat degraded signals. However, there are distinct differences between the two. A hub is rudimentary in nature, operating passively, whereas a switch is more sophisticated and active on the data link layer of the OSI model. A hub simply receives information from one port and then broadcasts it to the others on the network.

The problem with hubs is that they consume bandwidth inefficiently. Data that flows through a hub can only go in a single direction at a time. This is referred to as half-duplex, and the model is exacerbated by the fact that you have to share your bandwidth between each port on the device. That being said, hubs do have the advantage of being inexpensive and simpler to deploy than their more advanced counterparts. The end result, however, is low speeds and a very inefficient usage of bandwidth.

In contrast, a switch has the ability to reroute data. Switches reroute data through the use of microsegmentation, which is used to reduce the collision of data within a network. This serves to eliminate the problem of reduced bandwidth availability that is raised by the rudimentary approach of a hub. You can also configure a switch in a way that you can’t with a hub. While you have to pay more for a switch, in enterprise-grade networks they are extremely useful. If you have lots of different computers competing over one line then you will still experience poor bandwidth performance, but the ability to reroute data transfers is incredibly useful.

Analog Modem

Even though broadband has become massive over the last decade or so, there are still a number of people using modems as their primary way of accessing the internet. Essentially, an analog modem converts the digital information taken from a computer into a tone-based format which is transferred via a telephone line. In contrast, a digital modem takes digital signals and transfers them between transmission systems.
In general, modems have been designed with the purpose of allowing computers to communicate over a phone line. A telephone line is incapable of receiving digital data, so a modem converts it into a tone-based format that can be understood. Once the data is transferred from the sending computer to the receiving computer, the receiver converts that tone-based data back into a digital format of serial data.

One of the most important components of a modem is the universal asynchronous receiver-transmitter (UART) chip. The UART is a physical circuit inside a microcontroller, located in the serial port of your computer, and converts outflowing data into a single stream. Inflowing data is converted into eight data streams. It is important to note that an internal modem will have a UART on the modem card. The UART acts as a communication protocol. It is important to know that the UART can also handle synchronous serial transmission as well. Synchronous transmissions are used when connecting to a printer. In general, the UART is used to convert data from a parallel format into a serial one.

Cable Modem

The type of your connection will greatly affect the type of modem that you use. If you’re currently with an internet provider who uses cable internet, then a cable modem is essential. You want to look for a cable modem that can match the speed of your broadband plan. This will ensure that you don’t go over your capacity and experience poor performance. This isn’t too much of an issue, as most modems surpass the majority of broadband speeds (unless the plans are considerably fast-paced).

Cable modems can be situated inside or outside of a computer. Internally, a cable modem contains a tuner, demodulator, modulator, media access control device, and a microprocessor. One of the biggest advantages of cable modems is that they’re incredibly easy to set up. There’s very little you need to do in terms of customisation. It can be a good idea to consider purchasing a used cable modem.
The reason is that newer modems don’t boast any particular performance advantage over used ones. It takes a ton of usage before a cable modem is past its best days. Buying a used cable modem can save you money, as they cost a fraction of the price of a new product. Of course, if you choose to go down this route it is important to make sure that the cable modem hasn’t taken too much punishment.

DSL and ADSL Modems

Digital Subscriber Line (DSL) is a term that’s used to refer to high-speed broadband services conducted through the Public Switched Telephone Network (PSTN). When using DSL, information is transferred over copper lines to maintain a high speed. One of the most common ways DSL is delivered is through Asymmetric Digital Subscriber Line (ADSL) modems, which have become popular because they provide DSL in a cheap format. ADSL is essentially a broadband connection that is transferred through copper wires. This is the most popular form of broadband connection because it has the lowest barrier to entry. The only barrier to entry that ADSL has is that you need to have a telephone service connected to the PSTN. If you need faster speeds you can also get an ADSL2 connection, which is faster-paced.

DSL is used so widely because it doesn’t need to convert digital data into an analog format. It can be transmitted between computers as digital data. This results in much more efficient use of bandwidth and higher speeds.

T1 Line

T1 lines (or trunk lines) are single land lines which can be used to conduct local and outgoing calls. T1 is an alternative to DSL that offers a speed of 1.544 Mbps. T1 lines have 24 digital voice channels and are transmitted using a single circuit. They also use optics and wireless media to transfer data in a reliable format. Generally speaking, T1 lines are used to connect to an internet access provider. Historically, T1 lines were used before DSL became big.
DSL boasts superior speeds to T1, but it is often oversubscribed by multiple users, which means the speed that’s advertised is often much lower in practice. By comparison, T1 has a lower speed but is a dedicated service that guarantees you a speed of 1.544 Mbps. Another commonly used line is T3. T3 is like T1 but it has a higher capacity of 44.736 Mbps. In a nutshell, T1 lines are widely used on account of their reliability. Having a high internet speed is all well and good, but if the bandwidth is eaten up by lots of devices it is simply not worth the hassle. Organizations that use T1 lines know for a fact that their connection speed won’t drop below 1.544 Mbps.

Routers

A router is a device that is used to connect devices together wirelessly and link them up to the Internet. Routers within a business environment are held to much higher standards than home routers. Both business and home routers will have inbuilt switches and Wi-Fi. The main difference between a router for home use and business use is scalability and security. Most enterprise-grade networks will have a VPN in place that can handle anywhere from 10 to 100 users. In addition, you can also use a virtual LAN (VLAN) to break down traffic on the network and segment it. This helps to improve the baseline security offered in an enterprise environment.

In terms of downtime, a business-grade router may also have the ability to change lines if the primary ADSL line fails. This makes sure that if your service experiences any problems the router will automatically switch to a line that is up and running. While this isn’t complete protection, it does help to keep you online through adversity.

The Key to Long-Term Uptime: Scalability

In order to keep your service up over the long term, you need to choose your network hardware with scalability in mind. Ultimately, the scalability of your network comes down to how many Ethernet ports are available via your switches.
If your endpoints ever outnumber your Ethernet ports, you’re in trouble! Anticipating what your needs will be in the future will help you to deploy equipment with enough Ethernet ports to scale up to later. Likewise, you want to make sure that you have a switch that can handle your future file transfer demands as well. Even if it costs substantially more, it’s worth purchasing a top-of-the-range switch if it prevents you from incurring unnecessary downtime.

The role of the systems administrator

Once the network engineer has created the layout of the network, it is the responsibility of the systems administrator to ensure that all of the equipment is correctly configured and kept up to date. The systems administrator also needs to keep an eye on the configurations of each piece of networking equipment to make sure that it is not tampered with. If you are running a small network, the chances are that you are both the network engineer and the systems administrator. This combination of roles is actually an advantage because there is no clear division between the installation of hardware and the expansion of a network.

Running a network is an ongoing task and it requires repeated adjustment to all infrastructure in order to keep the network serving the business’s needs efficiently. Monitoring software and analytical tools assist systems administrators in their task of keeping an eye on service delivery performance. Once a shortfall in performance is identified, you will need to consider adding more hardware to keep up with demand. When that occurs, you will have to rely on your knowledge of network hardware in order to work out how to improve the network.

In order to start building a network capable of sustaining an office, you need switches and routers. These two tools will lay the foundations for you to build on as your network environment grows more complex and scales up.
Switches connect devices together so they can share data, and a router connects your devices to the Internet. Your router is the main point of contact between your local network and the outside world. Once you have the basics in place you can start to incorporate a more complex setup.

The most important consideration that network administrators have to take into account is to build a network that’s scalable. Every organization’s needs change over time. As such, it is a good idea to be flexible and to make sure that new hardware can be integrated so the office equipment can evolve. Above all, any network hardware chosen should have reliability firmly in mind. It is no good having fancy network equipment if it doesn’t ensure the uptime of your organization. Only deploying hardware that has been proven to be reliable is essential to staying up over the long term.
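To make the scalability advice above concrete, here is a small illustrative sketch. The function and its compound-growth model are our own assumptions, not part of any vendor's sizing methodology; real capacity planning should use your organization's own forecasts.

```python
import math

def ports_needed(current_endpoints, annual_growth_rate, years, uplinks=2):
    """Project the endpoint count forward and reserve ports for uplinks.

    A hypothetical compound-growth model for illustration only.
    """
    projected = current_endpoints * (1 + annual_growth_rate) ** years
    return math.ceil(projected) + uplinks

# An office with 40 endpoints growing 15% per year, planned 3 years ahead:
print(ports_needed(40, 0.15, 3))  # 63 -- a single 48-port switch would fall short
```

Running a projection like this before purchasing makes it easy to see when a 48-port switch will run out and a second switch (or a higher-density model) is the safer buy.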
The cybersecurity researchers at Trellix have recently identified a 15-year-old Python bug that has been found to potentially impact 350,000 open-source repositories. There is a possibility that this bug could lead to code execution. This 15-year-old Python bug was disclosed in 2007 and has been tracked as CVE-2007-4559. Despite this, no patch was provided to mitigate the security issue. It was only mitigated by an update to the documentation that alerted developers to the risks. Several industry verticals are represented by the open-source repositories, including:
- Software development
- Artificial intelligence
- Machine learning
- Web development
- IT management

The tarfile module is affected by this security flaw, which was given a CVSS score of 6.8. A tar file is composed of several files that are bundled together with metadata and other information about the files. In order to unarchive the tar file in the future, it is necessary to use this metadata. A tar archive contains a variety of metadata, with information that can range across the following:
- File name
- File size
- Checksum of the file
- File owner information

This information is represented in the Python tarfile module by a class called TarInfo. A tar archive generates this information for each member. Several different types of structures in a filesystem can be represented using these members, including:
- Symbolic links

There is an explicit trust in the information contained within the TarInfo object within the code. This is followed by joining the path that was passed to the extract function with the current path. This vulnerability can be exploited by an attacker if they add ".." with the separator for their operating system ("/" or "\") into the filename, allowing them to escape the directory where the file is supposed to be extracted.
The tarfile module in Python makes this possible: a filter can be added to the tarfile module to manipulate the metadata of a file before it is included in the archive. By using as little as six lines of code, attackers are able to create their exploits.

A researcher from Trellix rediscovered CVE-2007-4559 earlier this year during the investigation of a different security vulnerability. In this case, an attacker could gain access to the file system via a directory traversal vulnerability caused by the failure of the tarfile.extract() and tarfile.extractall() functions to sanitize their members’ file paths.

Over 350,000 Projects Affected

The researchers developed a crawler that allowed them to identify 257 repositories that most likely contained the vulnerable code. Of these, 175 were examined to determine whether they actually contained it. As a result, it turned out that 61% of them were susceptible to attacks. Using this small sample set as a baseline, an estimation of all impacted repositories on GitHub was derived. Trellix affirmed that the number of vulnerable repositories exceeds 350,000, based upon the manually verified 61% vulnerability rate.

Vulnerable repositories are frequently used by machine learning tools that facilitate the development of faster and more accurate projects for developers. To provide auto-complete options, these tools use code from hundreds of thousands of repositories. A developer would not be aware that an insecure pattern had been propagated into other projects. Trellix further developed a custom tool, Creosote, which enables users to check whether a project is vulnerable to CVE-2007-4559, as well as other vulnerabilities. Using it, the Spyder IDE as well as Polemarch were found to be vulnerable. However, over 11,000 projects have already been patched by Trellix.
It is expected that more than 70,000 projects are going to be fixed in the next few weeks, given the large number of project repositories affected by the bug.
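To illustrate the traversal described above, the sketch below builds a malicious archive in memory and shows a defensive member-path check. The `is_within_directory` helper is our own illustration, not part of the tarfile API; newer Python releases (PEP 706, Python 3.12+) add built-in extraction filters that perform checks along these lines.

```python
import io
import os
import tarfile

def is_within_directory(directory, target):
    """Return True only if `target` resolves inside `directory`."""
    directory = os.path.abspath(directory)
    target = os.path.abspath(target)
    return os.path.commonpath([directory, target]) == directory

# Build a malicious archive in memory, as the article describes:
# a member whose name starts with "../" escapes the extraction directory.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    payload = b"owned"
    info = tarfile.TarInfo(name="../escape.txt")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

# A safe extraction loop validates every member path before extracting it.
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar.getmembers():
        dest = os.path.join("extracted", member.name)
        print(member.name, is_within_directory("extracted", dest))
# prints: ../escape.txt False
```

Rejecting any member that fails this check, rather than calling `extractall()` on untrusted input, is the core of the mitigation the researchers recommend.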
Samsung and IBM recently announced a new vertical transistor architecture for semiconductor design. This new design has the potential to deliver a twofold improvement in performance or to reduce energy usage by 85 percent compared to the current scaled fin field-effect transistor (finFET) used by leading semiconductor manufacturing companies. This new chip design will enable week-long battery life on smartphones. Until now, transistors have been built to lie flat upon the surface of a semiconductor. With new Vertical Transport Field Effect Transistors, or VTFET, transistors are built perpendicular to the surface of the chip with a vertical, or up-and-down, current flow. The VTFET process addresses many barriers to performance and limitations on extending Moore’s Law as chip designers attempt to pack more transistors into a fixed space. It also improves the contact points for the transistors, allowing for greater current flow with less wasted energy. “Today’s technology announcement is about challenging convention and rethinking how we continue to advance society and deliver new innovations that improve life, business and reduce our environmental impact,” said Dr. Mukesh Khare, Vice President, Hybrid Cloud and Systems, IBM Research. “Given the constraints the industry is currently facing along multiple fronts, IBM and Samsung are demonstrating our commitment to joint innovation in semiconductor design and a shared pursuit of what we call ‘hard tech.’”
Web applications attacks/Client side data

Client-side data are data that are sent to your browser once the page has been interpreted by the server. Be very careful with the data you send: it is very easy to intercept client-side data and modify them. In addition, never do critical filtering of data on the client side. See also client-side verifications.

- WebGoat, Client-Side Filtering lab shows that you should never filter sensitive data on the client side!
- WebGoat, Insecure Client Storage lesson shows how to crack a weak client-side encryption mechanism. It should make you aware of the necessity of filtering data on the server and not on the client, even if data is encrypted.
- WebGoat, Exploit Hidden Fields lesson shows how to exploit hidden fields to modify the price of products.
- HackThisSite.org, Basic, Level 4 shows how to intercept and modify an email address.
- HackThisSite.org, Basic, Level 10 shows how to intercept and modify a value (level_authorized) to grant an access.

- Hidden fields are to be used with caution!
- Never filter sensitive data on the client side, but always on the server side.
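The advice above can be sketched in a few lines of server-side code. The example below is hypothetical — the catalog and function are ours, not from WebGoat — but it mirrors the hidden-price-field exercise: the server re-checks the submitted price against its own authoritative data instead of trusting anything the client sent.

```python
# Authoritative server-side price list; any copy sent to the browser
# (e.g. in a hidden form field) is for display only and must be re-checked.
CATALOG = {"tv": 2999.99, "radio": 49.99}

def validate_order(item_id, client_price):
    """Reject any order whose client-submitted price was tampered with."""
    server_price = CATALOG.get(item_id)
    if server_price is None:
        raise ValueError("unknown item")
    if client_price != server_price:
        raise ValueError("price mismatch: possible hidden-field tampering")
    return server_price

print(validate_order("radio", 49.99))  # 49.99
```

An intercepted request that rewrites the hidden price (say, setting the TV to 0.01) fails the comparison on the server, which is exactly the filtering the client side can never be trusted to do.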
The average power usage effectiveness (PUE) ratio for a data center in 2020 is 1.58, only marginally better than seven years ago, according to the latest annual Uptime Institute survey (findings to be published shortly). PUE, an international standard first developed by The Green Grid and others in 2007, is the most widely accepted way of measuring the energy efficiency of a data center. It measures the ratio of the energy used by the entire data center to the energy used by the IT equipment alone.

Aiming for 1.0

All operators strive to get their PUE ratio down to as near 1.0 as possible. Using the latest technology and practices, most new builds fall between 1.2 and 1.4. But there are still thousands of older data centers that cannot be economically or safely upgraded to become that efficient, especially if high availability is required. In 2019, the PUE value increased slightly, with a number of possible explanations. The new data (shown in the figure) conforms to a consistent pattern: big improvements in energy efficiency were made from 2007 to 2013, mostly using inexpensive or easy methods such as simple air containment, after which improvements became more difficult or expensive.

The Uptime Institute figures are based on surveys of global data centers ranging in size from 1 megawatt (MW) to over 60 MW, of varying ages. As ever, the data does not tell a complete story. This data is based on the average PUE per site, regardless of size or age. Newer data centers, usually built by hyperscale or colocation companies, tend to be much more efficient, and larger. A growing amount of work is therefore done in larger, more efficient data centers (Uptime Institute data in 2019 shows data centers above 20 MW to have lower PUEs). Data released by Google shows almost exactly the same curve shape — but at much lower values. Operators who cannot improve their site PUE can still do a lot to reduce energy and/or decarbonize operations.
First, they can improve the utilization of their IT and refresh their servers to optimize IT energy use. Second, they can re-use the heat generated by the data center. Third, they can buy renewable energy or invest in renewable energy generation.
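The PUE ratio itself is simple to compute: total facility energy divided by the energy consumed by the IT equipment. The sketch below is illustrative only, not an Uptime Institute tool.

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: 1.0 means every watt goes to IT."""
    return total_facility_kwh / it_equipment_kwh

# A site drawing 1,580 kWh overall to power 1,000 kWh of IT load
# matches the 2020 survey average:
print(round(pue(1580, 1000), 2))  # 1.58
```

The gap between the result and 1.0 is the overhead spent on cooling, power distribution, and lighting — the share that efficiency programs try to shrink.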
Security controls are a critical element of any IT strategy. However, it’s a common misconception that the number of controls correlates to the level of difficulty in achieving compliance. As with many things in the world of compliance, there is more to the story. To determine what security controls are appropriate for your needs, you must first consider the risk that your organization faces and the requirements that you may need to meet. In this post, we’ll tell you what you need to know about security controls and share a few best practices based on different frameworks.

What Are the Types of Security Controls?

Security controls are broken down into categories. They can either be broken down based on the type of control (physical, administrative, or technical) or based on the purpose of the control (preventive, detective, corrective). Controls may be categorized based on any combination of type and purpose. For instance, a control can be categorized as a preventive physical control, or a corrective technical control. Each of these categories of controls plays a key role in both proactively addressing risk and responding to threats when they appear. To provide the best security for your organization and its data, you need to consider all of them.

Physical controls protect your resources and infrastructure from physical threats such as theft or damage. These controls exist on-premises to help you manage the environment where critical information exists. Examples of physical controls include:
- Security guards
- Video surveillance equipment
- Access cards that limit entry into restricted areas

Administrative controls involve policies, procedures, and guidelines which are put in place to ensure that human error does not create security vulnerabilities for the organization. This is key because approximately 88% of all data breaches are caused by employee error.
Examples of administrative controls include:
- Data classification policy
- Employment agreements
- Password expiration policy

Technical controls include hardware, software, and firmware that are used to prevent unauthorized access to systems or data. Controls at this level act as another line of defense if an unauthorized user were to gain access to your devices. Examples of technical controls include:
- Antivirus software

Preventative controls are there to prevent or decrease the chances of an information security incident. Controls at this level allow you to take a proactive approach and build a security-first culture within your organization. Examples of preventative controls include:
- Segregation of duties
- Security awareness training
- Multi-factor authentication

Detective controls are put in place to help you identify irregularities or problems when an information security incident occurs. They can also help you determine whether your preventative controls are working properly. Examples of detective controls include:
- Security information and event management
- Data leakage detection
- Malware detection

Corrective controls act after an information security incident or problem has been detected. These controls are there to remedy flaws, make improvements, and guide corrective action. Examples of corrective controls include:
- Incident management and planning
- Disaster recovery planning
- Error handling

Although the examples above are not exhaustive lists of possible controls, they can give you an idea of what you can implement across different control types.

What is the Main Goal of Security Controls?

The goal of security controls is to protect your data and systems from unauthorized access or use. You should use security controls for everything—from passwords for online accounts to monitoring your network for attacks. The important thing to remember is that security controls are not something you can set and forget.
When you take part in an audit, you'll need to take steps to ensure specific controls are in place and working properly. You'll also need to address new risks, continuously update and test your programs, and maintain compliance.

Number of Security Controls Through Different Frameworks

There are several security frameworks organizations can use to help prove that their assets are secure. The number of controls you will need to implement depends on the criteria and requirements that apply to your organization under each framework. Keep in mind that the breakdown below is a generalization meant to give you a ballpark figure for each framework.

SOC 2

The number depends on which categories (Availability, Confidentiality, Processing Integrity, Privacy) are included in your audit in addition to Security. Controls aren't defined by SOC 2, so there can be a wide range of controls included in an audit. Organizations define their own controls, and your auditor will use professional judgment to render an opinion on whether the controls you have in place meet the SOC 2 criteria for the categories in scope.

Average number of controls: typically between 80 and 150.

ISO 27001

ISO 27002 defines the ISO 27001 Annex A controls. You can identify controls from any source; however, the controls you use must be compared to the Annex A controls to confirm that all Annex A controls are covered. Organizations complete a Statement of Applicability (SOA) to determine which Annex A controls apply.

Number of Annex A controls that must be covered:

- ISO 27002:2013 – 114
- ISO 27002:2022 – 93

PCI DSS

If you're required to complete Self-Assessment Questionnaire (SAQ) D or engage a Qualified Security Assessor to complete a Report on Compliance (ROC), you will be subject to all required controls, unless you deem specific controls not applicable. Other self-assessment questionnaires require fewer controls.
Number of controls if all PCI controls are in scope:

- PCI v3.2 – about 350
- PCI v4.0 – about 400

HITRUST

HITRUST recently released the new i1 assessment. Unlike the r2 assessment, the i1 assessment does not take into account an organization's inherent risk factors. For r2 assessments, each control is divided into three implementation levels with multiple requirements. Organizations have to determine the implementation level for each control and tailor the controls based on risk. Once the implementation level and risk are determined, the number of required controls can be determined. i1 assessments do not have implementation levels. Even though there are more controls in an i1 assessment, it requires less effort, since the implementation levels in r2 assessments each have several items to consider.

Number of controls:

- i1 assessment – 219
- r2 assessment – 156

For the r2 assessment, there can be as many as 1,000 sub-requirements. Most organizations are able to significantly reduce the number of control requirements once they determine which are necessary based on risk.

FedRAMP

The number of required controls is based on the control baselines: LI-SaaS, Low, Moderate, High. Organizations must determine the impact level of the system being assessed as defined in FIPS 199. The security controls and enhancements were selected from the NIST SP 800-53 Revision 4 catalog of controls.

Number of controls:

- High – 421
- Moderate – 325
- Low – 125
- LI-SaaS – 126

CMMC

This certification has three levels. Level 1 is for organizations handling Federal Contract Information only; Levels 2 and 3 are for organizations handling Controlled Unclassified Information. Level 1 encompasses the basic safeguarding requirements in FAR Clause 52.204-21. Level 2 encompasses the security requirements for CUI in NIST SP 800-171 Rev 2 per DFARS Clause 252.204-7012.
Information on Level 3 will be released at a later date and will contain a subset of the security requirements specified in NIST SP 800-172.

Number of controls:

- Level 1 – 17
- Level 2 – 110
- Level 3 – 110+

For Level 3, the total number of controls will be determined by the Department of Defense at a future date.

Looking for the smartest way to continuously monitor your controls for SOC 2, ISO 27001, PCI DSS, GDPR, CCPA, and HIPAA in one place? Drata can help. Schedule a demo to see how our solution can streamline the process.
The data business is booming, and so are the demands of government and industry to hire professionals who can make sense of their data. The city of San Francisco established the role of chief data officer last year and created a data coordinator position for each department. Other states and cities, and the federal government, have followed suit. The White House hired its first chief data officer in February to help lead the administration's open-data efforts and recruit talented data experts into government.

Roughly 1.6 million government employees work in data occupations, according to a new report by the Commerce Department's Economics and Statistics Administration. Those jobs account for about 16 percent of the 10.3 million public- and private-sector jobs in which data is central to the work being performed. The report is based on 2013 figures, the most recent full year's worth of data.

There are far more than 10.3 million jobs in the U.S. where data is at least important to the job, but the report focuses on data-intensive jobs, those in which data is central to the work being performed. Employees in those roles, including Big Data engineers, database managers and data analysts, process or analyze data using sophisticated computer technology.

Similar to the private sector, most government employees in data-intensive jobs are in business and computer roles, followed by management and office and administrative roles. The report provides additional background information on the management and office support positions in government:

Employment in data occupations in the areas of management (primarily education administration) and in office support (largely police and fire dispatchers) make up a larger portion of the total in the government sector than they do in the private sector; these two areas account for more than a quarter of the government sector employment in data occupations, as compared with 10 percent in the private sector.
More Data Buffs Work on the East Coast Virginia, Maryland and Washington, D.C., boast some of the highest concentrations of data-intensive jobs in the private sector, but the report suggests that the federal government may be the driving source of those jobs. According to the report, “jobs that support the federal government may be playing a role in this specialization in data industries.” Read the full report here.
Top 15 Live Cyber Attack Maps for Visualizing Digital Threat Incidents

Recently, Broimum conducted a study showing that digital crime revenue has grown to 1.5 trillion dollars annually in illicit profits. GitHub, EA, and many other popular websites now face larger, hi-tech attacks every day, all while falling victim to the growing trend of cybercrime.

Frantic internet users are asking questions like: Who is behind the attacks? Where are these attacks coming from? What's the top attacker host? These questions can be answered by exploring the logs, then performing lookups for all available information.

What is a DDoS Attack?

First, we must define the meaning of a DDoS attack. DDoS attacks are a main concern in internet security, and many people misunderstand what exactly they are. A Distributed Denial of Service (DDoS) attack is a malicious attempt to disrupt the typical internet traffic of a targeted server by overwhelming it with traffic from multiple sources. DDoS attacks target a plethora of important resources, from banks to news websites, and present a major challenge to making sure internet users can publish and access important information. A DDoS attack is similar to a traffic jam on a highway, preventing typical traffic flow.

How does a DDoS attack work?

A DDoS attack requires the attacker to gain control of a network of online machines. Computers are infected with malware, turning each one into a bot. The attacker then has control over the group of bots, now called a botnet. Once a botnet is established, the attacker sends instructions to each bot from a remote control point. Once the target's IP address is chosen, each bot responds by sending requests to it, overwhelming the server and resulting in a DDoS attack.

How can you combat DDoS attacks?

If you are facing an isolated low- to mid-size Distributed Denial of Service (DDoS) attack, you can explore these logs and find the information you need to protect yourself from these attacks.
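One common building block for absorbing the request floods described above is per-client rate limiting. The sketch below is not from the article; it is a minimal token-bucket limiter, and the bucket capacity and refill rate are illustrative assumptions rather than recommended values.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allow a burst of `capacity`
    requests per client, refilled at `rate` tokens per second."""
    def __init__(self, capacity=5, rate=1.0):
        self.capacity = capacity
        self.rate = rate
        self.state = {}  # client_ip -> (tokens_remaining, last_timestamp)

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.state.get(client_ip, (self.capacity, now))
        # Refill proportionally to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.state[client_ip] = (tokens - 1, now)
            return True   # serve the request
        self.state[client_ip] = (tokens, now)
        return False      # drop or challenge the request

bucket = TokenBucket(capacity=3, rate=1.0)
# A client bursting 5 requests at t=0 only gets 3 through.
results = [bucket.allow("203.0.113.7", now=0.0) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

Real DDoS mitigation happens upstream of any single server (at scrubbing centers or CDNs), but the same allow/drop decision per source is at the heart of those systems.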
However, with larger attacks, manual lookups are time-consuming and ineffective. That's why there need to be other plans in place to fight cyber-attacks. And if you are not experiencing a DDoS attack and just want to learn about top digital attack activity from cybersecurity incidents around the world, where would you look? You can try internet service providers' (ISPs') stats, check out anti-DDoS providers, or see what's happening right now by looking at digital attack maps.

To see how cybersecurity works globally, you can observe cyber-attacks and how malicious packets interact between countries. We are going to share with you the top cyber-attack maps that you can watch in order to visualize digital threat incidents.

Global Cyber Attacks Today

Today, cyber-attacks can affect anyone, but some of them are designed to leave global damage. A cyber-attack is any type of internet attack designed by individuals or entire organizations that targets computer information systems, networks, or infrastructures. When they appear, they come from a seemingly anonymous source that will attempt to destroy its victim by hacking into its systems.

There have been many worldwide cyber-attacks, and some are happening right now. The latest statistics say that security breaches have increased by 11% since 2018 and 67% since 2014. In fact, hackers attack every 39 seconds, so on average 2,244 times a day.

What is a Cyber Attack Map?

Cyber-attack maps are valuable tools that give information on how to stay ahead of attacks. A cyber-attack map shows how the Internet functions in a graphical way and can be useful for seeing the big picture. Even though we're talking about enormous amounts of damage caused by cybercriminals, the maps themselves can be fascinating to watch. Every 39 seconds, a cyber-attack occurs.
While some of these are manually targeted cyber-attacks, most of them are botnets steadfast on shutting down infrastructures and destroying the computers and systems of major organizations. A DDoS attack map is a type of cyber-attack map that details just DDoS attacks.

Most current digital attack maps share these characteristics:

- They are incorrectly advertised as "live maps": most do not show live attack data, but records of past attacks.
- They only show Distributed Denial of Service (DDoS) attacks, not other types of cybercrime.
- They only display anonymized traffic data.

Because most cyber-attack maps are not in real time, it can be difficult to interpret them. However, there are still positives to these maps.

Is it Useful to Understand Cyber Attack Maps?

The jury is still out on whether it is actually beneficial to understand cyber-attack maps and how they function. Some information security industry experts claim that these maps aren't useful at all, that they're simply a sales tool for cybersecurity solution providers. Other experts believe that while these threat maps have no practical use for mitigating attacks, they can be used to study past attack styles, to recognize the raw data behind DDoS attacks, or even to report outages on certain dates and times to a customer base.

Another essential point concerns the source of the attacks: even though these maps pinpoint particular countries launching attacks against others, that doesn't mean the actual source of the attack is the same as the attacker's location. In actuality, the source of an attack is often forged, which means that it appears to have been initiated from a certain country but is not from that country at all. When the map shows the correct location, it's often not the real attacker behind the cyber-attack, but rather an infected computer working for a botnet.
Another noteworthy fact is that the largest attacks usually originate from high-bandwidth nations, which are perfectly suited to launching huge attacks from thousands of infected devices commanded from more isolated locations. One more important point: while these maps provide valuable cyber-attack information, it is impossible to fully map all digital attacks online because they are constantly changing. These maps update regularly (usually hourly, but some are in real time), but they cannot show everything.

The Most Popular Cyber Attack Maps

1. Arbor Networks DDoS Attack Map

Arbor Networks is one of the most popular attack maps. This map is devoted to tracking down attack episodes related to DDoS attacks around the world. Arbor Networks' ATLAS® global threat intelligence system gathers and presents the data, which comes from a worldwide analysis of 300+ ISPs with over 130 Tbps of live traffic. The map's stats are updated hourly, but the digital map also allows you to explore historical data sets.

Its features include:

- Stats for each country
- The attack source and destination
- Various types of attacks (large, uncommon, combined, etc.)
- Color-coded attacks by type, source port, duration and destination port
- The size of the DDoS attack in Gbps
- Embed code so you can attach the map to your own website
- Sorting by TCP connection, volumetric, fragmentation and application attacks

2. Kaspersky Cyber Malware and DDoS Real-Time Map

The Kaspersky cyber threat map is one of the most comprehensive maps available, and it also serves as the best when it comes to graphical interface. It looks amazingly sleek, although of course what it signifies is Internet devastation. When you open the map, it detects your current location and displays stats for your country, including top local attacks and infections from the past week.
Here are the activities detected by the Kaspersky cybermap:

- On-Access Scan
- On-Demand Scan
- Mail Anti-Virus
- Web Anti-Virus
- Intrusion Detection Scan
- Vulnerability Scan
- Kaspersky Anti-Spam
- Botnet Activity Detection

Here are some other features this map offers:

- Switch to globe view
- Toggle map color
- Zoom in/out
- Enable/disable demo mode
- Embed map using an iframe
- A "Buzz" tab which includes helpful articles

3. ThreatCloud Live Cyber Attack Threat Map

CheckPoint designed the ThreatCloud map, another cyber-attack map offering a hi-tech way to detect DDoS attacks from around the globe. It's not the most advanced map on our list, but it does succeed in showing live stats for recent attacks. ThreatCloud displays live stats, which include new attacks, the source of the attacks, and their various destinations. Another interesting feature is the "Top targets by country" view, which offers threat stats for the past week and month, as well as the average infection rate and the percentage of the most frequent attack sources for some countries. At the time of this writing, the Philippines was the top country attacked, with the United States in second.

4. Fortinet Threat Map

The Fortinet Threat Map features malicious network activity within various geographic regions. In addition, this attack map displays various international sources of attack and their destinations. It may not be as visually exciting as some of the others, but it is easy to understand. General live attack activity is shown in order of attack type, severity, and geographic location. You can also see a day/night map under the attack map, which is interesting. If you click on a country name, you will see statistics for incoming and outgoing attacks, as well as overall activity in the country.
The different colors on the map represent the type of attack, for example:

- Execution (remote execution attacks)
- Memory (memory-related attacks)
- Link (attack from a remote location)
- DoS (Denial of Service attacks)
- Generic attacks

Another feature of the Fortinet Threat Map is the ongoing statistics in the bottom left-hand corner of the page, for example the number of botnet C&C attempts per minute and the number of malware programs utilized per minute.

5. Akamai Real-Time Web Attack Monitor

Another great attack visualization map is the Akamai Real-Time Web Attack Monitor. This company controls a big portion of today's global internet traffic. With the vast amounts of data it gathers, it offers real-time stats pinpointing the sources of most of the biggest attacks anywhere around the globe. It also cites the top attack locations for the past 24 hours, letting you choose between different regions of the world. The map is displayed in various languages; you can change the language by clicking the language tab in the top right corner of the page. The map also includes helpful learning resources such as a glossary and a library.

6. LookingGlass Phishing/Malicious URL Map

The LookingGlass real-time map shows actual data from LookingGlass threat intelligence feeds, including:

- Cyveillance Infection Records Data Feed
- Cyveillance Malicious URL Data Feed
- Cyveillance Phishing URL Data Feed

The goal of this map is to detect and show live activity for infected malicious and phishing domain URLs. When you load the map, the results are shown in four columns: infections per second, live attacks, botnets involved, and the total number of affected countries. When you click on any location on the map, you will see additional details about the malicious incident, such as time, ASN, organization, and country code. You can also filter the display options using the "filter" tab in the upper right-hand corner of the webpage.
7. Threat Butt Hacking Attack Map

Threat Butt features one of the coolest-looking digital attack maps around, not because of a wide range of features, but because of its retro design. The map is displayed in a basic black and green design, with red lines extending to countries where attacks are detected. In the footer you'll see descriptive information about each attack, including origin country, IP address, destination, and even some humorous captions. This map is an appealing one to explore. We know cybercrime is no laughing matter, but the makers of Threat Butt certainly have a sense of humor.

8. Talos Spam and Malware Map

Another company offering a free digital attack map is Talos. The threats displayed on this map are detected by Talos attack sensors, as well as culled from third-party feeds. The information displayed is completely dedicated to revealing the world's top spam and malware senders. The Talos Spam and Malware Map displays the top 10 cyber-attack sender lists by country as well as by top malware senders. To see more information about these senders, such as the exact IP address of the server that sent the spam/malware, the hostname, the last day of detection, and the reputation status, you can click on their names. When you click the hostname, you will also see information about the network owner, as well as reputation details, average email volume, and volume change.

9. Sophos Threat Tracking Map

The Sophos map is not a real-time map, but a static threat tracking map. Its data comes from SophosLabs monitoring and malware research activities. Threats are visualized by three central graphics:

- Today's Malicious Web Requests
- Today's Blocked Malware
- Today's Web Threats

At the bottom of the page is a Threat Geography map which allows you to click on any affected location to find out more details about spam issues.
Examples include:

- Infected websites (including the malware/virus name)
- Spam source (including subject, source IP and exact location)
- Email malware source (including subject, source IP and exact location)

10. FireEye Cyber Threat Map

The FireEye Cyber Threat Map is still informational, but it does not contain many of the features that the others do. It does, however, show the origin, the destination, the total number of attacks, and some other stats about the previous 30 days, such as top attacker countries and the most-attacked industries. It also features an informative blog that is updated regularly, so users can learn more about threat research, solutions and services, and even executive perspectives.

11. Deteque Botnet Threat Map

A division of Spamhaus, the Deteque Botnet Threat Map is a botnet attack map that provides a lot of useful information. The map identifies areas with high botnet activity and potential botnet control servers. Locations showing red circles have the most intense bot activity; blue circles show command-and-control botnet servers. The larger a circle on the map, the more active bots at that location. Users can zoom in on any location to see details on botnet attacks in that area. At the bottom of the map are two charts: "Top 10 Worst Botnet Countries" and "Top 10 Worst Botnet ISPs."

12. Bitdefender Live Cyber Threat Map

From Bitdefender, which is headquartered in Romania, the Bitdefender Live Cyber Threat Map is an interactive map that shows infections, attacks, and spam occurring globally. This cyber threat map shows a real-time "Live attack" report, complete with the time, type of attack, location, attack country, and target country.

13. SonicWall Live Cyber Attacks Map

The SonicWall Live Cyber Attacks Map provides a graphical view of worldwide attacks over the last 24 hours. It shows which countries are being attacked and where the attacks originate.
This interactive map shows not only malware attacks, but also ransomware, encrypted traffic, intrusion attempts, and spam/phishing attacks. Also included are attack site statistics for the past 24 hours. The SonicWall Live Cyber Attacks Map also shows Security News, where the Capture Labs team publishes research on the latest security threats and attacks.

14. Digital Attack Map

Built in collaboration between Arbor Networks and Google Ideas, the Digital Attack Map shows a live data visualization of the top daily DDoS attacks worldwide. You can also look at historical attack data, including the most notable recent attacks. This data is collected anonymously, so it does not include information about the attackers or victims involved in any particular attack. The map allows filtering by size and type so you can look at the information in detail.

15. NETSCOUT Cyber Threat Horizon

Powered by ATLAS, NETSCOUT's Advanced Threat Level Analysis System, the NETSCOUT Cyber Threat Map is much more than a cyber-attack map. It provides highly contextualized information on threats all over the world, showing DDoS attacks observed globally in real time. It displays many characteristics of the attacks, such as size, type, sources, and destinations, and it provides reports on DDoS attacks, highlighting events like the most significant attacks by region, industry sector, and time span.

What can Hosting Providers, ISPs, and Large Organizations Do to Protect Their Networks?

Hosting providers, internet service providers, and large organizations can protect their networks against cyber-attacks by first being educated and aware of the severity of a potential attack. Reviewing visual threat maps is obviously a good start. There are also companies, such as Arbor Networks, who not only provide cyber data for these visualizations but also offer a number of DDoS mitigation services.

What Can Individual Sites Do to Protect Themselves from DDoS Attacks?
To protect your individual website, you need to be able to block malicious traffic. Webmasters can talk to their hosting provider about DDoS attack protection. They can also route incoming traffic through a reputable third-party service that provides distributed caching to help filter out malicious traffic, which reduces the strain on existing web servers. Most such services require a paid subscription but will, of course, cost less than scaling up your own server capacity to deal with a DDoS attack. Google Ideas has launched a new initiative, Project Shield, to use Google's infrastructure to support free expression online by helping independent sites mitigate DDoS attack traffic.

What's the Bottom Line?

Cyber-attacks, along with spam and malware infections, are increasing in frequency daily. While the cyber-attack maps we've explored won't help diminish these attacks, it's essential to be aware of the threats, where they are coming from, and where they are going. We do know that no one has ever been 100% safe from cyber-attacks. While this is concerning, there are steps you or your company can take to protect your networks in the best ways possible. With that said, the question is now: what are you doing to prevent cybercrime in your online company?
While being a core component of the Internet, DNS remains one of the least secure protocols in active use. DNS security is a long-standing debate, with DNS privacy a much more recent matter and a source of division among the security community. Indeed, privacy matters and must weigh in the balance when considering DNS security. However, security is not only about confidentiality. Protocols such as DNS over TLS or DNS over HTTPS must be leveraged wisely to strengthen network security, rather than introducing bias that could prove costly in the future. Movements are massive around DNS resolution at the application level, so it's worth taking a step back to see the whole picture before moving in the wrong direction.

Let's first consider the need. Why are we looking for both DNS privacy and security? There are two main reasons:

- Legitimate domain owners expect that DNS answers sent from their name servers will be transmitted to DNS clients without alteration, so that the client accesses a genuine service – this is about integrity
- DNS clients expect their privacy to be respected and the DNS answers they receive to be trustworthy – this is about both integrity and confidentiality

Why is it so challenging? First, because DNS was designed to permit resource delegation, maximizing service availability and performance, at a time when security was not yet a major concern. As a result, the Domain Name System ends up being a widely distributed hierarchical database leveraging three different components:

- The authoritative name servers – the authority that holds the actual DNS records for a domain
- The resolvers – the servers providing name resolution services for the clients on a network
- The clients – the various systems relying on DNS for reaching network services/content

What matters most regarding the security challenges previously mentioned are the resolvers.
These components of the Domain Name System are the ones trusted by the clients, and therefore the most sensitive, for numerous reasons:

- DNS traffic is neither encrypted nor authenticated; a client connected to an unsecured network can be tricked into using any rogue DNS server. This was demonstrated in the "Breaking LTE" attack. Once the client uses such a server, it can easily be directed to malicious content (even if SSL certificate validation offers additional protection, it's not sufficient).
- The DNS resolver is part of the network infrastructure and acts as a cache for the DNS protocol. Therefore, it has extended visibility over network activity and can be used as a network security solution (detecting suspect behaviors), protecting both the clients (preventing them from accessing known malicious content) and a company's assets. Most of the time, this DNS component is provided by the access network operator, because many local network services depend on private DNS zones (e.g. an enterprise internal application or voicemail).
- The DNS resolver can enforce the use of security mechanisms such as DNSSEC (see What is DNSSEC) to validate the integrity of the answers stored in its cache before serving them to the clients, contributing to their safety.

In the end, securing DNS is all about securing and trusting the resolution process. To do so, several options have been studied:

- Revising the operating system's DNS lookup library to natively enforce DNSSEC validation and DNS traffic encryption could have been an option, but the maintainers have expressed scalability and security concerns about integrating cryptographic components into this critical operating system library.
- Relying on a local system daemon for the DNS resolution process. For example, this is the approach currently used to implement DNS over TLS in systemd, or by Cisco for enforcing DNS protection on mobile clients with Umbrella.
In this scenario, the local agent listens on the loopback interface and the libc points to this resolver, which enforces the use of security mechanisms such as DNSSEC and secure communication with the upstream resolver using DNS over TLS or an alternative protocol such as DNSCrypt or DoH. The only issues with this solution are slow deployment and maturity. For instance, until late June 2019, the system resolver was not validating the SSL certificate of the DNS server when using DNS over TLS.

- Deporting the DNS resolution into applications such as the browser. This option comes mainly from the initiative of some internet giants supporting DoH on behalf of privacy concerns. With this approach, each application embeds its own resolution mechanism, independently of the operating system. This allows implementers to bypass the current implementation and move forward quickly, but comes with significant security and operational implications.

It's too soon to tell which is the best option, but in each scenario the client continues to rely on an external DNS resolver, mostly for scaling purposes. Therein lies the main security issue. As DNS is part of the network infrastructure, it is commonly leveraged for various purposes (legitimate or not). According to best practices, connected clients should only be allowed to reach the local network's DNS resolver. That way, DNS can be used to prevent phishing and malware spread, and to detect malicious behavior (network scans, exfiltration, etc.) as part of an enterprise network security strategy. This is often achieved in two ways: either using Deep Packet Inspection (DPI) at the firewall level, or at the DNS resolver level using a DNS firewall.

Obviously, encryption will forbid DPI analysis. But it can also allow a client to bypass the local DNS resolver if the ciphered traffic cannot be identified and blocked. Clients can then use any external DNS provider, with all the security concerns this can raise. In this context, DNS over TLS is not seen as a threat, whereas DoH is.
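As noted above, classic DNS traffic is neither encrypted nor authenticated: a query is just a small plaintext datagram that anyone on the path can read or spoof. The sketch below, added for illustration (it is not from the article), builds a minimal A-record query for example.com in standard DNS wire format (RFC 1035), showing that nothing in it is protected.

```python
import struct

def build_dns_query(name, qtype=1, txid=0x1234):
    """Build a minimal plaintext DNS query (RFC 1035 wire format).
    qtype=1 requests an A record."""
    # Header: ID, flags (RD=1), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question name: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

packet = build_dns_query("example.com")
# Everything, including the queried name, sits in the clear:
print(b"example" in packet, b"com" in packet)  # True True
# Sending it is a single unauthenticated UDP datagram to port 53, e.g.:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, (resolver_ip, 53))
```

DoT and DoH both wrap exactly this wire format inside a TLS session; the protected payload is unchanged, which is why the security debate is about the transport and who terminates it, not the query format.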
The reason is pretty simple: DNS over TLS relies on a dedicated TCP port (853) and can be easily filtered on a network’s boundary. DoH, however, relies on the infamous https port 443. This prevents any attempt to filter related traffic, effectively making it a wide-open door to the internet from any secured network. Worse, while the claim to protect internet users’ privacy is legitimate, we can seriously question the intentions of the tech giants supporting DoH, as they are for all intents and purposes insidiously centralizing all DNS queries for processing. Don’t be fooled- while the traffic between the clients and the service is encrypted, queries are performed unencrypted and the data will most certainly be processed for various, yet undisclosed, purposes. Remember the adage: if the service is free, then you are the product. In the end, DoH is a nice try to address the DNS security challenges the DNS community has minimized for a while. Yet this attempt deliberately seems to ignore the security concerns of network operators and organizations and may dupe users through a false promise of privacy. Current DoH deployment proposal means externalization of the DNS resolution. This implies loss of visibility over related network activity, weakening network security considering that 91% of malware are relying on DNS. It makes Internet slower as DNS resolutions will no longer fall below one millisecond, even with higher rate broadband networks. Finally, it generates new privacy issues. The question is, who is a better manager of your privacy- your own local DNS resolver or an external DNS resolver? At EfficientIP, we do believe DNS traffic encryption is going to take off. DNS is hierarchical and somehow controlled by the root servers. This is legacy, yet it has been proven pretty efficient. However, it doesn’t mean the DNS can’t be enhanced. Technical solutions are already on the table. 
Network operators need to take back control over this too often forgotten protocol that is DNS. They must leverage trustable, distributed DNS services across their own network implementing DoT and/or DoH natively on their resolvers and support DNSSEC deployment so that they can protect both privacy and their users from the many threats spreading on the internet. For sure, DoH providers will appear on the market in order to answer region or organization regulation, but the market cannot be in the hands of the few current ones. We need to look at DoH usage from different angles from the viewpoints of users in the enterprise or end users at home, from privacy concern at the hotel, airport or café or for mobile users of a corporation. Future DNS solutions will embrace all the transport means to adapt to all the usages, current implementation is still in its infancy. DOH has become a hot topic lately with browsers and ISPs. Mozilla has pushed the limits with its Firefox initiative and Cloudflare. Google plans on testing ISPs with it’s DNS over Https with Chrome. For more information: - DoH! To block or not to block? - DNS over HTTPS (DoH) Considerations for Operator Networks - An Analysis of Godlua Backdoor - DoH Creates More Problems Than It Solves
<urn:uuid:81090e5f-b021-422f-b72c-c3647ba5b1fa>
CC-MAIN-2022-40
https://www.efficientip.com/dont-rush-into-doh/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00235.warc.gz
en
0.934187
1,809
2.734375
3
Generator-Equipped EVs Could Help During Weather Emergencies More electric vehicle makers are considering the addition of onboard generators in newer models. The ability to generate power would be useful in extreme weather situations like the one Texas faced earlier this year. During the historic Texas freeze last month, Austin-Travis County EMS medics responded to dozens of calls for carbon monoxide poisoning, with some trying to use their cars in garages to stay warm or recharge phones but then breathing in exhaust fumes from the gas engines. But new zero-emission electric car models, like Hyundai’s 2022 model Ioniq 5, are going to include a built-in generator and power adapter, according to Steve Burkett, an electric vehicle specialist. “This car and others that follow that technology will simply be able to plug in a quick adapter and you will be able to power the most bare kind of essentials of your house with that vehicle for several days,” Burkett said. “However, you are probably looking at another year or two before you start seeing that becoming a common feature on most electric vehicles.” Electric vehicle technology is expanding its presence in Texas with companies like Rivian, whose delivery trucks operate out of Amazon's distribution hubs in the state, and Tesla, which is building a factory in the Austin area. Burkett said he thinks the shift to EVs in the future is inevitable because they are more efficient and cleaner than gasoline-powered vehicles. “I liken it to Blackberries and iPhones," Burkett said. "Initially, there were people who couldn't live without a keyboard, now 12 years later, that would look archaic as a technology. It is natural for people to be a bit resistant to something they don't have experience with.” In December 2020, Austin Energy created an Electric Vehicle Buyer's Guide in partnership with local dealerships. 
That same month the energy company also finished installing several chargers that can give a full charge in about 30 minutes at 21 cents a minute. However, not all vehicles are suited for this wattage and extended use will reduce the battery’s efficacy and lifespan. Joshua Busby is a professor of public affairs at the University of Texas and specializes in climate and environmental policy. He says the growth of the electric vehicle market and consumer acceptance in Texas all depends on this type of infrastructure. He said he is also an electric vehicle owner. He tried to visit College Station recently but found that the only charging stations between there and Austin were for Tesla. “Texas is a huge state and for drivers to have confidence that they can charge their vehicle from Point A to Point B and not be stranded somewhere,” Busby said. “Consumer acceptance will depend on that infrastructure investment.” The American Council for an Energy-Efficient Economy ranked Texas as 28th in its top 30 states for transportation electrification. The state earned high marks because of its tax credits and rebates offered to encourage electric vehicle purchases, but the group rated Texas poorly on supportive infrastructure for electric vehicles. The council study noted that for the 39,504 light-duty electric vehicles registered in Texas, the state only has 3,131, charging ports and 1,215 charging stations. On the national level, President Joe Biden in January signed an executive order aiming to replace the federal government’s fleet of gas-powered vehicles with electric vehicles, and to build 500,000 charging plugs throughout the nation. Busby said he has confidence that the Biden administration's infrastructure renewal plans can be passed through the U.S. Senate. “We are in a moment where U.S. infrastructure is decaying so we could employ people and also have the need to refurbish the decaying infrastructure,” Busby said. 
“What we ultimately need is investments in cleaner infrastructure that employs Americans, cleans up our energy systems in the process and puts us on the pathway of the jobs of the 21st century.” ©2021 Austin American-Statesman, Distributed by Tribune Content Agency, LLC.
<urn:uuid:55928e14-b2c2-43b4-9fb4-82c0463fe927>
CC-MAIN-2022-40
https://www.govtech.com/fs/generator-equipped-evs-could-help-during-weather-emergencies.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00235.warc.gz
en
0.96498
831
2.53125
3
The gender gap in the AI industry risks fostering an economic and technological system with a massive underrepresentation of women, according to UNESCO’s Gabriela I. Ramos Patiño. In an article posted on the World Economic Forum blog, the UNESCO assistant director-general for the social and human sciences suggested that the gender gap in AI is “self-perpetuating.” She warned that the disparity in the number of women versus men in Industry 4.0 “exacerbates the lack of entry points for women into tech.” “This huge inequity is a problem that has seen no improvement over the past decade, with the share of female AI and computer science PhDs stuck at 20%,” Patiño said. A 2021 Deloitte study found that women make up just 26% of the AI workforce in the U.S. According to Patiño, the disparity has increased further due to the pandemic. She referred to TrustRadius’s report, which found women are twice as likely as men to have lost their jobs and 42% of women in tech say they took on most of the household work during the pandemic. “The inequality women experience at work is compounded by the inequality they face at home,” she wrote. Patiño also cited recent World Economic Forum research that states the percentage of male graduates in IT is nearly five times higher than women graduates (8.2% versus 1.7%). “These numbers are an affront to key principles of diversity and inclusion. But the lack of female participation in this sector is also a detriment for the industry as a whole, which becomes more effective the more gender-diverse it is,” she added. Bridge the gender gap in AI Education and employment systems are perpetuating the problem, according to Patiño. “The lack of gender diversity in the workforce, the gender disparities in STEM education and the failure to contend with the uneven distribution of power and leadership in the AI sector are very concerning, as are gender biases in data sets and coded in AI algorithm products,” she said.
<urn:uuid:3d8de661-0e63-4412-b5f3-5e613b6e5853>
CC-MAIN-2022-40
https://www.itprotoday.com/artificial-intelligence/industry-urged-bridge-ai-gender-gap
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00235.warc.gz
en
0.959553
445
2.78125
3
The Justice Ministers of the 15 Member States of the European Union have decided to modify their country laws, with the aim of including prison sentences for the authors of computer crime. This decision may have more implications than it may seem: there are many different types of computer crimes, and all of them can be seen from different perspectives. For example, the attacks ‘against the integrity of information systems and databases carried out with the intent of hindering or interrupting the system’ will be a principal target of new legislation. That is, when a system is accessed and modified to affect its operation. However desirable it may be that these attackers end up in jail, without being a legal expert, this will be difficult if the author of the attack can not be identified. When a hacker wants to access a computer and steal information, he does not identify himself. If he did, it would be as if a thief went into a bank and said: ‘Hi, it’s me, I’m Jane Bloggs, I want to steal your money’. For that reason, a hacker will try to hide his actions in many different ways: from the most simple one, like using a computer in a Cybercafe, to the most sophisticated ones, like the usage of Trojans or logins in ill-protected computers. It is very difficult to identify the computer from which the attack is being carried out, even more if we take into account that there are many counties which do not have adequate controls over ISPs or telephone service operators. What’s more, there are free Internet connections with anonymous users, in many countries, that allow people to hide the number from which the telephone call is made. Summing up, a person could make a phone call to connect to the Internet without anybody knowing who or where he is. It is even easier to search for an unprotected computer (without a personal firewall and antivirus) on the Internet. Once the hacker finds it, he just needs to find an open port and log in. 
From that moment on, the attacks would seem to have been carried out from that computer. Of course this is a good reason for not forgetting to have your personal firewall enabled while connected to the Internet. Another crime that the legislation aims to deal with is the spreading of viruses, which is even more surprising than the case above. Every user is a potential virus distributor since, once a computer becomes infected, the malicious code will automatically spread itself to other users. Obviously, the European Union wants to punish the person that first sends the virus out, who intentionally causes the first infections. But it is all to easy to do this by using a computer in a Cybercafe, a free website, or even a USENET newsgroups. It must be taken into account that this first distributor could be a 14 year boy. This leads us to the conclusion that the real guilty party is the virus writer. He is the one to be punished. The problem is that this person has a series of resources to protect himself from the current legislation. As an example: if I am Chilean and I create a virus and leave it in a web page with a ‘.TV’ domain –which is located in a Japanese server- and a European kid uses it to carry out an infection, who is to be blamed? What does the Chilean legislation say about it? And Tuvalu’s one, which is the legislation to which the domain belongs? And does the Japanese government provide for this problem? Is the European Union the one who finally will put one of its citizens in jail? In the best of the cases (when the process is carried out in the European Union), if the virus writer has included the message ‘I do not accept responsibility for the wrong usage of this code, which I have left here for investigation purposes’ the writer will not be culpable. Whatever the law does to punish criminals, they will continue to exist, and they will be many! 
For that reason, the best thing to do is to protect your computer with a good antivirus and a personal firewall, and go on enjoying the Internet!.
<urn:uuid:b38bf732-ef2a-42d5-ad4d-68150f51ef03>
CC-MAIN-2022-40
https://it-observer.com/prison-computer-crime.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00235.warc.gz
en
0.963634
852
2.734375
3
October is Cyber Security Awareness Month (CSAM). The goal of CSAM is to help Canadians stay cyber safe by equipping them with knowledge through this five-week strategy. Last week, we looked at how to keep your phone and the information on it secure. While phones can do almost everything that a computer can, computers play an important role in our daily operations. This week, we explore how you can keep your computer and the information on it secure. Why You Should Protect Your Computer Many of our greatest accomplishments, whether personal or business-related, are thanks to computers. There’s no doubt about the benefits of using computers; they increase productivity, connect you to the rest of the world, store and organize information, and allow for endless possibilities. Whether you’re using a computer to do online banking or check in with friends on social media, computers store and process sensitive information. It’s important to be aware of how your computer can be vulnerable to a cyberattack and how you can keep your information safe. The last thing you want to worry about is a hacker stealing your personal or corporate information. How Can You Protect Your Computer? Here is a list of three easy things you can do right now to keep your computer and the information on it secure: - Create complex passphrases Did you know that at least 65% of people reuse the same passwords across multiple sites? Although this makes remembering your credentials easier to do, this also makes your accounts vulnerable to cyberattacks. By creating complex passphrases and unique passwords for each site you use, you instantly tighten up your security, making your accounts less attractive to hackers. Password managers such as Google Password Manager and LastPass can easily help you create complex passphrases and store them so that you never forget a password again. 
Some best practices for creating complex passphrases include: - Avoiding family, pet, company, and familiar names that can be easily guessed by others - Using unique combinations of letters, numbers, symbols, and cases for each site you use - Creating passwords with at least 4 words and 15 characters long - Prevent against malware Malware is one of the most common ways people experience a cyberattack. Did you know that 2 in 5 Canadians have had malware on their computer? Malware is software that is specifically designed to interfere with, damage, or gain unauthorized access to a computer system. If your device is infected, it can cause freezing and crashing, poor performance, unwanted pop-ups, and toolbars, and even send out unwanted emails. Malware presents itself in many forms, including viruses, worms, trojan horses, spyware and adware, and ransomware. These common forms of malware are sometimes difficult to recognize. The following best practices can help you protect your computer system against malware: - Install and use anti-virus software - Avoid suspicious links and email attachments - Download only from trusted sources - Use a VPN on unsecured networks like public Wi-Fi - Avoid phishing scams Like malware, phishing is a common method that hackers will use to steal valuable information from individuals and organizations. Phishing scams are often disguised as messages from people and organizations that you trust, making them easier to fall victim to. The most important way to avoid a phishing scam is to learn how to recognize one. Here are seven red flags to look out for: - Urgent or threatening language: Look out for threats of closing your account or taking legal action, and pressure to respond or act on something quickly. - Requests for sensitive information: Be on alert for links directing you to login pages, requests to update your credentials, and demands for yours or your company’s financial information. 
- Anything too good to be true: Avoid actions on messages that claim winnings from contests you’ve never entered, prizes you must pay for to receive, and inheritance from long-lost relatives. - Unexpected emails: Disregard emails such as receipts for items you’ve never purchased and updates on deliveries for things you didn’t order. - Information mismatches: Look out for an incorrect (but maybe similar) sender email addresses, links that don’t go to official websites, errors in spelling or grammar that a legitimate organization wouldn’t miss. - Suspicious attachments: Avoid attachments that you didn’t ask for, have weird file names or uncommon file types. - Unprofessional design: Be on alert for incorrect or blurry company logos, image-only emails, and company emails with little, poor, or no formatting. If you encounter any of these red flags in an email or message, do not interact with it. Rather, delete the email or message. If you are unsure, ask the sender about the message through a different channel. Stay Safe Browsing The number one trick to a secure IT landscape is knowledge. Knowing what you’re up against and all the various methods of cybersecurity best practices are key to your success. We’ve put together a short eBook from our experts of 10 simple practices you can implement today to instantly boost your cybersecurity.
<urn:uuid:76fed30c-0fad-4ec8-bb98-20c230b2e6c1>
CC-MAIN-2022-40
https://staging.alphakor.com/blogs/it-services/3-things-you-can-do-to-keep-your-computer-secure/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00235.warc.gz
en
0.920144
1,068
3.03125
3
Semiconductor device manufacturing consumes large quantities of water for a variety of purposes ranging from equipment cooling to wafer surface cleaning. Low purity water requirements and conditioning will not be discussed in this overview. Rather, we will focus on the production of ultrapure water (UPW) for use in device fabrication processes. Ultrapure water is required for many process steps. Early stages of device fabrication require repeated steps for wafer cleaning, rinsing and surface conditioning. At many different stages in device manufacturing, it is used for surface cleaning, wet etch, solvent processing, and chemical mechanical planarization. Indeed, the latter unit process has become one of the largest consumers of UPW within the fab, requiring high volumes for slurry production and rinsing. Large amounts of UPW are consumed in all fabs - according to the International Technology Roadmap for Semiconductors (ITRS) (2011), device fabs utilized 7 liters/cm2 of UPW per wafer out. This means that a typical 200 mm wafer fab that processes 20,000 wafers per month can use up to 3,000 m3 of UPW per day. That is the equivalent of the daily water requirements of a community of 20,000 people. The conversion of raw water to water of ultrahigh purity is thus a significant and costly activity for all semiconductor fabs. Because of the high cost of production and the high-volume needs, there are constant and significant efforts within the industry to reduce the usage of UPW. The ITRS usage target for 2020 was cited as 4.5 liters/cm2 in the 2015 ITRS Roadmap. UPW is normally produced using reverse osmosis / deionised resin bed technologies; however, as device linewidths continue to shrink, the requirement for ever higher water purities in semiconductor applications is expected to increase beyond the capabilities of current production technologies. 
Indeed, modern semiconductor standards for ionic contaminants in UPW are so stringent that some analyses are beyond the detection limits of available analytical tools. This section, will provide the reader a basic familiarity with the design elements and functionalities for UPW systems. We will discuss the main UPW parameters, the treatment sequence for UPW and provide some details of the main treatment steps. Several parameters are monitored for quality control of UPW. These parameters, their points of measurement and measurement method are identified in Table 1. A brief discussion of the main kinds of contaminants, methods of control of their level in UPW, and their typical specified limits is provided below. |Parameter||Measured (POD/POC)||Test Method| |Organic Ions||Lab||Ion Chromatography| |Other Organics||Lab||LC-MS, GC-MS, LC-OCD| |Total Silica||Lab||ICP-MS or GFAAS| |Particle Monitoring||Online||Light Scatter| |Particle Count||Lab||SEM - capture filter at various pore sizes| |Cations, Anions, Metals||Lab||Ion Chromatography, ICP-MS| |Dissolved O2||Online||Electric Cell| |Dissolved N2||Online||Electric Cell| Table 1. UPW Parameters, measurement points and methods. Resistivity: This is measured in mega-ohm centimeters or Mohm-cm. Low ion contaminant concentrations in the UPW produce high resistivity values. The theoretical upper limit for UPW with zero ionic contamination is 18.25 Mohm-cm. The 2015 International Technology Roadmap for Semiconductor (ITRS) guideline for UPW resistivity at 25°C is >18.0 Mohm-cm. Total Oxidizable Carbon (TOC): The TOC of UPW is measured in parts per billion (ppb). Oxidizable carbon in UPW originates from both inorganic (i.e., mineral carbonates) and organic (including biological and man-made contaminants) carbon contamination in the raw feed water. Typically, reverse osmosis (RO), ion exchange, UV irradiation and degasification are employed to reduce the TOC to acceptable levels in UPW. 
Tolerable TOC levels in UPW can vary, depending upon the application; however, most applications require very low carbon levels. As an example, the TOC levels needed to avoid lens hazing in immersion lithography have been a recent driver for this specification, with point of use levels of <1.0 ppb being specified for acceptable performance. The ITRS guideline for TOC is <1.0 ppb. Dissolved Oxygen (DO): DO is measured using an electrochemical cell. Typically, DO levels in modern fabs are less than 5 ppb. Dissolved oxygen is removed from UPW using vacuum degasification in membrane contactor systems. Particulate Matter: Raw water sources have high levels of particulate matter. Particles above the micron scale are removed using pre-filters and microfilters, after which the water is polished using increasingly fine filters to remove particles with diameters down to about 0.2 microns. Ultra-filtration at 10,000 molecular weight is used to remove residual particulates beyond this point. Particle specifications for UPW vary, depending on the fab application; in general, particles greater than 0.2 microns cannot be tolerated in any device fabrication, with well-defined limits on particle counts/liter for particles of smaller diameters down to around 0.05 microns. The current ITRS guideline for UPW is <0.3 particles/ml @ 0.05 micron particle diameter. Industry targets are ambitious; suggestions have been made for a specification of the order of <10 particles/ml having diameters greater than 10 nm, a specification that the industry is currently unable to measure, let alone control. In addition to particulate removal in bulk UPW, point of use (POU) ultrafiltration is often employed in the fab environment. Particle counts are normally measured using laser light scattering. Bacteria: Some bacteria can survive the UPW treatment process and these pose both a biological and particulate threat to integrated devices. 
Bacterial adhesion occurs naturally in water as pipe walls attract minute quantities of organic nutrients, which attach to the wall and initiate the biofilm process. While regular sanitization programs are employed by some facilities and provide safeguards against microbial activity, biofilms can prove resistant and may permanently coat the inaccessible surfaces of valves and dead-legs. Proper system design and adequate flow velocity are more important than periodic sanitizations to maintaining cleanliness of the system. Currently, the tests for bacteria and other organisms in UPW employ culture methods that test for viable bacteria and determine the level as "colony forming units/litre" or cfu/liter. These methods lack sensitivity in that only viable bacteria are recovered and large volumes of water may need to be sampled to provide adequate reliability (e.g. <1 cfu/liter cannot be measured with a 100 ml sample). A new technology called Scan RDI may offer a solution for testing total viable organisms. The method is able to detect a single cell based on direct measurements of cell activity and includes bacteria and other live organisms that may be present in biofilm. Another method for bacteria detection is epifluorescence, in which a technician uses a microscope to visually identify both viable and non-viable bacteria that have been stained with dyes that cause biological materials to fluoresce under ultraviolet light. A skilled microscopist can determine much qualitative and quantitative information about the bacteria in the UPW using this method. The ITRS recommended specification for bacterial contamination in UPW is <1 per 1000 ml (by culture). Silica: Silica, normally measured in ppb, is present in the feed water to UPW systems as silicates and polymeric (or colloidal) silica. 
Gross removal of silica normally occurs in the RO step of water purification with final removal of residual silica accomplished using anion exchange resin beds followed by ultrafiltration. Typically, the limiting specification for total silica in UPW is 0.2 - 1.0 ppb for dissolved silicates and 0.3 - 2.0 ppb for colloidal silica. Ions and Metals: Dissolved solids in the feed water to UPW systems consist of a charge-balanced mixture of cations (mostly metals) and anions. These impurities are removed in ion exchange resin beds. Acceptable concentrations of ions and metals in UPW range between 0.02 and 1 ppb, depending on the species and the application. UPW Unit Operations: Figure 1 provides a schematic of the unit operations in a typical UPW system. Semiconductor Fab Utilities
<urn:uuid:7766f5d5-7bd5-43b7-b5f7-f058dcea781f>
CC-MAIN-2022-40
https://www.mks.com/n/semiconductor-ultrapure-water
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00235.warc.gz
en
0.897371
1,874
2.984375
3
The recent global crisis of ransomware attacks on infrastructures and private businesses have left cyber experts and government authorities scrambling to double their efforts. Computer systems were infected worldwide in June 2017 with a massive cyber attack similar to a recent assault that affected tens of thousands of machines internationally, causing critical infrastructures to take a major hit. After recovering from a string of attacks that left thousands without power six months ago in December 2016, the citizens of Ukraine were faced with an even worse offense. A.T.M.s stopped working, workers were forced to manually monitor radiation at the old, toxic Chernobyl nuclear plant due to computer failures and industrial employees worldwide were scrambling to respond to massive hacks. “At the Chernobyl plant, the computers affected by the attack collected data on radiation levels and were not connected to industrial systems at the site, where, although all reactors have been decommissioned, huge volumes of radioactive waste remain. Operators said radiation monitoring was being done manually,” according to the New York Times. The entirely new ransomware infected the systems of Ukraine’s power companies, metro services, airports and government ministries such as Kiev’s central post office. The outbreak was the latest and most sophisticated in a series of attacks, using dozens of hacking tools, according to the NY Times. The malware also had an impact internationally, causing system shutdowns of: To continue the discussion on cyber espionage and industrial cyber security, join us at Transform 2017, our annual conference in Put-in-Bay, Ohio. Special Agent Keith Mularski, Unit Chief of the FBI Cyber Initiative & Resource Fusion Unit heads the Cyber Initiative for the FBI and was part of an effort to declassify cyber threats and pass them on to industry. 
Keith will walk through case studies of cyber incidents at US Steel, Alcoa and Westinghouse, revealing how the government communicated and worked together with industry to fight cyber crime. [ut_button color=”red” target=”_blank” link=”http://graymattersystems.com/transform-2017-cyber-security/#industrial-cyber” size=”medium” ]Learn More About Transform 2017[/ut_button] Cars have data and analytics for when parts should be replaced, so why can’t your utility? Like owning a car, the idea is similar for asset management. In a water treatment plant, pumps often come with a “best-by” sticker; a generic six-month date is stamped onto it, creating a time-based system for maintenance, regardless of usage. The date becomes the driving factor for servicing rather than following data. But there is a better way to capture condition of assets consistently, accurately and efficiently. The solution lies in combining two systems already in place and leveraging the findings to save time and money, drastically increasing uptime. Download the white paper to learn how to leverage digital data to effectively and accurately forecast maintenance of assets. [ut_button color=”red” target=”_blank” link=”http://graymattersystems.com/learn-true-real-time-condition-asset/” size=”medium” ]Download the White Paper[/ut_button] The Water Environment Federation (WEF) and Smart Water Networks Forum (SWAN) recently formed a pact to jointly promote the development of best industry practices for sustainable smart water networks. Smart water networks detect system leaks and manage energy through incorporating technology, according to Water Technology, an online water news publication. “Supporting innovation is essential to the water sector, and to further development of intelligent water systems,” WEF executive director Eileen O’Neill said. 
In the wake of technological advancements in the water sector, the combination of the groups’ focus on smart wastewater network management and integrated intelligent water practices will provide new skill sets and knowledge, allowing for workforce advancement. The partnership seeks to identify common barriers to implementing intelligent water practices, technology trends and new solutions. GrayMatter and DC Water have recently had success through a partnership of their own by co-innovating a smart sensor drinking fountain: a drinking fountain that monitors water quality and flow in real time, giving users more confidence in the water they are drinking and saving money spent on maintenance and testing. The groundbreaking project addresses lead levels – one of the most pressing issues in water. “This project redefines public water consumption, putting people and clean water first,” said Jim Gillespie, GrayMatter CEO. The new tech fountains have sensors that use real-time data and analytics to monitor both water quality and flow levels, sending that information to the cloud and back and alerting when water quality measurements begin to deteriorate. The co-innovation project is just the beginning of many ways private-sector innovation and independent operations are joining forces to make water operations more efficient at a lower cost. The fountains are set to be used in public places this fall, including schools. Learn more about the GrayMatter and DC Water innovation project at Transform 2017.
Virtual desktops are computers you access and use over the internet. Because the operating system is not running on the hardware used to access it, virtual desktops can be accessed from anywhere. In the computer vs. textbook debate, virtualization in schools could make student access to computers and educational software just as easy as access to textbooks. The process of providing support and access to multiple devices and operating systems can translate into enormous challenges and high costs for schools and their districts. That’s where virtualization steps in: reducing the school’s costs, extending hardware life and enhancing the learning experience of each student. Virtualization in schools is a particularly beneficial technology because it gives organizations the power to scale both computer and networking capabilities without requiring expensive hardware. Desktop virtualization and BYOD One area where school administrators are finding real value and significant cost reduction is combining desktop virtualization with a bring-your-own-device (BYOD) model. Major benefits of this model are: - Schools reduce cost and complexity - Students get a markedly better learning experience Instead of the school providing a computer system, students are assigned a desktop virtual machine, which lets them connect over a network. That allows each student to use their personal device, i.e., computer, laptop, or tablet, to connect to their virtual desktop. Unfortunately for students, this model will probably eliminate the “dog ate my homework” excuse for missing an assignment. Advantages of virtualization in schools Running and supporting multiple machines or servers can eat up a fiscal budget. Virtualizing reduces that need. With fewer servers, the school’s energy costs drop, and its existing hardware lasts longer.
- Quicker Disaster Recovery and Backup options – In a matter of minutes, a cyber-attack, power outage, natural disaster, even a leaking pipe can wipe out data critically needed by any school. Virtualization makes backup files easy to restore and the recovery process quicker because it’s all virtual. - Better school continuity plans – With education seeing increasingly mobile usage, having a good school continuity plan is crucial. Without it, school files are not accessible, work goes undone, processes slow down, and the school staff is less productive. With virtualization, a school’s staff can access communications, records, and software from anywhere and from any device. - More efficient IT operations – Going to a virtual environment can make everyone’s job easier – especially for the school district’s IT staff. With virtualization, the school’s technicians have a quicker path when distributing updates and patches, installing and maintaining the district’s software, as well as keeping the network more secure. Disadvantages of virtualization in schools With any technology transition, the act of making a change can cause frustration and confusion as staff and students learn new processes and technology. However, with careful planning and expert implementation, all of these drawbacks can be overcome. - Upfront hardware costs. When your school invests in virtualization software, there may be additional hardware necessary to make the virtualization process possible. That will depend on your school’s existing network. - Software licensing considerations. With more software vendors adapting to the increased use of virtualization in schools, it’s becoming less and less of a problem, but it is essential to check with your school district’s vendors to clearly understand how they view software usage in a virtualized environment. - Expect a learning curve.
Some applications do not adapt well to the virtualized environment, which is something that your school district’s IT staff will need to be aware of and address before converting. Implementing and managing a virtualized environment will also require IT staff with expertise in virtualization. Have more questions on virtualization in schools? Talk to your managed services provider about how virtualization would work in your district.
- 17 November, 2021 Franz’s Jans Aasman gives a primer on Graph Neural Networks at KMWorld Connect 2021 Enterprises have subscribed to the power of modeling data as a graph and the importance of building Knowledge Graphs for customer 360 and beyond. The ability to explain the results of AI models, and to produce consistent results from them, involves modeling real-world events with the adaptive schema consistently provided by Knowledge Graphs. Jans Aasman, CEO of Franz Inc., discussed the power of knowledge graphs during his KMWorld Connect 2021 presentation, “Graph Neural Networks for NLP and Entity-Event Knowledge Graphs.” Graph Neural Networks (GNNs) have emerged as a mature AI approach used by companies for Knowledge Graph enrichment via text processing for news classification, question answering, search result organization, and much more. A graph can represent many things: social media networks, patient data, contracts, drug molecules, etc. GNNs enhance neural network methods by processing graph data through rounds of message passing; in each round, nodes aggregate information from their neighbors, so a node’s representation comes to reflect both its own features and those of nearby nodes. This creates an ever more accurate representation of the entire graph network. “We’ve been working with knowledge graphs for many years,” Aasman said. “The model we’ve come up with is the entity-event approach.” This technique can be used by telecoms, in medical fields, in call centers, and in aviation, he explained. It predicts what is going to happen to the entity you are interested in. Sometimes you need something more than looking at a series of events, and that’s where a GNN comes in; traditional machine learning doesn’t work well when the context is a graph. In general, GNNs can be used for node classification, graph clustering, and link prediction. GNNs can also help with relation extraction in NLP. “We at Franz are interested in this and wanted to add to these use cases,” Aasman said.
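The message-passing idea described above can be sketched in a few lines. This is a minimal illustration only: one round of mean aggregation over a tiny invented graph. Real GNNs add learned weight matrices, nonlinearities, and many rounds, none of which are shown here.

```python
# Toy illustration of one round of GNN message passing on an undirected
# graph. Each node averages its neighbors' feature vectors and combines
# the result with its own features. (Pedagogical sketch, not a real GNN.)
def message_passing_round(features, edges):
    """Return updated per-node features after one mean-aggregation round."""
    neighbors = {n: [] for n in features}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    updated = {}
    for node, feat in features.items():
        msgs = [features[nb] for nb in neighbors[node]] or [feat]
        agg = [sum(vals) / len(msgs) for vals in zip(*msgs)]
        # Combine self features with the aggregated neighbor message.
        updated[node] = [(s + m) / 2 for s, m in zip(feat, agg)]
    return updated

features = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
edges = [("a", "b"), ("b", "c")]
print(message_passing_round(features, edges))
```

After one round, node "b" already reflects features from both "a" and "c"; stacking more rounds spreads information further across the graph, which is the intuition behind "nodes know more about their own features as well as neighbor nodes."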
AllegroGraph used a GNN to look at patterns in literature and put them into a graph of related events, he explained. The company looked at social and political actions for world events in 2018. A semantic reasoner creates additional knowledge and facts based on logical inference. Think of a GNN as a probabilistic inference engine, he said. AllegroGraph is currently being applied in the medical domain to find patterns within medical data and patients. “If you look at statistical relationships and taxonomies at the same time, you can predict the next outcome,” Aasman said. “We’ve seen our predictions get better with Graph Neural Networks.” KMWorld Connect 2021 is going on this week, November 15-18, with workshops on Friday, November 19. On-demand replays of sessions will be available for a limited time to registered attendees, and many presenters are also making their slide decks available through the conference portal. For more information, go to www.kmworld.com/conference/2021.
On average, corporations in regulated industries do not use between 73% and 95% of their data. The causes of this “wasted data” vary, but the most common is regulatory concern over privacy. Currently, 137 out of 194 countries have enacted or are in the process of enacting data privacy legislation – and those laws can vary wildly from region to region (for example, GDPR vs. the CLOUD Act) and even from state to state. With all of the complexity involved in using data, organizations are seeking new ways to realize the full value of their data in a regulatory-compliant manner. After decades of extensive research into varied privacy-enhancing technology approaches, there is finally a way for businesses to move beyond just storing their data to using it to make better, more informed decisions. Privacy Enhancing Technologies (PETs) enable businesses that handle large volumes of highly sensitive data to utilize it to its greatest potential, without sacrificing privacy or protection. They are, by design, engineered to minimize sensitive data exposure and maximize data security, and to allow that data to be analyzed and used while it is fully or partially encrypted. Privacy Enhancing Technologies allow businesses to collaborate on data and maximize its value without compromising the privacy of customers, clients, and intellectual property. PETs also support decentralized data analysis where data doesn’t leave its jurisdiction of origin, an important capability when data localization laws apply, e.g. in the European Union (GDPR). Advantages of Privacy Enhancing Technologies The advantages of Privacy Enhancing Technologies include the ability to analyze disparate sets of data, train and tune models on encrypted data, allow multiple users to conduct secure calculations on pooled or aggregated data, and much more.
Which capabilities are available depends not only on the expertise of the team deploying Privacy Enhancing Solutions on their systems, but also on which PETs they choose to use. What are the Different Types of Privacy Enhancing Solutions? - Fully Homomorphic Encryption – a form of encryption that enables computations on encrypted data, without decrypting it - Multiparty Computation – uses advanced cryptography to allow multiple parties to compute over their combined data while keeping their inputs private - Federated Learning – allows a machine learning model to be trained over differing sets of data without the data being decrypted Individually, each of these Privacy Enhancing Solutions provides organizations with specific tools to realize the full value of their data. When combined, PETs complement each other to not only enhance the value of data, but also to eliminate points of failure in an organization’s data processing structure.

| | Federated Learning | Secure Multiparty Computation | Homomorphic Encryption |
| --- | --- | --- | --- |
| Definition | Allows parties to share insights from analysis on individual data sets, without sharing the data itself. | Allows parties to perform a joint computation on individual inputs without having to reveal underlying data to each other. | Data and/or models are encrypted at rest, in transit, and in use (sensitive data never needs to be decrypted), yet the data can still be analyzed. Can be combined with other methods, like SMPC, to offer hybrid approaches. |
| Typical Use Case | Learning from user inputs on mobile devices (e.g., spelling errors and auto-completing words) to train a model. | Benchmarking between collaborating parties where aggregated output is adequate. | Analysis of sensitive data where flexibility around computation is desired, and regulatory compliance, precision, and security are necessary. |
| Drawbacks | Complexity of managing distributed systems; sharing aggregated insights may expose unwanted information; large-scale data needed to glean valuable insights; model parameters known by collaborating parties. | Output is known by all parties and can therefore be used to infer sensitive data; each deployment requires a completely custom setup, making it complex to implement; typically requires intensive communication between parties, driving high costs. | Best for batch or “human scale” computations. |

How can PETs be combined to benefit businesses? It depends on the use case. We’ll explore two distinct examples below. Combining Federated Learning with Homomorphic Encryption Federated Learning is a PET which allows parties to share insights from analysis on individual data sets, without sharing the data itself. Many organizations that use machine learning and artificial intelligence to build models use federated learning when compiling decentralized data. For example, a major transportation conglomerate wishes to optimize its bus routes nationwide. GPS data from the company’s fleet are stored in several data centers hundreds of miles apart. Because of the way the data is stored, compiling all the data in one central database is both costly and inadvisable. Moving the data from multiple data centers would either involve manual labor – which is costly and slow – or moving the data to a cloud provider, which places security for the data in the hands of a third-party provider.
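The decentralized-training idea behind the bus-route example can be sketched as simple federated averaging: each data center fits a model on its own records and only the model parameters leave the site, never the raw GPS data. This is a hedged, minimal sketch with invented numbers (a one-parameter linear model and two fictional sites), not how any specific vendor implements it; real federated learning adds secure aggregation, sample-count weighting, and far larger models.

```python
# Minimal federated-averaging sketch. Each site runs one local
# gradient-descent step fitting y ≈ w*x on its own (x, y) pairs; the
# coordinator only ever sees the updated weights, not the data.
def local_update(w, data, lr=0.1):
    """One gradient step on a single site's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(w, site_datasets):
    """Average the locally updated weights from every site."""
    return sum(local_update(w, d) for d in site_datasets) / len(site_datasets)

# Two fictional data centers whose data follow y ≈ 2x.
sites = [[(1.0, 2.0), (2.0, 4.0)], [(1.0, 2.2), (3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_average(w, sites)
print(round(w, 2))  # → 2.01 (close to the true slope of 2)
```

The drawback noted in the table shows up even in this sketch: every site learns the shared model parameters, which is one reason the article goes on to combine federated learning with encryption.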
Federated learning requires all participating organizations to pre-determine which types of analyses will be performed on the combined datasets. This makes it rigid and very challenging to adopt new analyses. Additionally, a third party could analyze updates to the model to make inferences about the underlying data, thereby putting the data at risk. Finally, aggregating the results leads to loss of precision and accuracy, undermining the goal of the machine learning project. Combining Homomorphic Encryption with Federated Learning not only gives machine learning a greater level of security; it also provides enhanced accuracy and makes the most of available data. By encrypting the data and encrypting the results, nothing can be inferred about the model or the data. Only permissioned parties are able to decrypt the results. Combining Federated Learning with Multiparty Computation and Homomorphic Encryption What happens when there is especially sensitive data being computed, and multiple parties are collaborating to create a shared model on that data? In cases like these, combining federated learning and homomorphic encryption is not enough. By adding multiparty computation, not only is all data encrypted, all models are built on encrypted data and all parties must agree to access any results. This protects data even if a participant is fully compromised. The distributed nature of the multiparty computation also protects against denial-of-service attacks. This way, it is impossible for an infiltrator to learn anything about the data, the model, or the results of training that model, and it reduces the ability of external actors to prevent the use of the service. In practical terms, there are a few industries which are at the forefront of combining Federated Learning, Fully Homomorphic Encryption, and Multiparty Computation: - Fighting fraud: Any given bank has access to 15-25% of its customers’ financial information.
The additional information exists with a number of external financial institutions. Privacy Enhancing Technologies can be used together to build a complete customer profile. By collaborating with other banks, fraud prevention officers can pool data and then train a model to analyze that data while it is still encrypted to predict which types of fraud are the most common in their country or which accounts flagged for fraud are likely to perform suspicious activity again. - Cyber Threat Intelligence Sharing: Organizations can share protected network trace data and develop models of insider cyber-attacks using the network trace data. This would allow the models to better prevent insider cyber attacks in the future. - Genome-Wide Association Studies (GWAS) – Data collaboration between multiple medical institutions makes it possible to train a model to predict the likelihood of a person developing certain kinds of cancers based on their genes, or how they will react to a specific strain of COVID-19. Some of this research is already being done today. Interested in learning more about PETs in action? Sign up for our upcoming webinar, “Revolutionizing the Data Sharing Landscape: Data Sharing in the Privacy Age.”
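To make the "compute on encrypted data" idea concrete, the sketch below implements a textbook Paillier cryptosystem, the classic additively homomorphic scheme: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so pooled values (e.g., fraud counts across banks) can be totaled without ever decrypting the inputs. This is a toy with deliberately tiny keys, chosen for illustration only; it is not secure at this key size and is not the specific scheme used by any product mentioned above.

```python
import random
from math import gcd

# Toy Paillier cryptosystem: Enc(a) * Enc(b) mod n^2 decrypts to a + b.
# (Illustrative only - these key sizes are far too small to be secure.)

def keygen(p=293, q=433):            # tiny demo primes
    n = p * q
    lam = (p - 1) * (q - 1)          # a multiple of lcm(p-1, q-1); suffices here
    g = n + 1
    mu = pow(lam, -1, n)             # valid because gcd(lam, n) == 1 for these primes
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    while True:                      # random r coprime to n blinds the ciphertext
        r = random.randrange(1, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)           # x = 1 + m*lam*n (mod n^2)
    return ((x - 1) // n * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
c_sum = (c1 * c2) % (pub[0] ** 2)    # multiply ciphertexts = add plaintexts
print(decrypt(pub, priv, c_sum))     # → 100
```

In a multi-bank setting, each party would encrypt its own totals under a shared public key, anyone could aggregate the ciphertexts, and only the holders of the private key (or, with MPC, a quorum of them) could decrypt the combined result.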
A human firewall is a security team composed of well-trained and highly engaged individuals. They act as the first line of defense against cyberattacks and can be critical to the success of your organization’s security posture. The number one cause of data breaches (52% of all of them) is human error. This means that having a well-functioning human firewall is more important than ever. Most of the time, phishing and social engineering attacks succeed because they exploit human vulnerabilities. With cyber attackers becoming more innovative and resourceful than ever before, organizations need to equip their teams to identify and respond to these attacks. In 2020 alone, the cost of business email compromise (BEC) scams was estimated at $1.8 billion. Human firewalls can help reduce these avoidable costs by training employees to recognize the dangers of phishing and social engineering attacks. What Are the Biggest Threats to a Human Firewall? There are several different types of threats that can affect a human firewall. The most common are phishing and social engineering attacks. - Phishing Attacks: Phishing is an attack that uses fraudulent emails to exploit human vulnerabilities. The goal of a phishing attack is to get the victim to click on a link or open an attachment. - Malware: Malware is a type of software designed to damage or disable computers. It can be installed through fraudulent emails, social media platforms, and websites. - Human Error: Human error is one of the biggest threats to organizations. It can be caused by carelessness, lack of knowledge, or simply clicking on the wrong link. Cybercriminals can exploit human error through phishing attacks and social engineering. 7 Steps to a Successful Human Firewall The firewall is a vital part of the security system that protects your organization from outside attacks. The human firewall does exactly this, but from a manual, in-person perspective.
A human firewall makes sure that data is not compromised and there are no data leaks. We’re not just talking about the front-of-house security guards or IT staff working to keep your data safe. We mean promoting an always-on approach: a team of dedicated employees who can recognize an attack, prevent a crisis, and stop it in its tracks. Employees who know how to spot potential fraudsters or imposters, and who know how to identify phishing attempts and avoid them. There are a few key considerations to building a successful human firewall. 1- Onboard with security in mind The first step in building a successful human firewall is to start creating a cybersecurity culture from day one. The recruitment and onboarding of a new employee should include cyber security awareness training. In fact, recruiters should look for security-minded characteristics as part of the recruitment process. It is essential to have a mix of skills and experience on your team: you need people who can protect your organization from cyberattacks and who understand the business. 2- Train them well Once you have recruited the right people, it is vital to train them well. It’s a marathon, not a sprint: security training should be an ongoing process and should cover topics such as phishing attacks, ransomware attacks, malware, and social engineering. The training should be engaging, scenario-based, and conducted in an environment where the team can feel vulnerable yet empowered. You want to form a team that’s alert, ready to respond, and agile. Attacks come in all shapes and forms; if you want your A-team to be responsive and adaptable, create this environment from day one. 3- Keep them informed It is also important to keep employees informed about the latest threats and how they can protect themselves. Employees need to be aware of the latest cybersecurity risks and the dangers of clicking on links and opening attachments from unknown sources.
It needs to go beyond regular security updates and newsletters; all employees should be encouraged to call out cyber security threats and attempts that happen to them. Perhaps it’s a dedicated Slack channel or reporting system. The more your organization is aware of the frequency and diversity of these attacks, the more you can strengthen and grow your human firewall. 4- Use the right tools “A (wo)man’s only as good as his/her tools.” The next step is to ensure your organization has and uses the right tools. Create a complete security awareness platform for your employees. Security tools such as data protection software, network security monitoring tools, encryption tools, antivirus software, and web vulnerability scanning tools are all important considerations. Consider CybeReady if you’re looking for a platform that can simulate phishing attacks, equip your team with security awareness and provide compliance tools for their best cyber security work. 5- Create your Human Firewall Plan The next step in building a successful human firewall is implementing strong security policies. Security policies should be clear and concise and cover topics such as password policy, email security, and social media usage. Security policies should be enforced, and employees should be held accountable for following them. 6- Conduct phishing tests Another way to keep employees engaged in maintaining business security is to conduct phishing tests. Phishing tests are a great way to check whether employees are aware of the dangers of phishing attacks and know how to protect themselves. The best way to conduct phishing tests is to use a tool such as Blast by CybeReady. 7- Create a strong cybersecurity culture The final step in building a successful human firewall is to create a strong cybersecurity culture. A strong cybersecurity culture will help employees stay engaged and motivated.
One way to create a strong cybersecurity culture is to not be afraid to talk about cyber security and vulnerabilities. Share regular security updates, conduct phishing tests, run regular employee training and engagement, and focus on team culture. The more people care, feel valued and enjoy what they do, the better your human firewall will be. Reward, appreciate and incentivize At the end of the day, when you form a human firewall you’re asking your team to prioritize cyber security, take time out of their day and add to their list of commitments. You’re asking them to care. Above salary, it’s likely your team wants to be part of an organization with a great culture and a great mission. They want to be part of an approachable workplace and to feel that they can grow, add value, and be valued! Perhaps it’s a cyber guard of the quarter, a financial bonus, a team day out, or leaving early on a Friday. Create this culture; reward, appreciate and incentivize. Building a successful human firewall can be a daunting task, but following these seven steps will help you get started. By selecting the right people, training them well, and keeping them engaged, you can create a security team poised to assist and protect your organization from cyberattacks. Remember, the key to a successful human firewall is a strong cybersecurity culture. Employees should know how badly cyberattacks can impact their business and how they can protect themselves. By implementing strong security policies and a culture that cares, you can create the A-team of human firewalls and protect your organization from cyberattacks.
The American Crime Prevention Institute (ACPI) has developed a comprehensive training and education program designed as a vital step in strengthening community trust and respect for law enforcement. A recent series of highly publicised police-involved deaths of unarmed citizens has brought unprecedented acrimony toward law enforcement by some segments of the community. Public trust and respect have eroded, resulting in reduced police effectiveness and calls for police defunding – signalling a widening gap between communities and law enforcement. Without positive and fundamental change to improve the relationship between the public and law enforcement, the safety and security of our communities is put at higher risk. “The public’s trust and respect for law enforcement has waned; budgets are being challenged and more heavily scrutinised than ever before,” said Dan Keller, Executive Director, ACPI. “Police-community engagement programs encourage positive, proactive relationships with community members in a non-stressful, non-enforcement and non-confrontational manner. This results in strengthened community support, improved effectiveness and enhanced legitimacy of police.” Community Engagement for Law Enforcement is a three-day seminar developed specifically for law enforcement administrators, officers and community leaders. The course addresses specific strategies and tactics law enforcement can leverage to address topics such as minority, youth, community activist and LGBTQ engagement, implicit bias and procedural justice, among many others. Designed as a positive step in improving rapport with communities, this course will provide a fundamental re-imagination of effective policing, a roadmap to establishing proactive community relationships and a review of successful community engagement programs and initiatives currently being employed by law enforcement agencies throughout the nation. 
“Without the trust, respect and support of our community, it is difficult for law enforcement to be effective,” said Jeff McGowan, Past President of the Texas Crime Prevention Association. “Community engagement programs strengthen positive police/community relationships.” The program, scheduled for 8-10 December 2020, will be presented live online, enabling real-time interaction with instructors. ACPI has partnered with SecureBI to conduct this course virtually using interactive video collaboration technology. To register, visit https://bit.ly/3d6ryPf.
Email is one of the most critical business tools and a major component of the lives of many people. At the same time, it seems to lack adequate security, as the Clinton campaign email leaks and the publication of France’s Macron emails have shown. Email is thus both insecure and used to share important, often sensitive information. While companies encrypt sensitive data, work on their perimeter protection and worry about personal data in the cloud, you don’t hear much about email encryption. This may be partly because email has a reputation for being hard to encrypt. You’ve heard that you have to download and install complicated email encryption applications and worry about keys, and even then the email may appear in clear text if it is decrypted at a gateway or at an email server. Protection Against Identity Theft Even if you think you don’t send sensitive information via email, because it’s not secure after all, most people use email to exchange opinions, purchase details, travel plans, family matters and other private information. Separately, these bits of data don’t mean anything, but when combined with other publicly available personal information, they let third parties assemble a detailed profile of your family, your preferences, your activities and your future plans. Essentially, such data mining can be as pervasive as if someone were actually reading all your emails. They could use the information to call up your bank or credit card company and successfully pretend to be you, because they know all about you. They could also hijack your email and social media accounts by guessing the answers to security questions, and they could use your accounts to get access to even more private information. Eventually they could steal your identity and cause all kinds of problems for you. These are not hypothetical issues. You may already have noticed that the ads you see on websites change according to which sites you visit.
Cloud service providers scan emails to show you ads. Other people may be able to read your emails as well. Email encryption keeps you secure by preventing others from seeing a lot of your personal information. Since the Edward Snowden revelations, we know that governments regularly force email service providers to hand over access to email accounts. It sounds innocuous but maybe you are against pipelines or don’t like an elected official. Maybe you have firm opinions on foreign wars or immigration. Based on certain key words that you might easily use, your email account could come under surveillance. And even if there is nothing suspicious there, government agencies might mistake something you say, a joke or an exaggeration, and take further action. This assumes you actually don’t act on any of your opinions. If you do, for example by engaging in legal protest, you can’t get organized without government agencies knowing all about your plans. Effective legal protest is difficult when the target of your protest knows what you are going to do before you do anything. If instead of protesting, you uncover government corruption, your whistleblower career is likely to be cut short if the corrupt individuals find out what you intend to do. If you value your civil liberties, even hypothetically, encrypting your email is a good place to make sure you keep them. Absolute End-to-End Encryption With new encryption technology, End-to-End Encryption for emails is easy and transparent. The email stays illegible from the time you type it to the time your authorized recipient reads it. This prevents criminal hackers from finding out enough about you to create a useful profile and it keeps government agencies from learning about your private affairs. Encrypting your emails helps ensure that you keep personal matters, details and opinions private and increases your overall security by making certain third parties can’t use information from your emails against you. 
This technology allows you to control exactly who has access to your emails. If you send an email to a trusted recipient and later find out that the account is compromised, you can revoke your authorization and nobody will be able to read your email. If you mistakenly send a confidential email to an unintended account, nobody there can read it, because you haven't given them your authorization. Until you authorize a specific recipient to read your email, nobody, not the government, not your local tech person, not your system administrator, nor anyone at your cloud service provider, will be able to read your emails. You retain full end-to-end control.

With CloudMask, only your authorized parties can decrypt and see your data: not hackers with your valid password, not cloud providers, not government agencies, and not even CloudMask itself. Twenty-six government cybersecurity agencies around the world back these claims. Watch our video and demo at www.vimeo.com/cloudmask
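One common way to implement the per-recipient authorization and revocation described above (an assumed mechanism for illustration; the article does not specify CloudMask's actual design) is key wrapping: the message is encrypted once under a message key, and that key is wrapped separately for each authorized recipient, so revoking a recipient just means deleting their wrapped copy.

```python
# Toy model of per-recipient authorization via key wrapping (illustrative
# only; XOR stands in for a real key-wrap algorithm such as AES-KW).
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (stand-in for real key wrapping)."""
    return bytes(x ^ y for x, y in zip(a, b))

message_key = secrets.token_bytes(32)  # encrypts the email body itself
alice_key = secrets.token_bytes(32)    # Alice's personal key
bob_key = secrets.token_bytes(32)      # Bob's personal key

# The sender wraps the message key once per authorized recipient.
wrapped = {
    "alice": xor(message_key, alice_key),
    "bob": xor(message_key, bob_key),
}

# Each recipient unwraps the message key with their own personal key.
assert xor(wrapped["alice"], alice_key) == message_key

# Revoking Bob: delete his wrapped copy; the ciphertext need not change,
# but Bob can no longer recover the message key.
del wrapped["bob"]
```

The design choice worth noting is that revocation touches only the small wrapped-key record, never the encrypted message itself.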
Industries That Rely on Islamic Finance

Islamic finance and Islamic banking form one of the largest industries in the world today, and the industry is very profitable for the companies able to make use of it. It has also started a broader trend toward Islamic financial transactions. This began in the first year of independence, when international trade and business were introduced. New policies created a need for new laws and regulations to govern trade and business, and the new laws made it mandatory for businesses to deposit their capital in Muslim banks. This encouraged businessmen and traders to adopt Islamic banking, which they could also use to secure their business dealings and which gave them better ways to control their cash flow and investments.

There was also a need to bring all international trade and business under the same rules, so the government imposed a set of regulations intended to benefit the economy and every business and industry involved in that trade. Industries that rely on Islamic finance have grown in size and profitability because they were able to incorporate these Islamic financial rules into their practices: they can now conduct their business more effectively, keep their operations afloat and even expand. The government has therefore been working hard to get new regulations approved and imposed for industries that rely on Islamic finance and Islamic banking. These industries have struggled to get regulations approved and implemented in a way that benefits rather than hinders them; each time new regulations are introduced, businesses must wait a long time for approval and implementation.
As a result, many industries have found it impossible to survive without some form of help. There have been cases where businessmen and traders failed for lack of resources and funds, and faced great difficulty conducting their financial transactions. These difficulties forced industries that rely on Islamic finance and Islamic banking to look at other options for securing their transactions. Some industries managed to keep operating without these financial services, but others suffered greatly from the problems caused by the lack of regulation. The government therefore turned to Islamic finance and Islamic banking to ensure that these industries could carry out their financial transactions with ease and security.

Since their introduction, Islamic finance and Islamic banking have helped economies and businesses tremendously. Firms no longer suffer the transactional difficulties they once faced when relying on other types of businesses and industries, because they can now conduct all their financial transactions smoothly and no longer deal with counterparties that ignore financial regulations. They have also become more stable, since they now work with firms that abide by Islamic banking laws and procedures. These firms have realized that the traditional ways they used in the past no longer suffice, and that they cannot afford to operate as they once did, because the old ways of doing business can no longer meet their needs.