url: string (15 – 1.48k chars)
date: timestamp[s]
file_path: string (125 – 155 chars)
language_score: float64 (0.65 – 1)
token_count: int64 (75 – 32.8k)
dump: string (96 distinct values)
global_id: string (41 – 46 chars)
lang: string (1 distinct value)
text: string (295 – 153k chars)
domain: string (67 distinct values)
https://elli.jobs.personio.de/job/362446
2021-05-14T20:07:51
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991207.44/warc/CC-MAIN-20210514183414-20210514213414-00370.warc.gz
0.939029
188
CC-MAIN-2021-21
webtext-fineweb__CC-MAIN-2021-21__0__152719672
en
What you will do - You will be the responsible system owner for Elli's Asset Management System (AMS) in the area of charging solutions, keeping track of and managing IoT devices and their components throughout their lifetime. - You will be in charge of the AMS from the conceptual phase, via implementation with Elli's partners, all the way to running, continuously improving, and maintaining it. - You will define and align requirements, interfaces to other systems and priorities with relevant stakeholders from within Elli as well as external partners. - You will run, orchestrate, improve, and develop Elli's asset data for existing as well as new products. - You will act as a coach to Elli's product development teams on asset management when defining and building new products, and be the single point of contact for all AMS-related topics.
systems_science
https://chscpr.org/cdc-launches-new-respiratory-illness-data-dashboard-resp-lens/
2024-02-27T20:21:48
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474686.54/warc/CC-MAIN-20240227184934-20240227214934-00076.warc.gz
0.861633
234
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__21261685
en
CDC recently launched a new Respiratory Virus Laboratory Emergency Department Network Surveillance (RESP-LENS) dashboard. This new tool shows emergency department visits for laboratory-confirmed cases of influenza (flu), SARS-CoV-2 (COVID-19), and respiratory syncytial virus (RSV) infection. Preparedness planners can use the RESP-LENS dashboard to follow trends and compare the activity of these three pathogens across age groups and geographic locations. CDC plans to update the data displayed on the RESP-LENS interactive dashboard weekly. This surveillance system provides timely data on the primary viruses causing acute respiratory illness and on how each virus's activity might vary over time and by age group. The percent-positivity metric can be used to follow trends in, and compare, laboratory-confirmed COVID-19, flu, and RSV activity across age groups and HHS regions in the United States. RESP-LENS serves as a valuable tool for public health and health care professionals: it allows users to visualize and understand trends in virus circulation, estimate disease burden, and respond to outbreaks, informing decisions and strategies for protecting public health.
systems_science
https://fenix-network.eu/fenix-project-showcased-in-greece/
2023-12-04T01:12:31
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100518.73/warc/CC-MAIN-20231203225036-20231204015036-00343.warc.gz
0.943885
345
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__15369602
en
On 25 May a technical demonstration of the FENIX project was held at the Greek pilot site at the Container Terminal of the Port of Piraeus. The event was organised by the Hellenic Institute of Transport of the Centre for Research and Technology Hellas (HIT/CERTH) and the ISENSE Group research team of the Institute of Communication and Computer Systems (ICCS), in close cooperation with PCT S.A. The Greek Deputy Minister of Transport, Michalis Papadopoulos, attended the event. "The Ministry of Infrastructure and Transport participates as a partner in the FENIX project, actively advocating the digitisation of processes and the use of innovative technologies, which strengthen the Greek ecosystem of transport and logistics. (…) Procedures such as customs clearance, which used to take up to 6 hours, are now limited to 20-30 minutes through the digitisation carried out by the FENIX project. This reduction is reflected in significant economic and environmental benefits. It is a very good example of how the new technology is integrated into the transport chain, for the benefit of all involved and, mainly, for the benefit of the Greek economy as a whole. The development of the Greek Transport & Logistics Observatory is also extremely important; it will improve the position of Greek services in the European market and will facilitate communication between individuals and the public sector," stated Mr Michalis Papadopoulos. The technical demonstration was an opportunity to showcase the advancements of the initiative, and the attendees could tour the port and observe the ongoing physical process of importing/exporting containers that is related to the FENIX digital service. Read the full press release of the event here.
systems_science
https://undercovertechguy.wordpress.com/2010/06/03/sorting-out-rkhunter-on-fedora-13-and-hooking-it-up-to-anacron/
2018-06-24T04:54:31
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866358.52/warc/CC-MAIN-20180624044127-20180624064127-00392.warc.gz
0.894072
683
CC-MAIN-2018-26
webtext-fineweb__CC-MAIN-2018-26__0__120909086
en
I've used rkhunter (Root Kit Hunter) in the past (on my old Ubuntu machine) and even though it might be a little overly paranoid, it's not a bad idea to run it sometimes and check your system integrity. Now that I've recently got a fresh Fedora 13 install I wanted to set up rkhunter again on my clean system. I installed it with YUM: # yum install rkhunter and invoked it to make it update its file properties database (basically saying: this system is clean, use it as a reference for future checks): # rkhunter --propupd (note: run as root) And then proceeded to run a check on the system (not really needed, since I'd just run propupd, but I wanted to see if things worked): # rkhunter --check Invalid XINETD_CONF_PATH configuration option - non-existent pathname specified: /etc/xinetd.conf Ok, so apparently this is a known problem on Fedora since version 11 and is fixed by commenting out the XINETD_CONF_PATH line (the one pointing at /etc/xinetd.conf) in the /etc/rkhunter.conf file. Having done that, rkhunter runs as expected and checks the system for problems. Btw, you can get more detailed info on rkhunter here. Now I wanted to add rkhunter updates and checks to Anacron so that they could be run every couple of days. Since I'm on a laptop that isn't always on, Anacron is the right choice (as opposed to Cron). More on that can be found here. To make this work I had to edit the /etc/anacrontab file, which lists the different tasks to be run. By default it contains some entries related to cron; there's some trickery involved between the two, but that's not relevant to the task at hand. All that was needed was to add the following two lines to the file: 5 5 rkhunter.update rkhunter --update 5 15 rkhunter.check rkhunter --check --sk --rwo No earlier than every 5 days, and no earlier than 5 minutes after anacron first starts, a task we identify as "rkhunter.update" is run, and the command is "rkhunter --update"…simples. Similar for the next line, which is the actual rootkit check. (The parameters "--sk" and "--rwo" mean: don't ask for key presses and only output warnings.) NOTE: I had to search around a bit before I realized that all the tasks in the anacron (and presumably cron-) -tab files are run as root… Anacron (and cron) both email the output from these runs to the root account. To see what's been emailed, the simple (but not elegant!) method is to read root's mailbox file directly: # less /var/spool/mail/root So now you know how to install and run rkhunter on Fedora 13 and how to get it set up to run on a regular basis using anacron.
systems_science
http://salfordonline.com/5669-eight-things-to-know-about-here.html
2020-07-04T06:43:19
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655884012.26/warc/CC-MAIN-20200704042252-20200704072252-00143.warc.gz
0.937288
670
CC-MAIN-2020-29
webtext-fineweb__CC-MAIN-2020-29__0__13857042
en
This week Nokia announced an agreement to sell HERE to a consortium of leading automotive companies, comprising AUDI AG, BMW Group and Daimler AG. Here are eight facts about HERE that you should know: 1. HERE has been mapping roads for 30 years HERE combines expertise in the fields of mapping and location services that Nokia has developed since the mid-2000s. The roots of HERE lie in 1985, when a start-up began by mapping the San Francisco Bay area. This start-up became NAVTEQ (acquired by Nokia in 2008); the areas covered and the detail of its maps increased rapidly, and in 1994 it installed its first ever automotive-grade map in the BMW 7 series. 2. Today, HERE is building the map of the future for automated cars Automated cars can't work without extremely detailed high definition (HD) maps. As a crucial component of automated driving technology, these HD maps allow cars to precisely position themselves on the road. They also serve as a foundation for real-time data about the road environment to enable vehicles to react to changes on the road in a timely manner. 3. There are hundreds of HERE cars mapping the world's roads right now HERE uses cars equipped with laser-based LiDAR technology to map roads around the world to centimeter-level accuracy. In total, HERE calls upon 80,000 data sources to constantly update its maps, including cars, land-registry data and a global community of cartographers. 4. HERE already provides map data to the majority of car makers Four out of five new cars with in-dash navigation in Europe and North America are fitted with HERE maps. That equates to a new car every three seconds, or 10 million cars a year. It also provides additional features and functionalities like routing, traffic and local search to a number of premium car manufacturers. 5. HERE helps businesses to be faster, cost-effective and efficient The world's leading enterprise software and logistics companies like FedEx, Oracle, SAP and UPS are working with HERE. Location intelligence from HERE helps to make fixed and mobile asset management more safe, productive, efficient and sustainable. 6. HERE can predict traffic 12 hours into the future HERE uses its expertise in predictive analytics and cloud computing to help predict traffic up to 12 hours ahead of time. Its real-time traffic service covers 50 countries with up-to-the-minute information about current traffic conditions and incidents that could cause delays. It's also the only traffic service on the market today that updates the direction of traffic flow on metropolitan reversible express lanes. 7. HERE maps public transport, too. HERE doesn't just map roads; it also has up-to-the-minute information on public transport routes and timetables. It offers public transport information for 1,000 cities across 44 countries globally. 8. You can use HERE maps for free on your smartphone The HERE maps app is available to download for Android and iOS for free, with users able to download maps to their device to reduce mobile data use at home and abroad. It offers turn-by-turn navigation for 131 countries, real-time traffic information for 50 countries and public transport routing for 1,000 cities worldwide.
systems_science
http://jackieyang.me/soundr/
2024-02-27T07:19:02
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474671.63/warc/CC-MAIN-20240227053544-20240227083544-00826.warc.gz
0.880608
351
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__96306344
en
Although state-of-the-art smart speakers can hear a user's speech, unlike a human assistant these devices cannot figure out a user's verbal references based on their head location and orientation. Soundr presents a novel interaction technique that leverages the built-in microphone array found in most smart speakers to infer the user's spatial location and head orientation using only their voice. With that extra information, Soundr can figure out a user's references to objects, people, and locations based on the speaker's gaze, and can also provide relative directions. To provide training data for our neural network, we collected 751 minutes of data (50x that of the best prior work) from human speakers, leveraging a virtual reality headset to accurately provide head-tracking ground truth. Our results achieve an average positional error of 0.31 m and an orientation angle accuracy of 34.3° for each voice command. A user study evaluating user preferences for controlling IoT appliances by talking at them found this new approach to be fast and easy to use. DOI link: https://doi.org/10.1145/3313831.3376427 Citation: Jackie (Junrui) Yang, Gaurab Banerjee, Vishesh Gupta, Monica S. Lam, and James A. Landay. 2020. Soundr: Head Position and Orientation Prediction Using a Microphone Array. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376427 Paper PDF: https://jackieyang.me/files/soundr.pdf
systems_science
http://blog.ivocalize.com/2013/09/server-updates.html
2017-03-30T18:35:03
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218199514.53/warc/CC-MAIN-20170322212959-00578-ip-10-233-31-227.ec2.internal.warc.gz
0.838652
127
CC-MAIN-2017-13
webtext-fineweb__CC-MAIN-2017-13__0__116061872
en
Our latest server update adds support for Delegated Authentication. For example, if your website already has a database of usernames and passwords, you may use your existing login system to log users directly into iVocalize, bypassing the normal login page. We have also released the Server Management API for automating the tasks associated with room creation and configuration. Server customers wishing to use either Delegated Authentication or the Server API may contact [email protected] to get started. Other changes and fixes with the latest update: - added automatic reconnect - corrected a problem with the whiteboard slide number control on Firefox - improvements to the server control panel
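As a rough sketch of how delegated authentication schemes like this are commonly built (a generic illustration with made-up names and a made-up endpoint, not iVocalize's actual API): the site that already holds the user database issues a short-lived signed link, and the conference server admits anyone whose signature verifies.

# Python sketch: generate an HMAC-signed login link (all names hypothetical)
import hashlib, hmac, time
from urllib.parse import urlencode

SHARED_SECRET = b"example-secret"  # hypothetical secret shared with the conference server

def signed_login_url(username, room):
    expires = str(int(time.time()) + 300)  # link valid for five minutes
    payload = f"{username}|{room}|{expires}".encode()
    sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    query = urlencode({"user": username, "room": room, "expires": expires, "sig": sig})
    return "https://conference.example.com/join?" + query  # made-up endpoint

print(signed_login_url("alice", "webinar1"))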
systems_science
https://www.opclabs.com/component/content/article/217-resources/news-and-announcements/1109-quickopc-used-for-education-by-technikerschule-der-landeshauptstadt-muenchen
2024-02-29T09:39:49
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474795.48/warc/CC-MAIN-20240229071243-20240229101243-00478.warc.gz
0.735282
136
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__199370459
en
Technikerschule - Städtische Fachschule für Maschinenbau-, Metallbau-, Informatik- und Elektrotechnik in München, Germany, has used QuickOPC in one of its courses. In an automation + IT class of Dipl.-Ing. Reiner Doll, students learn about the various levels in industrial automation. One of their tasks is to access a control system using OPC Unified Architecture from a Visual Basic program. QuickOPC is used to achieve this task, either directly or in the form of higher-level programming blocks prepared for the students. The material for the course can be found here.
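QuickOPC itself is a .NET library driven here from Visual Basic; as a rough sketch of the same kind of OPC UA read done in Python with the open-source opcua package instead (the endpoint URL and node ID below are made up for illustration):

# pip install opcua
from opcua import Client

client = Client("opc.tcp://192.168.0.10:4840")  # hypothetical PLC endpoint
client.connect()
try:
    node = client.get_node("ns=2;s=Machine.Temperature")  # assumed node id
    print("Current value:", node.get_value())  # one synchronous read
finally:
    client.disconnect()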
systems_science
http://hpc.hud.ac.uk/systems/
2017-08-20T23:00:21
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106996.2/warc/CC-MAIN-20170820223702-20170821003702-00131.warc.gz
0.851472
183
CC-MAIN-2017-34
webtext-fineweb__CC-MAIN-2017-34__0__142942684
en
The HPC visualisation and video conferencing suite is managed by us on behalf of the HPC Research Group. The suite is equipped with 3x 70” 3D displays, supports video conferencing over IP, and has a direct link with the Nikon X-ray Tomography lab. Eridani is the general purpose 136 core Intel cluster, with 2GB/core RAM. Sol is the 256 core AMD cluster, with 2GB/core RAM. Ascella is the 200 core Intel advanced research cluster, with 2GB/core RAM and Infiniband interconnect. Our Condor pool spans every Windows PC on campus. Typically you can expect 1-3k CPU cores available for opportunistic jobs. We provide 10TB of NFS storage for user home directories. Regulus is our Lustre deployment that provides a shared parallel filesystem of 38TB.
systems_science
https://sapcupgrades.com/all-you-need-to-know-about-the-wannacrypt-ransomware/
2023-12-07T00:04:51
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100626.1/warc/CC-MAIN-20231206230347-20231207020347-00543.warc.gz
0.96388
427
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__79918329
en
WannaCrypt is a ransomware program targeting Windows. On Friday, 12 May 2017, a large cyber-attack using it was launched, infecting more than 230,000 computers in 150 countries and demanding ransom payments in the cryptocurrency bitcoin in 28 languages. It spread primarily by phishing emails (most commonly links or attachments) and as a worm on unpatched systems. The attack affected Telefónica and several other large companies in Spain, as well as parts of Britain's National Health Service, FedEx, Deutsche Bahn and LATAM Airlines. Other targets in at least 99 countries were also reported to have been attacked around the same time. WannaCry is believed to use the EternalBlue exploit, which was developed by the U.S. National Security Agency (NSA) to attack computers running Microsoft Windows operating systems. Although a patch to remove the underlying vulnerability for supported systems (Windows Vista and later operating systems) had been issued on 14 March 2017, delays in applying security updates and Microsoft's lack of support for legacy versions of Windows left many users vulnerable. Due to the scale of the attack, to deal with the unsupported Windows systems and to contain the spread of the ransomware, Microsoft took the unusual step of releasing updates for all older unsupported operating systems from Windows XP onwards. Shortly after the attack began, a researcher found an effective kill switch, which prevented many new infections and allowed time to patch systems; this significantly slowed the spread. It was later reported that new versions lacking the kill switch had been detected. Cyber security experts also warn of a second wave of the attack due to such variants and the beginning of the new workweek. As always, be sure your Windows is up to date. XP users should consider upgrading where possible; the vulnerabilities for that operating system will not go away. Don't click links in an email. Don't open file attachments. And, our longest-running advice: back up regularly. You can back up to the cloud or another drive. Programs like Macrium Reflect can image your drive, essentially letting you restore everything at any time.
systems_science
https://www.wasoftware.com/national-call-center-data-warehouse
2024-04-14T08:36:27
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816875.61/warc/CC-MAIN-20240414064633-20240414094633-00116.warc.gz
0.909405
269
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__36078031
en
NATIONAL CALL CENTER DATA WAREHOUSE AND REPORTING SYSTEM The U.S. Department of Labor, Employment and Training Administration (ETA) project conducts more than 75,000 telephone surveys each year with former Job Corps enrollees and graduates to assess their satisfaction with the program and to collect data on their post-program employment and educational outcomes. The Office of Job Corps uses the data collected from the survey to measure program outcomes at a national level and to assess Job Corps Centers' effectiveness. Washington Software was responsible for migrating the call center system from the corporate server room to the Department of Labor's data center in Maine. WSI also created call center monitoring and scheduling software that allows call center supervisors to record interviewers' performance, as well as a data warehouse reporting system with summary reports to compare and evaluate the performance of different interviewers. Efficient call center performance facilitates assessment of the high numbers of Job Corps enrollees and graduates each year. Call center monitoring and scheduling software ensures that interviewers' performance levels meet program standards. Summary reports enable management to compare and evaluate performance. Technologies: C#, .NET, Windows 2008, application security, IIS, SQL Server, SQL Server Reporting Services (SSRS)
systems_science
https://www.museums.cam.ac.uk/events/science-sundays-how-has-evolution-fine-tuned-c4-photosynthetic-turbo-charger
2021-04-13T13:26:57
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072366.31/warc/CC-MAIN-20210413122252-20210413152252-00168.warc.gz
0.949886
180
CC-MAIN-2021-17
webtext-fineweb__CC-MAIN-2021-17__0__243512288
en
Dr Johannes Kromdijk Department of Plant Sciences, University of Cambridge Delivery of carbon dioxide to photosynthesis inside plant leaves is a slow, diffusion-limited process in most plants. However, plants with C4 photosynthesis have the ability to increase carbon dioxide levels inside their leaves and thereby boost the efficiency of photosynthesis, somewhat like a turbo-charger in an internal combustion engine. Using phylogenetically controlled experiments, we are trying to find out in which ways the evolution of C4 photosynthesis has affected physiological responses which are qualitatively unaltered compared to the C3 ancestral state. In this talk I will focus specifically on the regulation of microscopic pores on the leaf surface called ‘stomata’, which control gaseous exchange, as well as the regulation of light harvesting in the chloroplast thylakoids, which is important to protect against high light damage.
systems_science
https://codeislife.blog/2019/07/23/stacking-it-up-what-are-web-application-stacks/
2023-01-29T12:30:12
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499713.50/warc/CC-MAIN-20230129112153-20230129142153-00034.warc.gz
0.930387
915
CC-MAIN-2023-06
webtext-fineweb__CC-MAIN-2023-06__0__116555872
en
I was recently asked by a non-technical colleague to explain the concept of a stack. In this post, I am going to try to explain in non-technical terms what a web technology stack is and the different types of stacks that are out there. I am going to explain technology stacks in the context of web applications only (think websites). The easiest way to conceptualize the inner workings of the many technologies and pieces of software it takes to run a web application is to think of each technology as a layer in a stack. Roughly speaking, the lower layers are responsible for lower-level computing (more on the responsibilities of each layer as we discuss each one). In order to understand a technology stack, we must first understand four basic layers related to web applications: the operating system layer, the web server layer, the data persistence layer and the front-end layer.

Operating System Layer
All my non-technical readers should already be familiar with what an operating system is, but for the sake of completeness let's define it: according to Wikipedia, fundamentally, all computers must use an operating system to manage all their resources (such as hardware and software components). Some popular operating systems include Windows, Linux, MacOS, iOS, etc. When someone refers to the "operating system" in a web stack, they usually mean a server-side operating system such as Linux or Windows (two very popular server-side operating systems). For my technical friends, please don't shoot! I get your objection even before you raise it, but this is for non-technical folks after all.

Web Server Layer
Next comes the web server layer. This layer is responsible for hosting the website content and providing an interface between the low-level networking hardware and software necessary to 'serve' a web application to any client requesting resources. This layer handles any HTTP requests coming to the server, and it sits on top of the server-side operating system. Popular web servers include Internet Information Services (IIS), Apache and Express.js.

Data Persistence Layer
Next is the data persistence layer. The data persistence layer is responsible for making sure that any changes you or any other user have made to and with the application are stored permanently and not lost. It is also responsible for retrieving the data when you log back in the next time so you can pick up where you left off, and for generating reports of data produced by the web application. The reason I was careful not to call this layer the "database" layer is that a database is not a necessary requirement for data persistence. It is actually possible to design a data persistence model that reads and writes data directly to a hard disk in an unstructured format, without any need for a database, and it would do the job of a data persistence layer perfectly well. The management of such a system would be a nightmare, though, which is why almost all web applications use databases for persisting their data. Databases are specialized pieces of software written specifically to ease the management of large volumes of data. Some databases include MySQL, SQL Server, MongoDB and Neo4j, among others.

Stacking Them Up
So what stacks are out there? There are four major stacks that are fairly popular today, two of which are proprietary and two of which are open source.
The Microsoft and Java stacks are proprietary stacks, while the LAMP and MEAN/MERN stacks are open source. You can also pick and choose technologies and create a hybrid stack to get the best of both worlds, depending on your resources and technical know-how. There you have it folks. Clear as mud? Do you use a hybrid stack or a single stack in your environment? Which stack do you prefer? My favorite so far has to be the MEAN stack. Let me know below.
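To make the layers concrete, here is a minimal sketch (my own illustration in Python, not a stack endorsed by this post) where Flask stands in for the web server layer and SQLite for the data persistence layer; the file name and port are arbitrary:

# pip install flask
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)  # web server layer: answers HTTP requests
DB = "notes.db"        # data persistence layer: survives restarts

def init_db():
    with sqlite3.connect(DB) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS notes (body TEXT)")
        conn.execute("INSERT INTO notes VALUES ('hello, stack!')")

@app.route("/notes")
def notes():
    with sqlite3.connect(DB) as conn:
        rows = conn.execute("SELECT body FROM notes").fetchall()
    return jsonify([row[0] for row in rows])  # the front-end layer would consume this JSON

if __name__ == "__main__":
    init_db()
    app.run(port=5000)  # the operating system layer schedules this process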
systems_science
https://raspberry-pi-industrial.info/
2023-09-30T10:41:06
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510671.0/warc/CC-MAIN-20230930082033-20230930112033-00153.warc.gz
0.897411
647
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__315154676
en
The life expectancy of the memory used is strongly dependent on the ambient temperature: at 43 degrees, for example, the memory loses 30% of its capacity after about 7 years. The X1 board has a 9-24V wide-range DC input with reverse polarity protection. Powerful, reliable, stable power supply: 5 Volt, 2.6 Amp – enough power for the Raspberry, your USB hardware and customer-specific adaptations. The integrated EMC protection circuits protect the Pi from voltage and current surges. A programmable 8-bit microcontroller (ATmega168, 8 MHz) adapts the inputs and outputs and provides accurate, reliable detection of digital and analog signals. The Raspberry Pi Pico microcontroller of the Andino X1 comes with an Arduino-compatible bootloader. Our combination of Raspberry Pi Pico and Raspberry Pi 4/CM4 on the Andino X1 is ideally suited for use in home automation and sensor technology, as well as in more demanding industrial automation applications. In addition, the strengths of both boards complement each other perfectly: while the single-board computer Raspberry Pi can perform complex tasks (e.g. hosting a database and web server) as a full-value computer, the microcontroller can take care of fast signal pre-processing. Our built-in microcontroller communicates with the Pi via UART. The X1 is programmable with the Arduino IDE via USB from a PC. The X1 board has two electrically isolated inputs (isolated up to 5kV) as well as two relay outputs for 42 volts and 1 amp. Its IO is controlled by a microcontroller. Further GPIO of the Raspberry Pi for industrial applications, as well as IO of the microcontroller, are led to an internal pin header, so it is possible to bring your own adaptations to the screw terminals. Via the SPI and I2C interfaces of the Raspberry Pi, further hardware extensions can be connected to the free screw terminals. Thus, stable, control-cabinet-compatible wiring is possible. The integrated, battery-buffered RTC provides the correct time even if no NTP (time) server is available. A high-precision DS3231 time chip from Dallas Semiconductors is used; thanks to the internal temperature compensation of its oscillator, the chip achieves a very high accuracy of ±2 ppm at 0 °C to +40 °C. Our Andino products for Raspberry Pi for industrial applications are designed, developed and manufactured in Germany. Our Andino X1 was tested for its electromagnetic compatibility (EMC) together with a Raspberry Pi in its DIN rail housing. The tests covered immunity to electrostatic discharge, high-frequency electromagnetic fields, fast transient electrical disturbances (burst), impulse voltages, conducted disturbances induced by high-frequency fields, and magnetic fields with energy-related frequencies. The Andino X1 not only passed these tests with flying colors, it also meets the more stringent limits. This underlines the suitability of our Raspberry Pi for industrial applications.
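The page shows no code, but as a minimal sketch of reading the microcontroller's UART messages from the Pi in Python with the pyserial package (the device path and baud rate are assumptions and must match your board's documentation and the microcontroller sketch):

# pip install pyserial
import serial

# Assumed settings: on many Pi setups the primary UART is /dev/serial0
port = serial.Serial("/dev/serial0", baudrate=38400, timeout=1.0)

try:
    while True:
        line = port.readline().decode("ascii", errors="replace").strip()
        if line:
            print("MCU says:", line)  # e.g. pre-processed sensor readings
except KeyboardInterrupt:
    port.close()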
systems_science
https://giridhark.org/big/data/hadoop/2020/05/25/gentle-intro-hadoop.html
2023-12-09T05:51:28
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100800.25/warc/CC-MAIN-20231209040008-20231209070008-00552.warc.gz
0.925237
1,327
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__316409153
en
This article aims to simplify and demystify Hadoop by explaining the overall purpose and working of its file system. We will first discuss why we care about Big Data and why we need a file system to store the large amounts of data that are generated. We will then discuss a simple analogy for Hadoop to understand the overall idea behind it, followed by an explanation of the individual components in HDFS.

Why do we care about Big Data?
Consider the following facts:
- Humans and human-created systems generate a lot of data.
- 95% of all new data was generated between 2010 and 2016.
As data storage technology improved, we started to store all kinds of data: logs, text, video, audio, images. To summarise, we characterize Big Data by the following features:
- Volume: the quantity of the generated data.
- Velocity: the speed at which data is generated and processed to meet demands.
- Variety: the type and nature of the data.
We also talk about veracity, indicating the quality of the data captured. Data is knowledge and knowledge is wealth. If machine learning models are a vehicle, then the fuel that drives the vehicle is data. Thus there is a need to build a cheap and efficient file system architecture that allows fast access and retrieval of data. Hadoop is one such architecture.

Bob and the magic godown: an analogy for Hadoop
Consider farmer Abe, who cultivates rice and has a storage space in his house for his yield. The storage space is enough for only 100 sacks, while in a good year he can end up with a yield of 300 to 500 sacks. He would then approach the magic godown, whose owner Bob is a friend, to store the excess yield. Bob maintains a book with the details of room capacity and a team of elves in charge of each of the rooms in the godown. He offers to store all of Abe's yield. The magic in the godown is that it can create 2 replicas of the stock that is brought to it; Bob knows that his team of elves can sometimes be inefficient and therefore makes multiple copies of the stock. Two months later Abe wants 20 sacks of rice from the godown to make rice noodles, and he goes to the godown. Bob, knowing that the godown is capable of magic, asks for the recipe for the rice noodles and gives it to the godown elves. The godown elves apply the recipe to the sacks of rice placed in different rooms individually and ask the elder elf to combine all of the rice noodles into a single container. The rice noodles are combined and shipped to Abe.

What is HDFS?
HDFS, the Hadoop Distributed File System, lets you store large amounts of data (read: petabytes, zettabytes) on a scalable and cost-efficient collection of nodes (a cluster of consumer computers, AKA commodity hardware). Hadoop is designed to prevent data loss while also providing fast access to data. Hadoop follows the write-once-read-many model, which simply means that data, once stored, is not altered but only read many times for performing tasks or executing various commands. Commands to be executed on the data are handled using the MapReduce processing model.
The Hadoop File System has 5 components or services under 2 main categories of functions:
- Master services
- Name node (Bob): handles data block management.
- Secondary Name node: works concurrently with the Name node.
- Job Tracker (Elder Elf): tracks the status of jobs.
- Slave services
- Data node: commodity hardware nodes which provide storage. Data nodes send heartbeats to the Name node indicating their functioning status. The interval for these heartbeats varies but is usually around 3 seconds.
- Task tracker: runs on the data nodes and executes the tasks assigned by the Job Tracker.

Based on the analogy, let's go through the sequence of steps for storing and retrieving data from the Hadoop File System. For instance, consider a file that includes the phone numbers of everyone in the world, entered alphabetically; the numbers for names starting with 'A' are stored on server 1, those for 'B' on server 2, etc. To ensure availability when any of the nodes fail, HDFS makes 2 replicas of the data and then has to find space in its storage capacity to store them (known as redundancy or fault tolerance). The cluster contains two different types of hardware systems: first, the commodity hardware (usually SATA disks), and second, more advanced, reliable and optimized storage hardware (usually SAS disks). HDFS runs as a layer on top of the commodity hardware to manage incoming data. This article explains the differences in the hardware well. HDFS uses the advanced hardware to store 'metadata', or data about data, describing where the incoming data is located. For instance, an incoming 100 TB of data could be distributed among 5 machines, and this information about the distribution would be stored on the advanced hardware. The cluster stores data in blocks, the smallest unit of data in HDFS. Each block has a size of 64 MB. HDFS tries to place each block (of the given data) on separate data nodes. The file creation process uses a staging mechanism where the data to be written to HDFS is temporarily stored locally until enough data accumulates for a block. Once that accumulation is complete, the Name node commits the data to be written to disk.

The MapReduce algorithm
In 2004, Google came up with the MapReduce algorithm to effectively process queries on large data repositories. It enables scalability across large Hadoop clusters. The MapReduce paradigm is a collection of two distinct tasks that programs perform.
Map: this job takes in a set of data and converts it into another set of intermediate data, stored as key-value pairs.
Reduce: the output of the Map task is combined to give a smaller set of tuples.

An example for MapReduce
Consider the task of finding the population of a state by the census bureau. Each volunteer is sent to a set location and is asked to collect the population information. The data they collect is stored as (location, population) tuples, with the location being the key. The volunteers then report to the capital city, where the bureau collects the information and adds it all up to give the population of the state.
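The census example maps directly onto code. Here is a minimal in-memory sketch of the Map, shuffle, and Reduce steps in Python (an illustration of the paradigm only; a real Hadoop job would use the Hadoop APIs and run distributed across the cluster):

from collections import defaultdict

# Raw records each volunteer collected: (location, population) pairs
records = [("springfield", 120), ("shelbyville", 80),
           ("springfield", 45), ("ogdenville", 60)]

# Map: turn each input record into intermediate key-value pairs
mapped = [(location, count) for location, count in records]

# Shuffle: group the intermediate pairs by key
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: combine each group into a single tuple, like the elder elf
reduced = {key: sum(values) for key, values in groups.items()}

print(reduced)  # {'springfield': 165, 'shelbyville': 80, 'ogdenville': 60}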
systems_science
https://www.spiezle.com/dangers-in-the-darkness/
2024-03-04T02:27:35
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476409.38/warc/CC-MAIN-20240304002142-20240304032142-00488.warc.gz
0.962051
671
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__65712853
en
Dangers in the Darkness Severe Weather Can Disrupt Your Operations. Will You Be Ready? 715,000 customers lost power this winter in the Philadelphia region during the latest in a string of severe weather events that have disrupted service. Prolonged power outages are more than an annoyance; they can be life threatening. For many building operators, especially of senior living and healthcare facilities, even short periods without power can seriously impact resident care and wellbeing. Power outages are costly and disruptive events. How can you minimize their impact on your facility? It's hard to predict what may happen tomorrow. In most parts of the country, buildings need both heat and power to operate. Electricity is normally purchased from utility companies and delivered via the power grid. When the grid goes down, it's a problem. Buildings also rely on fuel oil, propane, natural gas or electricity to produce heat to warm and cool the building as well as to provide domestic hot water. These requirements are constant and are the base energy load requirements of the building. When the power goes out, many buildings rely on a back-up diesel generator, which is noisy and expensive to maintain. Although necessary in an emergency, it is not an effective way to supply power to your facility and offers little means to supply warmth. In the middle of a winter like this, that is a problem as well. Building owners now have options to generate combined heat and power (CHP) using reliable cogeneration technology. Cogen systems are not only energy efficient and cost effective; they can provide uninterrupted power service during emergencies. Spiezle is currently working with clients to incorporate CHP systems in the design of new buildings, and studying the feasibility of retrofitting existing facilities with CHP systems using natural-gas-fired microturbines. These microturbines are powerful, quiet, reliable and operate 24/7, providing cost-effective electricity and heat energy. They integrate easily with existing building systems and are capable of generating electricity at a net cost of about 1/3 the price of purchasing electricity from the utility company, depending upon where you are. Many building owners see them as a good investment and a hedge against future power disruptions. These systems are not cheap, but many states and utility companies offer incentives for installing CHP systems that reduce their first cost. We are working with two clients who are installing microturbine CHP systems. One is a new hospitality project; the other is an existing apartment building. In both cases incentives and operating cost savings produce a project payback of less than 4 years. In both cases each building has a constant demand for electricity and heat, with utility bills in excess of $100,000 a year. The microturbine CHP systems will save almost $150,000 a year in regular energy costs, and supply power in an emergency for select lighting, hot water, water pumps, elevator motors, or critical computer systems. These building owners are thinking not only about the safety and wellbeing of their guests and residents, but are looking at their bottom line. What they see makes sense, especially over time. It's not just emergencies that are on their minds; it is the cost of business today, coupled with the uncertainties of tomorrow.
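As a back-of-the-envelope check on that payback figure: simple payback is net installed cost divided by annual savings, so a hypothetical net cost of $525,000 after incentives, divided by the stated savings of almost $150,000 a year, gives about 3.5 years, consistent with the under-four-year payback (the $525,000 is an illustrative assumption, not a project number).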
systems_science
https://www.markfive.com/reactive-chemicals/
2023-12-11T16:42:13
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679515260.97/warc/CC-MAIN-20231211143258-20231211173258-00548.warc.gz
0.968437
156
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__7982424
en
Many chemicals that are stable in their pure state become violently reactive when contaminated, heated, or combined with another chemical. Even when the reaction is intentional, as in most chemical manufacturing, there is the potential for a runaway reaction that results in an explosion and fire. It is, therefore, important that safeguards be in place to prevent uncontrolled reactions from occurring; failure to have these safeguards has resulted in numerous incidents. I am a founding member of the AIChE/CCPS Chemical Reactivity Roundtable, which published a system for identifying potential reactive hazards and programs that can be used to control those hazards. I have also prepared a paper on the details of the programs needed to control these hazards, which is available for download here: Explosion and Fire Resulting from an Unexpected Reaction.
systems_science
http://www.compulab.co.il/utilite-computer/wiki/index.php?title=Utilite_U-Boot_User_How-To%27s
2019-05-20T10:46:27
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255943.0/warc/CC-MAIN-20190520101929-20190520123929-00024.warc.gz
0.77163
1,098
CC-MAIN-2019-22
webtext-fineweb__CC-MAIN-2019-22__0__199721946
en
Utilite U-Boot User How-To's
From Utilite Wiki

Get help with U-Boot commands
Each of the following commands prints the full list of commands supported by U-Boot: help or ?
To get help for a specific command, run: help <command>

Working with environment variables
The U-Boot environment is a collection of settings which are stored in environment variables. You can view, create, delete, and modify environment variables using U-Boot commands. All changes you make to the environment will be discarded after reset, unless the saveenv command is used to save the changes.
- To see all the environment variables and their values, run printenv
- To see the value of only one variable, run printenv <variable_name>
When typing the variable name, you can press Tab to auto-complete the variable name, or see a list of completion candidates if there's more than one possible completion.
- To set a value to an environment variable, run setenv <variable_name> '<new_value>'
The above command will create the environment variable if it does not exist.
- To delete an environment variable, run setenv <variable_name> with no value

Working with storage
U-Boot supports various types of storage, including USB, MMC, SATA, and NAND. With the exception of NAND, all storage types should be detected manually before they can be used. NAND storage is detected automatically by U-Boot.
- The following command detects the SATA storage: sata init
- The following command scans for an MMC/SD card: mmc rescan
- The following command detects all USB devices connected to the system: usb start
Once the storage has been detected, it is possible to read/write into the storage, as well as get information on the storage device. The ls and load commands can be used to display and load the contents of storage devices regardless of the device type and the filesystem on it. To use the ls and load commands, first determine the number of the storage device:
- On NAND, the device number is always 0
- On SATA, the device number is always 0
- On MMC, the device number is always 2
- On USB, the device number can be between 0 and 4. To find the number of the USB storage you want to use, run usb storage
Once you've determined the interface and device number for the storage, you are ready to use the ls and load commands. For example:
- To display the contents of the first USB storage device, run ls usb 0
- To load a file from the first USB storage to the memory address 0x10800000, run load usb 0 10800000 <filename>

To setup networking, run dhcp
The above command will contact the local DHCP server and obtain network parameters such as IP address, netmask, TFTP server IP, and more. All of the network parameters can also be set up manually by editing the appropriate environment variables. See Working with environment variables for instructions on how to modify U-Boot environment variables. List of networking environment variables:
- ipaddr: the IP address assigned to the module
- serverip: the default IP for the TFTP and NFS servers when using the 'tftp' or 'nfs' command
After the network is setup, you can use U-Boot network commands.
- To ping an IP address, run ping <IP address>
- To load a file to memory using TFTP, run tftp <memory address> <filename>
- To load a file to memory using NFS, run nfs <memory address> <filename>
For example, to load a script to memory using TFTP, run tftp 10800000 myscript.img
Then, to run the script, execute source 10800000

Enabling HDMI console
- The following commands enable HDMI console:
setenv panel HDMI
setenv stdout serial,vga
setenv stderr serial,vga
saveenv

Enabling USB keyboard
- Plug a USB keyboard into a USB port
- Use the following command to make U-Boot detect it: usb start
- To tell U-Boot to accept console input from a USB keyboard, run the following commands:
setenv stdin serial,usbkbd
saveenv
- To make U-Boot detect the USB keyboard on boot, run the following commands:
setenv preboot "usb start"
saveenv

The steps for booting Linux are:
- Setup the kernel arguments by modifying the bootargs environment variable. For example:
setenv bootargs 'console=ttymxc3,115200 root=/dev/sda2 rootfstype=ext4 rw rootwait'
See Working with environment variables for instructions on how to modify U-Boot environment variables.
- Load the kernel from where it is stored. For example, for a kernel stored on SSD:
sata init
load sata 0 10800000 uImage-cm-fx6
- Execute the boot command: bootm 10800000

OS boot countdown
- After booting, U-Boot begins a countdown before initiating OS boot. OS boot can be interrupted before the countdown reaches 0 by pressing any key.
- If the countdown time is set to 0 seconds, it is still possible to stop OS boot by holding Ctrl+C while U-Boot boots.
- To change the countdown time, run the following commands:
setenv bootdelay <seconds>
saveenv
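Putting the storage and boot pieces together, the same kernel image could for instance be booted from a USB stick instead of the SSD (the kernel filename is the same hypothetical image as above, and bootargs must still point at your actual root filesystem):

usb start
load usb 0 10800000 uImage-cm-fx6
bootm 10800000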
systems_science
http://eizyinfotech.com/cms-content-management-system/
2018-05-23T07:03:17
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865456.57/warc/CC-MAIN-20180523063435-20180523083435-00274.warc.gz
0.863388
784
CC-MAIN-2018-22
webtext-fineweb__CC-MAIN-2018-22__0__29692492
en
Content management systems make your content smarter and more powerful. Because content is stored only one time no matter how many times it is used, the system can track everything that happens to it. And editors only have to handle the content one time while the changes are made globally within and across all documents.

Benefits of CMS (Content Management System)
- Quick and easy page management – Any approved user can quickly and easily publish online without complicated software or programming.
- Consistent brand and navigation – Design templates provide a consistent brand and standard navigation across all KU websites.
- Workflow management – An integrated workflow process facilitates better content management.
- Flexibility for developers – Because the CMS enables non-technical users to easily publish content, this frees up technical developers to focus on functionality and enhanced features.
- Design is separate from content – You can manipulate content without fear of accidentally changing the design.
- Database-driven – You only need to change data once for it to be updated throughout your site.
- Shared resources – Website managers will have access to shared resources, such as modules, images, audio and video files, etc.
- Approval systems – You can give different levels of access to different users, and the CMS has mechanisms to ensure content is approved before going live.
- Mobile ready – The CMS automatically scales your site to fit tablets, mobile devices and smaller browser windows.
- Archive capabilities – You can track who has made changes to your page and archive previous versions of your page.
- Remote access – You can access and update your site from anywhere with an Internet connection.
- Security – Security is automatic.
- Search engine-friendly – The CMS helps to optimize your website so that search engine users can easily find your information.
- Updates – The CMS allows alerts to be set to notify the editor when content needs to be reviewed, updated or removed. This will help prevent old data from being presented and misinforming users.

|Benefit||Before a CMS||After a CMS|
|Centralized and shared content||Content is scattered throughout the organization, resulting in contributors creating similar or duplicate content in many different formats.||Content is consolidated into one powerful repository, facilitating content sharing among co-workers.|
|Accurate content||Numerous versions of documentation reside in separate files. Each file must be updated individually through a manual process, leading to errors and inaccuracies.||Because each piece of content is only stored one time in a CMS, it can be reused throughout one or multiple documents. The CMS tracks every instance of content reuse and flags all instances when a change is made to ensure all appropriate instances are updated and consistent.|
|Secure content||Anyone can access the content in documents, posing a security threat.||User privileges are assigned, so only authorized people can access content with unique IDs.|
|Shorter editorial cycles||The editorial and review process is inefficient. Responsibilities and deadlines are not well-defined and monitored.||Users are alerted to their pending tasks and due dates. Additionally, daily editorial tasks can be automated to save time.|
|Quick creation of new publications||Content is rewritten for new publications because previously written content cannot be found.||Content can be searched, retrieved, and reused to create new products within minutes.|
|Timely delivery of publications||Separate files exist for print, Web, and PDF versions of the content, increasing the time it takes to update and publish the content.||Single-source content is updated once and repurposed for multiple media channels as often as daily or weekly.|
|Lower translation costs||Documentation published in many languages is confusing and costly to update and translate.||A CMS with full Unicode support allows small chunks of updated content to be translated instead of entire documents, saving thousands of dollars.|
systems_science
https://hipyqav.yunusemremert.com/uml-and-its-uses-24001ax.html
2021-01-26T23:53:25
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704804187.81/warc/CC-MAIN-20210126233034-20210127023034-00615.warc.gz
0.934624
1,447
CC-MAIN-2021-04
webtext-fineweb__CC-MAIN-2021-04__0__221617227
en
Large enterprise applications - the ones that execute core business applications, and keep a company going - must be more than just a bunch of code modules. They must be structured in a way that enables scalability, security, and robust execution under stressful conditions, and their structure - frequently referred to as their architecture - must be defined clearly enough that maintenance programmers can quickly find their way around the code. That is, these programs must be designed to work perfectly in many areas, and business functionality is not the only one, although it certainly is the essential core. Of course a well-designed architecture benefits any program, and not just the largest ones as we've singled out here. We mentioned large applications first because structure is a way of dealing with complexity, so the benefits of structure and of modeling and design, as we'll demonstrate, compound as application size grows large. Another benefit of structure is that it enables code reuse: design time is the easiest time to structure an application as a collection of self-contained modules or components. Eventually, enterprises build up a library of models of components, each one representing an implementation stored in a library of code modules. When another application needs the same functionality, the designer can quickly import its module from the library. At coding time, the developer can just as quickly import the code module into the application.

Modeling is the designing of software applications before coding. Modeling is an essential part of large software projects, and helpful to medium and even small projects as well. A model plays the analogous role in software development that blueprints and other plans (site maps, elevations, physical models) play in the building of a skyscraper. Using a model, those responsible for a software development project's success can assure themselves that business functionality is complete and correct, end-user needs are met, and program design supports requirements for scalability, robustness, security, extendibility, and other characteristics, before implementation in code renders changes difficult and expensive to make. Surveys show that large software projects have a huge probability of failure - in fact, it's more likely that a large software application will fail to meet all of its requirements on time and on budget than that it will succeed. If you're running one of these projects, you need to do all you can to increase the odds for success, and modeling is the only way to visualize your design and check it against requirements before your crew starts to code.

Raising the Level of Abstraction
Models help us by letting us work at a higher level of abstraction. A model may do this by hiding or masking details, bringing out the big picture, or by focusing on different aspects of the prototype. Alternatively, you can focus on different aspects of the application, such as the business process that it automates, or a business rules view. The ability to nest model elements was added in UML 2.0. You can use UML for business modeling and modeling of other non-software systems too. Using any one of the large number of UML-based tools on the market, you can analyze your future application's requirements and design a solution that meets them, representing the results using UML 2.0. You can model just about any type of application, running on any type and combination of hardware, operating system, programming language, and network, in UML. Its flexibility lets you model distributed applications that use just about any middleware on the market.

You can do other useful things with UML too. For example, some tools analyze existing source code (or, some claim, object code!) and reverse-engineer it into UML models. Some tools on the market execute UML models, typically in one of two ways: some tools execute your model interpretively in a way that lets you confirm that it really does what you want, but without the scalability and speed that you'll need in your deployed application. Other tools (typically designed to work only within a restricted application domain such as telecommunications or finance) generate program language code from UML, producing most of a bug-free, deployable application that runs quickly if the code generator incorporates best-practice scalable patterns for, e.g., transaction processing. Our final entry in this category: today, faced with an embarrassingly rich array of middleware platforms, the developer has three different middleware problems: first, selecting one; second, getting it to work with the other platforms already deployed not only in his own shop, but also those of his customers and suppliers; and third, interfacing to (or, worse yet, migrating to) a new "Next Best Thing" when a new platform comes along and catches the fancy of the analysts and, necessarily, CIOs everywhere. OMG's Model Driven Architecture (MDA) addresses exactly this problem. In fact, a UML model can be either platform-independent or platform-specific, as we choose, and the MDA development process uses both of these forms: every MDA standard or application is based, normatively, on a Platform-Independent Model (PIM), which represents its business functionality and behavior very precisely but does not include technical aspects. MDA tools then convert the PIM to one or more Platform-Specific Models (PSMs). This conversion step is highly automated, but not magic: before the tool produces a PSM, the developer must annotate the base PIM to produce a more specific (but still platform-independent) PIM that includes details of desired semantics, and guides choices that the tool will have to make. Because of the similarities among middleware platforms of a given genre - component-based, or messaging-based, for example - this guidance can be included in a PIM without rendering it platform-specific. Still, developers will have to fine-tune the produced PSMs to some extent, more in the early days of MDA but less and less as tools and algorithms advance.

The Unified Modeling Language (UML) was created to forge a common, semantically and syntactically rich visual modeling language for the architecture, design, and implementation of complex software systems, both structurally and behaviorally. The UML is a standard language for specifying, visualizing, constructing, and documenting the artifacts of different simple or complex systems. UML is popular for its diagrammatic notations: it is used for visualizing, specifying, constructing and documenting the components of software and non-software systems, and visualization is the part that most needs to be understood. How do you depict a "class uses class" relationship via UML? You can use the "uses" (dependency) relationship, drawn as a dotted or dashed arrow from the using class to the used class.
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among objects.
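To make the "class uses class" question above concrete, here is a minimal Python sketch (with hypothetical class names) contrasting a UML dependency, where one class merely uses another as a parameter or local variable, with an association, where a class holds a lasting reference. In a class diagram the first is drawn as a dashed arrow, the second as a solid line.

```python
class Printer:
    """A service class that other classes may use or hold."""
    def render(self, text: str) -> None:
        print(text)

class ReportGenerator:
    """Dependency ("uses"): Printer appears only as a method parameter,
    so a UML class diagram draws a dashed arrow from ReportGenerator to Printer."""
    def publish(self, printer: Printer) -> None:
        printer.render("quarterly report")

class Workstation:
    """Association: Workstation keeps a lasting reference to a Printer
    attribute, so UML draws a solid line between the two classes."""
    def __init__(self, printer: Printer) -> None:
        self.printer = printer

    def print_page(self) -> None:
        self.printer.render("page")

if __name__ == "__main__":
    shared = Printer()
    ReportGenerator().publish(shared)  # transient use: dependency
    Workstation(shared).print_page()   # structural link: association
```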
systems_science
http://snt.rs/is/ERPimplemantation.en.php
2019-08-18T08:53:40
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313747.38/warc/CC-MAIN-20190818083417-20190818105417-00261.warc.gz
0.903542
155
CC-MAIN-2019-35
webtext-fineweb__CC-MAIN-2019-35__0__14842904
en
Implementation of ERP systems
Professional ERP implementation - Project Management - Training & Coaching - Individual programming
An excellently schooled team of specialists, whose experience stems from a wide variety of countries and sectors, undertakes the implementation of SAP or Infor ERP systems. This range of services starts with the analysis of needs and the implementation and management of such projects, and extends to the establishment and compilation of specifications and requirements sheets. The range also includes the requisite training. Services from other segments complement this range of offerings. These include the implementation of individual-use programming, the secure and customized depiction of mobile processes on mobile devices, and the outsourced operation of the hardware and software used to run the respective system.
systems_science
https://pegacert.com/vendor/cisco/200-125/simulationyou-work-network-engineer-sascom-network-ltd-2ca7a/
2022-10-04T23:02:56
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00708.warc.gz
0.862802
379
CC-MAIN-2022-40
webtext-fineweb__CC-MAIN-2022-40__0__229866631
en
You work as a network engineer for the SASCOM Network Ltd company. On router HQ, a provider link has been enabled, and you must configure an IPv6 default route on HQ and make sure that this route is advertised in the IPv6 OSPF process. You must also troubleshoot another issue: router HQ is not forming an IPv6 OSPF neighbor relationship with router BR. The two routers HQ and BR are connected via serial links. Router HQ has interface Ethernet0/1 connected to the provider cloud and interface Ethernet0/0 connected to RA1. Router BR has interface Ethernet0/0 connected to another router, RA2.
IPv6 Routing Details
All routers are running IPv6 OSPF routing with process ID number 100. Refer to the topology diagram for information about the OSPF areas. The Loopback 0 IPv4 address is the OSPF router ID on each router.
Configure an IPv6 default route on router HQ with default gateway 2001:DB8:B:B1B2::1. Verify by pinging the provider test IPv6 address 2001:DB8:0:1111::1 after configuring the default route on HQ. Make sure that the default route is advertised in IPv6 OSPF on router HQ. This default route should be advertised only when HQ has a default route in its routing table. Router HQ is not forming an IPv6 OSPF neighbor with BR; you must troubleshoot and resolve this issue. To gain the maximum number of points, you must complete the necessary configurations and fix the IPv6 OSPF neighbor issue with router BR. IPv6 OSPFv3 must be configured without using address families. Do not change the IPv6 OSPF process ID.
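As a hedged sketch (not part of the original exam task), the required configuration might be pushed to router HQ with Python and the Netmiko library; the host address and credentials below are placeholders, and the IOS commands follow the task description: a static IPv6 default route, advertised conditionally via default-information originate under OSPF process 100.

```python
from netmiko import ConnectHandler  # pip install netmiko

# Placeholder connection details -- substitute real values.
hq = ConnectHandler(device_type="cisco_ios", host="192.0.2.1",
                    username="admin", password="secret")

config = [
    "ipv6 route ::/0 2001:DB8:B:B1B2::1",  # static IPv6 default route
    "ipv6 router ospf 100",
    # Advertise the default route only while one exists in the table
    # (no "always" keyword, per the task requirement).
    "default-information originate",
]
print(hq.send_config_set(config))

# Verify reachability of the provider test address.
print(hq.send_command("ping 2001:DB8:0:1111::1"))

# Starting points for the neighbor problem: compare area numbers,
# hello/dead timers, and interface state on the HQ-BR serial link.
print(hq.send_command("show ipv6 ospf neighbor"))
print(hq.send_command("show ipv6 ospf interface brief"))
hq.disconnect()
```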
systems_science
https://www.itncart.com/shop/amd-ryzen-7-2700x-the-intelligent-processor/
2021-10-22T16:14:47
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585516.51/warc/CC-MAIN-20211022145907-20211022175907-00702.warc.gz
0.907544
906
CC-MAIN-2021-43
webtext-fineweb__CC-MAIN-2021-43__0__168140422
en
Pros: improved overclocking; unlocked chips; better memory and cache performance; includes a CPU cooler with RGB lighting.
Cons: lacks integrated graphics; slower clock speeds compared to Intel; draws a lot of energy at full load, so temperatures become high.
Ryzen 7 2700X belongs to the Ryzen 2nd generation processor series and is the most significant processor in this category. The 2nd generation processors are bigger and better; they promise some great and exciting features, giving strong competition to Intel's 8th generation processors. Ryzen 7 2700X introduces the following features:
- 12nm architecture, for the first time in the consumer CPU market
- Higher clock speeds, with a base clock of 3.7 GHz and turbo of 4.3 GHz
- Reasonable pricing compared to the Intel i7 8700K: Ryzen 7 2700X costs $329 while the i7 8700K costs $349
- Package includes the Wraith Prism, a premium RGB CPU cooler
- Simultaneous multithreading and better overclocking
- Unlocked multiplier for easy overclocking
- Works with existing X370 motherboards, so there is no need to purchase a new motherboard
AMD Ryzen 7 2700X Key Features
The AMD Ryzen 7 2700X brings new architectural improvements over its predecessors. The major features are mentioned below:
Precision Boost 2
The older Precision Boost technology determines how fast each core of the processor chip should run based on the workload. When a given core is assigned a task, the processor bumps the base speed toward the maximum turbo speed, i.e. getting maximum speed from the processor when required. This older technology had a limitation: if more than two cores were demanding processor time, the chip fell back to a much lower all-core boost speed. With the arrival of Precision Boost 2, the processor tries to constantly run every core as fast as possible whenever required; it will only reduce clock speeds if temperatures become too high or for some other specific reason. So for multi-threaded tasks like video encoding, better performance will be observed.
XFR2
The older version, XFR, could boost the clock speed of two cores in increments of 25 MHz, up to a maximum of 100 MHz above the rated boost clock speed (only if conditions allowed). With the new version, XFR2, the same concept can now be applied to all cores.
Introduction of 12 nm architecture
The previous generation of Ryzen processors was built on a 14 nm process, but the new generation uses a 12 nm process, introduced for the first time in the consumer CPU market. The smaller transistors lower power usage and allow more processor chips to be made from each silicon wafer, hence reducing manufacturing cost per chip.
Wraith Prism Cooler
The Ryzen 7 2700X package includes a Wraith Prism cooler, which was missing in previous AMD generations, where the user had to buy their own cooler. The Wraith Prism cooler has a hefty copper base and uses four heat pipes to draw heat into a thin metal radiator, while switching between high and low fan speed modes.
With the launch of the Ryzen 2nd series motherboards, AMD has specified that all the new processors will work with existing X370 motherboards. Ryzen 7 2700X will continue to work with existing X370 motherboards, but clock speeds may be slightly reduced to match their power delivery. To keep all parameters at par, AMD has introduced the new X470 motherboard, which brings faster memory support at speeds up to 2933 MHz.
The major advantage of the X470 motherboard is its new StoreMI technology, which allows the user to combine an SSD and a hard drive into one virtual drive, making it easy to keep track of all your files. We recommend the following X470 motherboards for Ryzen 7 2700X based on their features and price (as on Amazon.com):
|Use case|Motherboard|Price|
|Best for gaming|Gigabyte Aorus X470 Ultra Gaming|$131.58|
|For high performance computing|ASRock AMD Chipset X470|$219.99|
|Built-in Ethernet and WiFi|Asus X470 ROG Crosshair VII Hero AM4|$276.91|
Price of Ryzen 7 2700X
The price of the AMD Ryzen 7 2700X varies on Amazon.com from $319.99 to $432.40. In India, as on Amazon.in, it varies from Rs. 30,890 to Rs. 34,499.
systems_science
https://www.pctech.com/server-products.html
2023-09-23T08:43:42
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.35/warc/CC-MAIN-20230923062631-20230923092631-00515.warc.gz
0.915129
233
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__101972682
en
We carry the complete line of server products and accessories from Dell, HP and Lenovo. Hardware sales is only one aspect of owning a server; PCTECH also provides complete network consulting services, procurement and deployment services. We also offer helpdesk and maintenance agreements to ensure the entire network infrastructure is running smoothly. If needed, we can work with your existing IT department to provide another option for accessing systems, parts, and support in a timely manner.
- Authorized Dell Partner with access to all products
- We source server products from other vendors, as we have relationships with all major hardware vendors
- Need server replacement parts such as motherboards, RAID hard drives or ECC RAM? Give us a call
- Would you like to extend your server warranty or purchase additional licenses? Call our sales department.
- We offer full design and installation services for rack cabinets and components from Dell, APC, etc.
- Our list doesn't stop here. We provide UPS, firewall, spam filtering systems, backup devices, and more.
Next time you require a server, let PCTECH quote you a solution. Contact PCTECH
systems_science
http://rcopticalsystems.com/accessories/tcc.html
2018-01-22T22:09:14
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891543.65/warc/CC-MAIN-20180122213051-20180122233051-00050.warc.gz
0.880666
2,199
CC-MAIN-2018-05
webtext-fineweb__CC-MAIN-2018-05__0__109425504
en
Telescope Command Center - TCC-II... Telescope Command Center... Made exclusively for RCOS by Telescope Control Systems, LLC. (Discontinued as of 2010.)
The TCC-II was released in the fall of 2007 and is the successor to the original TCC-I, which was developed in 2001. In addition to the features listed below for the original TCC-I, the TCC-II has:
- Four additional stepper motor drivers.
- Increased torque on the PIR stepper drive, enough to rotate even the new heavy off-axis filter wheels.
The Telescope Command Center is a combination of hardware and software designed to control the peripheral functions of the RC Optical Systems line of telescopes. The TCC hardware consists of a processing unit attached to the telescope and a hand-held display unit used for user control. The TCC can also be simultaneously controlled via computer through its serial port. The TCC-II provides control for the following functions of RC Optical Systems telescopes:
- Precision focusing of the secondary mirror via a servo-controlled linear actuator designed by RCOS in 1999. Resolution is 1/40,000 of an inch.
- Instrument rotation by use of an RCOS Precision Instrument Rotator (PIR).
- Temperature monitoring of the primary and secondary mirrors and the ambient air moving through the telescope, with manual or automatic control of the cooling fans and/or dew heater(s).
- Precise control of the secondary mirror dew heater, manually or automatically.
- Four zones of variable-power 60 watt DC auxiliary outputs, which can be used to control other variable-power 12VDC devices.
- A high-torque stepper motor driver specific to the RCOS PIR.
- Four additional stepper motor (motion) outputs.
Almost all of the Telescope Command Center's capabilities are available via the scripting interface. This interface is ASCOM compliant, and uses platform-independent COM (ActiveX) interfaces for compatibility with the built-in Windows scripting host and many programming languages. The TCC seamlessly integrates with ASCOM-compliant programs such as MaxIm DL CCD and FocusMax. The hand controller can be used to duplicate all functions performed by a PC. The TCC software allows for computer control of the TCC hardware. The TCC can simultaneously be controlled via the hardware interface and any number of ASCOM-compliant computer links. The status of all TCC functions remains synchronized across all connected user interfaces. Conversely, the TCC hardware is fully operational via the TCC software or ASCOM link, without the use of the hand controller. The TCC software extends the functionality of the TCC by adding:
- Adjustment of servo parameters for the Secondary Dew Heater and Fan control.
- Fine tuning of the Focus servo filter parameters.
- Logging of temperatures, Secondary Dew Heater and Fan powers, allowing for fine tuning of these automatic control functions.
The TCC software is fully ASCOM compliant and registers itself as an ASCOM-compliant Focuser during installation, and is thus visible to all ASCOM-compliant software packages such as MaxIm DL CCD. There are two program window size options available. The large window option allows for full control of all the TCC options. There is also a miniature stay-on-top window which allows for control of all the TCC core functions while using a small amount of computer screen real estate. The miniature window option is particularly useful in conjunction with a secondary ASCOM-compliant program such as MaxIm DL CCD or FocusMax.
Closed Loop Servo DC Secondary Focuser...
Your RC Optical Systems telescope contains a precision secondary mirror actuator with an integrated servo motor and position encoder. This system allows for a position resolution of 1/40,000 inch, roughly 40 times finer than the focus tolerance of your optical system. In order to fully utilize the TCC focus capabilities, it is helpful to understand the workings of the focus servo motor. When a command is issued to change the focus position, the motor is moved in the desired direction. The TCC reads the position of the secondary mirror 6000 times every second and determines the magnitude and sign of the difference between the desired position and the actual position. This difference is referred to as the Error Term and is used by the servo algorithm to determine how much power to supply to the motor. The power output rises in direct proportion to the Error Term (the Proportional Term) and is damped in response to the rate of change of the Error Term (the Derivative Term). In addition, any residual Error Term accumulates over time (the Integral Term) and directly contributes to the power output. The Proportional Term contributes to the reactivity of the system, the Derivative Term adds stability to the system, and the Integral Term increases overall position accuracy. The various servo algorithm terms have been optimized for a typical RC Optical Systems secondary focuser. However, because every system is unique, it may be possible for the user to further tune the servo system. The TCC main application Focus Tab displays both the Target position and the Actual position of the focuser. Knowing both positions is helpful for determining when the focus has settled and whether there is any error in positioning. The TCC servo parameters are set to provide a balance of stability and position accuracy. It is not unusual for the system to settle with a difference of up to 2 counts between the actual position and the desired position. This residual error is immaterial because it is well within the focus tolerance of the optical system. When the TCC is powered up, the focuser position readings are set to zero. Movement of the focuser results in a position reading that is relative to this starting position. To obtain position readings that indicate the absolute position of the mirror, the TCC must first be commanded to move to the Home position. The Home position is a precision mechanical stop within the focus actuator and allows for reliable repeatability of absolute positioning. Once the TCC detects the Home position, the focuser is moved 1/2,000 inch to relieve any mechanical tension. Thus the Home position corresponds to a position reading of 20 counts.
Auto Focus Routine...
The TCC works very well with the "auto focus" routines from FocusMax and MaxIm DL.
Use the Temperature tab to control the many temperature-related functions of the TCC. These include telescope cooling via fan control, secondary mirror dew heating, and auxiliary variable power outputs which can be used to control dew heater elements. The TCC monitors three pertinent temperatures: ambient, primary mirror, and secondary mirror, which are displayed in degrees Fahrenheit. These temperatures can be used to guide telescope cooling and secondary mirror heating, and are used for automatic control of these functions. The two modes of cooling fan control are manual mode and automatic mode. In manual mode, the fan speed (power) is adjusted by pressing the Power (%) up/down arrows.
In automatic mode, the fan speed is automatically adjusted in response to the relative Primary Mirror and Ambient temperatures according to the equation:
Power (%) = ((Primary T - (Ambient T + Set Point)) / Servo Gain) x 100%
where Set Point defines an offset to Ambient T and Servo Gain defines a temperature range over which the Power will vary from 0-100%. For example: with an Ambient T of 60 F, a Set Point of 1 F, and a Servo Gain of 2 F, the Power will equal 0% when the Primary T is less than 61 F, 50% when the Primary T equals 62 F, and 100% when the Primary T is greater than 63 F. Note: to ensure that cooling fans operate over their full speed range, see "adjust fan settings."
The two modes of secondary mirror heater control are manual mode and automatic mode. In manual mode, the heater power is adjusted by pressing the Power (%) up/down arrows. In automatic mode, the heater power is automatically adjusted in response to the relative Secondary Mirror and Ambient temperatures according to the equation:
Power (%) = (((Ambient T + Set Point) - Secondary T) / Servo Gain) x 100%
where Set Point defines an offset to Ambient T and Servo Gain defines a temperature range over which the Power will vary from 0-100%. For example: with an Ambient T of 45 F, a Set Point of 0.5 F, and a Servo Gain of 3 F, the Power will equal 0% when the Secondary T is greater than 45.5 F, 50% when the Secondary T equals 44 F, and 100% when the Secondary T is less than 42.5 F.
The two auxiliary power outputs are adjusted through their respective slider bars. Each output is capable of supplying 60 watts of power and can be used to control any device which requires variable (pulse-width-modulated) DC power, such as resistive heating elements like those supplied by Kendrick.
Precision Instrument Rotator...
Shown above is the RCOS 82mm Precision Instrument Rotator (PIR-82mm). The RCOS instrument rotator is controlled through the Instrument Rotator tab. The instrument rotator drive consists of a precision worm drive powered by a high-quality stepper motor, which provides a resolution of 1/200 degree of rotation. To command the instrument rotator to move, set a rotation step size by clicking an available option, or by choosing Other and typing a step size. Press the Up arrow to rotate clockwise and the Down arrow to rotate counter-clockwise. Keeping the buttons depressed will cause the movement command to be continually repeated. The instantaneous rotator position is displayed and updated during rotation. Pressing the Home button will cause the instrument rotator to rotate counter-clockwise to one of its two home positions. The rotator slews at a rate of approximately 3.25 degrees/sec, so it takes about 110 seconds to complete a full rotation. To reduce the amount of time it takes to slew home, the instrument rotator contains two position sensors placed 180 degrees apart. Therefore, the position reading after a home maneuver may read either 0 or 180 degrees, depending on which home position is detected first. Pressing the Stop button will halt a rotator slew at its current position. And since this is now a PC-based control system, we expect new and improved features will be available as software downloads. Software and drivers can be user-installed directly to the control board.
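The two automatic-control laws above are simple proportional equations; here is a small illustrative Python version (not the TCC firmware), with the output clamped to the pump's 0-100% power range and the manual's two worked examples used as checks:

```python
def fan_power(primary_t, ambient_t, set_point, servo_gain):
    """Cooling-fan power (%) from the manual's equation:
    Power = (Primary - (Ambient + SetPoint)) / ServoGain * 100."""
    pct = (primary_t - (ambient_t + set_point)) / servo_gain * 100.0
    return max(0.0, min(100.0, pct))  # clamp to the 0-100% range

def heater_power(secondary_t, ambient_t, set_point, servo_gain):
    """Secondary dew-heater power (%); the sign is inverted because
    heating is needed when the mirror is colder than ambient."""
    pct = ((ambient_t + set_point) - secondary_t) / servo_gain * 100.0
    return max(0.0, min(100.0, pct))

# The worked examples from the text:
assert fan_power(62, 60, 1, 2) == 50.0       # 50% at Primary T = 62 F
assert heater_power(44, 45, 0.5, 3) == 50.0  # 50% at Secondary T = 44 F
```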
systems_science
https://graphs.unepgrid.ch/graph_ocean_temperature_resp.php
2023-12-08T06:11:13
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100724.48/warc/CC-MAIN-20231208045320-20231208075320-00314.warc.gz
0.778366
181
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__283091043
en
Cheng et al., 2017: Ocean 0-2000 m averaged temperature change since 1940, with the 95% confidence interval shown in shading (figure caption; the plot itself is not reproduced here). Data from Cheng et al. 2017.
Cheng et al., 2017, 2019: Human-emitted greenhouse gases (GHGs) have resulted in a long-term and unequivocal warming of the planet (IPCC, 2019). More than 90% of the excess heat is stored within the world's oceans, where it accumulates and causes increases in ocean temperature (Rhein et al., 2013; Abram et al., 2019). Because the oceans are the main repository of the Earth's energy imbalance, measuring ocean heat content (OHC) is one of the best ways to quantify the rate of global warming (Trenberth et al., 2016; Von Schuckmann et al., 2016; Cheng et al., 2018).
systems_science
http://maxreferal.ru/referat/dlya-studenta/valin-tekst-do-25-aprelia/
2017-01-18T06:09:13
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00502-ip-10-171-10-70.ec2.internal.warc.gz
0.939919
952
CC-MAIN-2017-04
webtext-fineweb__CC-MAIN-2017-04__0__18485556
en
GPS for Precise Time and Time Interval Measurement
The Global Positioning System (GPS) has quickly evolved into the primary system for the distribution of Precise Time and Time Interval (PTTI). This is true not only within the Department of Defense (DOD) but also within the civilian community, both national and international. The users of PTTI are those who maintain and distribute time (epoch) to better than one millisecond (1 ms) precision and/or accuracy, and time interval (frequency) to better than one part in ten to the ninth (1x10^-9). The GPS is very effective not only in meeting these modest requirements of the PTTI community but also in meeting more stringent ones, such as synchronizing clocks to tens of nanoseconds over large distances. It is not surprising that this is the case. As with all navigation systems, the heart of the GPS is a clock. In the GPS, it controls the transmission of the navigation signals from each satellite and is an integral part of the ground monitor stations. This relationship between clocks and navigation is not unique. It goes back to the eighteenth century, when John Harrison (1693-1776) developed his famous clock. Harrison's clock solved the longitude problem for the Royal Navy by allowing a ship to carry Greenwich time with it to sea. The navigator then determined his own local time. The difference between the navigator's local time and the Greenwich time, which he was carrying with him, was his longitude difference from Greenwich. The GPS NAVSTAR satellites are similar to the Royal Navy's H.M.S. Deptford. They carry a standard reference time on board. The navigator then uses the difference between his local time and the reference time on board the satellite to help him determine his position. The importance of the GPS to the PTTI community can be neither understated nor underestimated. The GPS is and will be the primary means by which time, that is, Universal Coordinated Time, U.S. Naval Observatory [UTC(USNO)], the time scale maintained at the U.S. Naval Observatory and the reference for all timed DOD systems, will be distributed within the DOD. The GPS provides time in the one-way mode (OWM), easily to a precision and accuracy of 100 ns in real time. With a modest amount of care, it is possible to reach 25 ns. In the OWM, the GPS is considered to be akin to a clock on the wall: the output from the receiver provides time as if one were looking at a clock on the wall. In addition, the OWM also allows the user to determine the difference between a local clock and UTC(USNO) or GPS time. Corrections can be applied to the local clock in real time or after the fact, so that it can be set on time to UTC(USNO) within the specifications of the system. Through the GPS, PTTI users can also compare clocks in the common-view mode (CVM) over large distances to a precision and accuracy better than 10 nanoseconds. In the CVM, two users make measurements of their local clocks with respect to the same GPS satellite at the same instant of time. If a user differences the values obtained at each site, he or she can determine the offset between the clocks at the two sites. However, this method requires the exchange of data by at least one of the participants. The melting-pot method (MPM), which is similar to the OWM and requires an exchange of data like the CVM, also allows clocks at remote sites to be synchronized and, more importantly, to be steered.
In the MPM, a control station determines both the remote clock's offset and rate from GPS time or UTC(USNO) and its own clock's offset and rate from GPS time or UTC(USNO), by some form of regression on the observations of as many satellites as possible during the day. By comparing the two clock offsets and rates with respect to GPS time or UTC(USNO), corrections to the remote clock can be estimated. Then, corrections to the remotely located clock can be sent via a dial-up modem at any desired time. This last mode has the advantage of allowing automatic operation, and it is not dependent upon any one satellite. The ability to use the GPS in different modes to derive timing information ensures its prominence as a critical contributor to all timed systems. However, a word of caution is necessary: prudent systems engineering requires that adequate and alternate back-up systems for PTTI be factored into the overall design of the system. This point must be emphasized.
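As a toy illustration of the common-view arithmetic (with hypothetical numbers; real processing must also correct for path delays such as the ionosphere and receiver hardware), each site measures its clock against the same satellite at the same instant, and differencing the two measurements cancels the satellite clock entirely:

```python
# Common-view time transfer: difference of simultaneous measurements.
# Each measurement is (local clock - GPS satellite clock), in nanoseconds.
site_a_minus_gps = 137.0   # hypothetical reading at site A
site_b_minus_gps = 112.0   # hypothetical reading at site B, same epoch

# (A - GPS) - (B - GPS) = A - B: the satellite clock cancels,
# which is why common view beats one-way accuracy.
a_minus_b = site_a_minus_gps - site_b_minus_gps
print(f"Clock A leads clock B by {a_minus_b:.1f} ns")  # prints 25.0 ns
```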
systems_science
https://icappublishers.com/?product=artificial-intelligence-with-robotics-by-dr-venkata-ramana-motupallidr-k-sreenivasuludr-v-venkata-ramanadr-m-v-rathnamma
2024-03-04T19:16:51
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476464.74/warc/CC-MAIN-20240304165127-20240304195127-00220.warc.gz
0.902785
469
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__23294681
en
Welcome to the intersection of intelligence and automation, where the realms of “Artificial Intelligence with Robotics” converge to redefine the boundaries of technological possibilities. In this book, we embark on a journey through the synergy of artificial intelligence (AI) and robotics, exploring the symbiotic relationship that propels us into an era of unprecedented innovation and transformation. The fusion of AI and robotics represents more than just a technological evolution; it marks a paradigm shift in the way we interact with machines and, ultimately, with the world around us. This book is a testament to the profound impact these technologies have on reshaping industries, revolutionizing processes, and challenging our perceptions of what machines can achieve. Our exploration commences with a foundation in both artificial intelligence and robotics, demystifying complex concepts to make them accessible to enthusiasts, students, and professionals alike. From machine learning algorithms to the mechanics of robotic systems, readers will find a comprehensive introduction that sets the stage for the synergistic journey ahead. As we traverse through the chapters, we delve into the application of AI in robotics and its transformative effects. From autonomous vehicles and industrial automation to healthcare and space exploration, each section explores the ways in which intelligent machines are augmenting human capabilities, streamlining processes, and pushing the boundaries of what is achievable. Practical insights are seamlessly woven into the narrative, showcasing real-world examples and case studies that illustrate the convergence of AI and robotics in diverse fields. Whether you are a researcher pushing the frontiers of knowledge, an engineer developing the next generation of robotic systems, or an enthusiast fascinated by the possibilities of intelligent machines, this book aims to provide both a theoretical foundation and practical understanding. Moreover, the discussion extends beyond technical dimensions to encompass ethical considerations, societal impacts, and the evolving relationship between humans and machines. In a world where AI and robotics play increasingly integral roles, it is imperative to foster a dialogue that considers not only what we can build but also how we should build it. In crafting this book, our intent is to inspire curiosity, fuel innovation, and contribute to the collective understanding of the transformative duo that is artificial intelligence with robotics. May the pages that follow serve as a guide, inviting you to explore the frontiers of intelligence, automation, and the boundless possibilities that arise when AI and robotics converge.
systems_science
https://www.peralex.com/radio-astronomy/
2023-09-30T19:42:42
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510707.90/warc/CC-MAIN-20230930181852-20230930211852-00053.warc.gz
0.820311
805
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__7367530
en
The SKARAB is an extremely scalable, energy-efficient 1U 19” rack mount network-attached FPGA computing platform. The heart of the platform is a motherboard featuring a Xilinx Virtex 7 (XC7VX690T-2-FFG1927) FPGA that provides unparalleled I/O bandwidth (1.28 Tera-bits per second total bandwidth) to four high performance mezzanine sites. The FPGA features dedicated supervisory and diagnostic interfaces (1 Gb Ethernet and USB), allowing highly scalable platform management (large scale cluster reconfiguration and health/status monitoring). An advanced reconfiguration interface allows sub-second on-the-fly reconfiguration of the Virtex 7 FPGA over 1Gb Ethernet, enabling compute clusters to rapidly change function with minimal down-time. Four symmetrical mezzanine sites provide flexibility in optimal balancing of digital or analog I/O and local (cache) memories: - A 4 x 40 Gb Ethernet mezzanine option supports high bandwidth, low latency Ethernet interfaces directly from the FPGA. - A high performance memory mezzanine option featuring next-generation Micron Hybrid Memory Cube (HMC) technology, provides extremely high bandwidth, high capacity local cache memory. - A four-channel, 14 bit, 3 GSPS ADC mezzanine with built-in digital down-conversion, capable of digitizing up to 1.5 GHz of bandwidth positioned from near-DC to 3.2 GHz. - A 5th COM Express-compatible mezzanine site supports high-performance management processor sub-system (e.g. 4-core Intel Atom/NVidia Tegra K1.) - A rich board support package is available to allow users to take full advantage of the platform’s features and allow rapid customization to a specific application. The SP4000 range of high performance, network-centric HDD and SSD managed storage solutions provides unprecedented scalability and performance for high end data storage and buffering tasks. The SP4000C is a compact 4U 19” rack-mountable 48-bay managed storage solution, with 2.5” hot-swap drive support. It is intended for high bandwidth networked storage or buffering applications. It supports a wide range of 2.5” SATA HDDs and SSDs. The primary data interface is 40 Gb Ethernet, with a total SSD storage capacity of up to 96 TB per unit. Up to 16 secondary SAS interfaces allow the unit to directly manage archival storage (e.g. tape drive arrays) or additional SAS JBODs. The SP4000L is a 4U, 19” rack-mountable top-loading 48-bay managed storage solution. It is intended for high capacity storage clusters, where it achieves a remarkable storage-capacity-to-cost-ratio using high capacity 3.5” SATA HDDs. Data interfacing is achieved through 10 Gb Ethernet interfaces, with a total storage capacity of up to 384 TB per unit (using 8TB HDDs). The SP4000 series is the ideal hardware platform on which to implement high performance/high redundancy RAID arrays, as well as scalable network-centric object stores (e.g. CEPH Object Store Daemons), for data-centric compute. MEERKAT L-BAND DIGITISER ENCLOSURE Peralex conceptualised and manufactured functional prototypes of the chassis that houses the Meerkat Digitiser digital radio receiver in a harsh desert environment on top of the antenna pedestal. The enclosure is capable of maintaining unprecedented levels of radio frequency shielding necessary to ensure the sensitivity of the SKA-SA Meerkat Radio Telescope.
systems_science
https://www.wehner-jungmann.de/services.html
2022-08-09T07:27:59
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570913.16/warc/CC-MAIN-20220809064307-20220809094307-00030.warc.gz
0.933201
189
CC-MAIN-2022-33
webtext-fineweb__CC-MAIN-2022-33__0__207683484
en
Driven by your business objectives, we support you in all aspects of your software engineering project. We have many years of experience in managing projects which involve software development in various industrial domains. We specialize in software development processes aligned with your business needs which incorporate your know-how and critical information. We are experts in the concepts and technology of requirements engineering and have long-standing experience in capturing requirements for large industrial systems. One of our core competencies are different methodologies of documenting large-scale systems architecture which form a solid foundation for your future system. We are experts in system testing strategies and documentation, in particular automated system tests. We have employed DevOps practices in the deployment of a number of performance critical systems both in customer and 3rd party environments. We have worked on a variety of medical device software development projects and provide a software development process that reflects the full software life cycle of IEC 62304.
systems_science
https://accurateperforating.com/resources/perforated-bim-objects
2024-04-20T20:30:33
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817674.12/warc/CC-MAIN-20240420184033-20240420214033-00193.warc.gz
0.93759
648
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__39325394
en
What Is Building Information Modeling? BIM essentially takes traditional drawings to the next level. Previously, architecture and design was limited to a single set of drawings, first on paper and then generated by 2D computer software - a limited process where only a single person could make modifications at any given time. Passing drawings from person to person or machine to machine is far from efficient. Unlike the traditional process, BIM is collaborative. With a comprehensive system of high- and low-resolution computer models accessible by multiple users/ designers, architects and engineers can work together on the same document. Consequently, BIM construction and design projects are far more cohesive. Why Is BIM Good for Perforated Metal? Perforated metal designs are unique in that they combine both functionality and aesthetics. As such, it's important to get it right. BIM gives you the tools to plan your perforated metal project while also considering such details as light flow, shadowing, screening, transparency, privacy, aesthetics and much more. With BIM, you know exactly what you're getting so you can plan for the best possible configuration. Benefits of BIM Design Perforated metal BIM design has the potential to streamline the design process, perhaps revealing possibilities you hadn't considered. Here are a few reasons BIM is a game-changer for the construction industry: As noted previously, BIM is ideal for projects involving multiple professionals. The traditional back-and-forth of the design process is grueling, time-consuming and leaves your concept vulnerable to oversights. BIM gives the entire team a chance to work together and communicate more effectively. As a system with comprehensive features, BIM provides an unprecedented level of control over projects from start to finish. That means an entire history of the project is saved, allowing designers to view multiple design options and previous versions simultaneously, all while minimizing the possibility of file corruption. BIM inspires creativity in many ways. For starters, the increased collaboration opens new lines of communication at every level. It also provides tools to simulate and visualize any given project idea. Say you prefer a specific perforated copper panel but you need context; BIM can immediately display how it would actually appear. As a result, the designer can make more informed decisions. Offering Building Information Modeling (BIM) BIM opens a world of possibility for anyone hoping to incorporate perforated metals. Always a leader in innovation, Accurate Perforating was the first perforated metal manufacturer to embrace BIM technology for the construction industry. Offering perforated metal BIM objects gives architects a far more efficient and accurate way to visualize building structures with perforated metal inside and out. Accurate Perforating offers perforated metal building information modeling (BIM) objects to leverage data for the most commonly used perforated metal patterns. These BIM objects allow architects, designers and contractors to visualize the true nature of perforated metal components in a virtual environment, permitting them to see how objects and scenery appear through the perforated material and how light flows through the components.
systems_science
https://www.davisnet.co.nz/products/sensors/iss/6327c/
2022-05-18T12:06:10
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522270.37/warc/CC-MAIN-20220518115411-20220518145411-00327.warc.gz
0.853093
127
CC-MAIN-2022-21
webtext-fineweb__CC-MAIN-2022-21__0__74432268
en
6327C Cabled Integrated Sensor Suite Plus with UV and Solar Radiation Sensors
The Cabled Integrated Sensor Suite Plus with UV and Solar Radiation Sensors, for use with Weather Envoy and WeatherLink, allows you to receive data directly from the outdoor sensors; no console required. Our innovative integrated sensor suite combines our rain collector, temperature and humidity sensors, and anemometer into one package, making setup easier than ever and improving performance and reliability. For improved accuracy, the temperature and humidity sensors are housed inside a radiation shield. The shield protects against solar radiation and other sources of radiated and reflected heat.
systems_science
https://educationalwordlists.com/word-search-word/apollo
2024-02-24T08:41:38
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474526.76/warc/CC-MAIN-20240224080616-20240224110616-00074.warc.gz
0.933743
213
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__25623977
en
Spaceflight (or space flight) applies astronautics to fly spacecraft into or through outer space, either with or without humans on board. Most spaceflight is uncrewed and conducted mainly with spacecraft such as satellites in orbit around Earth and includes space probes for flights beyond Earth orbit. Such spaceflight operates either by telerobotic or autonomous control. The Apollo spacecraft was composed of three parts designed to accomplish the American Apollo program's goal of landing astronauts on the Moon by the end of the 1960s and returning them safely to Earth. The expendable (single-use) spacecraft consisted of a combined command and service module (CSM) and an Apollo Lunar Module (LM). Two additional components complemented the spacecraft stack for space vehicle assembly: a spacecraft–LM adapter (SLA) designed to shield the LM from the aerodynamic stress of launch and to connect the CSM to the Saturn launch vehicle and a launch escape system (LES) to carry the crew in the command module safely away from the launch vehicle in the event of a launch emergency.
systems_science
https://www.hayabusa2.jaxa.jp/en/topics/20181225e_AstroDynamics/
2023-11-30T18:21:33
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100229.44/warc/CC-MAIN-20231130161920-20231130191920-00744.warc.gz
0.945393
634
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__116956678
en
Until now, "astrodynamics" has been one of the less frequently reported operations for Hayabusa2. In space engineering, the movement, attitude, trajectory and overall handling of the flight mechanics of the spacecraft is referred to as "astrodynamics". For example, astrodynamics played an active role in the gravity measurement descent operation in August 2018. While this was a short time ago, let's look at a few of the details.
From August 6-7, 2018, the "Gravity Measurement Descent Operation" was performed to estimate the strength of asteroid Ryugu's gravity. Hayabusa2 initially descended from the home position at an altitude of 20 km to an altitude of 6100 m. Orbital control was then temporarily stopped to allow the spacecraft to "free-fall" towards Ryugu, moving due to the gravitational pull of the asteroid alone. When the altitude decreased to about 850 m, the thrusters were instantaneously fired to give the spacecraft an upward velocity, whereupon Hayabusa2 performed a "free-rise" to an altitude of about 6100 m (the spacecraft's movement here is similar to throwing a ball vertically upwards). From the spacecraft's motion during the free-fall and free-rise, the strength of Ryugu's gravity could be measured and the mass of the asteroid obtained. As a result of this measurement, the mass of Ryugu was calculated to be about 450 million tons.
The shape and volume of Ryugu are known thanks to the construction of the three-dimensional shape model (article on July 11: http://www.hayabusa2.jaxa.jp/topics/20180711bje/index_e.html). Using this volume and the measured mass of Ryugu from the gravity measurement descent operation, the average density of the asteroid can be calculated. The average density and shape of Ryugu could then be used to find the gravitational strength (gravitational acceleration) on the surface of Ryugu, which was found to have the following distribution (shown in a figure not reproduced here): the gravitational acceleration on the surface of Ryugu is approximately 0.11-0.15 mm/s^2, which is about one eighty-thousandth (~1/80,000) the strength of the Earth's gravity and a few times stronger than that of Itokawa. We can additionally see that the gravity near the poles of Ryugu is stronger than near the asteroid's equator. This is due to the equatorial ridge protruding from the surface. The information on the asteroid's gravitational acceleration obtained through this method has been used for operations that approach close to the surface of Ryugu. Of course, it will also be used during touchdown.
The gravity measurement descent operation described here is one application of astrodynamics. The astrodynamics team for Hayabusa2 uses a variety of similar methods to estimate the trajectory of the spacecraft and Ryugu, and to evaluate the dynamic environment for operating around Ryugu.
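As a sanity check on the numbers in this article, Newtonian gravity reproduces the quoted surface acceleration; the mean radius below is an assumption (roughly 450 m, consistent with Ryugu's ~900 m diameter), not a figure from the text:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
mass = 4.5e11   # ~450 million tons from the gravity measurement, in kg
radius = 450.0  # assumed mean radius of Ryugu, in metres

g = G * mass / radius**2          # surface gravitational acceleration
print(f"{g * 1000:.3f} mm/s^2")   # ~0.148 mm/s^2, inside the 0.11-0.15 span
# The equator-to-pole spread over the real (non-spherical) shape
# explains the article's quoted ~1/80,000 of Earth gravity.
print(f"~1/{9.81 / g:.0f} of Earth gravity")
```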
systems_science
https://www.mitsubishi-logistics.co.jp/english/logistics/system/
2024-03-02T08:04:08
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475757.50/warc/CC-MAIN-20240302052634-20240302082634-00716.warc.gz
0.898268
390
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__37291161
en
Logistics information system
Examples of the use of information systems in logistics services
International transportation and import/export systems
NACCS (Nippon Automated Cargo and Port Consolidated System) is linked with forwarding systems that gather all functions necessary for NVOCC work, from the issuance of S/I, B/L, and A/N for air and ocean cargo to billing and payments. It is also linked with import/export and customs clearance systems, allowing customers to check information about cargo and track transportation status online, according to their needs.
Warehouse Management System (WMS) & Distribution Center Management System (DCMS) and related systems
We support systems for storage management at warehouses/distribution centers, classification by transportation direction, and transportation by optimal means. Shipment orders received online from customers via EDI, etc. are directly imported into the systems and handled accurately and promptly. We also use systems tailored to pharmaceuticals and other special goods, to provide services for a wide variety of items.
Storage & management system for documents and recording media
This is an online system for the comprehensive management of documents and recording media. Requests for the delivery and retrieval of documents stored in boxes and other units, searches of contents, management of retention periods, and disposal instructions can all be made online. We will notify customers of documents that are approaching the end of their retention period at a pre-determined time.
Online inventory information management system
This system allows customers to check product inventory information and place shipping orders online. Monthly inventory and shipment data by item can be downloaded to reduce the burden of customers' logistics operations. Customers can also manage imported goods by customs status (cleared, uncleared).
Online cargo inventory management system
Customers can use this online system to check the arrival status of goods shipped from warehouses/delivery centers. It is also possible to check on individual delivery vehicles.
systems_science
https://lazarussoftware.com/clients.html
2023-03-21T13:31:40
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943698.79/warc/CC-MAIN-20230321131205-20230321161205-00674.warc.gz
0.939129
198
CC-MAIN-2023-14
webtext-fineweb__CC-MAIN-2023-14__0__230978364
en
Marriott International identified an initiative to create a mobile solution that minimizes the check-in process, allowing guests to use their mobile device to gain entry to their room, built into the existing Marriott Guest Services App. Lazarus Software, Corp. produced a solution that was not only mobile, but scaled with the demand on-property. For the first time, a guest is able to check in with the app before arrival, then simply hold their Bluetooth-enabled mobile device up to the Key-Printer, print their room key, and thereby skip the entire front desk process. This initiative also allows guests to request amenities, receive context-aware offers, and check out. Lazarus delivered a platform and mobile application that allowed Marriott to not only realize their strategy, but also provided analytics and metrics that displayed the adoption of the various phases. Lazarus produced lessons learned when scaling the solution to Marriott's 3000+ properties.
systems_science
https://www.systemssolutions.us/about-us/why-systems-solutions/
2022-05-25T04:19:57
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662578939.73/warc/CC-MAIN-20220525023952-20220525053952-00596.warc.gz
0.907365
137
CC-MAIN-2022-21
webtext-fineweb__CC-MAIN-2022-21__0__63772045
en
Systems Solutions has maintained long-term relationships with clients to help create better efficiencies and higher productivity. We create unique solutions that suit our clients’ requirements and budget. Part of the Systems Solutions' advantage is our storied and prolific vendor relationships. With industry leaders like VMware, SonicWall, Cisco, Microsoft and IBM® on your side, there’s no technology problem we can’t handle for you. Our expert engineers look for potential problems and solve them before they disrupt your business Get exclusive prices on hardware and software, thanks to our partnerships with industry leaders Enjoy round-the-clock tech assistance and remote support from our dedicated help desk team
systems_science
http://www.eniac-battman.eu/
2017-04-24T09:22:15
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119225.38/warc/CC-MAIN-20170423031159-00410-ip-10-145-167-34.ec2.internal.warc.gz
0.930246
142
CC-MAIN-2017-17
webtext-fineweb__CC-MAIN-2017-17__0__210021908
en
The EU-funded ENIAC project will design and develop lithium-battery-pack systems which manage photovoltaic power feeds efficiently and deliver optimized, reliable, low-cost and predictable performance. Batteries and battery management systems are the essential storage elements in any solar-powered application. These systems can be employed in a variety of different markets and applications, yet reliable long-term service of the battery and system is the common challenge for all of the applications. The BattMan project therefore focuses on these essential elements and targets solar-powered, off-grid street lighting poles as a challenging demonstrator. The system will be specified, simulated, designed, prototyped, demonstrated and validated in the project.
systems_science
http://kumiho.smith.man.ac.uk/dupliphy/
2017-04-29T05:27:07
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123276.44/warc/CC-MAIN-20170423031203-00533-ip-10-145-167-34.ec2.internal.warc.gz
0.78084
385
CC-MAIN-2017-17
webtext-fineweb__CC-MAIN-2017-17__0__197073506
en
DupliPHY and DupliPHY-ML [1] are command line tools to determine the evolutionary history of gene families using weighted parsimony [2] and maximum likelihood [3]. The tools accept a newick-string species phylogeny and a list of gene family sizes. DupliPHY allows the user to also provide a weight matrix specifying the costs of gene gain and loss, while DupliPHY-ML allows the user to select between two different models of gene gain and loss. The tools will output the ancestral family sizes for each gene family in a tab-delimited format. The tools can be applied to single gene families or run for whole genomic studies. DupliPHY and DupliPHY-ML are available as executable jar files and should run on any operating system. The source code is available for DupliPHY. The source code for DupliPHY-ML is available on request.
Right click here and choose "Save Link As" to download the DupliPHY executable jar file complete with user guide and example data sets. Right click here and choose "Save Link As" to download the DupliPHY source code. Right click here and choose "Save Link As" to download the DupliPHY-ML executable jar file complete with user guide and example data sets. All downloads contain example data. This is a subset of the Drosophila dataset from the publication. The full Drosophila data set can be downloaded here.
1. Ames RM, Money D, Ghatge V, Whelan S and Lovell SC. Determining the evolutionary history of gene families.
2. Sankoff D. Minimal mutation trees of sequences. SIAM Journal of Applied Mathematics 1975, 28:35.
3. Felsenstein J. Inferring Phylogenies. Sinauer Associates. 2004.
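For intuition, here is a toy Python sketch of the weighted-parsimony (Sankoff) recursion that underlies this kind of ancestral family-size reconstruction; it is an illustration of the algorithm only, not DupliPHY's source code. Family sizes are capped at a small maximum, and the cost of moving from size i to size j is assumed to be |i - j| (one unit per gene gained or lost):

```python
# Sankoff weighted parsimony on a toy rooted tree ((A,B),C).
MAX_SIZE = 4                                   # family sizes 0..4
cost = lambda i, j: abs(i - j)                 # unit gain/loss cost

def sankoff(node, leaf_sizes):
    """Return list c where c[s] = minimum cost of the subtree if this
    node is assigned family size s."""
    if isinstance(node, str):                  # leaf: size is observed
        return [0 if s == leaf_sizes[node] else float("inf")
                for s in range(MAX_SIZE + 1)]
    left, right = (sankoff(child, leaf_sizes) for child in node)
    # For each candidate size s, each child contributes the cheapest
    # child size j plus the transition cost from s to j.
    return [sum(min(c[j] + cost(s, j) for j in range(MAX_SIZE + 1))
                for c in (left, right))
            for s in range(MAX_SIZE + 1)]

tree = (("A", "B"), "C")                       # newick-style nesting
sizes = {"A": 3, "B": 2, "C": 1}               # observed family sizes
root_costs = sankoff(tree, sizes)
best = min(range(MAX_SIZE + 1), key=root_costs.__getitem__)
print(f"most parsimonious root family size: {best}, cost {root_costs[best]}")
```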
systems_science
https://jurnal.itbi.ac.id/index.php/journalinformatika/article/view/53
2022-07-01T10:09:01
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103940327.51/warc/CC-MAIN-20220701095156-20220701125156-00727.warc.gz
0.707832
467
CC-MAIN-2022-27
webtext-fineweb__CC-MAIN-2022-27__0__157415030
en
Implementation of a Proxy Server Using Squid as a Bandwidth Monitoring and Website Filtering System (original title: "Implementasi Proxy Server Menggunakan Squid Sebagai Sistem Bandwith Monitoring dan Website Filtering")
The use of computer network technology as a medium of data communication is currently increasing, especially on the internet (interconnection networking), which is a complex network. The need to share the resources available in a network, both software and hardware, has driven various developments in networking technology itself. As the level of demand and the number of network users increase, users want a form of network that can provide maximum results both in terms of efficiency and increased network security. One step in monitoring and securing data servers is to use a proxy server. The proxy server in this case is a third party that acts as an intermediary between two interconnected parties, in this case the local network and the internet. The proxy server acts as a gateway to the internet for each client so that data traffic can be controlled. This monitoring and blocking system was built using the Squid proxy server.
Ika Atman Satya. 2006. Mengenal dan Menggunakan Mikrotik Winbox. Jakarta: Datakom Lintas Buana.
J.D. Wegner. 2000. IP Addressing and Subnetting Include IP v6. Syngress Media Inc., America.
Madcoms. 2016. Manajemen Sistem Jaringan Komputer dengan Mikrotik RouterOS. Yogyakarta: Penerbit Andi Offset.
Pascale Vicat-Blanc [et al.]. 2011. Computing Networks: From Cluster to Cloud Computing. New York.
Pratama, Yudha. 2010. Analisis dan Implementasi Optimasi Squid Untuk Akses Ke Situs Youtube. STMIK Amikom Yogyakarta.
Rafiudin, Rahmat. 2004. Panduan Menjadi Administrator Sistem Unix. Yogyakarta: Penerbit Andi.
Sanjaya, Ridwan. 2005. Trik Mengelola Kuota Internet Bersama Squid. Jakarta: Penerbit Elex Media Komputindo.
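To make the bandwidth-monitoring idea concrete, here is a small hedged Python sketch (not from the paper) that totals bytes served per client from a Squid access log; the field layout assumed here (timestamp, elapsed, client, code/status, bytes, ...) is Squid's default native log format, and the log path is a placeholder:

```python
from collections import defaultdict

def bandwidth_by_client(path="access.log"):
    """Sum the bytes field of Squid's native access.log per client IP."""
    totals = defaultdict(int)
    with open(path) as log:
        for line in log:
            fields = line.split()
            if len(fields) < 5:
                continue                 # skip malformed lines
            client, size = fields[2], fields[4]
            if size.isdigit():
                totals[client] += int(size)
    return totals

# Print the heaviest consumers first.
for ip, size in sorted(bandwidth_by_client().items(),
                       key=lambda kv: kv[1], reverse=True):
    print(f"{ip:15s} {size / 1e6:8.2f} MB")
```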
systems_science
https://stenner.com/products/pumps/s128/
2023-09-23T10:51:12
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.7/warc/CC-MAIN-20230923094750-20230923124750-00547.warc.gz
0.759315
1,213
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__138348736
en
1:128 Pump Adjusts to System Flow Rate for Even Distribution of Product
Activated by a dry-contact pulse water meter, the S128 automatically injects solution into the water line proportional to the system water flow, at a ratio of 1 ounce of solution to 128 ounces of process water. As the pump registers dry contacts, it delivers the precise ratio. The pump has three speeds to automatically adjust the injection rate when the water system flow rate increases or decreases. This flow-response design allows the pump to evenly and proportionally inject solutions, especially at low flow rates. The S128 and the low-flow VPD water meter are ideal for early-stage poultry flocks and swine nurseries when water flow is minimal. The S128 can inject undiluted chemicals directly into the water line, eliminating water restriction and the daily mixing of stock solutions. In addition, the pump is unaffected by poor water quality.
- Potentiometer: Prime, 1 PPG, 1 PPL, 10 PPG, Reset Tube Timer, Standby
- Output Relays: Signal Repeater, Leak Detect, Drive Fault, Low Level
- LED Indicators: Leak/Fault, Overrun, Change/Level, Power/Standby
- Clear cover on control panel for moisture protection
- Enclosed housing with NEMA 4X rating
Built-in Signal Repeater Relay
The built-in signal repeater relay repeats the pulse signal from the water meter to another pump or house controller. If using a meter to register water consumption to the house controller, an additional meter is not required to activate the pump.
Tube Change Timer
Set the Tube Change Timer DIP switches with the number of hours you want the pump to run before the CHANGE LED is activated. When the set time is reached, the change/level LED is lit solid red. The standby setting allows tube replacement without disconnecting the pump.
Leak Detect Setting and Output Relay
Set the pump's DIP switch to stop, or to continue to run, when a leak is detected. Set the optional output relay to notify another device if a leak occurs.
The pump has three dedicated inputs to receive a signal from another device:
PULSE - receives a pulse from a dry-contact water meter to activate the pump.
STANDBY - remotely start or stop the pump when a dry-contact input is sent to the pump; standby also allows tube replacement without disconnecting the pump.
LEVEL - input from a fluid-level device indicates a low level in the solution tank. The Level input may be used with the low-level output relay.
Stenner Peristaltic Pump Advantages
- Self-priming against maximum working pressure; foot valve not required
- Solutions do not contact moving parts
- Pump head requires no valves, allowing easy maintenance
- Does not lose prime or vapor lock
- Pumps off-gassing solutions and can run dry
- Output volume is not affected by back pressure
- Tube replacement without tools
Flow Rate Outputs
|Approximate Output @ 50/60Hz|
(The output-rate rows of this table were lost in extraction; only the header survives.)
|Flow Rate Output Control|Potentiometer|
|Maximum Working Pressure|60 psi (4.1 bar)|
|Maximum Operating Temperature|104°F (40°C)|
|Maximum Suction Lift|25 ft (7.6 m) vertical lift, based on water|
|Motor Type|Brushless DC motor|
|Shaft rpm (average maximum)|45|
|Maximum Viscosity|1500 Centipoise|
|Motor Voltage (Amp Draw)|120V 60Hz 1PH (0.6)|
|Power Cord Type|120V 60Hz SJTOWA|
|Power Cord Plug End|120V 60Hz 5-15P|
|Cord Length|6 ft (1.8 m)|
Materials of Construction
|Pump Tube|Santoprene®* (FDA approved)|
|Ball Check Valve Components|Ceramic ball (FDA approved); Tantalum spring; FKM seat & O-ring or Ceramic ball (FDA approved); Stainless steel spring; EPDM seat; Santoprene®* O-ring|
|Pump Head Rollers|Polyethylene|
|Roller Bushings|Oil-impregnated bronze|
|Suction/Discharge Tubing|Polyethylene (FDA approved)|
|Tube and Injection Fittings|PVC or Polypropylene (both NSF listed)|
|Connecting Nuts|PVC or Polypropylene (both NSF listed)|
|Suction Line Strainer and Cap|PVC or Polypropylene (both NSF listed); ceramic weight|
|All Fasteners|Stainless steel|
|Pump Head Latches|Stainless steel|
|Leak Detect Components|Hastelloy®††|
* Santoprene® is a registered trademark of Celanese International Corporation.
†† Hastelloy® is a registered trademark of Haynes International, Inc.
systems_science
http://tonto.stanford.edu/~brian/making_dendrograms.html
2014-09-17T13:31:44
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657123617.30/warc/CC-MAIN-20140914011203-00036-ip-10-196-40-205.us-west-1.compute.internal.warc.gz
0.781417
354
CC-MAIN-2014-41
webtext-fineweb__CC-MAIN-2014-41__0__145024352
en
Making a Dendrogram
Hierarchical cluster analysis (as we've been doing here) can be portrayed graphically by a dendrogram, which represents the clustering process in a tree-like graph. One axis will (usually) represent an agglomeration coefficient. This depends on the clustering algorithm used, but is usually the distance between clusters joined at each stage. Along the other axis individual cases are plotted, giving a visualization of the relative size of each of the clusters. Here's the dendrogram created when clustering the data using Ward's Method (squared Euclidean distance, variables normalized using z-scores).
[Table header only survives: Stage | Distance Between Cluster Centers | Total SSE At Each Stage]
Along the horizontal axis of this graph is the distance between cluster centers (centroids). I'm not quite sure why for Ward's method this distance is used rather than the total SSE, but it is the same as the increase in SSE at each stage of clustering multiplied by 2. For instance, at the last stage (stage 29) the coefficient (or total SSE) is 203.000, and at the previous stage it is 141.451. Meaning the increase in total SSE is:
SSE_29 - SSE_28 = ΔSSE_28-29
203.000 - 141.451 = 61.549
This has to be multiplied by 2, for reasons that are explained here.
61.549 * 2 = 123.098
Which, as you'll see, is where stage 29 is plotted on the horizontal axis. The next step is to determine the number of clusters you want to work with. How many clusters are there?
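To make the arithmetic concrete, here is a small Python sketch; the stage-28 and stage-29 SSE values come from the text above, and everything else is illustrative:

# Height at which a merge is drawn: 2x the increase in total SSE.
sse_by_stage = {28: 141.451, 29: 203.000}

def merge_height(sse_prev, sse_curr):
    return 2 * (sse_curr - sse_prev)

print(merge_height(sse_by_stage[28], sse_by_stage[29]))  # -> 123.098 (up to float rounding)

Note that library implementations such as SciPy's scipy.cluster.hierarchy.linkage(X, method='ward') also produce Ward dendrograms, but they report merge heights under their own distance convention, so the absolute numbers may be scaled differently from the 2 x ΔSSE coefficient used here.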
systems_science
https://easeus-partition-master-home-edition.en.softonic.com/
2022-08-18T07:53:30
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573172.64/warc/CC-MAIN-20220818063910-20220818093910-00187.warc.gz
0.880265
785
CC-MAIN-2022-33
webtext-fineweb__CC-MAIN-2022-33__0__101770355
en
Excellent everyday partition management
EASEUS Partition Master is an easy-to-use disk partitioning tool for your PC. Disk partitioning and copying can be a complicated business, especially if you're not as computer-savvy as you may like. EASEUS Partition Master might just be the answer to your partition prayers. The latest version of this useful tool is just as easy to use and functional as previous versions. For basic partition management, it's the perfect program. It is also available for the newly released Windows 11.
The array of task wizards offered makes using EASEUS Partition Master relatively simple. There are wizards for partitioning, copying your disk and partition recovery, as well as excellent help files if you need even more guidance. It also has a number of other tools that make the job even easier, including an undo function, an operations pending list, and the ability to set a password.
EASEUS Partition Master isn't the best-looking tool out there. Its interface is basic and really kind of bland. For a program that deals with something as unglamorous as partition management, however, this is probably something most users will be willing to forgive.
If you're new to partitioning or have pretty basic partitioning needs, EASEUS Partition Master is really the only tool you'll need to get the job done. It is a great example of an easy-to-use everyday partition tool.
Free Magic Partition Solution - EaseUS Partition Master Free Edition is a free, all-in-one partition solution. It provides three main features - Partition Manager, Partition Recovery Wizard and Disk & Partition Copy - to solve all partition problems under hardware RAID, MBR & GPT disks (supports 8TB hard disks, 16TB in the commercial edition) and removable devices in Windows XP/Vista/Windows 7/Windows 8 (32-bit & 64-bit).
The first attractive feature is its partition function, which helps to extend the system partition to solve low-disk-space problems; resize/move, merge, convert, create, format and wipe partitions; rebuild the MBR; convert dynamic disks; and defragment MBR & GPT disks. It also allows you to drag and drop on the disk map easily. Moreover, it doesn't require a reboot when extending an NTFS system partition.
The second feature is useful for copying a partition or hard disk - for example, when you want to upgrade to a larger disk, or transfer the Windows system or data to another disk. It also provides a dynamic disk copy function for dynamic disk replacement or backup.
The third amazing feature helps recover deleted or lost FAT, NTFS and EXT2/EXT3 partitions, guarding against partition loss from personal error, hardware or software failure, virus attack, or a hacker's intrusive destruction.
Main functions: Upgrade the system disk to a bigger one with one click. Extend an NTFS system partition without reboot. Merge partitions into one without data loss. Resize/move, merge, copy, create, format, delete, wipe, recover, convert and explore partitions. Convert a dynamic disk to a basic disk. Convert an MBR disk to GPT and vice versa. Rebuild the MBR. Copy & resize dynamic volumes. Defragment. Wipe disks, partitions and unallocated space. Run disk surface tests. Set active/label, hide/unhide partitions. Support for MBR & GPT disks (8TB hard disks, 16TB in the commercial edition), removable devices and hardware RAID. Support for up to 32 disks.
New feature: Create a Windows PE bootable medium without installing AIK/WAIK, saving your valuable time and energy.
Multiple languages supported: English, Deutsch, Español, Français, Português and Polski, among others.
systems_science
http://sokolowski.eeb.utoronto.ca/
2014-07-23T16:01:00
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997880800.37/warc/CC-MAIN-20140722025800-00147-ip-10-33-131-23.ec2.internal.warc.gz
0.927604
137
CC-MAIN-2014-23
webtext-fineweb__CC-MAIN-2014-23__0__42659226
en
Note: This site is under construction. In the Sokolowski lab we study how genes and the environment interact to influence behaviour in fruit flies. We discovered the foraging gene which influences naturally occurring behavioural variation including the rover/sitter polymorphism. This gene plays a role in behaviour in many other organisms including social insects. We are interested in the mechanistic and evolutionary basis of this genetic polymorphism. Two fundamental questions we seek to understand better are: - How do genes and their proteins act in the nervous system in order to cause normal individual differences in behaviour? - How do genes and their proteins act in response to the environment to affect normal individual differences in behaviour?
systems_science
https://noteskart.com/index.php/2023/01/04/what-is-single-instruction-multiple-data-simd-computer-architectures/
2023-03-28T05:10:34
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948765.13/warc/CC-MAIN-20230328042424-20230328072424-00074.warc.gz
0.917298
1,209
CC-MAIN-2023-14
webtext-fineweb__CC-MAIN-2023-14__0__18587789
en
SIMD stands for Single Instruction, Multiple Data. It is a type of computer architecture that allows a single instruction to be applied to multiple data elements at the same time. This can be used to perform the same operation on many data elements in parallel, which can greatly speed up certain types of computations. SIMD architectures are often used in graphics processing and scientific computing, where they can be used to perform vector and matrix operations efficiently.
In SIMD (Single Instruction, Multiple Data) computer architectures, a single instruction is applied to multiple data elements simultaneously, rather than to each element individually. This allows for a high level of parallelism, as many data elements can be processed at the same time.
SIMD architectures are typically implemented using a specialized unit called a vector processor, which can execute instructions on multiple data elements in a single clock cycle. The data elements are typically stored in a special type of register called a vector register, which can hold a large number of elements and is optimized for fast access.
One key feature of SIMD architectures is that they allow for a high degree of data parallelism. This means that many data elements can be processed at the same time, greatly increasing the performance of certain types of computations. For example, SIMD architectures are often used to perform vector and matrix operations, which parallelize easily.
There are several different types of SIMD instructions that can be used, depending on the specific needs of the computation. For example, there are instructions for adding and subtracting vectors, multiplying vectors by scalars, and performing other operations on vectors.
SIMD architectures have a number of benefits, including increased performance and energy efficiency. However, they can also be more complex to design and implement than other types of architectures, and may not be well suited for all types of computations.
SIMD for Computer Vision Tasks:
SIMD (Single Instruction, Multiple Data) computer architectures can be used to accelerate a wide range of tasks in computer vision, including image and video processing, object recognition, and 3D reconstruction.
One way in which SIMD architectures can be used in computer vision is to perform vector and matrix operations on large amounts of image data. For example, an image convolution can be performed on a SIMD architecture by applying the same convolution kernel to multiple pixels at the same time. This can greatly speed up the operation, especially for large images or complex kernels.
SIMD architectures can also be used to perform other types of image processing tasks, such as resizing, color space conversion, and edge detection. In addition, they can accelerate tasks related to object recognition, such as feature extraction and matching, and can be used in 3D reconstruction algorithms for tasks such as point cloud alignment and surface fitting.
Overall, SIMD architectures can provide significant performance improvements for many tasks in computer vision, making them an important tool for researchers and practitioners in the field.
Examples of SIMD (Single Instruction, Multiple Data) computer architectures:
- Intel MMX: Intel's MMX (MultiMedia eXtensions) is a set of SIMD instructions that was introduced in 1996 as part of the Pentium processor.
MMX was designed to accelerate multimedia tasks, such as audio and video decoding, and was implemented using a set of 64-bit vector registers.
- Intel SSE: Intel's SSE (Streaming SIMD Extensions) is a set of SIMD instructions that was introduced in 1999 as an extension to the MMX instruction set. SSE added support for floating-point operations and widened the vector registers to 128 bits.
- ARM NEON: ARM's NEON (officially Advanced SIMD) is a set of SIMD instructions introduced with the ARMv7 architecture. NEON is designed to accelerate a wide range of tasks, including multimedia, image processing, and scientific computing, and is implemented using 128-bit vector registers.
- AMD SSE5: AMD's SSE5 (Streaming SIMD Extensions 5) is a set of SIMD instructions announced in 2007 as a proposed extension to the AMD64 architecture. SSE5 specified a wide range of new operations, but it was never implemented as announced; parts of it evolved into AMD's XOP and FMA4 extensions, and 256-bit vector registers instead arrived with the AVX extension.
- NVIDIA CUDA: NVIDIA's CUDA (Compute Unified Device Architecture) is a parallel computing platform that uses a combination of hardware and software to exploit data parallelism. CUDA runs on NVIDIA's GPUs (Graphics Processing Units), which execute groups of threads in lockstep, a SIMD-like model NVIDIA calls SIMT (Single Instruction, Multiple Threads).
Here is an example of how SIMD (Single Instruction, Multiple Data) can be used to accelerate a computation:
Suppose we have a list of 100 integers, and we want to multiply each integer by 2. Without SIMD, we would have to perform the multiplication operation on each integer individually, like this:
result[0] = 2 * input[0]
result[1] = 2 * input[1]
...
result[99] = 2 * input[99]
This would require 100 separate multiplication operations.
With SIMD, we can perform the same computation using a single instruction that operates on multiple data elements at the same time. For example, using a 128-bit SIMD instruction on 16-bit integers (eight elements per register), we could perform the multiplication on 8 integers at a time like this:
result[0:7] = 2 * input[0:7]
result[8:15] = 2 * input[8:15]
...
result[96:103] = 2 * input[96:103]
This would only require 13 SIMD instructions, rather than 100 separate multiplication operations. This can greatly speed up the computation, especially on systems with hardware support for SIMD instructions.
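As a rough illustration in Python: NumPy is not literal SIMD intrinsics programming, but its vectorized array operations are typically compiled down to SSE/AVX/NEON instructions where the hardware supports them, so the scalar-loop-versus-one-expression contrast below mirrors the example above.

import numpy as np

data = np.arange(100, dtype=np.int16)  # 16-bit ints: 8 fit in a 128-bit register

# Scalar-style loop: one multiply per element (100 operations).
scalar_result = np.empty_like(data)
for i in range(len(data)):
    scalar_result[i] = 2 * data[i]

# Vectorized form: one expression over the whole array; the runtime
# can process several elements per hardware instruction.
simd_style_result = 2 * data

assert np.array_equal(scalar_result, simd_style_result)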
systems_science
https://socialapples.com/solid-state-drives-vs-hard-disk-drives/
2024-02-21T06:42:40
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473370.18/warc/CC-MAIN-20240221034447-20240221064447-00108.warc.gz
0.91292
1,455
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__199823271
en
As an avid computer user, you've probably come across the terms "Solid State Drive (SSD)" and "Hard Disk Drive (HDD)". These two types of device storage contend for supremacy while serving fundamentally the same purpose. In this article, we'll delve into the differences between SSDs and HDDs, including fundamental differences in their mechanisms, speed, durability, pricing, and form factors.
Let's dive right into it:
1. Movement in HDDs Versus SSDs
The core distinction between traditional hard drives and SSDs comes down to moving parts. A traditional hard drive is reminiscent of a record player. At its core lies a spinning disk with a read/write head attached to a mechanical arm. This mechanical movement is how an HDD reads and writes data, relying on magnetism to store and retrieve information. However, the very nature of moving parts introduces an element of vulnerability, as mechanical breakdowns can occur.
In stark contrast, SSDs have no moving parts, replacing the spinning disk and mechanical arm with solid-state flash memory chips. These chips are similar to those found in smartphones and cameras, and they store data without the need for magnetism. This absence of moving components not only makes SSDs faster but also renders them more durable, since there are no mechanical parts to wear out.
2. Magnetic vs. Flash Memory
In HDDs, the read/write operation involves magnetizing the spinning disk's surface to represent binary data: 0s and 1s. The read/write head detects these magnetic orientations, translating them into usable information. While this mechanism is effective, it is contingent on the physical movement of components and can result in delays and mechanical wear over time.
SSDs, on the other hand, employ flash memory, storing data by a different mechanism. Each memory chip contains floating-gate transistors that can hold an electrical charge, signifying binary values. Unlike HDDs, SSDs don't rely on magnetism or physical movement. They read and write data to these flash memory chips directly, leading to faster and more reliable performance.
3. Speed and Performance
Due to the mechanical process of finding and accessing data on the spinning disk, HDDs are comparatively slower than SSDs. The time it takes to spin up the disk and position the read/write head introduces latency, making them less responsive.
SSDs showcase near-instantaneous data access. With no moving parts, SSDs can read and write data almost immediately. This translates to quicker boot times, faster program loading, and swift file-saving actions.
4. Fragility vs. Robustness
Because of their internal moving parts, such as the spinning disks, HDDs are susceptible to shocks and vibrations. An accidental drop may lead to mechanical failure or degraded hard disk operation.
Solid-state drives, characterized by their lack of moving parts, are robust when subjected to physical stress. They can withstand shocks and vibrations better than their HDD counterparts. This makes them a great solution for users who are always on the move and those who carry portable drives around.
5. Budget-Friendly vs. Premium Performance
HDDs have long been the budget-friendly option. Their longstanding presence in the market and mature manufacturing processes contribute to their affordability. If sheer storage capacity at a lower cost is the priority, HDDs are a natural choice.
SSDs, on the other hand, offer premium performance at a premium price, although their market prices have been falling with advances in manufacturing. The tradeoff is a significant boost in speed and performance.
6. Bulky vs. Slim
HDDs come in larger, less flexible form factors than SSDs. The physical constraints of spinning disks restrict their form factor, making them less suitable for sleek and compact devices.
SSDs, by contrast, come in various form factors, including the slim M.2 and U.2 formats, offering flexibility for different device sizes. This adaptability makes SSDs ideal for slim laptops and compact electronic devices.
Emerging Trends in Storage Technologies
Besides HDDs and SSDs, other emerging technologies promise to revolutionize computer storage. These include:
Shingled Magnetic Recording (SMR)
Unlike traditional perpendicular recording, SMR overlaps the magnetic tracks on a hard disk, optimizing storage density. This innovation holds the potential to enhance HDD capacities, bridging the gap with SSDs in terms of storage volume.
Heat-Assisted Magnetic Recording (HAMR)
Heat-Assisted Magnetic Recording (HAMR) is another trailblazing advancement. This technology employs a laser to heat the disk surface briefly, allowing for more precise data recording. HAMR aims to push the limits of HDD capacities, promising higher data densities and extended longevity.
Non-Volatile Memory Express (NVMe)
Non-Volatile Memory Express (NVMe) is emerging as a potential game-changer in the tech world. NVMe facilitates faster data transfer between storage devices and the computer. This protocol, designed for solid-state drives, reduces latency, ensuring that the full potential of high-speed storage is harnessed.
Quantum Storage
While still in the theoretical phase, quantum storage is peering over the horizon. Quantum storage leverages the principles of quantum mechanics to store and retrieve information. Unlike classical bits, quantum bits or qubits can exist in multiple states simultaneously, opening up possibilities for unparalleled storage capacities and processing speeds.
The Rise of Computational Storage
An emerging paradigm in storage architecture is computational storage. This innovative approach integrates processing capabilities directly into the storage device. By doing so, data can be processed locally, which reduces the need for constant data movement between storage and processing units. This approach not only enhances efficiency but also paves the way for more intelligent and responsive storage solutions.
Edge Computing and Storage
As more computing tasks migrate to the edge of the network, storage solutions are adapting to meet the demands of decentralized processing. Edge storage, characterized by its proximity to the data source, minimizes latency and enhances real-time data access, aligning with the needs of IoT (Internet of Things) and emerging 5G technologies.
Computer storage systems represent an exciting area for research, driven by the need for speed, sustainability, and convergence. We've examined the differences between Solid State Drives (SSDs) and Hard Disk Drives (HDDs) while comparing their efficiencies. For users who can afford them, SSDs represent the better option for storage, thanks to their compactness, speed, and durability, among other strengths.
We've also explored emerging technologies in computer storage, including quantum storage, edge storage, NVMe, and HAMR. All of these promise to reshape the computer storage domain, improving efficiency and broadening its range of use cases.
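To see the latency gap described above for yourself, a rough Python sketch like the one below compares sequential throughput against small random reads. Treat the numbers as illustrative only: OS page caching (and a file this small) will mask much of a real HDD's seek penalty, so meaningful measurements need a much larger file or dropped caches.

import os, random, tempfile, time

BLOCK = 4096
NUM_BLOCKS = 25_000  # ~100 MiB test file; adjust for your machine

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.write(os.urandom(BLOCK) * NUM_BLOCKS)

with open(path, "rb") as f:
    t0 = time.perf_counter()
    while f.read(1024 * 1024):   # sequential read in 1 MiB chunks
        pass
    seq = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(2000):        # random 4 KiB reads, seek-heavy pattern
        f.seek(random.randrange(NUM_BLOCKS) * BLOCK)
        f.read(BLOCK)
    rnd = time.perf_counter() - t0

print(f"sequential: {seq:.3f}s, random 4 KiB reads: {rnd:.3f}s")
os.remove(path)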
systems_science
https://www.itrelease.com/2023/03/what-is-scheduling-in-operating-system-os/
2024-02-24T05:12:59
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474523.8/warc/CC-MAIN-20240224044749-20240224074749-00157.warc.gz
0.942947
1,400
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__191419364
en
In an operating system (OS), multiple processes run at the same time, and one process may contain multiple threads. For example, the MS Word program has multiple threads: one thread for checking grammar, another for counting words, and so on. All the processes and threads are managed by the OS, which has to assign them CPU time. This management of processes and threads is known as CPU scheduling.
The OS uses multiprogramming to switch the CPU among different processes. The goal of multiprogramming is to keep at least one process running on the CPU at any given time. If there is one processor, only one process can run at a time; if there are multiple processors, multiple processes can run at once.
There is a mechanism by which the CPU decides which process runs next. For this purpose, the OS maintains a ready queue. A process that is waiting to run is kept in the ready state and is moved from the ready state to the running state when it is dispatched. If a running process makes an input/output request, it is moved from the running state to the waiting state. After the process completes the input/output operation, it is placed back in the ready state to be run again.
Objectives of Scheduling:
There are some goals or objectives of scheduling that make the OS perform well. Some goals of scheduling are:
Maximize throughput. Throughput is the number of jobs completed per unit of time.
Fairness: CPU time has to be divided fairly, i.e. all jobs should be treated equally.
Since the CPU is a costly resource, scheduling aims to maximize CPU utilization.
Minimize response time. Response time is the time a process spends in the ready state before it gets the CPU for the first time.
Maximize utilization of other resources such as printers and disks.
Minimize waiting time. Waiting time is the total time a process spends in the ready state.
Minimize turnaround time. Turnaround Time = Waiting Time + I/O Time + Computation Time.
Preemptive and Non-Preemptive Scheduling:
In preemptive scheduling, a running process is stopped if a higher-priority process arrives; the running process is moved from the running state back to the ready state. In non-preemptive scheduling, a process continues to run until it terminates or requests input/output resources; it is given CPU time again after completing the input/output request.
There are some criteria for scheduling that judge the performance of the CPU. They are:
Turnaround Time: The total time needed to complete a process, i.e. the actual job time plus the waiting time.
Waiting Time: The amount of time the process has to wait for execution, i.e. the turnaround time minus the actual job time.
Throughput: The number of processes executed in a specified period. Throughput decreases if processes are large; it increases for short processes.
CPU utilization: The CPU should be kept busy as much as possible. CPU utilization is considered especially important in real-time applications and multiprogrammed systems.
Response Time: The amount of time between a request being submitted and the first response being produced.
The CPU scheduling algorithm tries to minimize waiting time, turnaround time and response time, and to maximize throughput and CPU utilization.
Below is a list of scheduling algorithms that we will discuss:
First Come First Served Scheduling
Shortest Job First Scheduling
Priority Scheduling
Round Robin Scheduling
Multi-Level Queue Scheduling
Multi-Level Feedback Queue Scheduling
First Come First Served Scheduling: In this type of scheduling, the process that arrives first is executed first, then the process that arrives next is executed, and so on. If the first process has the longest burst time, the other processes have to wait longer; if the first process has a short burst time, the remaining processes are executed more quickly.
Shortest Job First Scheduling: This type of scheduling reduces the average waiting time. The process/job with the shortest burst time runs first, then the job with the next-shortest burst time, and so on. Shortest job first (SJF) scheduling always assigns the CPU to the waiting process with the lowest burst time.
Priority Scheduling: In this type of scheduling, every process is assigned a priority id. The priority id is an integer with a value from 0 to 10; the lower the value, the higher the priority. In priority scheduling, the process with the highest priority is executed first, then the process with the next-highest priority, and so on.
Round Robin Scheduling: In round robin scheduling, each process is given a quantum of time to run on the CPU. It is similar to FCFS, but the CPU switches to another process after each quantum. All the processes are first placed in the ready queue; the first process that arrived is executed for the fixed time slice, then the next process in the ready queue is executed for a fixed time slice, and so on.
Multi-Level Queue Scheduling: In this type of scheduling, processes are split into two broad categories: interactive (foreground) processes and non-interactive (background) processes. Interactive processes have high priority and are executed first; non-interactive processes have lower priority and are executed after the interactive ones.
In this scheduling, the ready queue is divided into 5 sub-queues, e.g.:
System processes
Interactive processes
Interactive editing processes
Batch processes
Student processes
Note that a process assigned to one sub-queue remains in that sub-queue and cannot be moved to another sub-queue during its lifetime. In a multi-level queue, each sub-queue has a priority assigned to it, ranging from 0 to 4, where 0 is high priority and 4 is low priority. The processes in a higher-priority sub-queue are executed first, then the processes in the lower-priority sub-queues, and so on. Each sub-queue is also assigned a quantum of time, after which the CPU shifts to another sub-queue and executes the processes there.
Multi-Level Feedback Queue Scheduling: In this scheme, processes are placed in different sub-queues as in multi-level queue scheduling, but a process can be moved to a higher-priority queue if it has been waiting too long. Within each sub-queue, processes are served in first come first served (FCFS) order.
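To make the FCFS/SJF comparison concrete, here is a short Python sketch; the burst times are invented, a single CPU is assumed, and all jobs are taken to arrive at time 0.

def schedule(bursts, policy="fcfs"):
    order = sorted(bursts) if policy == "sjf" else list(bursts)
    t, waits, tats = 0, [], []
    for b in order:
        waits.append(t)   # time spent waiting in the ready queue
        t += b            # run to completion (non-preemptive)
        tats.append(t)    # turnaround = waiting time + burst time
    return sum(waits) / len(waits), sum(tats) / len(tats)

bursts = [24, 3, 3]  # classic example: one long job arriving first
for policy in ("fcfs", "sjf"):
    w, tat = schedule(bursts, policy)
    print(f"{policy.upper()}: avg waiting={w:.1f}, avg turnaround={tat:.1f}")

With these bursts, SJF cuts the average waiting time from 17 to 3 time units, which is exactly why it minimizes average waiting time among non-preemptive policies.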
systems_science
https://lsi.gatech.edu/lsi-awarded-r21-grant-to-develop-a-non-viral-transfection-system-for-car-t-cell-manufacturing/
2023-11-30T04:02:44
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100164.87/warc/CC-MAIN-20231130031610-20231130061610-00736.warc.gz
0.913085
604
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__283636233
en
The National Cancer Institute at the National Institutes of Health is sponsoring LSI's work on bioinspired nanovectors that manufacture CAR T cells with CRISPR/Cas9-mediated insertion, which will lead to a non-viral transfection system to achieve rapid and cost-efficient CAR T cell manufacturing.
Summary | Adoptive cell therapy using patient-specific T cells engineered with chimeric antigen receptors (CARs) presents a promising treatment modality for cancer patients. However, FDA-approved CAR T cells are genetically engineered by viral transduction, a process that poses limitations for manufacturing and in vivo translation. Viral production is prohibitively expensive and is a main driver of the high price of CAR T cell therapy ($350–450K per treatment). Additionally, batch production of viral vectors requires a minimum 4+ week lead time. This long duration in therapeutic cell manufacturing can delay treatments for patients with progressive diseases. Moreover, due to safety concerns associated with viral transduction (e.g., insertional mutagenesis), the FDA limits the number of integrated viral vectors per T cell to 5 copies, which limits the number of viral particles used for transduction and results in low transduction efficiencies. These issues are a barrier to optimization of CAR design, expanding clinical applications, and broad patient access to CAR T cell therapies. Therefore, the overall goal of this proposal is to develop a new non-viral transfection system to achieve rapid and cost-efficient CAR T cell manufacturing. This system consists of bioinspired nanovectors that mimic the biological activity of endogenous serum proteins to enhance CAR transgene delivery to primary T cells. Preliminary data supporting this proposal demonstrate that the bioinspired nanovectors were internalized by activated T cells more efficiently than conventional nanoparticle formulations, such as liposomes. The bioinspired nanocarriers therefore overcome the low endocytic capability of primary T cells, a delivery barrier faced by other nanoparticle-based transfection reagents. To achieve persistent CAR expression, this system will use CRISPR/Cas9 for site-specific CAR insertion into the T cell genome, which mitigates safety concerns resulting from virus-induced random insertions. This proposal will also leverage high-throughput, scalable microfluidic reactors to accelerate nanocarrier optimization at the exploratory phase and allow future clinical translation of the proposed non-viral transfection system for CAR T cell manufacturing. The specific aims of this proposal are (1) to optimize bioinspired nanovectors for non-viral CAR T cell manufacturing, and (2) to benchmark the anticancer efficacy of the non-virally transfected CAR T cells against virally transduced counterparts. Successful completion of this project will lead to a new CAR T cell manufacturing process that accelerates CAR T cell development for clinical translation, facilitates compliance with regulations, and reduces manufacturing costs and lead times to democratize CAR T cell therapy.
systems_science
https://thisisinternet.pl/en/2018/06/10/
2021-09-25T15:23:34
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057687.51/warc/CC-MAIN-20210925142524-20210925172524-00073.warc.gz
0.918081
174
CC-MAIN-2021-39
webtext-fineweb__CC-MAIN-2021-39__0__136952464
en
The Internet is useful, but its huge flaw is that all it takes to find pornographic websites is a simple Google search. This is especially concerning if you have children. Fortunately, there's a simple and free way to prevent access to Internet porn: changing your DNS server to one that blocks inappropriate websites. One such service is OpenDNS FamilyShield, which blocks lots of known and not-so-known websites, although other filtering DNS servers exist. The IP addresses of this particular service are 208.67.222.123 and 208.67.220.123. It's best to set up the new DNS server on your router to protect all computers in your household. I'm not putting instructions on how to do this here because the process varies between operating systems and devices, but it's easy to find the necessary information on the Internet.
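Before changing your router settings, you can query FamilyShield directly to check what a given hostname resolves to through the filtering resolver. Here is a hedged sketch using the third-party dnspython package (version 2.x); a blocked domain should resolve to OpenDNS's block-page address rather than the real site.

import dns.resolver  # pip install dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["208.67.222.123", "208.67.220.123"]  # FamilyShield

for host in ["example.com"]:  # substitute the site you want to test
    answer = resolver.resolve(host, "A")
    print(host, "->", [r.to_text() for r in answer])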
systems_science
https://regtechafrica.com/nigeria-smartcomply-unveils-ai-powered-on-premise-enterprise-version-to-revolutionize-compliance-for-banks/
2024-04-15T08:13:38
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816954.20/warc/CC-MAIN-20240415080257-20240415110257-00578.warc.gz
0.880311
519
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__159582538
en
Smartcomply, a leading provider of compliance management solutions, is thrilled to announce the launch of its new On-Premise Enterprise Version. This state-of-the-art solution, driven by artificial intelligence (AI), is poised to revolutionize compliance processes for banks, financial institutions, and large corporations. By offering enhanced control, security, and customization, organizations can harness the power of AI while maintaining full data sovereignty. Jude Ogbonna, Country Head of SmartComply, expressed his enthusiasm for the release, stating, “We are delighted to introduce Smartcomply On-Premise Enterprise Version, a groundbreaking solution empowering banks, financial institutions, and large corporations to unlock the full potential of AI while retaining control over their data and compliance processes. With this innovative offering, organizations can achieve compliance excellence, enhance data privacy and security, and drive business growth with confidence.” By leveraging AI capabilities within an on-premise model, this cutting-edge solution provides unparalleled control, security, and adaptability, enabling organizations to maximize AI benefits while retaining complete data control. Smartcomply’s On-Premise Enterprise Version represents a significant advancement in compliance technology, empowering organizations to leverage AI within their infrastructure while ensuring data sovereignty, regulatory compliance, and adherence to internal policies. Key Features of Smartcomply On-Premise Enterprise Version include: - AI-Driven Compliance Automation: Smartcomply’s AI-powered algorithms automate complex compliance tasks, streamline processes, and mitigate risks more effectively, driving operational efficiency. - Enhanced Data Privacy and Security: With the On-Premise Enterprise Version, organizations can uphold the highest standards of data privacy and security by maintaining data within their infrastructure, minimizing external exposure. - Customizable Deployment and Integration: Smartcomply’s solution offers flexible deployment options and seamless integration with existing systems and workflows, allowing organizations to tailor the platform to their specific requirements and compliance frameworks. - Dedicated Support and Maintenance: The On-Premise Enterprise Version package includes dedicated support, maintenance, and updates, ensuring optimal performance and reliability. With a team of experienced professionals available for assistance, organizations can rely on Smartcomply for ongoing support and guidance. With the introduction of Smartcomply’s On-Premise Enterprise Version, banks, financial institutions, and large corporations can embrace AI-powered compliance solutions while maintaining control, security, and compliance integrity. This marks a significant milestone in the evolution of compliance technology, empowering organizations to navigate regulatory challenges with confidence and efficiency.
systems_science
http://en.szzhxi.com/News_details/2.html
2023-12-11T10:38:44
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679103810.88/warc/CC-MAIN-20231211080606-20231211110606-00526.warc.gz
0.93855
593
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__237828202
en
T8 Automatic Production Line for Glass Tube In today's fast-paced world, automation has become an integral part of various industries. One such industry that has greatly benefited from automation is the glass manufacturing industry. The introduction of the T8 automatic production line for glass tube has revolutionized the way glass tubes are produced, offering numerous advantages over traditional manual methods. This essay will explore the features and benefits of the T8 automatic production line, highlighting its impact on the glass manufacturing industry. The T8 automatic production line is a state-of-the-art system designed to streamline the production process of glass tubes. It incorporates advanced technologies such as robotics, computerized control systems, and precision machinery to ensure efficient and accurate production. The line consists of multiple interconnected machines, each performing specific tasks, resulting in a seamless and continuous production flow. One of the key features of the T8 automatic production line is its high level of precision. The use of robotics and computerized control systems eliminates human error, ensuring consistent quality throughout the production process. This precision is particularly important in the glass manufacturing industry, where even the slightest deviation can lead to defective products. With the T8 line, manufacturers can achieve a higher level of quality control, minimizing wastage and reducing the need for manual inspection. Another significant advantage of the T8 automatic production line is its speed and efficiency. The line operates at a much faster pace compared to traditional manual methods, significantly increasing production output. This increased efficiency not only allows manufacturers to meet growing market demands but also reduces production costs. The T8 line requires fewer laborers, as most tasks are automated, resulting in reduced labor expenses and increased profitability for glass manufacturers. Furthermore, the T8 automatic production line offers improved safety for workers. By automating hazardous tasks, such as glass cutting and shaping, the risk of accidents and injuries is greatly reduced. This not only protects the well-being of employees but also minimizes the potential for production disruptions caused by workplace accidents. The T8 line ensures a safer working environment, promoting the overall welfare of the workforce. Additionally, the T8 automatic production line enables customization and flexibility in glass tube production. The line can be easily programmed to produce tubes of different sizes, shapes, and specifications, allowing manufacturers to cater to diverse customer requirements. This flexibility opens up new opportunities for glass manufacturers to expand their product offerings and enter new markets. The ability to quickly adapt to changing market trends and customer demands is crucial for staying competitive in the glass manufacturing industry. In conclusion, the introduction of the T8 automatic production line has revolutionized the glass manufacturing industry, offering a range of benefits over traditional manual methods. With its precision, speed, efficiency, safety, and flexibility, the T8 line has become an indispensable tool for glass manufacturers. As automation continues to advance, we can expect further innovations that will continue to enhance the production processes in various industries, including the glass manufacturing sector.
systems_science
https://transitionleicester.wordpress.com/about/background-info/climate-change/
2020-05-30T12:34:49
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347409171.27/warc/CC-MAIN-20200530102741-20200530132741-00245.warc.gz
0.957096
288
CC-MAIN-2020-24
webtext-fineweb__CC-MAIN-2020-24__0__174173052
en
Hardly a day goes by without a new story about Climate Change in the media – whether it’s extreme weather events that could be linked to the changing climate, new studies on species that face extinction, or new plans for renewable energy projects to help reduce our carbon emissions. Climate change science has many uncertainties, such as how much warming we can expect from a given increase in CO2 concentrations, or whether there could be sudden shifts in climate for particular regions this century. However, what is clear about climate change, is that it is being driven by human activities, and that unless we rapidly stop adding to atmospheric greenhouse gas concentrations, the viability of Earth as a habitat for millions of species of animals and millions of human beings is at risk. Our message is that based upon the existing climate research and what we know about how the Earth functions as a system, the scale of action we need to take goes some way beyond what is being considered at present. Changing lightbulbs and using energy more efficiently is a useful start, but ultimately our goal should be to redesign our energy systems and lifestyles so that we can meet our needs and thrive as a society, whilst eliminating our need for fossil fuels. Although climate change is often seen as a frightening or threatening idea, we also see it as offering a positive opportunity, as solutions to the problem tend to be ones which enable us to lead happier, healthier and more connected lives.
systems_science
https://www.pueblod60.org/site/default.aspx?PageType=3&ModuleInstanceID=6983&ViewID=7b97f7ed-8e5e-4120-848f-a8b4987d588f&RenderLoc=0&FlexDataID=8614&PageID=1416&Comments=true
2023-09-23T01:03:39
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506429.78/warc/CC-MAIN-20230922234442-20230923024442-00396.warc.gz
0.916578
316
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__162011191
en
Director of Technology
Job description snapshot:
- Responsible for the support and service of all technology-related aspects of the district, including computers, network infrastructure, data systems, educational and digital learning systems, software, servers, telephone communications, security cameras, etc.
- Supervises the Technology Department in supporting and serving district needs specific to technology, educational technology, and media services
- Oversees all data systems and processes, including the student information system and the accounting and HR employee system
- Oversees all technology communication systems, including telephones, email and collaboration systems, as well as network services and Internet access
- Responsible for developing a vision for technology in the form of a technology plan and ensuring the district stays up to date with instructional technology practices
- Manages all technology budgetary items for the district, including district-wide instructional software, infrastructure purchases, endpoint devices and more
- Responsible for ensuring the district appropriately utilizes the E-Rate program
- Ensures industry best practices are implemented and followed for cyber security, data backup and restoration, and student data privacy
Summary of Functions: The Director of Technology is responsible for planning, directing, and executing all the district's technology programs and activities. This position functions as a collaborative strategic business partner within the organization by aligning technology resources and strategies to help achieve the District's goals, manage the activities and operations of the department, develop operational guidelines and recommend policies, and prepare and implement a technology plan. This position also provides leadership for the district in technology.
systems_science
https://www.contexthq.com/2007/06/19/google-plugs-in-green-car-initiative/
2023-09-23T18:36:11
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506528.19/warc/CC-MAIN-20230923162848-20230923192848-00515.warc.gz
0.966882
252
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__2546426
en
Google has announced a philanthropic initiative aimed at reducing carbon emissions by popularising electric-powered cars. It is also looking to make investments in technologies and companies featuring plug-in hybrids, fully electric vehicles, vehicle-to-grid capabilities and batteries. By means of demonstration, Google is having four of its Toyota Priuses and two Ford Escapes re-fitted to run as plug-in hybrid electric vehicles and will publish telemetry data from the vehicles to the web to indicate the cars’ efficiency. The company last year announced plans to ensure a third of its giant Mountain View, California, campus becomes powered by solar energy. It is this 1.6-megawatt, 9,212-panel solar effort that powers its existing plug-in vehicle fleet. A hundred such cars will become part of its car-sharing scheme. Conventional hybrid vehicles like the Prius run on batteries but also depend in part on regular combustion engines, while plug-in vehicles can recharge off a wall socket overnight. Google.org said if all US cars on the road by 2025 were hybrids, half of them plug-in hybrids, America could reduce its oil consumption, currently 10m barrels per day, by 8m barrels per day.
systems_science
https://robotbits.co.uk/product/cytron-25amp-7v-58v-high-voltage-dc-motor-driver/
2024-04-19T18:26:45
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817442.65/warc/CC-MAIN-20240419172411-20240419202411-00332.warc.gz
0.817752
620
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__181619312
en
Cytron 25Amp 7V-58V High Voltage DC Motor Driver
The MD25HV is a high voltage, single channel, bidirectional motor driver for brushed DC motors from 7V to 58V. Thanks to its discrete NMOS H-bridge design, this motor driver can support 25A continuously without an additional heatsink. The onboard test buttons and motor output LEDs allow quick, convenient functional testing of the motor driver without hooking up the host controller. A buck regulator that produces a 5V output is also available to power the host controller. This is especially useful in high voltage applications where no additional power source or external high voltage buck regulator is needed.
The MD25HV can be controlled with PWM and DIR inputs. With an input logic voltage range from 1.8V to 30V, it's compatible with a wide variety of host controllers (e.g. Arduino, Raspberry Pi, PLC). If you prefer to control the motor directly without any programming, the driver can also be controlled from a potentiometer (speed) and a switch (direction).
Various protection features are incorporated in the MD25HV. Overcurrent protection prevents the motor driver from damage when the motor stalls or an oversized motor is hooked up. When the motor tries to draw more current than the motor driver can support, the motor current is limited at the maximum threshold. This limit is assisted by temperature protection: the maximum current limiting threshold depends on the board temperature, and the higher the board temperature, the lower the threshold. This way, the MD25HV can deliver its full potential under actual operating conditions without damaging the MOSFETs.
Note: The power input does not have reverse-voltage protection. Connecting the battery in reverse polarity will damage the motor driver instantaneously.
- Bidirectional control for one brushed DC motor.
- Operating Voltage: DC 7V to 58V
- Maximum Motor Current: 25A continuous, 60A peak
- 5V output for the host controller (250mA max)
- Buttons for quick testing.
- LEDs for motor output state.
- Dual Input Mode: PWM/DIR or Potentiometer/Switch Input.
- PWM/DIR inputs compatible with 1.8V, 3.3V, 5V, 12V and 24V logic (Arduino, Raspberry Pi, PLC, etc).
- PWM frequency up to 40kHz (output frequency is fixed at 16kHz).
- Overcurrent protection with active current limiting.
- Temperature protection.
- Undervoltage shutdown.
Example Applications: Automated Guided Vehicle (AGV), Mobile Robot, Automation Machine, Electric Vehicle.
- 1 x MD25HV
- 1 x Potentiometer with connector
- 1 x Rocker switch with connector
- 4 x Nylon PCB Standoffs/Spacers
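As an illustration of the PWM/DIR control mode, here is a hedged Python sketch for a Raspberry Pi using the RPi.GPIO library. The pin numbers are invented assumptions, and GPIO.PWM is software PWM (far coarser than the driver's 40 kHz ceiling), so treat this as a functional demonstration rather than a wiring guide; check the board manual before connecting anything.

import time
import RPi.GPIO as GPIO

PWM_PIN, DIR_PIN = 18, 23  # hypothetical BCM pin assignments

GPIO.setmode(GPIO.BCM)
GPIO.setup(PWM_PIN, GPIO.OUT)
GPIO.setup(DIR_PIN, GPIO.OUT)

pwm = GPIO.PWM(PWM_PIN, 1000)  # 1 kHz software PWM carrier
pwm.start(0)

try:
    GPIO.output(DIR_PIN, GPIO.HIGH)   # set direction: forward
    for duty in (25, 50, 75):         # ramp the motor up in steps
        pwm.ChangeDutyCycle(duty)
        time.sleep(2)
finally:
    pwm.stop()
    GPIO.cleanup()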
systems_science
https://ghphippswyoming.com/state-of-wyoming-combined-laboratories-facility/
2023-12-10T23:52:06
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679102697.89/warc/CC-MAIN-20231210221943-20231211011943-00248.warc.gz
0.960137
195
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__265044396
en
The Cheyenne Joint Laboratories houses the Department of Environmental Quality (DEQ), the Public Health Department (PH) and the Department of Criminal Investigations (DCI). These laboratories and office spaces are housed in a 125,000 square foot facility with its own central plant. The mechanical systems are state of the art, with differential pressure giving each lab space the proper air cascade for the materials being handled by each group. This is accomplished using a Phoenix system with over 350 valves in place, serviced by 5 air handling units, two chillers, boilers, exhaust systems, a cooling tower and a very complicated control system. The building also houses a BSL-3 laboratory where highly contagious material can be isolated and analyzed. DCI, PH and DEQ personnel are provided office space within the facility. The project was also built in the spirit of LEED, ensuring that the building operates in an efficient and cost-effective manner.
systems_science
https://www.pixelpines.com/stories/dakota-pump/
2024-02-20T22:39:36
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473347.0/warc/CC-MAIN-20240220211055-20240221001055-00132.warc.gz
0.938778
368
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__9661954
en
About Dakota Pump
Since the 1950s, Dakota Pump, Inc. (DPI) has provided a wide range of customers with the highest quality pumps, packaged pumping systems, parts and service. As recognized leaders in packaged pump design, performance & innovation, we continue to lead the package pump industry. Dakota Pump packaged pump stations are handled by several representatives across the US. Our product line has vastly expanded over the years to include package pump systems for both the water and wastewater markets, as well as a controls department that develops high quality automation and SCADA systems for various markets. This makes Dakota Pump, Inc. a major supplier not only in the packaged pump market but also in the automated systems technology arena.
Dakota Pump required a base solution to track and manage Customers, including contacts, locations, and equipment. In addition to the Customer management tasks, there was a need to build simple Quotes, Contracts, and Service Tickets, all used to provide a complete Customer lifecycle and feed the invoicing effort with the required information.
With the solution built by Pixel Pines, Dakota Pump management and technicians can now track interactions with Customers associated with a specific location and equipment in the field. These interactions are tracked through Notes, Service Tickets, and Work Time Logs, which lead to quick generation of Quotes, Contracts, and Ticket Invoice Requests. Ticket Invoice Requests are a pre-invoice report and, upon approval, automatically email the back-office accounting staff requesting invoice generation with supporting attachments. Pixel Pines continues to work with Dakota Pump, building upon the foundation of the solution and driving for efficiency in workflow and customer data capture.
Back-Office Web Application built with DevExpress eXpress Application Framework (XAF). Microsoft SQL Server Database. Web Portal and Database hosted on Microsoft Azure.
systems_science
http://www.bombalatimes.com.au/story/176999/electricity-changes-hands/
2013-06-20T01:45:14
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00007-ip-10-60-113-184.ec2.internal.warc.gz
0.931556
312
CC-MAIN-2013-20
webtext-fineweb__CC-MAIN-2013-20__0__69550444
en
OPERATIONAL management of the electricity distribution network located within the boundaries of the East Gippsland Shire in Victoria will transfer from Essential Energy to SP AusNet, a Victorian electricity distribution business, on June 29. This means SP AusNet will assume full operational control of the electricity network – the ‘poles and wires’ and electricity meters – that connect electricity to around 240 homes and businesses located in the East Gippsland Shire Council area. This includes maintaining and reading meters, and maintaining, replacing and extending the network to ensure customers receive safe and reliable network services into the future. From June 29, customers will need to call SP AusNet on 13 17 99 to report electricity network faults, supply interruptions and emergencies. Essential Energy will continue to provide first response network services on behalf of SP AusNet from its depot in Bombala. This means that when customers call SP AusNet on 13 17 99, local Essential Energy field crews will respond. SP AusNet will be scheduling the installation of smart meters in this area as part of the Victorian Government’s mandated program to install smart meters at every home and small business across the State by 2013. Smart meters will give consumers greater detail around electricity consumption while enabling SP AusNet to detect and restore network faults faster. Residents and small businesses in the area have been sent a letter notifying them of the change. More information on the smart meter program can be found at www.sp-ausnet.com.au/smartmeters.
systems_science
https://www.aaglobal.co.uk/aa-global-puts-clients-first-with-major-it-investment/
2023-11-28T09:51:36
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00278.warc.gz
0.968235
486
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__317266987
en
An expanding language services company has completed the latest phase of a major investment programme as it delivers improved services for clients in the public and private sectors. AaGlobal, a key supporter of the Chamber’s International Trade Centre, recruited HBP Systems to upgrade its entire IT system and switch to a cloud-based operation which will support the company’s round-the-clock operations. Kirk Akdemir, CEO of AaGlobal, said: “It is a major investment by the business – in recent years we have invested significantly in people, property and now IT and they are all linked. “The improvements to property and IT are helping us create a better and more productive working environment for our growing numbers of staff. That in turn enables us to improve services for our client base which extends across the world.” AaGlobal was founded in Worcester 27 years ago and has grown considerably since opening its Hull office with a staff of two in 2011. In addition to the 30 people who work across the two offices – with more than 20 in Hull – AaGlobal employs around 14,000 translators and interpreters worldwide who cover 500 different languages and dialects. Kirk said: “The business is bigger and much more sophisticated and we needed an IT upgrade which could deal with that, supporting our 24/7 operation and ensuring security, which is particularly important for our public sector clients. “The new system will also support our private sector clients as they look for new markets overseas. It brings greater capacity and protection, and the outcome is that we are ready and waiting for those companies who want to reach out further afield. “We can help to introduce businesses to the new markets in the languages that open the doors to potential customers and we can also provide essential advice about cultural matters and the different ways of doing business, because it is important for people to understand that communication is about more than conversation.” Mike Peck, IT Sales Consultant at HBP Systems, said: “AaGlobal have embraced a very modern way of working, utilising cloud technology which enables their staff to work quickly and efficiently. Ultimately, the team at AaGlobal were focused on optimising their own service so, we worked together to ensure speed, reliability and security were all top priorities when designing and implementing their new IT solution.”
systems_science
https://www.jiecang.com/2021-MEDICA-in-Dusseldorf-Germany-after-two-years-Jiecang-will-meet-you-again-id42953477.html
2023-06-08T11:18:34
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224654871.97/warc/CC-MAIN-20230608103815-20230608133815-00596.warc.gz
0.934459
672
CC-MAIN-2023-23
webtext-fineweb__CC-MAIN-2023-23__0__26289306
en
53rd Düsseldorf, Germany International Medical Devices and Equipment Exhibition (MEDICA)
Hall 17 | D41
As the world's largest hospital and medical equipment exhibition, MEDICA ranks first among medical trade shows worldwide in scale and influence. With the epidemic having kept exhibitors away for two years, what surprises will Jiecang bring to this offline exhibition?
Intelligent hospital bed drive system, fully upgraded
Equipped with the CAN bus system specially developed by Jiecang for medical applications, it offers stronger scalability and smoother communication between products. It comes with functions such as a weighing system and a bed-exit alarm, which improve nursing efficiency and provide stronger protection. Compared with similar products, Jiecang's system software is more stable and delivers a better user experience.
Elderly care drive system to help daily life
The home nursing bed system uses an external power supply to keep mains power separate from the bed, making it safer to use. The power output reaches 36V/2A, giving the push rod a more stable, higher running speed. The controller uses an advanced welding method for a more compact structure and better protection, and the built-in night light adds convenience.
The shifter system is exhibited in economical and high-performance versions to meet different customer needs. The lithium battery design stands out for its service life. The entire system reaches IPX6 and can be washed, greatly reducing the nursing workload.
The bath chair, rated up to IPX8, can operate freely in water. Its back panel adjusts to multiple angles, and its material is highly anti-slip and ergonomic, so the user is more comfortable and the workload of nursing staff is also reduced.
The electric wheelchair servo control system, developed by a subsidiary of Jiecang, uses sine wave vector control and a high-reliability driving and steering algorithm to ensure the vehicle drives safely and stably in various environments. In addition, it can be extended with an electric push rod function and a human-computer interaction app to make the product more intelligent.
Diagnosis and treatment bed drive system, newly developed
The whole system reaches IPX6; the highly waterproof design effectively reduces the workload of medical staff. An optional column provides a maximum thrust of 6000N for stronger power.
Bluetooth drive system, intelligent operation
It can be operated through the new Bluetooth hand controller and app to realize intelligent control of multiple devices. The protection level is high (the complete system reaches IPX6), and it strictly conforms to medical industry safety standards.
Safety system solution, protection upgraded
It retracts when it encounters an obstacle, a fully upgraded protection that prevents secondary injury to the user and also protects the operator from accidental injury. It supports a variety of fixing methods and can be adapted to many bed types to meet the needs of different scenarios.
systems_science
http://www.bigassfans-thailand.com/smartsenze/
2023-09-21T07:46:04
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233505362.29/warc/CC-MAIN-20230921073711-20230921103711-00260.warc.gz
0.849144
608
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__76625194
en
IT’S EASY TO SAVE UP TO 30% ON ENERGY BILLS
The patent-pending SmartSense™ control system maximizes both energy savings and comfort year-round, allowing you to control your Big Ass Fans with just the push of a button. Featuring three user modes (winter, summer and manual), SmartSense eliminates the human error and hassle of fan operation by automatically matching the speed of your Big Ass Fans to seasonal conditions.
- Winter Mode: SmartSense automatically adjusts the fan’s speed to destratify air, redirecting warm air trapped at the ceiling back down to the floor level to maintain consistent temperatures throughout the space. Result? You save on heating costs.
- Summer Mode: Automatically adjusts fan speed based on the floor-level temperature, creating a cooling effect that makes you feel up to 10°F (5.6°C) cooler. Result? Your employees feel comfortable and stay productive.
- Manual Mode: Take the system off autopilot to manually direct fan operation. Result? You have total control over your domain.
MAXIMIZE WINTER SAVINGS
Hot air is lighter than cooler air, so it rises, creating a significant temperature difference between floors and ceilings. In large spaces with high ceilings, temperature differences reach up to 20°F (11°C). This natural phenomenon is known as stratification. In the winter, Big Ass Fans are slowed, not reversed, to efficiently destratify heat, circulating air back down to the floor level without creating a draft. This saves you up to 30% on heating costs because the process reduces both the load on your heating system and the amount of heat escaping through the roof. By continuously monitoring ceiling and floor-level temperature, SmartSense automatically adjusts the speed of your fans to effectively destratify air. Big Ass Fans fully mix the air throughout the space, maintaining a consistent and even temperature, so you can be comfortable and save money year-round.
EFFICIENT SUMMER COOLING
Air conditioning costs too much to run in large spaces like warehouses. Big Ass Fans provide cooling breezes that make workers feel up to 10°F (5.6°C) cooler. That’s important because research [1] shows productivity begins decreasing when temperatures rise above 77°F (25°C), so keeping your workers comfortable also boosts your bottom line. SmartSense takes the guesswork out of determining the optimal air speed for cooling. As the indoor temperature begins to rise, the fans speed up, providing a larger cooling effect. Conversely, as temperatures begin to fall, the fans slow down. Simply tell SmartSense your average floor temperature once, and let it do the work for you.
[1] Seppänen O, Fisk WJ, Faulkner D. 2003. Cost benefits analysis of the night-time ventilative cooling. In: Proceedings of the Healthy Buildings 2003 Conference, Singapore 2003.
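The seasonal logic described above (speed rising with floor temperature in summer, slow destratification keyed to the ceiling-to-floor difference in winter) can be pictured with a small sketch. This is illustrative only; the function name, thresholds and ramp widths below are assumptions, not SmartSense internals:
```python
def fan_speed(mode, floor_temp_f, ceiling_temp_f, setpoint_f=77.0):
    """Return a fan speed fraction between 0.0 and 1.0 (illustrative only)."""
    if mode == "summer":
        # Ramp up as the floor temperature climbs past the setpoint,
        # reaching full speed 10 F above it.
        return max(0.0, min(1.0, (floor_temp_f - setpoint_f) / 10.0))
    if mode == "winter":
        # Run slowly, just enough to mix stratified air: scale with the
        # ceiling-to-floor difference (up to ~20 F), capped at low speed.
        return max(0.0, min(0.4, (ceiling_temp_f - floor_temp_f) / 20.0 * 0.4))
    raise ValueError("manual mode: the operator sets the speed directly")

print(fan_speed("summer", 82.0, 95.0))  # 0.5 -> stronger cooling breeze
print(fan_speed("winter", 68.0, 80.0))  # 0.24 -> gentle destratification
```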
systems_science
http://7dniv.com.ua/bonnie++/diff.html
2023-12-01T05:53:34
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100276.12/warc/CC-MAIN-20231201053039-20231201083039-00183.warc.gz
0.960706
283
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__197734131
en
I originally started work on this project to support more than 2G of storage for the database tests. At the time I was testing hard drive performance of some Sun servers with 8G of RAM, so a 2G test file (the maximum size for 32-bit code) wasn't enough to get valid results. To do this I wrote a C++ class to encapsulate the file IO and support storage in a number of 1G files, with all access being through an index to the 8K block used (giving a potential for storage of 2^44 bytes of data - 16T). Then I started playing with the Reiser file system and found that Bonnie didn't run any faster on ReiserFS than on any other file system. So I decided that it needed to test the creation of large numbers of files in a directory to be able to show how file systems such as ReiserFS outperform older file systems such as Ext2 and UFS. After it became apparent that Tim Bray and I will probably never agree on the relative merits of C vs C++, Tim suggested that I release my code independently under the name Bonnie++. After I started releasing test versions, which then started to become popular, I began adding features based on code contributions (such as synchronization of multiple bonnie++ processes) and on request (blocking mode). Main Bonnie++ page.
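The block-addressing arithmetic behind that 16T figure is easy to sketch (Python here for brevity; Bonnie++ itself implements this as a C++ class, and only the 8K/1G constants come from the text): indexing 2^31 blocks of 2^13 bytes each gives 2^44 bytes.
```python
BLOCK_SIZE = 8 * 1024                       # 8K blocks, as in the text
FILE_SIZE = 1024 ** 3                       # data split across 1G files
BLOCKS_PER_FILE = FILE_SIZE // BLOCK_SIZE   # 131072 blocks per backing file

def locate(block_index):
    """Map a flat block index to (backing-file number, byte offset)."""
    file_no = block_index // BLOCKS_PER_FILE
    offset = (block_index % BLOCKS_PER_FILE) * BLOCK_SIZE
    return file_no, offset

print(locate(200_000))  # (1, 564658176): block 200000 sits in the second 1G file
```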
systems_science
https://oceantourism.org/blue_tourism_tools/national-marine-sanctuaries-visitor-counting-process/
2024-04-20T12:51:58
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817650.14/warc/CC-MAIN-20240420122043-20240420152043-00711.warc.gz
0.891896
183
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__166651567
en
This paper focuses on the development of a systematic data collection effort that allows managers to better understand the visitors to marine resource areas managed by NOAA Office of National Marine Sanctuaries (NMS). Through the National Marine Sanctuary Visitor Counting Process (NMS-COUNT), resource managers will gain valid and reliable data and data collection methodologies to advance predictive capability and understanding of visitors. While various federal and state agencies and Coastal Treaty Tribes collaborate in the management of coastal and marine areas, there is little compatibility in methods for estimating visitation. The NMS-COUNT process offers an iterative framework that allows local management and stakeholders to contribute to the understanding of visitor use at an NMS unit throughout each phase of the process. Building off the Interagency Visitor Monitoring Framework, the NMS-COUNT process focuses on visitation estimates and direct communication with managers and researchers to develop and implement the most efficient methodology.
systems_science
https://devilan-remote-control-visualization-tool-for-developers-ios.soft112.com/
2018-12-14T08:10:47
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825495.60/warc/CC-MAIN-20181214070839-20181214092339-00040.warc.gz
0.792445
502
CC-MAIN-2018-51
webtext-fineweb__CC-MAIN-2018-51__0__252748045
en
1. Standard configuration and monitoring tool for synertronixx devices and modules. Search, find and configure devices in LAN and WLAN.
2. Use the app as a TCP/IP socket based remote control (RC) and visualization tool.
More info for "Configuration tool":
- DeviLAN uses UDP broadcast messages to request and change network configuration for modules like CAN2Web-Advanced or CAN2Web-Professional. TCP/IP communication is used for setting parameters and monitoring the device.
- Configure and monitor CANIO modules connected to the CAN bus via CAN2Web.
More info for "Remote control & Visualization":
- Use the DeviLAN app as a TCP/IP socket based remote control (RC) and visualization tool
- Create your own remote control and send accelerometer data to control your (embedded) device
- No iOS programming necessary, just send the configuration data via socket
- With simple commands add sliders, switches, buttons, textfields, progress bars, webview, scroll texts, graphs, maps ...
- segment/radio-buttons and pickers.
- Create bar graphs, pie charts, curves and other complex graphics and more ...
- Use the PC based test tool (remote control & visualization server) to see how RC works
- Use the free RC & visualization sample code for Linux to start your project
- Use the free Python scripts to control a Raspberry Pi (Pi-Finder, RC-Server, Simple webserver)
- Simply modify the scripts to control your Raspberry Pi hardware and other peripheral devices.
- Please use the product manual and help webpages to get more info
Requires iOS 7.0 or later. Compatible with iPhone, iPad, and iPod touch.
DeviLAN - Remote control & Visualization tool for developers is a free software application from the System Maintenance subcategory, part of the System Utilities category. The app is currently available in English and it was last updated on 2012-07-29. The program can be installed on iOS.
DeviLAN - Remote control & Visualization tool for developers (version 1.15) has a file size of 5.66 MB and is available for download from our website. Just click the green Download button above to start. Until now the program has been downloaded 0 times. We have already checked that the download link is safe; however, for your own protection we recommend that you scan the downloaded software with your antivirus.
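From the device side, the "just send the configuration data via socket" workflow might look roughly like the sketch below. This is a hedged illustration: the host, port and command strings are placeholders, not the actual DeviLAN protocol, whose real command syntax is defined in the product manual.
```python
import socket

# Placeholder address of the iOS device running the DeviLAN app (assumed).
APP_HOST, APP_PORT = "192.168.1.50", 5000

# Hypothetical configuration commands; the real syntax is documented
# in the DeviLAN product manual, not invented here.
commands = [
    "add slider id=1 label=Speed min=0 max=100",
    "add switch id=2 label=Power",
]

with socket.create_connection((APP_HOST, APP_PORT), timeout=5) as sock:
    for cmd in commands:
        sock.sendall((cmd + "\n").encode("utf-8"))
    # Read back a control event (e.g. slider moved) reported by the app.
    event = sock.recv(1024).decode("utf-8")
    print("received:", event)
```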
systems_science
https://shpark.org/publication/park2020diverse/
2024-02-25T14:22:15
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474617.27/warc/CC-MAIN-20240225135334-20240225165334-00590.warc.gz
0.899505
204
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__60720193
en
Multi-agent trajectory forecasting in autonomous driving requires an agent to accurately anticipate the behaviors of the surrounding vehicles and pedestrians, for safe and reliable decision-making. Due to partial observability in these dynamical scenes, directly obtaining the posterior distribution over future agent trajectories remains a challenging problem. In realistic embodied environments, each agent's future trajectories should be both diverse, since multiple plausible sequences of actions can be used to reach its intended goals, and admissible, since they must obey physical constraints and stay in drivable areas. In this paper, we propose a model that synthesizes multiple input signals from the multimodal world (the environment's scene context and interactions between multiple surrounding agents) to best model all diverse and admissible trajectories. We compare our model with strong baselines and ablations across two public datasets and show a significant performance improvement over previous state-of-the-art methods. Lastly, we offer new metrics incorporating admissibility criteria to further study and evaluate the diversity of predictions.
systems_science
https://ivorix.com/category/time-series-analysis/page/7/
2021-07-27T07:50:39
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153223.30/warc/CC-MAIN-20210727072531-20210727102531-00591.warc.gz
0.93245
295
CC-MAIN-2021-31
webtext-fineweb__CC-MAIN-2021-31__0__184963100
en
Regression analyses are a popular statistical tool for illustrating trends. They are also used in various types of risk assessment such as the Jensen and Treynor ratios. The basic method of linear regression seeks to construct a straight line in a two-dimensional system of coordinates such that all data points within the system of coordinates lie as near as possible to this line. The straight line constructed in this manner is described by the two variables alpha (intercept, intersection of the straight line with the y-axis) and beta (slope, gradient of the straight line). For every data point, the point n,y of the regression line can then be calculated by means of these two variables. The formulas for the calculation of alpha and beta in the analysis of time series are:
$$\beta = \frac{N\sum_{n=1}^{N} n\,y_n - \sum_{n=1}^{N} n \,\sum_{n=1}^{N} y_n}{N\sum_{n=1}^{N} n^2 - \left(\sum_{n=1}^{N} n\right)^2} \qquad\qquad \alpha = \frac{\sum_{n=1}^{N} y_n - \beta \sum_{n=1}^{N} n}{N}$$
In these equations, N is the number of data points in the time series, i.e. the number of days, for example; n is the number of the data point, i.e. 1 for the first and 1000 for the thousandth data point in the time series. That is to say, y-values exist only for the natural numbers (n) on the x-axis. The curve of the time series thus arises through the connection of all points n,y of the time series. Hence, the calculation for time series of a constant length is relatively easy, since the denominators in the formulas for the calculation of alpha and beta all become constants.
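In code, the computation reads straight off the formulas, with n running from 1 to N as in the text (a plain-Python sketch):
```python
def linear_regression(y):
    """Return (alpha, beta) for a time series y indexed n = 1..N."""
    N = len(y)
    n_vals = range(1, N + 1)
    sum_n = sum(n_vals)                      # constant for a fixed N
    sum_n2 = sum(n * n for n in n_vals)      # constant for a fixed N
    sum_y = sum(y)
    sum_ny = sum(n * yn for n, yn in zip(n_vals, y))
    beta = (N * sum_ny - sum_n * sum_y) / (N * sum_n2 - sum_n ** 2)
    alpha = (sum_y - beta * sum_n) / N
    return alpha, beta

# A noiseless series y = 2 + 0.5 * n recovers alpha = 2.0, beta = 0.5:
print(linear_regression([2.5, 3.0, 3.5, 4.0, 4.5]))
```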
systems_science
https://www.radionl.com/2018/02/08/future-home-continues-tour-with-a-stop-in-kamloops/
2018-12-16T18:32:01
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827963.70/warc/CC-MAIN-20181216165437-20181216191437-00605.warc.gz
0.945597
170
CC-MAIN-2018-51
webtext-fineweb__CC-MAIN-2018-51__0__133372369
en
The future home has arrived in the River City. Senior Market Manager for the Telus Pure Fibre Health Team, Clare Adams says, the home is meant to showcase the kinds of innovative technology that homes could see in their future. “From facial recognition software as you enter the home, to a smart fridge, and smart cooktops, that really changes the way that we plan and cook our meals.” Clare says, there’s also a health monitoring system, and virtual reality gear. “Like any kind of a concept vehicle it’s all things that are sort of predicted in, I’d say, the near future.” The Telus future home will be on display at Aberdeen mall 10 a.m. til 6 p.m daily, until February 6th.
systems_science
https://www.fema.biz/en/applications/ventilation-air-conditioning.php
2022-12-10T04:36:26
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711712.26/warc/CC-MAIN-20221210042021-20221210072021-00762.warc.gz
0.921975
128
CC-MAIN-2022-49
webtext-fineweb__CC-MAIN-2022-49__0__196933403
en
Ventilation and Air Conditioning
In air conditioning and ventilation systems, blowers and filters are monitored by measuring the differential pressure. Using differential pressure switches, it is possible, for example, to transmit readings to a central control unit as soon as the filter has attained a specified degree of contamination. Differential pressure switches can be employed both for the continuous monitoring of filters and for the control of pressure differentials between rooms or zones. Frost protection thermostats protect against freezing, and flow sensors provide information on the condition and performance of blowers. In the hydraulic portion of these facilities, flow monitors and temperature sensors are employed.
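The filter-monitoring pattern described above reduces to a threshold comparison on the pressure drop across the filter. A minimal sketch (the threshold value and reporting function are assumptions for illustration):
```python
FILTER_ALARM_PA = 250.0  # assumed switching threshold, in pascals

def check_filter(p_upstream_pa, p_downstream_pa, notify):
    """Report to the control unit once the filter's pressure drop is too high."""
    dp = p_upstream_pa - p_downstream_pa
    if dp >= FILTER_ALARM_PA:
        notify(f"filter contaminated: differential pressure {dp:.0f} Pa")
    return dp

check_filter(400.0, 120.0, notify=print)  # dp = 280 Pa -> alarm is raised
```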
systems_science
https://oliviaylee.github.io/
2024-02-28T15:00:13
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474737.17/warc/CC-MAIN-20240228143955-20240228173955-00219.warc.gz
0.925749
516
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__97269044
en
Hello! I am a final year undergraduate student at Stanford University (Class of 2024), pursuing a B.S. in Symbolic Systems with a minor in Mathematics and a coterminal M.S. in Computer Science. I conduct research with Stanford’s IRIS Lab which studies intelligence through robotic interaction at scale, affiliated with the Stanford Artificial Intelligence Laboratory (SAIL) and Stanford Machine Learning Group. I am fortunate to be mentored by Professor Chelsea Finn, Suraj Nair, and Annie Xie.
My research interests span robotics, machine learning, and computer vision. I’m interested in enabling robots to learn generalizable representations from diverse datasets, and refine them through interaction for performing complex tasks in the real world. I’m especially interested in:
- Visual pretraining and representation learning: I am excited by the potential of embodied agents learning skill and object representations via pretraining on large, diverse datasets, and using them for sample-efficient exploration or downstream tasks.
- Interactive learning from multimodal human data: Humans communicate goals using various modalities, from language to physical corrections. I hope to facilitate human-compatible robot behaviors by enabling robots to process multimodal inputs and feedback, potentially leveraging large pretrained models.
- Continual data collection and learning: Ideally robots should continually acquire experience and skills with limited supervision. I aim to improve autonomous exploration methods for scalably collecting in-domain robot data and adapting to novel environments.
Inspired by my interdisciplinary coursework, I am drawn to research leveraging concepts in cognitive science for robot learning and visual understanding. I aim to better understand human cognitive processes, such as multimodal perception, curiosity, and interactive learning, to develop human-inspired learning algorithms for robotics.
I am a U.S. citizen who grew up and was educated in Singapore. Besides research, I worked as a software engineer at Salesforce and at several startups, ranging from deep tech (quantum computing) to B2C companies based in Southeast Asia and the United States. I also studied abroad at the University of Oxford (Magdalen College) in Fall 2022, where I studied graph representation learning and philosophy of mind, and tried my hand at rowing! I also enjoy playing tennis, hiking, reading (+ occasionally writing) science fiction, and brush calligraphy.
If any of the above sounds interesting to you, I would love to hear from you! Feel free to reach me at oliviayl [at] stanford [dot] edu.
systems_science
http://stpeteautoservice.com/computerized-engine-analysis/
2018-02-19T21:45:25
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812841.74/warc/CC-MAIN-20180219211247-20180219231247-00764.warc.gz
0.901882
444
CC-MAIN-2018-09
webtext-fineweb__CC-MAIN-2018-09__0__264490330
en
Computerized Engine Analysis And Programming
Your modern vehicle’s engine is a highly sophisticated piece of equipment. The days of your father’s gas-guzzler are long gone; instead, Federal Exhaust Emission and Fuel Economy regulations demand that today’s vehicles be equipped with electronic engine control systems to curb carbon emissions and increase fuel efficiency. With technically advanced control systems taking the place of simple engine components, common maintenance services such as tune-ups are also a thing of the past. Regular services (such as spark plug and filter replacements) are still required, as well as a computerized analysis of your vehicle’s control computer. Our factory-trained technicians are equipped with state-of-the-art equipment, the knowledge, and the experience to fix it right the first time.
Here’s How Your Modern Vehicle’s Control Computer Operates:
A network of sensors and switches converts engine operating conditions into electrical signals and monitors them. The computer receives this information and, based on instructions coded within its program, sends commands to a multitude of different systems such as ignition, fuel, body control, and emission control, just to name a few. Whenever a problem arises (sometimes signaled by that nagging “check engine” light), our service pros check whatever command is prompted, in addition to the status of your engine control computer and sensors. That way you’ll know if your vehicle’s performance issue is caused by a real problem, or just a sensor/computer issue. We find the problems that other shops can’t. Best of all, we provide you with a worry-free nationwide warranty so you can drive with confidence knowing that it’s fixed right the first time.
Here’s a Brief Overview of Your Vehicle’s Sensory Components:
- Mass airflow sensor
- Throttle position sensor
- Manifold absolute pressure sensor
- Coolant temperature sensor
- Exhaust oxygen sensor
- Crankshaft position sensor
- Camshaft position sensor
Furthermore, if your vehicle’s computer needs to be replaced or reprogrammed, we can do it on site using OEM specifications.
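To make the sensors-to-commands flow concrete, here is a deliberately simplified sketch. The sensor names, range limits, and fuel-trim logic are invented for illustration and bear no relation to any real engine control computer:
```python
# Assumed plausible-range limits for two of the sensors listed above.
SENSOR_LIMITS = {
    "coolant_temp_c": (-40, 130),
    "o2_voltage": (0.0, 1.0),
}

def engine_control_cycle(readings):
    """One pass: validate sensor signals, then issue (toy) commands."""
    trouble_codes = []
    for name, (lo, hi) in SENSOR_LIMITS.items():
        value = readings[name]
        if not lo <= value <= hi:
            trouble_codes.append(f"out-of-range: {name}={value}")
    # Toy fuel command: lean the mixture slightly when the O2 sensor reads rich.
    fuel_trim = -0.02 if readings["o2_voltage"] > 0.45 else +0.02
    return {"fuel_trim": fuel_trim,
            "check_engine": bool(trouble_codes),
            "trouble_codes": trouble_codes}

print(engine_control_cycle({"coolant_temp_c": 92, "o2_voltage": 0.7}))
```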
systems_science
https://www.nuvia.com/projects/waste-characterization-automated-system/
2024-03-02T00:43:31
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475711.57/warc/CC-MAIN-20240301225031-20240302015031-00057.warc.gz
0.903102
211
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__164060028
en
We are pleased to share the success of our latest project aimed at changing the way radioactive waste stored in drums is handled and characterized. Our Czech NUVIATech Automation team has successfully designed, manufactured, and implemented the first part of a fully automated system that showcases the combination of our unique know-how in automation, radiation measurement, software development, and detector technology.
- Fully Automated Operation: Our systems offer a hands-free solution for characterizing radioactive waste, minimizing human exposure, and ensuring safety.
- Multi-Parameter Measurement: The operator can load up to 10 drums at a time onto the input conveyor, and each drum undergoes a comprehensive analysis, including weight measurement, radiation spectrum acquisition, and gamma dose rate measurement from various angles.
- Unique Design: The resulting product meets the complex set of client requirements, making it a unique offering.
Our fully automated system is a crucial step forward in managing radioactive waste responsibly. By streamlining the characterization process, we ensure the safety of both operators and the community.
systems_science
https://www.nbcorporation.com/engineering-info/frictional-resistance-required-thrust/
2024-04-13T05:27:53
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816586.79/warc/CC-MAIN-20240413051941-20240413081941-00769.warc.gz
0.918834
196
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__10305638
en
Frictional Resistance and Required Thrust
The static friction of a linear system is extremely low. Since the difference between the static and dynamic friction is marginal, stable motion can be achieved from low to high speed. The frictional resistance (required thrust) can be obtained from the load and the seal resistance unique to each type of system using the following equation:
$$F = \mu \cdot W + f_s$$
where $F$ is the required thrust, $\mu$ the dynamic friction coefficient, $W$ the applied load, and $f_s$ the seal resistance.
The dynamic friction coefficient varies with the applied load, preload, viscosity of the lubricant, and other factors. However, the values given in Table 1-35 are used for the normal loading condition (20% of basic dynamic load rating) without any preload. The seal resistance depends on the seal-lip condition as well as the condition of the lubricant; however, it does not change proportionally with the applied load, and is commonly expressed as a constant value of 2 N to 5 N.
[Figure: Applied Load versus Dynamic Friction Coefficient]
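The computation is a one-liner in code. In the sketch below, the friction coefficient is an assumed placeholder (consult Table 1-35 for actual values), and the seal resistance uses the constant 2 N to 5 N range given in the text:
```python
def required_thrust(load_n, mu, seal_resistance_n=3.0):
    """F = mu * W + f_s: thrust needed to move the carriage, in newtons."""
    return mu * load_n + seal_resistance_n

# Assumed mu = 0.003 for illustration (see Table 1-35 for the real value):
print(required_thrust(load_n=2000.0, mu=0.003))  # 6.0 + 3.0 = 9.0 N
```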
systems_science
https://remote-control.peatix.com/
2019-03-26T08:02:15
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204885.27/warc/CC-MAIN-20190326075019-20190326101019-00125.warc.gz
0.906709
551
CC-MAIN-2019-13
webtext-fineweb__CC-MAIN-2019-13__0__185723124
en
Saves water, saves time, waters in your absence, reduces fungus growth on leaves, reduces runoff.
Whether for surface watering, underground watering or drip watering, the principle is the same: a programmer orders solenoid valves to open their watering channel at a time and for a duration that you have planned. You can thus decide how to water the lawn, the tomatoes or the hedge at different times of the day. And this is the enormous advantage of automatic watering: each plant has different water needs, and the watering programmer can manage several watering channels.
To install automatic watering, it is therefore necessary to define the watering zones carefully, grouping plants with the same water needs into the same zone. Installing an automatic sprinkler system means creating a network of sprinkler pipes, each managed individually by a solenoid valve and all connected to the programmer. So you need:
- Water hoses and micro tubes
- An automatic sprinkler programmer
Start by drawing a plan of your garden, including the outline of your house and terrace and any obstacles, shelters, etc. Then mark the locations of your plantings, trying to identify areas: the vegetable garden, the hedge, the flower beds, etc. Each area will have similar water needs. You can then draw your watering lines. This raises the question of the choice of sprinklers. Caution: never put different types of sprinklers (for example drippers and micro-sprinklers, or fixed and mobile sprinklers such as nozzles and turbines) on the same watering line, because they have different watering times and different flow rates.
Calculate the flow from your tap. If it is not enough to run the entire network, split the installation into several watering circuits. Then check the pressure of your water supply: it must be at least 2 bar. If automatic drip irrigation is installed, a pressure reducer is required. Draw with a compass the range and angle of the rotating sprinklers. Then install your automatic sprinkler programmer. It can be attached to the tap, fitted directly to a water pipe, or mounted separately on the wall. Lay out your PVC pipe network, burying it in the case of automatic underground watering. As far as possible, place your water supply point in the middle of the watering line, in order to maintain pressure at the end of the line. Indeed, the pressure decreases by 0.25 bar every 10 m along a 25 mm garden hose. Place one solenoid valve per watering line, then your sprinklers: turbines, nozzles, drippers, micro-sprinklers, etc.
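The two sizing rules above (at least 2 bar at the sprinklers, minus 0.25 bar per 10 m of 25 mm hose) are easy to check before digging. A small sketch, where the 3-bar supply pressure is an assumed example value:
```python
def end_of_line_pressure(supply_bar, run_length_m, drop_per_10m=0.25):
    """Pressure left at the far end of a 25 mm line, per the rule of thumb above."""
    return supply_bar - (run_length_m / 10.0) * drop_per_10m

for run in (20, 40, 60):
    p = end_of_line_pressure(3.0, run)
    ok = "ok" if p >= 2.0 else "too low: split into more watering circuits"
    print(f"{run} m run: {p:.2f} bar ({ok})")
```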
systems_science
https://bonito.psm.msu.edu/research/
2021-02-25T19:27:53
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351454.16/warc/CC-MAIN-20210225182552-20210225212552-00381.warc.gz
0.861889
538
CC-MAIN-2021-10
webtext-fineweb__CC-MAIN-2021-10__0__188910022
en
Fungal evolution; Plant microbiomes; Truffle biology; Plant-fungal-bacterial interactions; Microbiome ecology and evolution Our research makes use of phylogenetics, high-throughput sequencing, isotope tracers and –omics approaches to better understand: (1) phylogenetic and functional diversity of plant-associated fungi (2) environmental and genetic factors that structure microbiome communities (3) the evolution and functional relevance of bacterial symbionts of fungi Our lab research has applications pertinent to agriculture, forestry, biodiversity and the sustainability of Earth’s life support systems. - Phylogenetic and functional diversity of tripartite plant-fungal-bacterial symbioses This project investigates the diversity, evolution and functions within a lineage of fungi, the Mucoromycota, implicated in terrestrialization of Earth. These fungi co-evolved with plants through innovations that include growth habits within the plant and on its surface. Intriguing, many of the plant-associated genera of these fungi carry specific bacterial endosymbionts within their cells only known from fungi. The evolution and functional ecology of these endobacteria remains unclear. This project will compare and analyze entire genomes to identify co-evolved symbiosis traits in plant-fungi-bacteria partners and assess the impact of bacterial endosymbionts on the function of their fungal host and its interaction with plants. Impact of production system, plant species and stress on whole plant microbiome and productivity Sustainable agriculture production is intimately linked to microorganisms that associate with plants, known as the plant microbiomes. Our research will reveal foundational knowledge on fungal, oomycete and bacterial microbiome characteristics of woody plant (poplar) and herbaceous (wheat/corn/soy;) agronomic crops grown under three production systems (conventional; organic; no-till). We will also investigate how applications of fungicides, herbicides and insecticides impact plant and soil microbiomes. - Great Lakes Bioenergy Research Center (GLBRC) – Harnessing the switchgrass microbiome The GLBRC is addressing interrelated knowledge gaps that currently limit the industrial scale production of specialty biofuels and bioproducts from purpose-grown bioenergy crops, in order to develop a new generation of sustainable lignocellulosic biorefineries. The Bonito lab is working to characterize the switchgrass microbiome and its impact on plant physiology in collaboration with other researchers in the GLBRC.
systems_science
https://www.boroughs.org/fullnotice.php?id=506
2019-12-13T15:23:36
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540564599.32/warc/CC-MAIN-20191213150805-20191213174805-00204.warc.gz
0.934282
196
CC-MAIN-2019-51
webtext-fineweb__CC-MAIN-2019-51__0__45285485
en
Feedback Sought on Cybersecurity Issues Affecting Local Governments in PAAugust 19th, 2019 Penn State University, in partnership with the Center for Rural PA, is conducting a research study to assess the cybersecurity readiness of rural and urban municipalities in PA. The survey seeks information about technology, as well as specific questions about information systems security. These questions will focus on what types of information technology your municipality uses, how much knowledge you have about the information technology and cybersecurity, as well as how much you use the various components of the information technology. For more information, visit https://www.linkedin.com/pulse/cybersecuritymunicipalities-pennsylvania-jungwoo-ryoo. The survey link has already been emailed to manager/secretaries and will be open until the end of September. Questions can be directed to [email protected].
systems_science
http://www.engineerdir.com/product/catalog/12097/index1.html
2018-10-19T18:17:35
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512421.5/warc/CC-MAIN-20181019170918-20181019192418-00556.warc.gz
0.833723
617
CC-MAIN-2018-43
webtext-fineweb__CC-MAIN-2018-43__0__245736667
en
S-310 to S-316 multi-axis tip/tilt platforms and Z-positioners are fast and compact units based on a piezo tripod design. They offer piston movement up to 12 µm and tilt movement up to 1.2 mrad (2.4 mrad optical) with sub-msec response and settling. The S-310 to S-316 systems are designed for mirrors and optics up to 25 mm diameter; the clear aperture is ideal for transmitted-light applications. The units can be mounted in any orientation.
Open / Closed-Loop Operation
In open-loop operation, the vertical position / platform angle roughly corresponds to the drive voltage (see the “Tutorial” section for behavior of open-loop piezos). The S-310 to S-315 open-loop models are ideal for applications where the position is controlled by an external loop based on data provided by a sensor (e.g. PSD, quad cell, CCD chip, ...). The S-316.10 closed-loop version allows absolute position control, high linearity and repeatability based on the internal ultra-high-resolution feedback sensor.
The S-310 to S-316 tilt platforms are equipped with three long-life, ceramic-encapsulated, high-performance PICMA® piezo drives. Five different versions are available:
- Open-loop Z-platforms: all three piezo linear actuators are electrically connected in parallel, providing vertical positioning (piston movement) of the top ring. Only one drive channel is required. The three piezo actuators are individually matched for equal displacement, providing straight motion with tilt errors of less than 70 µrad over the complete range.
- Open-loop Z, tip/tilt positioners: all three piezo linear actuators can be driven individually (or in parallel) by a three-channel amplifier. Vertical (piston movement) positioning and tip/tilt positioning are possible.
- Closed-loop Z, tip/tilt positioners: all three piezo linear actuators are equipped with strain gauge position feedback sensors and can be driven individually (or in parallel) by a three-channel amplifier/position servo-controller. Vertical positioning (piston movement) and tip/tilt positioning are possible. The integrated position feedback sensors provide sub-µrad resolution and repeatability.
Higher Performance Through Parallel Kinematics
S-31x series tip/tilt systems feature a single moving platform, parallel-kinematics design. Compared to stacked, multi-axis systems, the parallel-kinematics design provides faster response and better linearity with equal dynamics for all axes in a smaller package.
Applications:
- Laser cavity tuning
- Laser beam stabilization
- Laser beam steering & scanning
Where nanopositioning technology and motion control systems of the highest accuracy level are concerned, PI has been a leading supplier worldwide for many years.
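The parallel-kinematics idea, one platform carried by three matched actuators, can be pictured as a small geometry computation. This is illustrative only: the actuator radius, angular placement and sign conventions below are assumptions, not PI's internal model. For small angles, each actuator's extension is the platform's piston term plus the tilt contribution at that actuator's location.
```python
import math

R = 0.02  # assumed actuator pitch-circle radius, in metres

# Three actuators spaced 120 degrees apart under the platform.
ACTUATORS = [(R * math.cos(a), R * math.sin(a))
             for a in (math.radians(90), math.radians(210), math.radians(330))]

def actuator_strokes(z_m, theta_x_rad, theta_y_rad):
    """Small-angle map from (piston, tip, tilt) to three actuator extensions."""
    # A tilt theta_x about the x-axis raises points with positive y, and
    # theta_y about the y-axis lowers points with positive x (assumed signs).
    return [z_m + theta_x_rad * y - theta_y_rad * x for x, y in ACTUATORS]

# 6 um of piston plus 0.5 mrad of tip: per-actuator strokes in metres.
print(actuator_strokes(6e-6, 0.5e-3, 0.0))
```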
systems_science
http://cine2digits.co.uk/overview.html
2021-10-17T11:50:47
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585177.11/warc/CC-MAIN-20211017113503-20211017143503-00005.warc.gz
0.922896
305
CC-MAIN-2021-43
webtext-fineweb__CC-MAIN-2021-43__0__190578992
en
The concept is to "photograph" each individual frame in the film and build up an AVI video from these frames. A machine vision camera is used together with a macro lens to focus directly on the film frame in the projector gate. The film is lit from the rear using an RGB LED lighting system. Each time a frame arrives in the gate after pulldown, a sensor detects this and tells the control system to open the camera shutter, flash the LEDs on and then close the shutter. The resulting image is transferred from the camera to host PC where it is displayed for preview and also saved to disk if capture is enabled. Frames can be stored separately as TIFF, BMP or JPG files or, more usually, can be combined directly into an AVI file. Capture rate is not important as it has no bearing on the resulting playout rate of a captured AVI file. Generally, the speed is limited by the resolution of the captured image and the Firewire bus. 30fps is possible with a 1024x768 camera, my IMI 1388x1036 is limited to 15fps with its Firewire 1394A bus (400 Mb/s). In practice, 10fps is a nice rate to allow timely user adjustments if required during capture. The LED driver and Motor Control board is controlled via a USB connection to the PC. Motor control and LED exposure control can be local via hardware rotary controls and/or via software simulation of these controls using the mousewheel.
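The per-frame sequence described above (pulldown, sensor trigger, shutter open, LED flash, shutter close, transfer to the host) can be outlined as a simple control loop. This is an illustrative sketch with stub hardware classes, not the actual capture software:
```python
import time

class StubCamera:
    def open_shutter(self): pass
    def close_shutter(self): pass
    def read_frame(self): return b"<image bytes>"  # stand-in for FireWire transfer

class StubLeds:
    def flash(self): time.sleep(0.001)  # stand-in for the rear RGB LED flash

def capture_film(total_frames, camera, leds, frames):
    """Per-frame loop: expose with an RGB flash after pulldown, then save."""
    for _ in range(total_frames):
        # In the real rig a gate sensor signals that the frame has arrived
        # after pulldown; here we simply proceed to the exposure.
        camera.open_shutter()
        leds.flash()
        camera.close_shutter()
        frames.append(camera.read_frame())  # preview/display could happen here

frames = []
capture_film(3, StubCamera(), StubLeds(), frames)
print(len(frames), "frames captured")
```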
systems_science
https://rivera.canvasslabs.com/help/cfc_toc_file
2023-06-06T10:02:15
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652494.25/warc/CC-MAIN-20230606082037-20230606112037-00572.warc.gz
0.879463
256
CC-MAIN-2023-23
webtext-fineweb__CC-MAIN-2023-23__0__37818640
en
Canvass for Compliance (CFC)'s Table of Contents (ToC) File
Your key to mapping anonymized analysis results back to your project’s files.
Table of Contents File
Canvass for Compliance (CFC), formerly known as LiAnORT, performs blind audits to protect your intellectual property (IP). To avoid sending the file/directory structure of your project to our servers, CFC generates substitute paths that preserve only the extension of the original files. While doing so, a Table of Contents (ToC) file is generated locally so that CFC can later map these substitutes back to your original filepaths. It is important that you hold onto this file, as without it, CFC would not be able to map the results of your job back to your original files.
If run in command-line mode, the CFC client will store ToC files in the current working directory. If run interactively, the CFC client creates a hidden directory in your home directory named '.lianort' in which to store ToC files. When get commands are run, CFC will check both the current working directory and the .lianort directory for the correct ToC file.
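As an outside illustration of the substitute-path idea (not CFC's actual code, naming scheme, or ToC file format): each real path is replaced by an opaque name that keeps only the extension, and the mapping stays in a local table of contents.
```python
import json
from pathlib import PurePosixPath

def build_toc(paths):
    """Map real paths to extension-preserving substitutes; keep the ToC locally."""
    toc = {}
    for i, p in enumerate(paths):
        ext = PurePosixPath(p).suffix          # only the extension survives
        substitute = f"file{i:06d}{ext}"       # e.g. 'file000000.c' (assumed scheme)
        toc[substitute] = p                    # local mapping, never uploaded
    return toc

toc = build_toc(["src/net/driver.c", "include/api.h", "LICENSE"])
print(json.dumps(toc, indent=2))  # hold onto this: results refer to substitutes
```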
systems_science
https://frgglobal.com/it-infrastructure.html
2023-04-02T03:33:31
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950373.88/warc/CC-MAIN-20230402012805-20230402042805-00738.warc.gz
0.926006
203
CC-MAIN-2023-14
webtext-fineweb__CC-MAIN-2023-14__0__244389828
en
Using the latest technologies, FRG provide a consulting solution to translate your business requirements into effective infrastructure management solutions that ensure the integrity and high availability of your network and infrastructure. Our team of skilled developers provide programming and development services to meet the needs of our global clients. Using open script programming language and graphic design techniques our programmers create inspiring websites to promote your business effectively. When working with FRG you are dealing with a Consultant rather than a Reseller. Therefore, through independent thinking we provide our customers with a solution that best suits their needs and environment. We look at our customer’s total package needs, providing a complete package of technology consulting, product evaluations, product supply and implementation, training and post-sales support. Our multi-product platform and service offering ensures we offer our customers the best solution from across the spectrum of vendors and products. Our emphasis is to strike a partnership with the customer where the customer takes care of his core business and we take care of his IT infrastructure.
systems_science
http://hpux.cs.utah.edu/
2018-12-18T11:29:43
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829140.81/warc/CC-MAIN-20181218102019-20181218124019-00309.warc.gz
0.960181
156
CC-MAIN-2018-51
webtext-fineweb__CC-MAIN-2018-51__0__114406451
en
However, we are in communication with the archive's sponsor/maintainer and trying to work out the possibility of serving the HP/UX archive off of one of our existing Linux systems. Unfortunately, the software which currently handles the process of mirroring the archive between systems is written and compiled for HP/UX only - and will not run on Linux as-is. The archive maintainer is hopeful that he can port the software over to Linux (he's currently serving the archive from a Linux system) but until he has done so, there is no way to accomplish this feat. So the short answer is, we're trying to work out a new HP/UX archive server, but there is currently no ETA or estimation of success in the project. Sorry.
systems_science
https://www.dataprotectworks.com/Remote-Office-Protection.asp
2019-05-19T10:52:18
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254751.58/warc/CC-MAIN-20190519101512-20190519123512-00187.warc.gz
0.885603
315
CC-MAIN-2019-22
webtext-fineweb__CC-MAIN-2019-22__0__52979429
en
Arcserve Remote Office Protection Time to tighten-up the remote data backup screws Remote data backup is slow and risky And, trying to manage it without on site IT staff? Now, that’s the stuff of nightmares. The risk to data is obvious when you consider: - 70% of data lives outside of corporate data centers - More than 30% of companies don’t perform remote data backups due to staff and resource limitations Remote data backup and recovery makes your job easy—no matter where your people work Arcserve UDP helps you deliver corporate headquarters-level data protection across all of your locations—no matter where or what those offices are—branch, home, or virtual. It’s a remote backup and recovery solution that combines the benefits of granularity, performance, and resiliency—at an affordable cost—so you can: - Reduce storage capacity demands - Meet rigorous service-level agreements - Keep employee productivity high How do we deliver on that promise? For starters, we give you: - Easier and faster remote data backup and recovery on both physical and virtual servers - Consolidated data storage for centralized backup or off site disaster recovery - Continuous data protection to help keep critical data accurate, while speeding up recovery - High availability for critical remote systems, applications, and data Remote data protection is critical to your business. So, get proactive and ensure every one of your remote servers, desktops, laptops, and mobile devices are adequately protected.
systems_science
https://futurewei-cloud.github.io/ARM-Datacenter/qemu/debug-qemu-qtest-accelerators/
2020-12-01T02:43:14
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141542358.71/warc/CC-MAIN-20201201013119-20201201043119-00342.warc.gz
0.91275
333
CC-MAIN-2020-50
webtext-fineweb__CC-MAIN-2020-50__0__100524961
en
Each QTest will decide which accelerators it uses. For example, the test might try to use ‘kvm’, which causes QEMU to use KVM to execute code. Or the test might try to use ‘TCG’ support, where QEMU will emulate the instructions itself. Regardless of which path is chosen, this choice inevitably results in different code paths getting exercised inside QEMU itself. In some cases when developing QEMU code, we might want to force certain code paths which are specific to different accelerators. In this case we have a few things to decide. Take, for example, the case where we want to force a specific TCG code path on an aarch64 machine for an aarch64 QTest. We will use the tests/qtest/arm-cpu-features test as an example. This test selects the specific accelerator(s) to use for each test case. It is possible that we might want to use a specific accelerator to force that code path in QEMU. We might want to use TCG instead of KVM, for instance. In this case we would need to edit the test, for instance tests/qtest/arm-cpu-features.c, and replace the use of “kvm” with “tcg”, or in cases where both -accel kvm and -accel tcg are used, just remove the kvm. This will have the effect of forcing the use of a specific code path, which can be very useful when debugging or validating a change.
systems_science
https://ictqt.ug.edu.pl/job/postdoctoral-researcher-from-ukraine-for-the-maestro-project-2-2/
2023-12-01T06:40:54
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100276.12/warc/CC-MAIN-20231201053039-20231201083039-00863.warc.gz
0.83018
858
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__306077743
en
Project title: Relativistic Causality and Information Processing (in Polish: Przyczynowość relatywistyczna a przetwarzanie informacji)
The project is financed by the National Science Centre (NCN). We are looking for a Postdoctoral Researcher from Ukraine to work in the New Quantum Resources Group at the International Centre for Theory of Quantum Technologies (ICTQT). ICTQT was created in 2018 within the International Research Agendas Programme of the Foundation for Polish Science, co-financed by the European Union from the funds of the Smart Growth Operational Programme, axis IV: Increasing the research potential (Measure 4.3). The founders of ICTQT are Marek Żukowski (the director) and Paweł Horodecki (the research group leader). The Centre’s official partner is IQOQI-Vienna of the Austrian Academy of Sciences. The Centre consists of 6 research groups: Multiphoton Quantum Optics for Quantum Information (leader Marek Żukowski); New Quantum Resources (leader Paweł Horodecki); Foundational Underpinnings of Quantum Technologies (leader Ana Belen Sainz); New Quantum Resources and Thermodynamics (leader Michał Horodecki); Quantum Cybersecurity and Communication (leader Marcin Pawłowski); Quantum Open Systems in Relation to Quantum Optics (leader Łukasz Rudnicki).
About the group
The broad aim of the New Quantum Resources Group is to perform research concerning quantum phenomena which could be used for quantum information processing. Exemplary goals of the group are:
- Connections between quantum computational speedup and contextuality/Bell-“nonlocality”
- New protocols on randomness amplification
- Research on communication networks
- Connections between violations of Bell inequalities and of non-contextuality and the quantum advantage in communication complexity
- Quantum batteries as open quantum systems
- Relativistic quantum information processing
About the “Relativistic Causality and Information Processing“ project:
The project’s central goal is to study information-processing properties within the broad framework of „within-and-beyond-quantum” theories (relativistic quantum physics, PR-boxes, GPTs, etc.). To this end, an integrative methodology combining tools from, among others, quantum information, quantum field theory, relativity and cryptography will be developed. Finally, protocols for physical implementations and/or simulations of some of the theoretical findings will be developed.
Tasks:
- Active scientific research.
- Presentation and discussion of ideas and results with a diverse audience at the ICTQT and at external events.
- Participation in mentoring of PhD students.
- Participation in activities organized by the ICTQT.
- Active participation in seminars, group meetings, etc.
Requirements:
- PhD degree in physics, mathematics, or computer science (PhD degree obtained in 2015 or later).
- The candidate should be interested in mathematical and conceptual foundations of quantum mechanics, quantum information, relativistic physics and related topics, especially those within the research agenda of the project.
- The candidate should be committed to working collaboratively within an inclusive and diverse environment.
- Good written and oral communication skills are appreciated.
We offer:
- Full-time employment in a rapidly developing unit, the International Centre for Theory of Quantum Technologies at the University of Gdansk.
- Possibility of accommodation with the family.
- Scientific and organizational support.
- Basic equipment and core facilities.
- Friendly, inspiring, interdisciplinary environment.
Required documents:
- curriculum vitae;
- a research resume with a list of research projects in which the candidate took part (with specification of the role); PDF files of publications (if there are any); a list of talks at conferences and workshops, and a list of prizes and awards;
- documents confirming scientific degrees (copy of PhD diploma, or
Please submit the documents via email to ictqt[at]ug.edu.pl
systems_science
https://golublab.broadinstitute.org/
2024-04-19T15:57:41
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817438.43/warc/CC-MAIN-20240419141145-20240419171145-00084.warc.gz
0.912731
137
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__52832339
en
The Golub Lab is based at the Broad Institute of MIT and Harvard in Cambridge, Massachusetts. Our research group uses functional genomics and chemical biology to discover novel biological and therapeutic insights in cancer. We are passionate about bringing together a multi-disciplinary scientific community focused on understanding the basic molecular mechanisms of cancer and applying this knowledge to impact the future of cancer medicine. The work in our lab includes systematic and comprehensive elucidation of cancers in terms of their molecular profiles and functional responses to perturbation, and the development of novel approaches to therapeutic discovery. Scientists in the lab share ideas and launch collaborative projects to tackle key challenges by partnering with scientists across the Broad Institute and beyond.
systems_science
https://talent.emcap.com/companies/project44/jobs/30023787-staff-software-engineer-scraping-web-crawling-platform
2023-12-10T23:15:07
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679102697.89/warc/CC-MAIN-20231210221943-20231211011943-00541.warc.gz
0.921071
976
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__261117551
en
Staff Software Engineer - Scraping/Web Crawling Platform
At project44 we’re on a mission - to make supply chains work. project44 optimizes the movement of products globally, delivering better resiliency, sustainability, and value for our customers. As the supply chain connective tissue, we operate the most trusted end-to-end visibility platform that tracks more than 1 billion shipments annually for the world’s leading brands. The undisputed leader in the market, project44 was named the Leader in the Gartner Magic Quadrant, #1 in FreightWaves’ FreightTech 25, and the Customer’s Choice in Gartner Peer Insights’ Voice of the Customer report. project44 is headquartered in Chicago with a diverse and fast-growing, global workforce. If you’re eager to be part of a winning team that works together to solve some of the most challenging supply chain challenges every day, let’s talk.
We're looking for a Staff Engineer with experience as a technical lead, delivering projects with a team of 5-7 engineers. You will work directly with engineering leadership to shape the future of our technology stack and products, collaborating with product managers, product designers, data scientists and other engineering teams. You will be a thought leader designing, building, and implementing our best-in-class integrations platform with a strong focus on accelerating how project44 connects to the world’s logistics networks.
Responsibilities:
- Capture massive volumes of data from web and mobile sources, and design the architecture for extraction, deduplication, classification, clustering, and filtering;
- Design and develop distributed web crawlers, independently solving the various problems encountered during development;
- Research and develop web-page information extraction algorithms to improve the efficiency and quality of data capture;
- Analyze and warehouse crawled data; monitor the crawler system and raise alarms on abnormal conditions;
- Design and develop data collection strategies and anti-blocking rules to improve the efficiency and quality of data collection;
- Design and develop core algorithms according to the system's data processing flow and business functional requirements;
- Own the development of these tools, services, and workflows to enhance data management, crawl/scrape analysis, reports, and workflows;
- Control the testing of the data and the scraping to guarantee compliance, quality, and accuracy;
- Monitor the procedure to detect and address any problems with breaks, and scale scrapes as necessary;
- Create systems for handling large volumes of unstructured data while developing a regulatory-update tool for legal clients;
- Create a tool that gathers information on regulatory updates for legal clients by using scraping bots on websites, especially regulatory websites.
Requirements:
- Proficient in Python; familiar with one or more of the commonly used crawler frameworks, such as the Scrapy framework or other web scraping frameworks, with independent development experience.
- 10+ years of engineering experience, including 7+ years working with web scraping, crawlers, and data extraction.
- Familiar with vertical search crawlers and distributed web crawlers; deep understanding of web crawler principles; rich experience in data crawling, parsing, cleaning, and storage projects; mastery of anti-crawler techniques and ways around them.
- Familiarity with common data storage and data processing technologies is preferred.
- A solid foundation in data structures and algorithms is preferred.
- Experience in distributed crawler architecture design, IP farms and proxies is preferred.
- Familiar with commonly used frameworks and techniques such as ssh, multi-threading, and network communication programming.
- Experience mentoring 2-3 engineers is preferred.
Diversity & Inclusion
At project44, we're designing the future of how the world moves and is connected through trade and global supply chains. As we work to deliver a truly world-class product and experience, we are also intentionally building teams that reflect the unique communities we serve. We’re focused on creating a company where all team members can bring their authentic selves to work every day. We’re building a company that every one of us at project44 is proud to work for, and our journey of becoming a more diverse, equitable and inclusive organization, where all have a sense of belonging, is shaped through the actions of our leadership, global teams and individual team members. We are resolute in our belief that each team member has an equal responsibility to mold and uphold our culture.
systems_science
http://www.simulistics.com/node?page=12
2013-05-19T05:48:59
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383508/warc/CC-MAIN-20130516092623-00017-ip-10-60-113-184.ec2.internal.warc.gz
0.848013
856
CC-MAIN-2013-20
webtext-fineweb__CC-MAIN-2013-20__0__42780416
en
Simile can extract a 2-dimensional array of data from an image file, with each pixel in the image corresponding to a datapoint. The image can be in any format supported by the Tkimg package, which is to say, most of them. If you have a .csv file with a 2-dimensional grid of data items, this can be loaded directly into a 2-dimensional array file parameter in Simile. For other 2-D formats that can be read into a spreadsheet, you can create the .csv file.
Once you have specified the data for all the file parameters, you can save that data in a scenario file (extension .spf, sometimes referred to as a parameter metafile). This file does not necessarily contain the actual data; it can contain references to the files that actually contain the data. It is in an XML format, allowing it to be examined and edited if required.
The normal way to use the file parameter system is to create and save references to data in other files. To do this, click the 'pencil' icon to the right of the data entry field. This brings up the table data dialogue, which allows you to specify the file containing the data, and how to get the data from the file. Once this has been done, hovering over the entry field will produce a popup showing which file contains the actual data. Hitting the 'pencil' button again will show the reference information and allow it to be altered.
The File Parameters dialogue will contain entry fields for both fixed and variable parameters. Fixed parameters must be given a value; their captions in the dialogue will be shown in red until a correctly formatted value has been supplied. Variable parameters do not need a value, and can be left empty, so the value can be set by a slider while running the model. However, it is possible to enter a time series in the File Parameters dialogue, which will cause the variable parameter's value to be set at a series of specified time points while the model is running.
The simplest way to set a file parameter is to type its value straight into the entry box beside its caption in the file parameter dialogue. For single values, this is the only way to enter them initially. The tick button replaces the currently saved value with what has been typed in, while the cross button reverts the entry field to the saved value.
The Table Data dialogue is used to specify how to extract data from a file of any one of a number of formats so that it can be used in a Simile component. Its main use is for getting data for file parameters, but it also appears when creating a table function that is built into a component's equation. There are four tabs in the dialogue, corresponding to the four varieties of supported data format.
The File Parameter dialogue allows the modeller to specify where every parameter in the model gets its values from. It contains an entry box for each parameter, which displays the values associated with it, and when you hover over it it pops up information about where those values come from. Hovering over the parameter captions, listed to the left of the entry fields, displays the units and dimensions of the required data.
It may be that you want to run your model with many different datasets, and these datasets each have different numbers of records which are supposed to correspond to instances of a submodel. Or perhaps you have nested submodels, where the membership of the inner submodel is different in each instance of the outer submodel, with the actual memberships being determined by the numbering of data records in a file.
systems_science
https://www.meteomodem.com/meteorology
2023-12-01T06:52:44
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100276.12/warc/CC-MAIN-20231201053039-20231201083039-00075.warc.gz
0.905956
148
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__137635045
en
Upper-air observation is a key activity of operational meteorology, being the only source of in-situ measurements in the atmosphere. These measurements are usually performed once or twice a day from meteorological sites distributed around the world, whether the soundings are manual or automatic. The purpose of this kind of observation is to produce and disseminate meteorological messages that are interpretable by forecasters. Thus, radiosondes feed Numerical Weather Prediction models to establish weather forecasts, early warnings, and more.
Radiosonde systems are also crucial for climatology, the science that focuses on describing and analyzing climate change over long-term periods. That is the reason why accuracy and traceability of measurements remain of major importance.
systems_science
https://www.wissenschaftsrat.de/EN/Fields-of-Activity/Research_and_higher_education_system/Infrastructures/infrastructures_node.html
2023-12-03T11:50:26
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100499.43/warc/CC-MAIN-20231203094028-20231203124028-00739.warc.gz
0.872227
398
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__257230904
en
A comparatively recent topic area within the range of activities of the German Science and Humanities Council (Wissenschaftsrat, WR) concerns the infrastructure needs in the research and higher education system. Research infrastructures play a central role when it comes to enabling research activities in the first place – and this is not limited to “large-scale” equipment or particularly extensive infrastructures. The WR believes that scientific collections, libraries, archives and data collections, which are subsumed under the term “information infrastructures”, are also of fundamental importance for research, higher education and early career support in all scientific disciplines. Research infrastructures are a constitutive part of the research and higher education system, without which it would be difficult to gain new scientific insight. It is therefore a public task to ensure their availability for the world of science and research. In the humanities and social sciences, research infrastructures make an important contribution to gaining knowledge about social problems and making our cultural heritage accessible. For example, digitally processed specialist information offers completely new possibilities for the research-based indexing of library, archive and collection holdings. Thus, worldwide access to research information is enabled and new virtual working environments for researchers and scholars can be created. In 2014, the WR developed an overall concept for the further development of information infrastructures, on the basis of which the Joint Science Conference founded a ”German Council for Scientific Information Infrastructures” (Rat für Informationsinfrastrukturen, RfII) as an overarching coordination and advisory body. Recommendation on the topic: Recommendations on the future of the library network system in Germany, 2011 (German version only) Empfehlungen zur Zukunft des bibliothekarischen Verbundsystems in Deutschland (Drs. 10463-11), Januar 2011
systems_science
http://www.newschools.org/venture/brightbytes
2014-04-24T23:51:55
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00059-ip-10-147-4-33.ec2.internal.warc.gz
0.946717
126
CC-MAIN-2014-15
webtext-fineweb__CC-MAIN-2014-15__0__21514719
en
BrightBytes uses data to enable the creation of effective 21st Century learning environments. The company's SaaS-based analytics platform, Clarity, measures the impact of technology use on student achievement by collecting school data and analyzing it within a proprietary framework. Based on this analysis, (1) schools receive customized roadmaps for improvement, as well as the resources to put the plans into action; (2) government entities receive measurements of progress that ensure accountability and target spending; and (3) ed-tech providers receive evidence of their products' effectiveness, along with data on the products and services needed throughout the system.
systems_science
http://www.keuwl.com/electronics.html
2023-05-28T19:19:52
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644506.21/warc/CC-MAIN-20230528182446-20230528212446-00534.warc.gz
0.838691
773
CC-MAIN-2023-23
webtext-fineweb__CC-MAIN-2023-23__0__150678196
en
With the growth of Arduino, Raspberry Pi and other rapid development systems, what can be achieved by makers, hobbyists and professionals in a short time frame is impressive. Keuwlsoft is adding to this area with its Bluetooth Electronics and IR Remote Creator apps. The Bluetooth Electronics app can connect to your project with Bluetooth, Bluetooth Low Energy or via a USB to Serial connection. The IR Remote Creator app can be used to control your project using the IR blaster found on some Android devices. You undertake any electronic project at your own risk. Please be careful.

Bluetooth Electronics App with Arduino Examples:
- This example demonstrates the slider elements from the app. Pulse Width Modulation (PWM) is used to control the brightness of three LEDs. An Arduino Uno and a Bluetooth HC-06 module are used.
- This example demonstrates how to communicate with an Arduino via an HC-06 Bluetooth module using the button controls within the app. To make it more interesting, we used these buttons to control relays which connected power to the motor/solenoid of an old RC car.
- This example demonstrates a cool effect that can be achieved with a line of LEDs turned on and off rapidly. An Arduino Nano and an HC-06 Bluetooth module were used.
- Uses two Bluetooth HC-06 modules to create a repeater, such that anything received on one of the modules is passed on to the other. Demonstrates the software serial on the Arduino, and the terminal controls in the app.
- An HC-SR04 ultrasonic distance sensor is used to measure distance and send the information via an HC-06 Bluetooth module to the app. The light indicator in the app changes colour depending on the distance measured.
- Monitors the digital and analogue inputs on an Arduino Mega. Uses an HC-06 Bluetooth module to connect to the Android device. Demonstrates the graph feature of the app.
- Monitors the analogue and digital inputs on the Arduino Uno. An XBee HC-06 Bluetooth module and shield are used to provide the Bluetooth connection.
- A DHT11 temperature and humidity sensor is read using an Arduino Uno. An XBee HC-06 Bluetooth module and shield are used to send the results to the app, which displays them on temperature and bubble gauge indicators.
- This example demonstrates how to change the baud rate and other settings on an HC-06 Bluetooth module. Two modules are required: one to communicate with the app and the other to be programmed with the Bluetooth AT commands.
- Two stepper motors are controlled either by the accelerometer on the Android device or by pad control elements in the app. Two 28BYJ-48 stepper motors are controlled with ULN2003 drivers and an Arduino Uno.
- A NeoPixel ring is controlled using an Arduino Uno connected to an Android device using an XBee Bluetooth module and shield.

IR Remote Creator App with Arduino Examples:
- This example uses the IR Remote Creator app with the NEC protocol to control a tri-colour LED with Pulse Width Modulation.
- Listens to IR transmissions and generates a raw timing pattern that can be used within the IR Remote Creator app to re-create the transmission.
- Arduino code to go with the default remote that comes with the IR Remote Creator app, for where the default remote will suffice and for those who want to get something working fast.
- Remote control an ISD1820 module to make sounds. Prank or tease your friends and family (in a nice way) with unique sounds you have recorded on the ISD1820 module.
More Arduino Examples:
- A DHT22 temperature and humidity sensor is combined with an LCD display to create a basic temperature and humidity meter. The maximum and minimum values are stored and shown when a button is pressed.
systems_science
https://keichenseer.com/author/kilian-eichenseer-phd/
2023-11-28T23:51:58
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100016.39/warc/CC-MAIN-20231128214805-20231129004805-00882.warc.gz
0.70044
118
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__178816582
en
I am developing novel statistical and computational methods for the Earth Sciences. In my current PostDoc at Durham University, I build a Bayesian stratigraphic model that can correlate and date geological sections using geochemical signatures. Other projects include modelling the latitudinal temperature gradient from sparsely sampled climate data, and an R package enabling reproducible, efficient workflows for palaeobiological research. Download my CV. PhD in Earth Sciences, 2021 University of Plymouth MSc in Palaeobiology & Sedimentology, 2016 BSc in Geosciences, 2015
systems_science
http://adaptivei.net/solutions/network-optimization/
2018-12-10T02:56:44
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823236.2/warc/CC-MAIN-20181210013115-20181210034615-00592.warc.gz
0.925151
373
CC-MAIN-2018-51
webtext-fineweb__CC-MAIN-2018-51__0__118476351
en
How's your WAN doing? There are so many moving parts to a well-oiled WAN these days: routers, firewalls, QoS, multiple carrier contracts, WAN optimization appliances. WAN optimization is still relevant for many types of applications, but there has been a big change in where your apps and data are located. The hyper-fast growth of SaaS/IaaS means that most companies now have business-critical apps located "off-net" from their corporate WAN. How do you provide the enterprise-grade performance and reliability that your customers expect when you're not in control of both ends? Traditional MPLS networks add unnecessary latency when remote users access SaaS/cloud services. Adding direct internet access at remote sites can reduce that latency, but aren't those the same broadband connections whose poor reliability helped you justify MPLS in the first place? The challenge for network IT teams is to provide performance and reliability for all apps and all data on the extended enterprise network, regardless of location, and it's crucial to get it right if your users need to access business-critical apps and data hosted both in the cloud and in enterprise data centers. Regardless of your existing WAN implementation (MPLS backhaul, hybrid WAN, split tunneling, internet VPN, or a mix of everything), SD-WAN solutions can bring a new level of simplicity, reliability, performance and cost-effectiveness to your enterprise. Adaptive Integration works exclusively with IT solution vendors who excel and innovate at pace with today's technology. We pride ourselves on our comprehensive awareness of not only our partners' solutions, but our customers' stresses, demands, and needs. Contact us to get started on your solution today.
systems_science
http://www.talisconsultants.com.au/key-projects/asset-management/pavement-management-and-dtims-v9-implementation/
2019-09-17T18:54:12
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573105.1/warc/CC-MAIN-20190917181046-20190917203046-00065.warc.gz
0.917937
234
CC-MAIN-2019-39
webtext-fineweb__CC-MAIN-2019-39__0__118728887
en
The Shoalhaven City Council manages approximately 1,263 km of sealed and 394 km of unsealed roads in the south-eastern coastal region of New South Wales. Talis assisted the Shoalhaven City Council asset management team to implement the latest v9 dTIMS pavement modelling tool to optimise road network expenditure. Talis worked with council staff to develop a bespoke pavement management model for the Shoalhaven road network. This involved analysis of condition data to determine deterioration patterns for the pavement and surface assets. Other model components, such as treatment selection business rules, were developed to replicate the council's current pavement management practices and processes. The model was then coded and implemented in dTIMS and used to model current and future road network investment requirements. The model creates pavement maintenance and renewal strategies for specifically defined road sections. The asset management team can use the system to calculate the budget levels required to maintain the network to defined levels of service, and to optimise treatment strategies so that the condition of the network is maximised for the available budget. Talis also provides hosting, training and support for the dTIMS system.
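To make the modelling idea concrete, here is a toy Python sketch of the kind of logic such a pavement model encodes. This is not dTIMS code or Talis's actual model; the deterioration rate, intervention threshold and treatment cost are invented purely for illustration:

```python
# Toy pavement deterioration model: a condition index (0-100) decays each
# year, and a resurfacing treatment is triggered below an intervention level.
RESURFACE_COST_PER_KM = 150_000   # hypothetical treatment cost
INTERVENTION_LEVEL = 40           # hypothetical condition threshold

def simulate_section(condition, years, decay_per_year=3.5):
    """Return (yearly conditions, total treatment cost) for one road section."""
    history, cost = [], 0
    for _ in range(years):
        condition -= decay_per_year            # invented linear deterioration
        if condition < INTERVENTION_LEVEL:     # treatment selection rule
            cost += RESURFACE_COST_PER_KM
            condition = 95                     # resurfacing restores condition
        history.append(condition)
    return history, cost

history, cost = simulate_section(condition=80, years=20)
print(f"20-year treatment cost for this section: ${cost:,}")
```

A real pavement model differs mainly in scale, not in kind: calibrated deterioration curves per asset type, many candidate treatments, and an optimiser choosing among them under a budget constraint.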
systems_science
http://www.chronos.co.uk/index.php/en/syncwatch/dual-role-for-syncwatch/1126-chronos-welcomes-the-implementation-of-an-ofcom-licensing-regime-for-gps-gnss-repeaters-in-the-uk.html
2013-05-21T10:55:21
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699899882/warc/CC-MAIN-20130516102459-00071-ip-10-60-113-184.ec2.internal.warc.gz
0.932253
590
CC-MAIN-2013-20
webtext-fineweb__CC-MAIN-2013-20__0__170496403
en
Chronos welcomes the implementation of an Ofcom licensing regime for GPS/GNSS repeaters in the UK
Lydbrook, Gloucestershire, UK – 2 July 2012 – Chronos Technology, a specialist supplier of GNSS (GPS, GLONASS & Galileo) products and services, welcomes the decision by the UK regulator Ofcom on 20 June 2012 to implement a licensing regime for the use of GNSS repeaters in the UK. GNSS repeaters provide coverage for the use and testing of GNSS technology inside buildings where GNSS signals do not normally reach. Until the recent decision by Ofcom, the use of this repeater technology in the UK was not permitted except in specialised (normally military) situations. Large numbers of consumer and industrial products use GNSS technology for positioning and timing applications, including smart phones, telematics equipment, avionics and emergency service applications. GNSS technology can also be used for resource management, civil engineering and military applications. The Ofcom consultation prior to this decision highlighted concerns about potential interference to applications by the use of GNSS repeaters; however, the conclusion was that a properly installed repeater system, conforming to the ETSI harmonized Standard for GNSS repeaters, should have no impact beyond 10 metres. This decision enables the use of GNSS repeaters in many applications and will provide significant benefits and cost savings to organisations wanting to develop, test, integrate and manufacture products and systems that utilise GNSS technology. Chronos Technology has been at the forefront of GNSS repeater technology for many years and is one of the largest suppliers of this technology to the military in Europe. Particularly skilled in the audit (against ETSI Standards) and installation of GNSS repeater systems, as well as product supply, Chronos has installed repeater and other general GNSS infrastructure in more than 50 countries over a 15-year period. Chronos can provide a full service package, from survey/audit to system design or alteration of existing systems (to meet the new ETSI Standard), along with assistance with licence applications, implementation/physical installation and training, through to comprehensive after-sales support. Chronos believes this decision by Ofcom will enable organisations to take advantage of the significant benefits GNSS repeater technology brings, making operations more efficient, saving money in development and manufacturing processes and making repair centre functions more efficient. Prof. Charles Curry, MD of Chronos, commented: "We have been working closely with Ofcom for some years to assist the evolution of this critical need within the GNSS user community. This has not been a simple process, and in the beginning Ofcom had to sponsor a new ETSI Standard. Anyone involved with Standards knows that this does not happen overnight. Now that the Standard is in place, Ofcom has moved very quickly to implement a light licensing regime for the deployment of GNSS repeaters."
systems_science
https://parliament.nt.gov.au/business/written-questions/wq/written-questions-listings/nest_content?target_id=308485&parent_id=363515
2020-09-29T15:16:30
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401643509.96/warc/CC-MAIN-20200929123413-20200929153413-00177.warc.gz
0.961642
363
CC-MAIN-2020-40
webtext-fineweb__CC-MAIN-2020-40__0__99641127
en
77. PROMIS management system
Mr Elferink to MINISTER for Police, Fire and Emergency Services
During a recent briefing by the Commissioner for Police he indicated that he planned to spend $1.8 million on the police PROMIS system to deal with its inherent problems. How much of that money has been spent, and is the system working effectively as a result of this expenditure?
Answered on 01/03/2004
As a result of years of under-funding, the PROMIS technical environment was very outdated. The upgrade of the system consists of several complex and integrated technical sub-projects, including replacement of hardware and upgrades to the operating systems and databases. These upgrades will create the platform for implementation of the new releases of PROMIS as provided by the Australian Federal Police (AFP), which will provide real gains to the members. As most sub-projects are sequential and reliant on several external parties, the total time to complete this project is substantial. Numerous sub-projects are now complete, as planned, including the replacement of 20 servers and upgrades to the databases and associated operating systems. This allowed Police, Fire and Emergency Services to implement the enhancements provided by the AFP from the version used (August 2001 release) to the version released in January 2003. A large number of problems previously experienced were addressed as a result of this. The next phase of this project includes the final stage of hardware replacement and upgrades to the operating systems to allow the implementation of the latest version of the PROMIS application. The $1.8 million funding allocated to the PROMIS system is recurrent. The expenditure to date is in accordance with the project plan and budget allocation.
Last updated: 04 Aug 2016
systems_science
https://oxygencylindershop.com/product-tag/diabetes-machine/
2024-04-17T09:09:05
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817146.37/warc/CC-MAIN-20240417075330-20240417105330-00899.warc.gz
0.92109
197
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__15349755
en
Glucose Meter: This is a common device used by people with diabetes to measure their blood sugar levels. It usually involves pricking a finger to obtain a small drop of blood, which is then analyzed by the glucose meter to provide a reading.
Insulin Pump: An insulin pump is a device that delivers insulin continuously to help manage blood sugar levels. It can replace the need for multiple daily injections by providing a steady stream of insulin.
Continuous Glucose Monitor (CGM): This is a device that continuously tracks blood sugar levels throughout the day and night. It provides real-time data and alerts, helping individuals with diabetes to make informed decisions about their diet, exercise, and insulin dosing.
Artificial Pancreas: This is a closed-loop system that combines an insulin pump and a CGM. It automatically adjusts insulin delivery based on real-time glucose readings, mimicking the function of a healthy pancreas.
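The closed-loop idea behind the artificial pancreas can be sketched in a few lines of Python. This is purely illustrative and not a medical algorithm: the numbers are invented, and read_cgm()/set_pump_rate() are hypothetical stubs standing in for the real sensor and pump interfaces:

```python
TARGET_MG_DL = 110        # illustrative target glucose level (mg/dL)
BASAL_U_PER_HR = 1.0      # illustrative background insulin rate
GAIN = 0.02               # illustrative extra U/hr per mg/dL above target

def read_cgm():
    """Stub standing in for a real CGM reading (mg/dL)."""
    return 145.0

def set_pump_rate(units_per_hr):
    """Stub standing in for a real insulin pump command."""
    print(f"pump basal rate set to {units_per_hr:.2f} U/hr")

def control_step(glucose_mg_dl):
    """One pass of a much-simplified proportional controller."""
    above_target = max(glucose_mg_dl - TARGET_MG_DL, 0.0)
    return BASAL_U_PER_HR + GAIN * above_target

# A real closed-loop system would run this every few minutes, driven by
# fresh CGM readings, with many safety constraints omitted here.
for _ in range(3):
    set_pump_rate(control_step(read_cgm()))
```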
systems_science
https://www.xjetl.com/news/hydraulic-motor-manufacturers-hydraulic-motor-displacement-features.html
2022-09-24T18:59:02
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00281.warc.gz
0.910534
583
CC-MAIN-2022-40
webtext-fineweb__CC-MAIN-2022-40__0__4249353
en
The displacement of an HA-controlled hydraulic motor automatically changes with the load pressure. This gives it working performance similar to a torque converter, suits the characteristics of travelling machinery, and brings a series of new features.

It greatly expands the torque range of the hydraulic transmission to adapt to variable loads. The load torque is the product of the working pressure and the motor displacement, so for a fixed-displacement motor the load torque is proportional to the working pressure, while for a variable motor under HA control, where displacement grows with load pressure, the load torque is proportional to the square of the working pressure. The torque capacity over a given working pressure range is therefore increased (see the worked equation below).

The high-efficiency zone of the motor's hydraulic components lies in the medium-to-high pressure range. Working in the high-pressure zone helps exploit the dynamics of the components and reduce cost, but transmission efficiency and service life suffer; the high-pressure zone is therefore chosen by reasonably matching the rated pressure and the maximum pressure, i.e. an appropriate pressure-derating configuration. The restriction on the low-pressure zone is also important: under changing loads, a component working in the low-pressure zone means the motor runs at a large displacement, which inevitably reduces transmission efficiency and productivity, so the machine cannot perform to its potential. HA control reduces the motor displacement at small loads to increase the running speed, raising the low load pressure into the medium-to-high pressure zone so that the transmission maintains a proper load rate under any external load, which improves transmission efficiency and productivity.

The low speeds required by the vehicle are achieved by reducing the engine speed, which is important for reducing energy consumption. Combining hydraulic pump DA control with HA motor control can form an ideal traction vehicle drive: HA control automatically adjusts the motor displacement according to the external load, so that the hydraulic device maintains a reasonable working pressure and the transmission operates in its efficient zone with high productivity and reasonable load selection, while DA control of the pump selects a reasonable load rate for the engine. A DA-HA transmission is thus an automatically controlled, efficient, high-productivity, cost-effective transmission with a reasonable load rate.

Taizhou Eternal Hydraulic Machine Co., Ltd. is a strong enterprise integrating the design, development, processing and sales of hydraulic components, with more than ten years of experience in the hydraulic field at home and abroad. It is a professional Hydraulic Motor manufacturer and supplier in China that can provide you with high-quality products and services. The company firmly believes that "only the best can satisfy the best" and aims to become the preferred hydraulic component choice for customers worldwide. If necessary, please contact us: https://www.xjetl.com
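In compact form, the torque relationship described above follows from the standard displacement-motor torque formula (mechanical efficiency ignored); the proportionality of displacement to pressure under HA control is taken from the text's premise, not derived here:

```latex
% Output torque of a hydraulic motor, with \Delta p the working pressure
% differential and V_g the displacement per revolution:
T = \frac{\Delta p \, V_g}{2\pi}
% Fixed motor:  V_g = \text{const} \;\Rightarrow\; T \propto \Delta p
% HA control:   V_g \propto \Delta p \;\Rightarrow\; T \propto \Delta p^{2}
```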
systems_science
https://www.twaino.com/outils/seo-en/traceroute-domain-prepostseo/
2023-11-29T09:52:15
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100057.69/warc/CC-MAIN-20231129073519-20231129103519-00503.warc.gz
0.921827
1,150
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__158323773
en
Traceroute Domain | Prepostseo
Short description: Use the Traceroute Tool – Prepostseo to find the route between an internet host and computers within a network.
Long description:
The Internet is a major means of transiting data between computers and servers. The resources of a website are also stored on a server, and Internet users who want to access these resources make a request to that server. But sometimes the server can take a long time to respond, which makes your site slower. Since people quickly leave sites that take a long time to load, it is important to diagnose the problem. For this, you can use traceroute tools such as Traceroute Tool – Prepostseo. In this description, you will discover this traceroute tool and how to detect the origin of your site's slowness problems.

What is Traceroute Tool – Prepostseo?
Traceroute Tool – Prepostseo is a tool that shows the route IP packets take between two or more networks; each step along the way is called a hop. It can help you identify problems within a network. The traced route is the path used to pass data to the target computer or machine where the website is hosted. This tool shows the precise route to your target. It is a utility that allows you to trace the route of a packet from your computer to an Internet host. This tool from Prepostseo will trace that path and also show you how many hops the packet takes to reach its host, as well as the time each hop takes.

Who can use Traceroute Tool – Prepostseo?
The tool is intended for system administrators and network engineers. They use it to see how traffic flows through an organization and to detect irregular paths. In addition, ethical hackers can also use Traceroute Tool – Prepostseo to map an organization's network and help identify its flaws.

The advantages of Traceroute Tool – Prepostseo
Using Traceroute Tool – Prepostseo first allows you to display the time needed to reach each hop between the source and the destination. This is very important for tracking down the real cause of a drop in traffic or of performance difficulties. Unlike tools that don't show the exact location of a domain, Traceroute Tool – Prepostseo is much more precise: it unearths the location of domains and determines the path of the requests and packets that take particular routes to a specific point. It is also used to perform diagnostics and gather more information about the network, tracking traffic flows between your device and a target domain. Besides, this tool saves you a lot of time because you are no longer required to use the command prompt to do the job. Traceroute Tool – Prepostseo also helps identify weak links in your network chain. Thanks to its reliability and ease of use, you can take advantage of its power and check that your data travels over sensible, secure paths. As a final advantage, Traceroute Tool – Prepostseo is completely free and allows you to determine where the longest delays occur.

How to use Traceroute Tool – Prepostseo?
To use this tool, you just need to enter your domain or any IP address of the targeted domain. When you enter the domain and press the green "follow now" button, it first converts the domain name to an IP address and then sends packets to the target address. The tool also displays the ping result and the exact IP addresses along the path.

How does it work?
When you enter the domain name, the tool converts it into an IP address and sends small packets towards the domain's server. These probes record the paths taken and their round-trip times, so the tool can determine whether and when each packet reached its destination. Traceroute Tool traces the complete route that data travels from the origin servers to your screen each time a query is sent. Once the IP or web address of the target machine is entered, it presents an overview of the path taken by the packets: the number of hops as well as the different networks they must traverse.

Company: Prepostseo
Prepostseo is an agency founded by Ahmad Sattar, a web developer, and his AR AS assistant to help web editors, webmasters and SEO experts create high-performing articles. Its main objective is to improve the content and ranking of websites. For this, it offers free tools to its users so that they can publish content without plagiarism and, above all, of good quality. Among these tools are the DA Checker, the Paraphrasing Tool, the Plagiarism Checker, the Summarizer, etc. Thanks to these tools, Prepostseo provides its users with the best SEO solutions. Thus, they can easily check:
- The plagiarism rate of an article;
- The SEO score of a piece of content;
- Spelling and grammatical errors in an article;
- Backlinks (return links);
- Keyword density;
- Domain authority, i.e. the quality of a website;
- And much more.
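For readers curious what such a tool does under the hood, here is a minimal Python sketch of the classic traceroute technique: send probes with an increasing IP TTL and read the ICMP "time exceeded" replies from each router along the way. This is not Prepostseo's code, just an illustration of the general method; it needs root privileges for the raw socket:

```python
import socket
import time

def traceroute(host, max_hops=30, port=33434, timeout=2.0):
    """Print each hop towards `host` using the TTL-expiry trick."""
    dest = socket.gethostbyname(host)
    for ttl in range(1, max_hops + 1):
        # Raw ICMP socket to catch "time exceeded" replies (needs root).
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                             socket.getprotobyname("icmp"))
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        recv.settimeout(timeout)
        recv.bind(("", port))
        # Each probe gets a TTL one larger than the last, so each router
        # along the path reveals itself in turn.
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        start = time.monotonic()
        send.sendto(b"", (dest, port))
        try:
            _, addr = recv.recvfrom(512)
            elapsed = (time.monotonic() - start) * 1000
            print(f"{ttl:2d}  {addr[0]}  {elapsed:.1f} ms")
            if addr[0] == dest:
                break
        except socket.timeout:
            print(f"{ttl:2d}  *")
        finally:
            send.close()
            recv.close()

traceroute("example.com")
```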
systems_science
http://osm2vectortiles.org/docs/own-vector-tiles/
2017-07-28T04:35:38
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549436330.31/warc/CC-MAIN-20170728042439-20170728062439-00346.warc.gz
0.72604
506
CC-MAIN-2017-30
webtext-fineweb__CC-MAIN-2017-30__0__75716732
en
In this tutorial you will learn how to create your own vector tiles with the help of the OSM2VectorTiles toolset. In order to run through the following steps you need to have docker and docker-compose installed.

Clone the OSM2VectorTiles repository and change directory to it.
git clone https://github.com/osm2vectortiles/osm2vectortiles.git
cd ./osm2vectortiles

Start up your PostGIS container with the data container attached.
docker-compose up -d postgis

Download a PBF extract into the import directory, then import the external data.
wget https://s3.amazonaws.com/metro-extracts.mapzen.com/zurich_switzerland.osm.pbf -P ./import
docker-compose up import-external

With the next command the downloaded PBF file gets imported into PostGIS.
docker-compose up import-osm

The following command imports custom SQL utilities such as functions and views, which are needed to create the vector tiles.
docker-compose up import-sql

Now export an MBTiles file by passing the bounding box of your desired extract.
docker-compose run \
  -e BBOX="8.34,47.27,8.75,47.53" \
  -e MIN_ZOOM="8" \
  -e MAX_ZOOM="14" \
  export

Finally, the following command generates the vector tiles and creates an MBTiles file in the export directory.
docker-compose up export

In order to display your vector tiles follow the getting started tutorial.

Optional: Merge lower zoom levels (z0 to z5) into the extract (prerequisite: sqlite3 installed).

Download the lower zoom level extract.
wget -P ./export/ https://osm2vectortiles-downloads.os.zhdk.cloud.switch.ch/v2.0/planet_z0-z5.mbtiles

Download the patch script from the Mapbox mbutil project.
wget https://raw.githubusercontent.com/mapbox/mbutil/master/patch
chmod +x patch

Merge the lower zoom levels into the extract.
./patch "./export/planet_z0-z5.mbtiles" "./export/zurich.mbtiles"
systems_science
https://www.saidagroup.jp/fds_en/news/888
2022-01-24T17:13:25
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304572.73/warc/CC-MAIN-20220124155118-20220124185118-00143.warc.gz
0.816739
133
CC-MAIN-2022-05
webtext-fineweb__CC-MAIN-2022-05__0__88163969
en
The contents of MODA (Microbial Oxidative Degradation Analyzer) for ISO 14855-2, ISO 17556, and ISO 13975 have been updated. (Biodegradation of Plastic Materials) The Microbial Oxidative Degradation Analyzer (MODA) is a device capable of accurately and precisely weighing the minuscule amounts of carbon dioxide generated by microbial activities. In particular, studies of plastic biodegradability require high-accuracy and high-precision weighing of carbon dioxide. Therefore, MODA employs the methods according to the international standards described in ISO 14855-2, ISO 17556, or ISO 13975.
systems_science