What it does
RnaChipIntegrator was designed to integrate genes/transcripts from expression analysis (RNA-seq, microarrays) with ChIP-seq binding regions; however, it is flexible enough to allow the comparison of any genome-coordinate-based data sets.
RnaChipIntegrator answers the questions:
- “Which genes are close to each of my ChIP-seq regions?”, and
- “Which ChIP-seq regions are close to each of my genes?”.
The first data set, called ‘genes’, is strand specific and the genome coordinates correspond to the transcription start site (TSS) and transcription end site (TES), depending on the strand.
Note
For strand and genome coordinates:
- The start coordinate of a gene on the forward or ‘+’ strand relates to the TSS;
- The start coordinate of a gene on the reverse or ‘-‘ strand relates to the TES.
This is primarily gene or transcript annotation for the whole genome. However, other non-gene features, such as CpG islands, can be used.
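As an aside, the strand convention in the note above can be summarised in a few lines of illustrative code; this sketch is not part of RnaChipIntegrator itself, it only restates the rule:

```python
def tss_and_tes(start, end, strand):
    """Return (TSS, TES) for a gene, following the convention in the note above.

    On the '+' strand the start coordinate is the TSS; on the '-' strand the
    start coordinate is the TES (so the TSS is the end coordinate).
    """
    if strand == "+":
        return start, end      # TSS at start, TES at end
    elif strand == "-":
        return end, start      # TSS at end, TES at start
    raise ValueError("strand must be '+' or '-'")

# Example: a reverse-strand gene spanning 10,000-12,500
print(tss_and_tes(10_000, 12_500, "-"))   # (12500, 10000)
```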
The second data set, which we call 'peaks', is strand non-specific and includes only start and end coordinates. This is primarily the coordinates of ChIP-seq binding regions (a.k.a. peaks).
See the Input files section for more information about the input file formats.
Example use cases (‘gene’ versus ‘peak’) include:
- RNA-seq expressed genes versus ChIP-seq binding regions
- Microarray expressed genes versus ChIP-chip binding regions
- Total gene annotation versus ChIP-seq binding regions
- Gene promoters versus CpG island annotation
Why use a microservices approach to building applications
For software developers, factoring an application into component parts is nothing new. Typically, a tiered approach is used, with a back-end store, middle-tier business logic, and a front-end user interface (UI). What has changed over the last few years is that developers are building distributed applications for the cloud.
Here are some changing business needs:
- A service that's built and operated at scale to reach customers in new geographic regions.
- Faster delivery of features and capabilities to respond to customer demands in an agile way.
- Improved resource utilization to reduce costs.
These business needs are affecting how we build applications.
For more information about the Azure approach to microservices, see Microservices: An application revolution powered by the cloud.
Monolithic vs. microservices design approach
Applications differ in how much you know up front about their requirements and how they will grow. Consider, for example, an application that you're sure will be used only within the scope of your company and whose reports won't be kept long. Your approach will be different from that of, say, building a service that delivers video content to tens of millions of customers.
Sometimes, getting something out the door as a proof of concept is the driving factor. You know the application can be redesigned later. There's little point in over-engineering something that never gets used. On the other hand, when companies build for the cloud, the expectation is growth and usage. Growth and scale are unpredictable. We want to prototype quickly while also knowing that we're on a path that can handle future success. This is the lean startup approach: build, measure, learn, and iterate.
During the client/server era, we tended to focus on building tiered applications by using specific technologies in each tier. The term monolithic application has emerged to describe these approaches. The interfaces tended to be between the tiers, and a more tightly coupled design was used between components within each tier. Developers designed and factored classes that were compiled into libraries and linked together into a few executable files and DLLs.
There are benefits to a monolithic design approach. Monolithic applications are often simpler to design, and calls between components are faster because these calls are often over interprocess communication (IPC). Also, everyone tests a single product, which tends to be a more efficient use of human resources. The downside is that there's a tight coupling between tiered layers, and you can't scale individual components. If you need to do fixes or upgrades, you have to wait for others to finish their testing. It's harder to be agile.
Microservices address these downsides and more closely align with the preceding business requirements. But they also have both benefits and liabilities. The benefits of microservices are that each one typically encapsulates simpler business functionality, which you can scale out or in, test, deploy, and manage independently. One important benefit of a microservices approach is that teams are driven more by business scenarios than by technology. Smaller teams develop a microservice based on a customer scenario and use any technologies that they want to use.
In other words, the organization doesn’t need to standardize tech to maintain microservice applications. Individual teams that own services can do what makes sense for them based on team expertise or what’s most appropriate to solve the problem. In practice, a set of recommended technologies, like a particular NoSQL store or web application framework, is preferable.
The downside of microservices is that you have to manage more separate entities and deal with more complex deployments and versioning. Network traffic between the microservices increases, as do the corresponding network latencies. Lots of chatty, granular services can cause a performance nightmare. Without tools to help you view these dependencies, it's hard to see the whole system.
Standards make the microservices approach work by specifying how to communicate and tolerating only the things you need from a service, rather than rigid contracts. As more microservice applications are produced, people have discovered that this decomposition of the overall application into independent, scenario-focused services is a better long-term approach.
Comparison between application development approaches
A monolithic application contains domain-specific functionality and is normally divided into functional layers like web, business, and data.
You scale a monolithic application by cloning it on multiple servers/virtual machines/containers.
A microservice application separates functionality into separate smaller services.
The microservices approach scales out by deploying each service independently, creating instances of these services across servers/virtual machines/containers.
Designing with a microservices approach isn't appropriate for all projects, but it does align more closely with the business objectives described earlier. Starting with a monolithic approach might make sense if you know you'll have the opportunity to rework the code later into a microservices design. More commonly, you begin with a monolithic application and slowly break it up in stages, starting with the functional areas that need to be more scalable or agile.
When you use a microservices approach, you compose your application of many small services. These services run in containers that are deployed across a cluster of machines. Smaller teams develop a service that focuses on a scenario and independently test, version, deploy, and scale each service so the entire application can evolve.
What is a microservice?
There are different definitions of microservices. But most of these characteristics of microservices are widely accepted:
- Encapsulate a customer or business scenario. What problem are you solving?
- Developed by a small engineering team.
- Written in any programming language, using any framework.
- Consist of code, and optionally state, both of which are independently versioned, deployed, and scaled.
- Interact with other microservices over well-defined interfaces and protocols.
- Have unique names (URLs) that are used to resolve their location.
- Remain consistent and available in the presence of failures.
To sum that up:
Microservice applications are composed of small, independently versioned, and scalable customer-focused services that communicate with each other over standard protocols with well-defined interfaces.
Written in any programming language, using any framework
As developers, we want to be free to choose a language or framework, depending on our skills and the needs of the service that we're creating. For some services, you might value the performance benefits of C++ above anything else. For others, the ease of managed development that you get from C# or Java might be more important. In some cases, you might need to use a specific partner library, data storage technology, or method for exposing the service to clients.
After you choose a technology, you need to consider the operational or life-cycle management and scaling of the service.
Allows code and state to be independently versioned, deployed, and scaled
No matter how you write your microservices, the code, and optionally the state, should independently deploy, upgrade, and scale. This problem is hard to solve because it comes down to your choice of technologies. For scaling, understanding how to partition (or shard) both the code and the state is challenging. When the code and state use different technologies, which is common today, the deployment scripts for your microservice need to be able to scale them both. This separation is also about agility and flexibility, so you can upgrade some of the microservices without having to upgrade all of them at once.
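To make the partitioning idea concrete, here is a deliberately generic sketch (not tied to Service Fabric or any specific product) of deriving a shard from a stable key, so that requests and the state they touch can be located consistently:

```python
import hashlib

NUM_SHARDS = 8  # illustrative value; real systems choose and rebalance this carefully

def shard_for(key: str, num_shards: int = NUM_SHARDS) -> int:
    """Map a stable key (e.g. a customer ID) to a shard number."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_shards

# Requests and state for the same customer always land on the same shard.
print(shard_for("customer-42"))
```

The same kind of mapping can be applied to both the code that handles a request and the store that holds its state, which is what lets the two scale together.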
Let's return to our comparison of the monolithic and microservices approaches for a moment. This diagram shows the differences in the approaches to storing state:
State storage for the two approaches
The monolithic approach, on the left, has a single database and tiers of specific technologies.
The microservices approach, on the right, has a graph of interconnected microservices where state is typically scoped to the microservice and various technologies are used.
In a monolithic approach, the application typically uses a single database. The advantage to using one database is that it's in a single location, which makes it easy to deploy. Each component can have a single table to store its state. Teams need to strictly separate state, which is a challenge. Inevitably, someone will be tempted to add a column to an existing customer table, do a join between tables, and create dependencies at the storage layer. After this happens, you can't scale individual components.
In the microservices approach, each service manages and stores its own state. Each service is responsible for scaling both code and state together to meet the demands of the service. A downside is that when you need to create views, or queries, of your application’s data, you need to query across multiple state stores. This problem is typically solved by a separate microservice that builds a view across a collection of microservices. If you need to run multiple impromptu queries on the data, you should consider writing each microservice’s data to a data warehousing service for offline analytics.
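As a deliberately simplified illustration of the "separate microservice that builds a view" pattern described above (the data, names, and fields here are invented for the example), such a service queries each owning service and merges the results:

```python
# Hypothetical in-memory stand-ins for two services' own state stores.
orders = {"o-1": {"customer_id": "c-7", "lines": [{"price": 9.99}, {"price": 5.00}]}}
customers = {"c-7": {"name": "Ada"}}

def build_order_view(order_id):
    """Compose a read-only view from data owned by two independent services."""
    order = orders[order_id]                    # query the orders service
    customer = customers[order["customer_id"]]  # query the customers service
    return {
        "order_id": order_id,
        "customer_name": customer["name"],
        "total": sum(line["price"] for line in order["lines"]),
    }

print(build_order_view("o-1"))  # composed view spanning both state stores
```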
Microservices are versioned. It's possible for different versions of a microservice to run side by side. A newer version of a microservice could fail during an upgrade and need to be rolled back to an earlier version. Versioning is also helpful for A/B testing, where different users experience different versions of the service. For example, it's common to upgrade a microservice for a specific set of customers to test new functionality before rolling it out more widely.
Interacts with other microservices over well-defined interfaces and protocols
Over the past 10 years, extensive information has been published describing communication patterns in service-oriented architectures. Generally, service communication uses a REST approach with HTTP and TCP protocols and XML or JSON as the serialization format. From an interface perspective, it's about taking a web design approach. But nothing should stop you from using binary protocols or your own data formats. Just be aware that people will have a harder time using your microservices if these protocols and formats aren't openly available.
Has a unique name (URL) used to resolve its location
Your microservice needs to be addressable wherever it's running. If you're thinking about machines and which one is running a particular microservice, things can go bad quickly.
In the same way that DNS resolves a particular URL to a particular machine, your microservice needs a unique name so that its current location is discoverable. Microservices need addressable names that are independent of the infrastructure they're running on. This implies that there's an interaction between how your service is deployed and how it's discovered, because there needs to be a service registry. When a machine fails, the registry service needs to tell you where the service was moved to.
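A toy registry, which is not Service Fabric's actual naming service API, can make the idea concrete: services register under a stable name, and callers resolve that name to whatever address the service currently has:

```python
class ServiceRegistry:
    """Minimal illustrative registry: stable service name -> current address."""

    def __init__(self):
        self._addresses = {}

    def register(self, name: str, address: str) -> None:
        self._addresses[name] = address   # called on start, or after a move

    def resolve(self, name: str) -> str:
        return self._addresses[name]      # callers never hard-code machines

registry = ServiceRegistry()
registry.register("shop/checkout", "10.0.0.5:8443")
registry.register("shop/checkout", "10.0.0.9:8443")  # service moved after a failure
print(registry.resolve("shop/checkout"))              # 10.0.0.9:8443
```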
Remains consistent and available in the presence of failures
Dealing with unexpected failures is one of the hardest problems to solve, especially in a distributed system. Much of the code that we write as developers is for handling exceptions. During testing, we also spend the most time on exception handling. The process is more involved than writing code to handle failures. What happens when the machine on which the microservice is running fails? You need to detect the failure, which is a hard problem on its own. But you also need to restart your microservice.
For availability, a microservice needs to be resilient to failures and able to restart on another machine. In addition to these resiliency requirements, data shouldn't be lost, and data needs to remain consistent.
Resiliency is hard to achieve when failures happen during an application upgrade. The microservice, working with the deployment system, doesn't need to recover. It needs to determine whether it can continue to move forward to the newer version or roll back to a previous version to maintain a consistent state. You need to consider a few questions, like whether enough machines are available to keep moving forward and how to recover previous versions of the microservice. To make these decisions, you need the microservice to emit health information.
Reports health and diagnostics
It might seem obvious, and it's often overlooked, but a microservice needs to report its health and diagnostics. Otherwise, you have little insight into its health from an operations perspective. Correlating diagnostic events across a set of independent services, and dealing with machine clock skews to make sense of the event order, is challenging. In the same way that you interact with a microservice over agreed-upon protocols and data formats, you need to standardize how to log health and diagnostic events that will ultimately end up in an event store for querying and viewing. With a microservices approach, different teams need to agree on a single logging format. There needs to be a consistent approach to viewing diagnostic events in the application as a whole.
Health is different from diagnostics. Health is about the microservice reporting its current state to take appropriate actions. A good example is working with upgrade and deployment mechanisms to maintain availability. Though a service might be currently unhealthy because of a process crash or machine reboot, the service might still be operational. The last thing you need is to make the situation worse by starting an upgrade. The best approach is to investigate first or allow time for the microservice to recover. Health events from a microservice help us make informed decisions and, in effect, help create self-healing services.
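One way to picture "agreeing on a single format" is a small, shared event structure that every team emits regardless of the technology behind their service. The shape below is an invented illustration, not a prescribed Service Fabric schema:

```python
import json
import time
import uuid

def diagnostic_event(service, version, level, message, correlation_id=None):
    """Build a health/diagnostic event in one agreed-upon shape for all services."""
    return {
        "timestamp": time.time(),        # clock skew still has to be handled downstream
        "service": service,
        "version": version,
        "level": level,                  # e.g. "HEALTH", "ERROR", "INFO"
        "message": message,
        "correlationId": correlation_id or str(uuid.uuid4()),
    }

event = diagnostic_event("checkout", "1.4.2", "HEALTH", "warming up after restart")
print(json.dumps(event))   # ship this to the shared event store for querying and viewing
```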
Guidance for designing microservices on Azure
Visit the Azure architecture center for guidance on designing and building microservices on Azure.
Service Fabric as a microservices platform
Azure Service Fabric emerged when Microsoft transitioned from delivering boxed products, which were typically monolithic, to delivering services. The experience of building and operating large services, like Azure SQL Database and Azure Cosmos DB, shaped Service Fabric. The platform evolved over time as more services adopted it. Service Fabric had to run not only in Azure but also in standalone Windows Server deployments.
The aim of Service Fabric is to solve the hard problems of building and running a service and to use infrastructure resources efficiently, so teams can solve business problems by using a microservices approach.
Service Fabric helps you build applications that use a microservices approach by providing:
- A platform that provides system services to deploy, upgrade, detect, and restart failed services, discover services, route messages, manage state, and monitor health.
- The ability to deploy applications either running in containers or as processes. Service Fabric is a container and process orchestrator.
- The flexibility to build your services using any technology, language, or framework. Service Fabric doesn't prescribe how you implement your microservices, but it does provide built-in programming APIs that make it easier to build microservices.
Migrating existing applications to Service Fabric
Service Fabric allows you to reuse existing code and modernize it with new microservices. There are five stages to application modernization, and you can start and stop at any stage. The stages are:
- Start with a traditional monolithic application.
- Migrate. Use containers or guest executables to host existing code in Service Fabric.
- Modernize. Add new microservices alongside existing containerized code.
- Innovate. Break the monolithic application into microservices based on need.
- Transform applications into microservices. Transform existing monolithic applications or build new greenfield applications.
Remember, you can start and stop at any of these stages. You don't have to progress to the next stage.
Let's look at examples for each of these stages.
Migrate
For two reasons, many companies are migrating existing monolithic applications into containers:
- Cost reduction, either due to consolidation and removal of existing hardware or due to running applications at higher density.
- A consistent deployment contract between development and operations.
Cost reductions are straightforward. At Microsoft, many existing applications are being containerized, leading to millions of dollars in savings. Consistent deployment is harder to evaluate but equally important. It means that developers can choose the technologies that suit them, but operations will accept only a single method for deploying and managing the applications. It alleviates operations from having to deal with the complexity of supporting different technologies without forcing developers to choose only certain ones. Essentially, every application is containerized into self-contained deployment images.
Many organizations stop here. They already have the benefits of containers, and Service Fabric provides the complete management experience, including deployment, upgrades, versioning, rollbacks, and health monitoring.
Modernize
Modernization is the addition of new services alongside existing containerized code. If you're going to write new code, it's best to take small steps down the microservices path. This could mean adding a new REST API endpoint or new business logic. In this way, you start the process of building new microservices and practice developing and deploying them.
Innovate
A microservices approach accommodates changing business needs. At this stage, you need to decide whether to start splitting the monolithic application into services, or innovating. A classic example here is when a database that you're using as a workflow queue becomes a processing bottleneck. As the number of workflow requests increases, the work needs to be distributed for scale. Take that particular piece of the application that's not scaling, or that needs to be updated more frequently, and split it out as a microservice and innovate.
Transform applications into microservices
At this stage, your application is fully composed of (or split into) microservices. To reach this point, you've made the microservices journey. You can start here, but to do so without a microservices platform to help you requires a significant investment.
Are microservices right for my application?
Maybe. At Microsoft, as more teams began to build for the cloud for business reasons, many of them realized the benefits of taking a microservice-like approach. Bing, for example, has been using microservices for years. For other teams, the microservices approach was new. Teams found that there were hard problems to solve outside of their core areas of strength. This is why Service Fabric gained traction as the technology for building services.
The objective of Service Fabric is to reduce the complexities of building microservice applications so that you don't have to go through as many costly redesigns. Start small, scale when needed, deprecate services, add new ones, and evolve with customer usage. We also know that there are many other problems yet to be solved to make microservices more approachable for most developers. Containers and the actor programming model are examples of small steps in that direction. We're sure more innovations will emerge to make a microservices approach easier.
Crawlab
Crawlab is a distributed spider management platform based on Golang, which supports multiple programming languages and spider frameworks.
The Crawlab project has been highly praised by spider enthusiasts and developers since it was published in March 2019. More than half of users said that Crawlab is used as their company's spider management platform. After several months of iterations, we have published periodic tasks, data analysis, configurable spiders, an SDK, message notifications, Scrapy support, git synchronization and other features, which make Crawlab more practical and comprehensive and really help users solve the problem of spider management.
Crawlab mainly solves the problem of managing a large number of spiders. For example, it is not easy to manage mixed 'scrapy' and 'selenium' projects across hundreds of websites at the same time; the cost of command-line management is very high, and it is easy to make mistakes. Crawlab can support any language and any framework, together with task scheduling and task monitoring, making it easy to effectively monitor and manage large-scale spider projects.
This user manual is an installation and use development guide to help you install, use and develop Crawlab.
If you want to start crawlab as soon as possible, please check quick start.
First, let's see how to install Crawlab: please refer to the Installation chapter. For how to use it, please refer to the Usage chapter. For simpler spiders, you can use the configurable spider to save time; for more complex spiders, such as those that require login, you can use the custom spider, which is more flexible.
⚠️ Note: If you encounter any problems during installation, please refer to the Q&A to check the problems one by one. If you still can't solve the problem, try GitHub Issues to find a solution. If that doesn't help either, please add the author's WeChat 'tikazyq1' and mention 'Crawlab'. The author will pull you into the group, where you can ask the developers for help.
The Memory Profiler window allows you to see how memory is being used by FlexSim:
The Memory Profiler works by traversing FlexSim's tree and estimating the memory use of each node.
The Memory Profiler has two main views for displaying memory data. The first view is the Memory Tree, the view on the left. This view shows each node in the tree, and the estimate for that node's memory use. The list of nodes is sorted by how much memory it uses.
The second view in the Memory Profiler is the Memory Graph. This view draws each node as a box. Bigger boxes indicate that more memory is used by a particular node, and smaller boxes indicate that less memory is used. The boxes are also colored by memory use: the brightest, most yellow box corresponds to the node that uses the most memory. The darkest, most purple box corresponds to the node that uses the least memory.
These two views work together. For example, you can click on a node in the Memory Tree View, and the Memory Graph will show that node bounded in red:
Alternatively, you can click on a box in the Memory Graph. This will highlight that box in the Memory Tree:
In this example, the model is using a Statistics Collector to record some data. That particular Statistics Collector is the node that is using the most data.
In some cases, you can use the Memory Profiler to reduce the memory use of your model. In this example, a large amount of memory is allocated just to tokens in the main Process Flow:
This model happens to have a source that generates tokens, but isn't connected to any activities; perhaps the modeler forgot to remove it. Without that source generating useless tokens, the Memory Profiler shows a different story:
Now the Tools folder uses 2 MB less memory. This example demonstrates how the Memory Profiler can help you find and remove unnecessary memory use from your model.
You can access the Memory Profiler from the Debug menu on the main toolbar.
The Memory Profiler has the following properties:
Click this button to create a new estimate of the memory being used in the tree. You can only view one snapshot at a time, so taking a new snapshot will delete the old one.
If you have taken a memory snapshot, this view will show an estimate of the memory used by each node in the tree. This list is sorted by estimate. You can also right-click on a node in this tree and choose View Up or View Down. If you choose View Down, both the Memory Tree and the Memory Graph will update to show the chosen node as if it is the top node. View Up allows you to back out of the view.
If you have taken a memory snapshot, this view will draw a visual representation of the memory used by each node in the tree. Bigger and brighter boxes represent nodes that used more memory.
Customizr Theme options : Post metas (category, tags, custom taxonomies)
The post metas are the meta information of the posts like author, date, tags, categories. It is usually displayed below the post title.
You can access this set of options from : Customizer > Main > Post metas
You can choose whether to display the Post meta (other options will disappear if unchecked)
Select the context
You can choose where (in which context) the Post meta should be displayed.
Select the metas to display
Here you can select what Post meta should be displayed
- Hierarchical Taxonomies (categories)
- Non-hierarchical Taxonomies (tags)
- Publication date
- Author
- Update date (see below)
- Format of last update (Number of days or date)
Security Operations

ServiceNow® Security Operations brings incident data from your security tools into a structured response engine that uses intelligent workflows, automation, and a deep connection with IT to prioritize and resolve threats based on the impact they pose to your organization.

Automate all your security tools and work seamlessly with IT

With Security Operations, realize the full value of your Now Platform® solution. Many organizations struggle with identifying security threats and vulnerabilities, prioritizing them, and coordinating with IT to remediate them. Using Security Operations, security analysts and vulnerability managers can seamlessly automate their security tools and communicate with IT by working in a unified platform. View and download the full infocard for a highlight of Security Operations features.

Identify, prioritize, and remediate vulnerabilities in software, operating systems, and assets

The ServiceNow® Vulnerability Response application helps organizations identify and respond quickly and efficiently to vulnerabilities. Scan data from leading vendors gives your teams a single platform for response that can be shared between security and IT to resolve vulnerabilities.

Identify, prioritize, and remediate critical security incidents

The ServiceNow® Security Incident Response application simplifies the process of identifying critical incidents by applying powerful workflow and automation tools that speed up remediation.

Identify, prioritize, and remediate misconfigured assets

The ServiceNow® Configuration Compliance application prioritizes and remediates misconfigured assets using data gathered from third-party security configuration assessment scans.

Access your company's Structured Threat Information Expression (STIX™) data

The ServiceNow® Threat Intelligence application helps incident responders find Indicators of Compromise (IoC) and hunt for low-lying attacks and threats. The results are reported directly to security incidents.

Access Security Operations with your mobile device

Access the Vulnerability Response and Security Incident Response applications on your Now Platform instance with your Android or iOS mobile device. As a remediation owner or security analyst, manage security incidents and vulnerability groups so that you can begin remediation without being tied to your desktop.

Identify, prioritize, and remediate vulnerabilities in software, operating systems, and assets

The Vulnerability Response application imports and automatically groups vulnerable items according to group rules.

Identify, prioritize, and remediate critical security incidents

With the Security Incident Response (SIR) application, integrate your existing Security Information and Event Manager (SIEM) tools with Security Operations applications to import threat data (via APIs or email alerts), and automatically create prioritized security incidents. Manage the life cycle of your security incidents from initial analysis to containment, eradication, and recovery. The Security Incident Response application enables you to get a comprehensive understanding of incident response procedures performed by your analysts, and to understand trends and bottlenecks in those procedures with analytic-driven dashboards and reporting.

Identify, prioritize, and remediate misconfigured assets

With the Configuration Compliance application, use test results obtained from third-party Security Configuration Assessment (SCA) integrations to verify compliance with security or corporate policies. This application uses the assets listed in the ServiceNow® Configuration Management Database (CMDB) to determine which items are most critical. Workflows and automation enable quick action against individual assets or groups for bulk changes. Identify and remediate non-compliant configuration items. Automatically import policies, tests, authoritative sources, and technologies, and assign test results to groups or individuals for remediation.

Access your company's Structured Threat Information Expression (STIX) data

Use the Threat Intelligence application to automatically search threat feeds for relevant information when an IoC is connected to a security incident, and to send IoCs to third-party sources for additional analysis. The Threat Intelligence application uses Structured Threat Information Expression (STIX) as a language to describe cyber threat information in a standardized and structured manner.

Mobile experience for Security Operations

Access the Vulnerability Response and Security Incident Response applications on your Now Platform instance with your Android or iOS mobile device.

Get started

For an overview of Security Operations in your Now Platform® instance, see Understanding Security Operations. For information about all the Security Operations applications available for download from the ServiceNow Store, see Security Operations and the ServiceNow Store.

Applications and features

- Vulnerability Response
- Security Incident Response
- Configuration Compliance
- Threat Intelligence
By default, a transparent title bar is not draggable; you will need to add some CSS styles to your site to make the window draggable.
You can give an element the ability to drag the window by using `-webkit-app-region: drag` in CSS.
If you want certain elements marked as non-draggable, then you can use `-webkit-app-region: no-drag`. So, you might end up doing something like this:
```
html.todesktop .header {
  -webkit-app-region: drag;
}
html.todesktop .header .nav {
  -webkit-app-region: no-drag;
}
```
The `html.todesktop` class pre-selector will ensure that the CSS is only applied when running as a desktop app. You can learn more about this here.
Run TellorMiner and Join the Pool
These instructions are for running the official TellorMiner as part of a Tellor pool and without staking 1000 TRB.
Download the TellorMiner
First, visit GitHub and download the latest release of TellorMiner.
Download the Linux or Windows executable and place it into a directory on your computer.
Create a Configuration File
Next, you will need to create a
config.json file in the same directory as the TellorMiner executable.
Start by copying this config.json file that is checked into the TellorMiner GitHub repository.
Edit the Configuration File
Change the following configuration options:
1.
nodeURL: You will need to get an Project ID from Infura or replace this with another Ethereum node that supports RPC connections.
2.
privateKey: Replace this with your Ethereum account private key
3.
publicAddress and
serverWhitelist: Replace these with your Ethereum account public address without the 0x prefix
4. Add the following options to tell the miner to connect to the pool:
"enablePoolWorker": true, "poolURL": "POOL_URL"
Where the
POOL_URL is the link to your pool. Current Tellor mining pools:
- If you're using GPU, add the following:
"gpuConfig":{ "default":{ "groupSize":256, "groups":4096, "count":16 } }
Create a Logging Configuration File
Next, you will need to create a
loggingConfig.json file in the same directory as the TellorMiner executable.
Start by copying this loggingConfig.json file that is checked into the TellorMiner GitHub repository.
Run the TellorMiner
Now run the miner, on Linux:
./TellorMiner -miner
and on Windows:
TellorMiner.exe -miner
Get in Touch
The team behind Charmed Kubernetes is keen to help and to listen to your feedback. There are a variety of ways to get in touch with suggestions or problems, or just to get involved.
IRC
Join us on freenode.net in #cdk8s.
Slack
Find us in #cdk on the Kubernetes slack.
Bugs
Charmed Kubernetes bugs are tracked in launchpad.
Documentation
Visit the documentation repository for issues or comments about this documentation.
Source Code
The source for the bundles and all the core charms for Charmed Kubernetes is available on GitHub.
Professional support
If you are looking for additional support, find out about Ubuntu Advantage.
Canonical can also provide managed solutions for Kubernetes.
Configure Log Aggregation
Log aggregation is enabled by default. You can configure it using Cloudera Manager.
- In Cloudera Manager, select the YARN service.
- Click the Configuration tab.
- Select the Log Aggregation filter under Category. Every log aggregation-related property is displayed.
- Find the Enable Log Aggregation property and ensure that it is selected, meaning that log aggregation is enabled.
- Configure the log aggregation properties as applicable.
- Click Save Changes.
How to Publish your Software on Copr, Fedora’s User Repository
This is a short tutorial on how to create and maintain a Copr repository for your software in an automated fashion. It assumes some basic familiarity with Git & how to create an RPM package.
In this guide, we’ll
- create an RPM package for a program
- create a Copr repository and publish the program to it
- set up automatic management of program version, package release and package changelog
- set up automatic building of new package versions
The aim is to let you keep your software up-to-date in Copr without ever having to interact with anything other than your software’s git repository.
Prerequisites
The following is needed:
Our program’s source in a publicly available git repository somewhere. This tutorial uses a simple example program - hellocopr - to demonstrate the process. The program and all files referenced in this guide can be found in the project’s git repository. It’s a very simple (& pointless) python program with a setuptools installer:
```
... requirements.txt setup.py src
user@host ~/copr-tito-quickdoc % ls src/hellocopr
colors.py hellocopr.py __init__.py
```

- A Fedora (FAS) account, in order to be able to create repositories on Copr. This tutorial's demo repository can be found here.
- tito installed on your system. Tito is capable of a lot of advanced automation for package creation, most of which we won't need here. Check out its documentation to learn more.
- A specfile for our program. For more information on how to create one, refer to Creating RPM packages and How to Create a GNU Hello World RPM Package, or adapt this tutorial's annotated example specfile.

TIP: You can follow along with this tutorial by cloning or forking the repository and checking out the `initial` tag. This will put the repository in the state just before the next step. The repo's commit history matches the steps followed in this tutorial.

Step 1: Creating the package using tito

Copy the spec file into the project's base directory. A few changes should be made before proceeding:

- The values of `Version:` and `Release:` do not matter, since these will be managed by tito. It makes sense to set them to `Version: 0.0.0` and `Release: 0%{?dist}` to mark that this package hasn't been built yet.
- tito will also handle the creation of the source tarball from the git repository, so change the `Source0:` URL to the filename `%{name}-%{version}.tar.gz` & add a comment to tell users how to get the tarball.
- The changelog can be left empty.

```
user@host ~/copr-tito-quickdoc % cat hellocopr.spec
...
Version: 0.0.0
Release: 0%{?dist}
...
# Sources can be obtained by
# git clone
# cd copr-tito-quickdoc
# tito build --tgz
Source0: %{name}-%{version}.tar.gz
...
%changelog
```

Commit the changes. Next, we initialize the project for use with tito.

```
user@host ~/copr-tito-quickdoc % tito init
Creating tito metadata in: ~/copr-tito-quickdoc/.tito
- created ~/copr-tito-quickdoc/.tito
- wrote tito.props
- created ~/copr-tito-quickdoc/.tito/packages
- wrote ~/copr-tito-quickdoc/.tito/packages/.readme
committed to git
Done!
```

This creates a subdirectory `.tito` with some default configuration, which can be left unchanged for now.

We can now do a test build of the package using `tito build`. Usually, tito will build from a tag, which we haven't created yet. However, using the `--test` flag, we can build from the most recent commit instead; the results will be written to `/tmp/tito`:

```
user@host ~/copr-tito-quickdoc % tito build --rpm --test
Creating output directory: /tmp/tito
WARNING: unable to lookup latest package tag, building untagged test project
WARNING: .tito/packages/hellocopr doesn't exist in git, using current directory
Building package [hellocopr-0.0.0-0]
Wrote: /tmp/tito/hellocopr-git-11.7a6919d.tar.gz
...
Successfully built: /tmp/tito/hellocopr-0.0.0-0.git.11.7a6919d.fc32.src.rpm
  /tmp/tito/noarch/hellocopr-0.0.0-0.git.11.7a6919d.fc32.noarch.rpm
```

Once we've fixed any issues with the package that might crop up, we can let tito create a package release using `tito tag`. Since we haven't set a proper version yet, we need to pass it to tito for the first tag:

```
user@host ~/copr-tito-quickdoc % tito tag --use-version 1.0.0
```

This will open the editor & display a pre-formatted changelog entry built up from all commits since the last release, which we can edit as needed. Since there have been none so far, the entry will just contain "- new package built with tito". Save the file, and tito will

- set the Version in the specfile to 1.0.0
- set the Release in the specfile to 1
- append the changelog entry to the specfile's `%changelog` section
- commit the result and tag it with `<name>-<version>-<release>`, i.e. `hellocopr-1.0.0-1`

```
user@host ~/copr-tito-quickdoc % tito tag --use-version 1.0.0
Creating output directory: /tmp/tito
Tagging new version of hellocopr: untagged → 1.0.0-1
Created tag: hellocopr-1.0.0-1
View: git show HEAD
Undo: tito tag -u
Push: git push --follow-tags origin
```

Push the commits & tags to the remote using `git push --follow-tags`, and we're ready to release the package on Copr.

Step 2: Publishing the package in a Copr repository

1. Go to the Copr web interface and log in. Once done, click on New Project to start creating a repository for our program. On the following input mask,
   - Under 1. Project information -> Project name, set the name to what you want your repo to be called. Since this will only contain a single package, it makes sense to use projectname = packagename, i.e. hellocopr. This is the only setting that cannot be changed later.
   - Under 2. Build options, tick all distributions you want to create repositories for (usually all Fedora versions & maybe EPEL versions as well).
   - Under 4. Other Options, make sure that Follow Fedora branching is ticked; this will ensure that your repository automatically updates for new Fedora releases.
2. Go to Packages -> New Package
   - Under 1. Provide the source, set the package name & the URL of your git repository.
   - Under 2. How to build SRPM from the source, select tito.
   - Under 3. Generic package setup, tick the box for Auto-rebuild.
3. Your package will appear in the list of packages. Hit Rebuild to trigger a build. The following page lets you change any build options if necessary; we'll just use the defaults, i.e. the options we set in the previous step. Hit Submit and Copr will build the package from the tito tag we created in Step 1.

Once the build has finished, you can test installing the package from Copr by activating your repository.

```
user@host ~/copr-tito-quickdoc % sudo dnf copr enable <username>/hellocopr
user@host ~/copr-tito-quickdoc % sudo dnf install hellocopr
```

Step 3: Automate package (re)-builds

Next, we want to set up Copr to automatically build a new package version whenever we create one, so that we no longer need to log in and trigger one manually. To achieve this, we simply need to trigger a build whenever we push a new tag to the repository. This requires some configuration both of your Git repository and of the Copr project. The configuration can be found under Settings -> Integrations; the page also explains the steps to configure your git repository for all common Git forges (Pagure, GitHub, GitLab & Bitbucket).

Now, to test this, let's make some changes to our program that will come in handy for the final layer of automation, and create a new release for our software. Currently, the example program has its version hardcoded in multiple places. Let's change this so that the version string is sourced from a single file. Which file this is doesn't matter, but ideally the version variable should be the only thing in it that is likely to change. In this case, we use the previously empty `src/hellocopr/__init__.py`. We name this new version 1.0.1.

Commit the changes, and create a new release with tito:

```
user@host ~/copr-tito-quickdoc % tito tag
Creating output directory: /tmp/tito
Tagging new version of hellocopr: 1.0.0-1 → 1.0.1-1
Created tag: hellocopr-1.0.1-1
View: git show HEAD
Undo: tito tag -u
Push: git push --follow-tags origin
```

Note that by omitting the `--use-version` option, tito now updates the version automatically. It does so by

- increasing the Version's final digit by 1, e.g. `1.0.0` -> `1.0.1`
- resetting the Release to 1 if it isn't already.

If you want to bump to a different version, say `1.1.0`, you can do so by again passing `--use-version`.

Push the resulting commit & tag, and if you now check your project's page on Copr, you'll see that a new build of `hellocopr-1.0.1-1` has been triggered by our pushing a tag.

Step 4: Let tito manage the program version

If you check the git log, you'll find that I actually forgot to update hellocopr's version variable to 1.0.1. We don't want that to happen again. Luckily, since we single-source our version, we can let tito automatically generate this file from a template.

First, copy the version source file `src/hellocopr/__init__.py` to `.tito/templates/__init__.py.template`. Then, open the template file and replace the version string with `$version`. It also makes sense to add a note that the file is managed by tito and should not be edited manually.

```
user@host ~/copr-tito-quickdoc % cat .tito/templates/__init__.py.template
...
# This file is automatically created from a template by tito. Do not edit it manually.
__version__ = '$version'
```

Next, add the following to `.tito/tito.props`:

```
[version_template]
destination_file = src/hellocopr/__init__.py
template_file = .tito/templates/__init__.py.template
```

Commit the changes. Now, when we tag a new release, tito will take the template, replace `$version` with whatever version was tagged, and copy the resulting file to `src/hellocopr/__init__.py` before updating the spec file and committing the changes. We can test this by tagging a new release:

```
user@host ~/copr-tito-quickdoc % tito tag
Creating output directory: /tmp/tito
Tagging new version of hellocopr: 1.0.1-1 → 1.0.2-1
Created tag: hellocopr-1.0.2-1
View: git show HEAD
Undo: tito tag -u
Push: git push --follow-tags origin

user@host ~/copr-tito-quickdoc % cat src/hellocopr/__init__.py
...
# This file is automatically created from a template by tito. Do not edit it manually.
__version__ = '1.0.2'
```

If you again push the tag to the remote repo, Copr will again automatically trigger a rebuild.
Release procedure in brief
From now on, updating your software in the Copr repository is as simple as
1. Commit all changes for your new version.
2. Perform a test build using `tito build --test`.
3. Tag the release with `tito tag` (add `--use-version` if necessary).
4. Push the tag to your git repo using `git push --follow-tags`, and Copr will take care of the rest.
Packaging from source tarballs
You can use a similar process to manage someone elses software on Copr, i.e. build from a tarball downloaded from upstream.
To do so, the following changes need to be made to the procedure described above:
- Instead of the unpacked sources, download & commit the source tarball you want to package to your repository.
- Instead of modifying the source directly, add any changes you need to make in the form of patch files. List these as `PatchX:` in the spec file.
- Also in the spec file, set the `Version:` back to whatever version the program is at and `Source0:` back to the tarball URL. You can use macros like `%{version}` for the latter to automatically follow version changes.
- Modify tito's `.tito/tito.props` to, one, not try to build a source tarball and, two, bump the `Release:` instead of the `Version:` when tagging:

```
[buildconfig]
builder = tito.builder.NoTgzBuilder
tagger = tito.tagger.ReleaseTagger
```

- Don't do any tito templating.

The rest of the procedure stays the same. If you make changes to the package without changing the source, you can just tag a new release with tito. If you do update the source tarball, you need to update the `Version:` field and reset `Release:` to `0%{?dist}` before tagging.

TIP: The tarball-adapted version of the project can be found in the `foreign-sources` branch of the git repo.
InCommonUABgrid
Revision as of 13:50, 18 July 2007
To register UABgrid as a resource provider for InCommon we need define the UABgrid operational practices by addressing the "Resource Provider Information" questions from section 3 of the INCOMMON FEDERATION: PARTICIPANT OPERATIONAL PRACTICES.
The questions from section 3 and proposed answers are listed below. We will likely also want to our final document to be more of an operating practices document than a list of questions and responses..
UABgrid is a collaboration environment for use by UAB community members and their designated collaborators from UAB and from other campuses to organize around shared academic interests. UABgrid is a participant directed and controlled collaboration environment that will provide access to web and grid applications. Basic access will be broadly available with additional privileges granted to specific community members based on the information provided by credential providers and peers within the community.
UABgrid's planned resource provider id will be:
-
Required Attributes
3.1 What attribute information about an individual do you require in order to manage access to resources you might make available to other Participants? Describe separately for each resource ProviderID that you have registered.
The only attribute required for basic access to UABgrid resources will be eduPersonPrincipleName (ePPN). This attribute is intended to provide a unique identity for each user that reflects their identity at their Identity Provider. An identity provider may supply a targeted id in addition to or in lieu of ePPN.
An identity provider may supply an email attribute along with the ePPN or targeted id. If supplied, this address should be considered a working email address. This attribute will be used to pre-populate application forms as a convenience to the end user. However, a user will be allowed to override the supplied email address and supplied an alternative working email address, verified during registration.
Please note: UABgrid will not consider the ePPN, targeted id or email address to constitute personally identifiable information. Users and identity providers concerned with privacy at the user-account level are asked to supply opaque identifiers (such as targeted id) whose mapping to personally identifiable information is maintained by the identity provider.
While this information will be sufficient for basic participation in UABgrid, access to specific resources may require additional information either asserted by the user's identity provider or by authorized peers on UABgrid. An example of these attributes may include the users common name and affiliation as asserted by the identity provider in order to access a computational resource. Requests for these attributes will be identified and determined by resource providers on UABgrid. Users should have the ability to control the release of these additional attributes, with the understanding that denying their release may restrict their levels of privilege on UABgrid.
When requested, every effort will be made to make these additional attributes available only to the applications that require them. For example, if a grid compute resource provider requires the common name and phone number of a user, only that application will receive this additional information.
How Attributes are Used
3.2 What use do you make of attribute information that you receive in addition to basic access control decisions? For example, do you aggregate session access records or records of specific information accessed based on attribute information, or make attribute information available to partner organizations, etc.?
The ePPN will be used to identify an individual user within UABgrid both to web applications and grid resources. This will essentially be their "user identity" within the system.
The email address will enable the user to participate in provided email-based discussions related to the groups in which they participate. The email address will also be used to communicate system-wide announcements to the user and may be used by application providers to communicate with the user. Essentially, the email address is considered a communication end point for the user of the UABgrid system environment.
Additional attributes that may be required for authorizations beyond basic access will be used to help identify the individual to resource providers so that authorization requests can be reviewed.
Personally Identifying Attributes Access Controls
3.3 What human and technical controls are in place on access to and use of attribute information that might refer to only one specific person, i.e. personally identifiable information? For example, is this information encrypted?
Access to the databases that store personally identifiable information will be controlled via standard system security procedures. Only UABgrid operators will have access to centrally stored attributes. Attributes made available to specific resource providers will be under the control of those resource providers. User discretion is advised.
Privileged Account Access Controls
3.4 Describe the human and technical controls that are in place on the management of super-user and other privileged accounts that might have the authority to grant access to personally identifiable information?
Privileged accounts will be restricted to a limited set of experienced UABgrid operators. These operators will be familiar with standard security practices regarding the management of personal information.
User Notification in Case of Compromise
3.5 If personally identifiable information is compromised, what actions do you take to notify potentially affected individuals?
In this event, UABgrid will make a reasonable effort to contact the user via email to notify them of the event. Additionally, UABgrid will alert the user's identity provider to the compromise.
Please note, UABgrid is a pilot service. Every effort will be made to protect provided information. Users are encouraged to exercise discretion and evaluate requests for information based on their trust of the services provided. At the point UABgrid becomes a non-pilot service, additional operating practices and procedures may come into effect which may augment or replace those described here.
Advanced Configuration Tricks¶
Configuration of Zend Framework 2 applications is merged from several sources. With the exception of the values under the 'service_manager' key, the merge order is:
- configuration returned by the various service configuration methods in a module class
- configuration returned by
getConfig()
In other words, your
getConfig() wins over the various service configuration methods.
Additionally, and of particular note: the configuration returned from those methods will not
be cached.
Note
Use the various service configuration methods when you need to define closures or instance callbacks for factories, abstract factories, and initializers. This prevents caching problems, and also allows you to write your configuration files in other markup formats.
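As a rough illustration of the point above, a module might return a factory closure from getServiceConfig(); the module, service, and class names here are only placeholders:

namespace MyModule;

class Module
{
    public function getConfig()
    {
        return include __DIR__ . '/config/module.config.php';
    }

    // Closures cannot be serialized, so configuration returned here is never cached.
    public function getServiceConfig()
    {
        return array(
            'factories' => array(
                'MyModule\Service\Greeter' => function ($services) {
                    return new Service\Greeter($services->get('config'));
                },
            ),
        );
    }
}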
Manipulating merged configuration¶
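Sometimes you want to remove a key from the merged configuration rather than merely override it. One way to do this is to attach a listener to the ModuleEvent::EVENT_MERGE_CONFIG event, which fires after all module configuration has been merged but before it is passed to the ServiceManager. A sketch of such a listener, removing a hypothetical some_key entry (module name is a placeholder), might look like this:

namespace MyModule;

use Zend\ModuleManager\ModuleEvent;
use Zend\ModuleManager\ModuleManager;

class Module
{
    public function init(ModuleManager $moduleManager)
    {
        // Listen for the event that fires after all module configuration is merged.
        $events = $moduleManager->getEventManager();
        $events->attach(ModuleEvent::EVENT_MERGE_CONFIG, array($this, 'onMergeConfig'));
    }

    public function onMergeConfig(ModuleEvent $e)
    {
        $configListener = $e->getConfigListener();
        $config         = $configListener->getMergedConfig(false);

        // Remove the key we no longer want in the merged configuration.
        if (isset($config['some_key'])) {
            unset($config['some_key']);
        }

        // Hand the modified configuration back to the listener.
        $configListener->setMergedConfig($config);
    }
}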
At this point, the merged application configuration will no longer contain the
key
some_key.
Note
If a cached config is used by the
ModuleManager, the
EVENT_MERGE_CONFIG event will not be triggered. However, typically that
means that what is cached will be what was originally manipulated by your
listener.
knife tag¶
A tag is a custom description that is applied to a node. A tag, once applied, can be helpful when managing nodes using knife or when building recipes by providing alternate methods of grouping similar types of information.
The knife tag subcommand is used to apply tags to nodes on a Chef server.
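For example, assuming a node named node1 is registered with the Chef server (the node and tag names below are made up), tags could be managed with commands along these lines:

# Add one or more tags to a node
knife tag create node1 web production

# List the tags currently applied to a node
knife tag list node1

# Remove a tag from a node
knife tag delete node1 production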
Configuration
- Service configuration structure
- Default configuration
EPTS uses its service instance name and replica ID for communication with other services and load balancing across replicas (for example, see 3/ISM). You can control the values with the following configuration options:
service:
  instance:
    name: <string>        # Service instance name, should be unique cluster-wide. "epts" by default.
    replica.id: <string>  # Service replica identifier. A random string generated on boot by default.
Kaa applications
Many Kaa services can be configured for different behavior depending on the application version of the endpoint the processed data relates to.
This is called application-specific configuration. Its general structure is as follows:
kaa:
  applications:
    <application 1 name>:
      versions:
        <application 1 version 1 name>:
        <application 1 version 2 name>:  # Multiple application versions can be configured under the "versions" key
        ...
    <application 2 name>:  # Multiple applications can be configured under the "applications" key
    ...
For example:
kaa:
  applications:
    smart_kettle:
      versions:
        kettle_v1:
        kettle_v2:
        kettle_v3:
For various compatibility reasons, the application and application version names must be limited to lowercase Latin letters (
a-z), digits (
0-9), dashes (
-) and underscores (
_).
Time series configuration
Time series in EPTS are defined within the scope of their applications.
The configuration consists of two logical parts:
- time series definition specifies the structure of a given time series: which values exist and what their types are;
- time series extraction configuration specifies the rules for extracting time series data points from data samples received from endpoints.
Both time series definition and extraction configuration can be specified within an application. However, only the extraction configuration can be overridden in an application version. The time series definition is the same application-wide for consistency and convenience reasons. If you need a change to the time series structure in a certain application version, that deserves a new time series name.
The full structure of the time series configuration is shown below. We will review it in more detail in the following subsections.
kaa:
  applications:
    <application name>:
      time-series:
        auto-extract: <boolean>                     # Whether to auto-extract all numeric values from data sample. `false` by default.
        timestamp:
          path: <gjson-path>                        # Path of the timestamp field in data samples. Optional.
          format: <timestamp-format>                # Format of the timestamp in data sample. "iso8601" by default.
          fallback-strategy: <fallback-strategy>    # Fallback strategy to use. `fail` by default.
        names:
          <time series name>:
            values:
              - name: <string>                      # Time series value name. "value" by default.
                type: <string>                      # Time series value type. "number" by default.
                path: <gjson-path>                  # Path to the value in the incoming data sample. Same as `name` by default.
      versions:
        <application version name>:
          time-series:                              # Time series extraction configuration override for an application version.
            auto-extract: <boolean>
            timestamp:
              path: <gjson-path>
              format: <timestamp-format>
              fallback-strategy: <fallback-strategy>
            names:
              <time series name>:                   # One of time series names defined at the application level.
                values:
                  - name: <string>                  # Name of a time series value field. Must be defined at the application level.
                    path: <gjson-path>              # Path to the value in the incoming data sample for the given application version. Required.
Time series definition
Time series structure is defined once per application under
kaa.applications.<application name>.time-series.names.
The configuration fields that contribute to the definition are:
<time series name>—the time series name. Must be unique application-wide and limited to lowercase latin letters (
a-z), digits (
0-9), dashes (
-) and underscores (
_). (e.g.
temperature,
ground_speed,
402-metres-time, etc.).
<time series name>.values—list of time series values.
<time series name>.values[].name—time series value name. Must be unique time series-wide. Defaults to “value”. Names “time”, “kaaEndpointID”, and “kaaEmpty” are reserved and cannot be used.
<time series name>.values[].type—time series value type. One of “number”, “string”, or “bool”, where “number” is stored as a 64-bit float.
null values are permitted for any value type (see the example following this list).
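For illustration only, a definition of a single temperature time series for the smart_kettle application from the earlier example might look like this (the value name and type below are assumptions):

kaa:
  applications:
    smart_kettle:
      time-series:
        names:
          temperature:
            values:
              - name: value
                type: number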
Time series extraction
Upon receiving data samples (which are basically arbitrary JSON records), EPTS extracts data points according to the time series extraction configuration.
IMPORTANT: In the process of time series extraction, EPTS attempts to match each received data sample against each of the time series configured in the corresponding application. For the match to occur and a data point to be extracted into a given time series, all time series values must be found in the data sample by their respective configured
path, and the found field type must match the expected value
type. If the processed data sample does not contain a field when searched by the value
path, or the field type does not match the time series specification, EPTS concludes that the data sample does not contain the given time series data, skips the extraction of a data point for that time series, and proceeds to the next configured time series.
number time-series values can be extracted from JSON integers or numbers, or parsed from JSON strings, when possible.
Note that an explicit
null value in a data sample satisfies any value type and will be processed by EPTS as an empty value.
As you can see from the above, you can think of EPTS configured time series as filters that get applied against the received data samples.
Data points only get added to time series when data samples contain all configured time series values with appropriate data types.
This is especially useful when your connected devices take and report different measurements at different time intervals.
For example, you can submit environmental conditions (e.g.
temperature and
humidity) every 10 minutes, and the battery level—every hour.
Thus, every hour a data sample reported from the device would contain an additional
battery field.
If you configure one time series to extract environmental conditions data, and the other—battery level, EPTS will extract the former time series from data samples every 10 minutes, and the latter—every hour.
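A sketch of that scenario is shown below; the application and field names are made up. The environment time series only matches data samples that contain both fields, while battery_level only matches the hourly samples that include a battery reading:

kaa:
  applications:
    weather_station:
      time-series:
        names:
          environment:
            values:
              - name: temperature
                type: number
                path: temperature
              - name: humidity
                type: number
                path: humidity
          battery_level:
            values:
              - name: value
                type: number
                path: battery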
Base time series extraction configuration is defined application-wide and is applied by default, unless you override it for a specific application version.
kaa.applications.<application name>.time-series.names.<time series name>.values[].path defines the path to the value in the data sample.
EPTS uses GJSON for extracting time series values, so you can use the path syntax specified by GJSON.
This setting can be overriden for an application version using
kaa.applications.<application name>.versions.<application version name>.time-series.names.<time series name>.values[].path option.
Timestamp extraction
Every time series data point must contain a timestamp.
EPTS supports extracting timestamps from received data samples or using the server receipt time.
The timestamp extraction can be both defined at the application level under
kaa.applications.<application name>.time-series.timestamp, and overriden for an application version under
kaa.applications.<application name>.versions.<application version name>.time-series.timestamp.
The following options are supported:
path—path to the timestamp field in the data sample. Same as with the time series values path, GJSON path syntax is supported. If
path is not specified, EPTS does not attempt to extract a timestamp from the data sample, and uses the server receipt time stamp instead.
format—field format to use for parsing the timestamp located at
path. Only used when the
path is set. Supported formats include "iso8601" (the default); see the combined example after this list.
fallback-strategy—Defines EPTS behavior when the timestamp parsing fails for any reason (missing field, wrong format). Supported strategies are:
server-timestamp—fallback to using the server receipt timestamp;
fail—fail the data sample processing and respond back to the DSTP transmitter with an error (default).
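Putting these options together, a hypothetical application that reports an ISO 8601 timestamp in a ts field (an assumed field name), and falls back to the server receipt time when that field is missing or malformed, might use:

kaa:
  applications:
    weather_station:
      time-series:
        timestamp:
          path: ts
          format: iso8601
          fallback-strategy: server-timestamp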
Time series auto-extraction
kaa.applications.<application name>.time-series.auto-extract option enables time series auto-extraction from data samples.
If set to true, EPTS will automatically create a time series for each top-level numeric field in the data sample JSON.
This setting can be overridden for an application version using
kaa.applications.<application name>.versions.<application version name>.time-series.auto-extract option.
Auto-extracted time series name will match the source field name with
auto~ prefix (e.g.
auto~temperature) to prevent collisions with user-defined time series names.
Data samples receiver
Use the following options to configure the data samples receiver interface.
kaa:
  dstp.receiver:
    from: <string>        # Name of the data samples transmission service instance EPTS will subscribe to.
    concurrency: <uint>   # The maximum number of concurrent workers, which will consume 13/DSTP messages from message pool. 256 by default.
    queue-length: <uint>  # The maximum amount of messages in the DSTP receiver queue. When the queue is full, new messages will be dropped. 64 * `kaa.dstp.receiver.concurrency` by default.
Time series receiver
Use the following options to configure the time series receiver interface that EPTS uses for time series consumption. You can configure subscription to multiple transmitters and either define specific time series to listen to, or use a “*” wildcard.
NOTE: EPTS will only process time series data points that match the configuration according to the time series configuration section.
kaa:
  tstp.receiver:
    concurrency: <uint>   # The maximum number of concurrent workers, which will consume 14/TSTP messages from message pool. 16 by default.
    queue-length: <uint>  # The maximum amount of messages in the TSTP receiver queue. When the queue is full, new messages will be dropped. 64 * `kaa.tstp.receiver.concurrency` by default.
    from:                 # Map of time series transmission service instance EPTS will subscribe to.
      <transmission service instance name>:
        time-series: <list of string>  # Time series names that EPTS will consume. Optional, "*" by default.
      ...
Tekton
EPTS can receive application-specific configuration from Tekton over 17/SCMP. The related options include:
queue-length: <uint>  # Maximum queue length for 17/SCMP messages from Tekton. 256 by default.
For the Tekton integration to function, there must be no
kaa.applications key in the configuration file. Such configuration, when present, takes precedence over the Tekton-supplied application-specific configs.
Data persistence interface
EPTS uses InfluxDB for persisting time series data.
kaa:
  influx:
    precision: <string>        # The data points timestamp precision in InfluxDB.
                               # All timestamps stored in the InfluxDB are truncated to the given precision.
                               # `h` (hours), `m` (minutes), `s` (seconds), `ms` (milliseconds), `u` (microseconds), `ns` (nanoseconds).
                               # Defaults to `ns`.
    url: <string>              # InfluxDB URL. "" by default.
    user: <string>             # InfluxDB user (see note below)
    password: <string>         # InfluxDB password (see note below)
    idle-connections: <uint>   # Maximum idle connections in the InfluxDB client HTTP transport pool. 128 by default.
    read:
      chunk.size: <uint>       # InfluxDB request chunk size (number of time series to write to or getting from InfluxDB). 10000 by default.
    write:
      concurrency: <uint>      # The maximum number of concurrent InfluxDB writes; 4 by default.
      queue.size: <uint>       # The maximum length of the DB write queue per application.
                               # Each item in the queue is based on a received DSTP or TSTP message. 1024 by default.
      batch.size: <uint>       # The optimal desired write batch size in data points. 10000 by default.
                               # EPTS uses this configuration option as a recommendation, not as a strict rule.
                               # It will attempt to collect write batches as close to this number as possible without sacrificing the write latency.
                               # Some writes may exceed the configured batch size, which is by design.
      timeout: <uint>          # Write timeout for InfluxDB requests (in ms). Default 5000 ms.
NOTE: For security reasons, username and password must be sourced from the environment variables.
NATS
The below parameters configure EPTS’s connection to NATS. Note that for security reasons NATS username and password are sourced from the environment variables.
nats:
  urls: <comma separated list of URL>  # NATS connection URLs.
Authentication, authorization, and multi-tenancy
To configure the EPTS management interface, use the following configuration options.
service.monitoring:
  disabled: <boolean>  # Disables the monitoring interface entirely. False by default (enabled).
  port: <uint>         # TCP port to expose the monitoring server on. 8080 by default.
Logging
EPTS.
Default configuration
Summarizing the above, the default EPTS configuration is as follows. Note that no Kaa applications are defined by default—you have to configure those for any specific Kaa-based solution.
service:
  instance:
    name: "epts"
    replica.id: "<random string generated on boot>"
  monitoring:
    disabled: false
    port: 8080
  debug: false
kaa:
  influx:
    precision: "ns"
    write.timeout: 5000
    read.chunk.size: 10000
    idle-connections: 128
This guide steps through the process of acquiring and deploying a Docker image of the Trifacta® platform in your Docker environment. Optionally, you can build the Docker image locally, which enables further configuration options.
Deployment Scenario
- Trifacta Wrangler Enterprise deployed into a customer-managed environment: On-premises, AWS, or Azure.
- PostgreSQL 9.6 or MySQL 5.7 installed either:
- Locally
- Remote server
- Connected to a supported Hadoop cluster.
- Kerberos integration is supported.
Limitations
NOTE: For Docker installs and upgrades, only the dependencies for the latest supported version of each supported major Hadoop distribution are available for use after upgrade. For more information on the supported versions, please see the
hadoop-deps directory in the installer. Dependencies for versions other than those available on the installer are not supported.
- You cannot upgrade to a Docker image from a non-Docker deployment.
- You cannot switch an existing installation to a Docker image.
- Supported distributions of Cloudera or Hortonworks:
- The base storage layer of the platform must be HDFS. Base storage of S3 is not supported.
- High availability for the Trifacta platform in Docker is not supported.
- SSO integration is not supported.
Requirements
Support for orchestration through Docker Compose only
- Docker version 17.12 or later. Docker version must be compatible with the following version(s) of Docker Compose.
- Docker-Compose 1.24.1. Version must be compatible with your version of Docker.
Docker Daemon
Preparation
Review the Desktop Requirements in the Planning Guide.
NOTE: Trifacta Wrangler Enterprise requires the installation of a supported browser on each desktop.
Acquire your License Key.
Acquire Image
You can acquire the latest Docker image using one of the following methods:
- Acquire from FTP site.
- Build your own Docker image.
Acquire from FTP site
Steps:
- Download the following files from the FTP site:
trifacta-docker-setup-bundle-x.y.z.tar
trifacta-docker-image-x.y.z.tar
NOTE:
x.y.z refers to the version number (e.g. 6.4.0).
Untar the
setup-bundle file:
tar xvf trifacta-docker-setup-bundle-x.y.z.tar
Files are extracted into a
docker folder. Key files:
Load the Docker image into your local Docker environment:
docker load < trifacta-docker-image-x.y.z.tar
Confirm that the image has been loaded. Execute the following command, which should list the Docker image:
docker images
You can now configure the Docker image. Please skip that section.
Build your own Docker image
As needed, you can build your own Docker image.
Requirements
Docker version 17.12 or later. Docker version must be compatible with the following version(s) of Docker Compose.
Docker Compose 1.24.1. It should be compatible with above version of Docker.
Build steps
Acquire the RPM file from the FTP site:
NOTE: You must acquire the el7 RPM file for this release.
- In your Docker environment, copy the
trifacta-server*.rpm file to the same level as the
Dockerfile.
- Verify that the
docker-files folder and its contents are present.
Use the following command to build the image:
docker build -t trifacta/server-enterprise:latest .
This process could take about 10 minutes. When it is completed, you should see the build image in the Docker list of local images.
NOTE: To reduce the size of the Docker image, the Dockerfile installs the trifacta-server RPM file in one stage and then copies over the results to the final stage. The RPM is not actually installed in the final stage. All of the files are properly located.
- You can now configure the Docker image.
Configure Docker Image
Before you start the Docker container, you should review the properties for the Docker image. In the provided image, please open the appropriate
docker-compose file:
NOTE: You may want to create a backup of this file first.
Key general properties:
NOTE: Avoid modifying properties that are not listed below.
Database properties:
These properties pertain to the database installation to which the Trifacta application connects.
Kerberos properties:
If your Hadoop cluster is protected by Kerberos, please review the following properties.
Hadoop distribution client JARs:
Please enable the appropriate path to the client JAR files for your Hadoop distribution. In the following example, the Cloudera path has been enabled, and the Hortonworks path has been disabled:
# Mount folder from outside for necessary hadoop client jars
# For CDH
- /opt/cloudera:/opt/cloudera
# For HDP
#- /usr/hdp:/usr/hdp
Please modify these lines if you are using Hortonworks, as shown in the example below.
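For instance, on a Hortonworks cluster the same block would have the HDP mount enabled and the Cloudera mount commented out:

# Mount folder from outside for necessary hadoop client jars
# For CDH
#- /opt/cloudera:/opt/cloudera
# For HDP
- /usr/hdp:/usr/hdp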
Volume properties:
These properties govern where volumes are mounted in the container.
NOTE: These values should not be modified unless necessary.
Start Server Container
After you have performed the above configuration, execute the following to initialize the Docker container:
docker-compose -f <docker-compose-filename>.yaml run trifacta initfiles
When the above is started for the first time, the following directories are created on the localhost:
Import Additional Configuration Files
After you have started the new container, additional configuration files must be imported.
Import license key file
The Trifacta license file must be staged for use by the platform. Stage the file in the following location in the container:
NOTE: If you are using a non-default path or filename, you must update the
<docker-compose-filename>
.yaml file.
trifacta-license/license.json
Import Hadoop distribution libraries
If the container you are creating is on the edge node of your Hadoop cluster, you must provide the Hadoop libraries.
- You must mount the Hadoop distribution libraries into the container. For more information on the libraries, see the documentation for your Hadoop distribution.
- The Docker Compose file must be made aware of these libraries. Details are below.
Import Hadoop cluster configuration files
Some core cluster configuration files from your Hadoop distribution must be provided to the container. These files must be copied into the following directory within the container:
./trifacta-data/conf/hadoop-site
For more information, see Configure for Hadoop in the Configuration Guide.
Install Kerberos client
If Kerberos is enabled, you must install the Kerberos client and keytab on the node container. Copy the keytab file to the following stage location:
/trifacta-data/conf/trifacta.keytab
See Configure for Kerberos Integration in the Configuration Guide.
Perform configuration changes as necessary
The primary configuration file for the platform is in the following location in the launched container:
/opt/trifacta/conf/trifacta-conf.json
NOTE: Unless you are comfortable working with this file, you should avoid direct edits to it. All subsequent configuration can be applied from within the application, which supports some forms of data validation. It is possible to corrupt the file using direct edits.
Configuration topics are covered later.
Start and Stop the Container
Stop container
Stops the container but does not destroy it.
NOTE: Application and local database data is not destroyed. As long as the
<docker-compose-filename>
.yaml properties point to the correct location of the
*-data files, data should be preserved. You can start new containers to use this data, too. Do not change ownership on these directories.
docker-compose -f <docker-compose-filename>.yaml stop
Restart container
Restarts an existing container.
docker-compose -f <docker-compose-filename>.yaml start
Recreate container
Recreates a container using existing local data.
docker-compose -f <docker-compose-filename>.yaml up --force-recreate -d
Stop and destroy the container
Stops the container and destroys it.
The following also destroys all application configuration, logs, and database data. You may want to back up these directories first.
docker-compose -f <docker-compose-filename>.yaml down
Local PostgreSQL:
sudo rm -rf trifacta-data/ postgres-data/
Local MySQL or remote database:
sudo rm -rf trifacta-data/
Verify Deployment
Verify access to the server where the Trifacta platform is to be installed.
Cluster Configuration: Additional steps are required to integrate the Trifacta platform with the cluster. See Prepare Hadoop for Integration with the Platform in the Planning Guide.
- Start the platform within the container. See Start and Stop the Platform.
Configuration
After installation is complete, additional configuration is required. You can complete this configuration from within the application.
Steps:
- Login to the application. See Login.
- The primary configuration interface is the Admin Settings page. From the left menu, select User menu > Admin console > Admin settings. For more information, see Admin Settings Page in the Admin Guide.
- In the Admin Settings page, you should do the following:
- Configure password criteria. See Configure Password Criteria.
- Change the Admin password. See Change Admin Password.
- Workspace-level configuration can also be applied. From the left menu, select User menu > Admin console > Workspace settings. For more information, see Workspace Settings Page in the Admin Guide.
# jbackup
Updated: 10/19/2020, 10:24:56 AM
Created: 10/19/2020, 10:24:56 AM
Last Updated By: Daniel Klein
Read Time: 4 minute(s)
# Description
The jbackup utility provides fast on-line backup facilities and can also be used to check file integrity.
jbackup -options {inputlist}
Where:
- inputlist is a file containing a list of files, default stdin
- options include -f (backup device or file), -m (media size in MB), -v (verbose, display each file name), -P (print and scan only, no backup), -S (statistics file) and -A (account), as shown in the examples below
# Uses
jchmod -B filename
will cause jbackup to skip 'filename'.
This is only effective for jBASE hashed files. O/S level directory files will always be backed up.
To avoid backup of O/S directories place them outside of the backup path and use Q-pointers, F-pointers or symbolic links.
2. jbackup creates a file named jbk*PID as a work file when executed. Therefore, jbackup must be run from a directory which has write privileges. If the file system or directory is not write enabled you will receive the error message ERROR! Cannot open temporary file jbk*PID.tmp, error 2
3. See also jrestore.
Examples of use of the jbackup command are as:
# On UNIX
find /home -print | jbackup -P
Reads all records, files and directories under the /home directory provided by the find selection and displays each file or directory name as it is encountered. This option can be used to verify the integrity of the selected files and directories.
jbackup -f /dev/rmt/floppy -m1 -v < FILELIST
Reads all files and directories listed in the UNIX file FILELIST and writes the formatted data blocks to the floppy disk device, displaying each file or directory name as it is encountered. The jbackup utility will prompt for the next disk if the amount of data produced exceeds the specified media size of 1 Mbyte.
jbackup -Ajbase -S /opt/jbase/CurrentVersion/tmp/jbase_stats >/dev/null
LIST /opt/jbase/CurrentVersion/tmp/jbase_stats USING /opt/jbase/CurrentVersion/jbackup NAME TOTAL SIZE ID-SUPP
2
Reads all files and directories in home directory of user-id "jbase". Generates statistics information and outputs blocks to stdout, which is redirected to /dev/null. The statistics information is then listed using the jbackup dictionary definitions to calculate the file space used.
# On Windows
jfind C:\users\vanessa -print | jbackup -P
Reads all records, files and directories under the C:\users\vanessa directory provided by the jfind selection and displays each file or directory name as it is encountered. The -P option means that the files are not actually backup (print and scan only). It is useful to verify the integrity of the selected files and directories. This command should be run with jshelltype sh rather than jsh.
jfind D:\data -print | jbackup -f C:\temp\save20030325 -m10000 -S C:\temp\stats -v
The jfind command outputs the names of all the files and directories under the D:\data directory. This output is passed to the jbackup command causing it to backup every file that jfind locates. Rather than save to tape, this jbackup command creates a backup file: "C:\temp\save20030325". Note that jbackup creates the "save2003025" file, but the directory "c:\temp" must exist before running the command.
The -m10000 option specifies that the maximum amount of data to back up is 10,000MB (or 10GB) rather than the default 100MB. The -S option causes file statistics to be written to the hashed file stats. This file should exist and be empty prior to commencing the backup. The -v option causes the name of each file to be displayed as it is backed up.
# Note
Because of the pipe character used to direct the output of jfind to jbackup, this command should be run from an OS shell or with jshelltype sh rather than jsh. | https://docs.zumasys.com/jbase/utilities/jbackup/ | 2020-10-19T23:54:18 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.zumasys.com |
In selective cache redirection, the Citrix ADC appliance sends all cacheable HTTP traffic to a transparent cache farm. Clients access the Internet through the appliance, which is configured as a Layer 4 switch that receives traffic on port 80.
Note: The configuration described here is for transparent selective cache redirection. Therefore, it does not require a load balancing virtual server for the origin, as would a reverse proxy configuration. To enable cache redirection and load balancing on the appliance, see Enable cache redirection and load balancing.
TiDB Data Migration Glossary
This document lists the terms used in the logs, monitoring, configurations, and documentation of TiDB Data Migration (DM).
B
Binlog
In TiDB DM, binlogs refer to the binary log files generated in the TiDB database. It has the same indications as that in MySQL or MariaDB. Refer to MySQL Binary Log and MariaDB Binary Log for details.
Binlog event
Binlog events are information about data modification made to a MySQL or MariaDB server instance. These binlog events are stored in the binlog files. Refer to MySQL Binlog Event and MariaDB Binlog Event for details.
Binlog event filter
Binlog event filter is a more fine-grained filtering feature than the block and allow lists filtering rule. Refer to binlog event filter for details.
Binlog position
The binlog position is the offset information of a binlog event in a binlog file. Refer to MySQL
SHOW BINLOG EVENTS and MariaDB
SHOW BINLOG EVENTS for details.
Binlog replication processing unit
Binlog replication processing unit is the processing unit used in DM-worker to read upstream binlogs or local relay logs, and to migrate these logs to the downstream. Each subtask corresponds to a binlog replication processing unit. In the current documentation, the binlog replication processing unit is also referred to as the sync processing unit.
Block & allow table list
Block & allow table list is the feature that filters or only migrates all operations of some databases or some tables. Refer to block & allow table lists for details. This feature is similar to MySQL Replication Filtering and MariaDB Replication Filters.
C
Checkpoint
A checkpoint indicates the position from which a full data import or an incremental replication task is paused and resumed, or is stopped and restarted.
- In a full import task, a checkpoint corresponds to the offset and other information of the successfully imported data in a file that is being imported. A checkpoint is updated synchronously with the data import task.
- In an incremental replication, a checkpoint corresponds to the binlog position and other information of a binlog event that is successfully parsed and migrated to the downstream. A checkpoint is updated after the DDL operation is successfully migrated or 30 seconds after the last update.
In addition, the
relay.meta information corresponding to a relay processing unit works similarly to a checkpoint. A relay processing unit pulls the binlog event from the upstream and writes this event to the relay log, and writes the binlog position or the GTID information corresponding to this event to
relay.meta.
D
Dump processing unit
The dump processing unit is the processing unit used in DM-worker to export all data from the upstream. Each subtask corresponds to a dump processing unit.
G
GTID
The GTID is the global transaction ID of MySQL or MariaDB. With this feature enabled, the GTID information is recorded in the binlog files. Multiple GTIDs form a GTID set. Refer to MySQL GTID Format and Storage and MariaDB Global Transaction ID for details.
L
Load processing unit
The load processing unit is the processing unit used in DM-worker to import the fully exported data to the downstream. Each subtask corresponds to a load processing unit. In the current documentation, the load processing unit is also referred to as the import processing unit.
M
Migrate/migration
The process of using the TiDB Data Migration tool to copy the full data of the upstream database to the downstream database.
In the case of clearly mentioning "full", not explicitly mentioning "full or incremental", and clearly mentioning "full + incremental", use migrate/migration instead of replicate/replication.
R
Relay log
The relay log refers to the binlog files that DM-worker pulls from the upstream MySQL or MariaDB, and stores in the local disk. The format of the relay log is the standard binlog file, which can be parsed by tools such as mysqlbinlog of a compatible version. Its role is similar to MySQL Relay Log and MariaDB Relay Log.
For more details such as the relay log's directory structure, initial migration rules, and data purge in TiDB DM, see TiDB DM relay log.
Relay processing unit
The relay processing unit is the processing unit used in DM-worker to pull binlog files from the upstream and write data into relay logs. Each DM-worker instance has only one relay processing unit.
Replicate/replication
The process of using the TiDB Data Migration tool to copy the incremental data of the upstream database to the downstream database.
In the case of clearly mentioning "incremental", use replicate/replication instead of migrate/migration.
S
Safe mode
Safe mode is the mode in which DML statements can be imported more than once when the primary key or unique index exists in the table schema.
In this mode, some statements from the upstream are migrated to the downstream only after they are re-written. The
INSERT statement is re-written as
REPLACE; the
UPDATE statement is re-written as
DELETE and
REPLACE. TiDB DM automatically enables the safe mode within 5 minutes after the migration task is started or resumed. You can manually enable the mode by modifying the
safe-mode parameter in the task configuration file.
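As a simplified illustration (the table and column names are made up), statements received from the upstream are re-written before being applied downstream roughly as follows:

-- Received from the upstream binlog:
INSERT INTO t (id, name) VALUES (1, 'a');
UPDATE t SET name = 'b' WHERE id = 1;

-- Applied to the downstream in safe mode:
REPLACE INTO t (id, name) VALUES (1, 'a');
DELETE FROM t WHERE id = 1;
REPLACE INTO t (id, name) VALUES (1, 'b');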
Shard DDL
The shard DDL is the DDL statement that is executed on the upstream sharded tables. It needs to be coordinated and migrated by TiDB DM in the process of merging the sharded tables. In the current documentation, the shard DDL is also referred to as the sharding DDL.
Shard DDL lock
The shard DDL lock is the lock mechanism that coordinates the migration of shard DDL. Refer to the implementation principles of merging and migrating data from sharded tables in the pessimistic mode for details. In the current documentation, the shard DDL lock is also referred to as the sharding DDL lock.
Shard group
A shard group is all the upstream sharded tables to be merged and migrated to the same table in the downstream. Two-level shard groups are used for implementation of TiDB DM. Refer to the implementation principles of merging and migrating data from sharded tables in the pessimistic mode for details. In the current documentation, the shard group is also referred to as the sharding group.
Subtask
The subtask is a part of a data migration task that is running on each DM-worker instance. In different task configurations, a single data migration task might have one subtask or multiple subtasks.
Subtask status
The subtask status is the status of a data migration subtask. The current status options include
New,
Running,
Paused,
Stopped, and
Finished. Refer to subtask status for more details about the status of a data migration task or subtask.
T
Table routing
The table routing feature enables DM to migrate a certain table of the upstream MySQL or MariaDB instance to the specified table in the downstream, which can be used to merge and migrate sharded tables. Refer to table routing for details.
Task
The data migration task, which is started after you successfully execute a
start-task command. In different task configurations, a single migration task can run on a single DM-worker instance or on multiple DM-worker instances at the same time.
Task status
The task status refers to the status of a data migration task. The task status depends on the statuses of all its subtasks. Refer to subtask status for details.
XML to JSON conversion¶
Zend\Json provides a convenience method for transforming XML formatted data into JSON format. This feature
was inspired by an IBM developerWorks article.
Zend\Json includes a static function called
Zend\Json\Json::fromXml() that performs this conversion.
Example¶
The following is a simple example that shows an XML input string passed to, and the JSON output string returned from, the
Zend\Json\Json::fromXml() function.
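A minimal usage sketch is shown below; the XML content is invented for illustration, and the optional second argument is set to true so that XML attributes are ignored during conversion:

use Zend\Json\Json;

$xml = '<?xml version="1.0" encoding="UTF-8"?>
<books>
    <book>
        <title>Code Generation in Action</title>
        <publisher>Manning</publisher>
    </book>
</books>';

// Convert the XML string into a JSON formatted string.
$json = Json::fromXml($xml, true);

echo $json;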
More details about this xml2json feature can be found in the original proposal itself. Take a look at the Zend_xml2json proposal. | https://zf2-docs.readthedocs.io/en/latest/modules/zend.json.xml2json.html | 2020-10-20T00:50:40 | CC-MAIN-2020-45 | 1603107867463.6 | [] | zf2-docs.readthedocs.io |
About Avocado¶
Avocado is a set of tools and libraries to help with automated testing.
One can call it a test framework with benefits. Native tests are
written in Python and they follow the
unittest pattern, but any
executable can serve as a test.
Avocado is composed of:
- A test runner that lets you execute tests. Those tests can be either written in your language of choice, or be written in Python and use the available libraries. In both cases, you get facilities such as automated log and system information collection.
- Libraries that help you write tests in a concise, yet expressive and powerful way. You can find more information about what libraries are intended for test writers at Libraries and APIs.
Plugins that can extend and add new functionality to the Avocado Framework.
Avocado is built on the experience accumulated with Autotest, while improving on its weaknesses and shortcomings.
Avocado tries as much as possible to comply with standard Python testing technology. Tests written using the Avocado API are derived from the unittest class, while other methods suited to functional and performance testing were added. The test runner is designed to help people to run their tests while providing an assortment of system and logging facilities, with no effort, and if you want more features, then you can start using the API features progressively. | https://avocado-framework.readthedocs.io/en/53.0/Introduction.html | 2020-10-20T01:07:09 | CC-MAIN-2020-45 | 1603107867463.6 | [] | avocado-framework.readthedocs.io |
Ansible Playbooks for Confluent Platform¶
Refer to the topics in this section for using the Ansible playbooks provided by Confluent to install, deploy, manage, and configure Confluent Platform. The cp-ansible repository provides the playbooks and templates that allow you to easily provision the Confluent Platform in your environment.
Note
This guide assumes basic knowledge of how Ansible playbooks work. If you want to learn more about Ansible playbooks, see the Ansible documentation. The playbooks cover Confluent Platform components, including Confluent KSQL.
Note
With the exception of Confluent Control Center, all Confluent Platform components are installed with Jolokia enabled and metrics exposed.
Refer to the following topics for detailed information about installing, deploying, and managing Confluent Platform with Ansible. | https://docs.confluent.io/5.4.2/installation/cp-ansible/index.html | 2020-10-20T00:30:34 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.confluent.io |
Create a message collection for a channel and register a collection handler to receive succeeded-message events, then fetch messages in both directions:
const collection = new SendBirdSyncManager.MessageCollection(channel);
const handler = new SendBirdSyncManager.MessageCollection.CollectionHandler();
handler.onSucceededMessageEvent = (messages, action) => {
  switch (action) {
    case 'insert':
      // TODO: Add messages to the view.
      break;
    case 'update':
      // TODO: Update messages in the view.
      break;
    case 'remove':
      // TODO: Remove messages from the view.
      break;
    case 'clear':
      // TODO: Clear the view.
      break;
  }
};
collection.setCollectionHandler(handler);
collection.fetchSucceededMessages('prev');
collection.fetchSucceededMessages('next');
// Set the viewpoint during initialization.
const params = new MessageCollectionParams(channel, startingViewpointTimestamp);
collection = new MessageCollection(params);

...

// Reset the viewpoint.
collection.resetViewpointTimestamp(newViewpointTimestamp);
Note: If the current user isn’t seeing the most recent messages, real-time event messages may not be being delivered to the collection handler. SyncManager assumes that real-time event messages are shown through the
fetchSucceededMessages() method in the next direction.
When sending a message, append the pending message to the collection and hand the send result back to the collection so it can track the message state:
const params = new sb.UserMessageParams();
params.message = 'Hello world!';
const pendingMessage = channel.sendUserMessage(params, (err, message) => {
  collection.handleSendMessageResponse(err, message);
});
collection.appendMessage(pendingMessage);
Messages can be updated using the updateUserMessage() method in the Chat SDK and then passed to the collection:
const params = new sb.UserMessageParams();
params.message = 'Updated message';
channel.updateUserMessage(message.messageId, params, (err, message) => {
  if (!err) {
    collection.updateMessage(message);
  }
});
Note: Only succeeded messages can be updated.
Messages can be deleted using the
deleteMessage() method in the Chat SDK.
channel.deleteMessage(message, err => {
  // The collection handler gets the 'remove' event for the message
  // so no further job is required.
});
To use SyncManager in a React Native app, pass AsyncStorage to the SDK before calling setup:
import AsyncStorage from '@react-native-community/async-storage';

...

SendBirdSyncManager.sendBird = new SendBird({ appId });
SendBirdSyncManager.useReactNative(AsyncStorage);
SendBirdSyncManager.setup(...);
React Native also has a push notification functionality. This feature can be enabled by registering the push token as follows:
if (Platform.OS === 'android') {
  // Android
  sb.registerGCMTokenForCurrentUser(token)
    .then(() => {
      // FCM push token is registered.
    })
    .catch(err => {
      console.error('Something went wrong.');
    });
} else if (Platform.OS === 'ios') {
  // iOS
  sb.registerAPNSTokenForCurrentUser(token)
    .then(() => {
      // APNS push token is registered.
    })
    .catch(err => {
      console.error('Something went wrong.');
    });
}
To keep the connection state in sync, notify the SDK when the app moves between the foreground and background using React Native's AppState:
import {AppState} from 'react-native';

class AppStateExample extends Component {
  ...
  state = {
    appState: AppState.currentState,
  };
  _handleAppStateChange = nextAppState => {
    if (this.state.appState.match(/inactive|background/) && nextAppState === 'active') {
      sb.setForegroundState();
    } else if (this.state.appState === 'active' && nextAppState.match(/inactive|background/)) {
      sb.setBackgroundState();
    }
    this.setState({appState: nextAppState});
  };
}
Tasks¶
Tasks are collections of input connectors, input topics, processors, output connectors and output topics; these define the data flow from inputs to outputs. A task can be started to initiate the data flow and it can be stopped to end the data flow.
Add Task¶
Select the “Tasks” node in the navigation panel and click “Add” to add an empty task. The task will have the default settings with no connector or processor added.
Tip
To create similar tasks, select an existing task in the content panel, use “Duplicate” to create a copy, then edit the new task as required. If you have configured an output connector with an “Auto” topic (for example, the ODBC connector) then you can right-click on an input connector topic (for example, an OPC UA topic) and select “Send to” to create a task that transfers data from the input to the output connector.
Edit Task Settings¶
Select the task in the navigation panel to edit its settings. Settings are:
- Name
- The task’s name.
- Start this task automatically
- If ticked, then the task is automatically started when the UA Office Link Core Service starts. By default, the service is configured to start automatically and in that case tasks will start automatically after a reboot, for example.
- Auto-start delay
- The number of seconds the auto-start should be delayed for; useful if data sources need some initialisation time, for example, to avoid intial error messages.
- Max pending count
- UA Office Link maintains an in-memory data update queue for each output connector. The maximum pending count limits the number of queued data updates for each connector. If the maximum count is exceeded then incomimg data sets are dropped instead of forwarding the data to the respective connector. Setting the “Max pending count” property to zero (the default) means that there is no limit on the number of queued updates.
Add Input Connectors¶
Click on the task’s “Inputs” node to add input connectors by dragging the connector from the “Connector Browser” onto the task’s “Input Connector” content panel. Each type of connector can only be added once.
Add Input Topics¶
Click on the task’s input connector node that you have added and add input topics by dragging the topics from the “Topic Browser” onto the task’s “Topics” content panel. If you don’t see any topics listed then connector topics may be write-only or no connector topics have been created yet. Please check the topic configuration in the relevant connector.
Add Output Connectors¶
Click on the task’s “Outputs” node to add output connectors by dragging the connector from the “Connector Browser” onto the task’s “Output Connector” content panel. Each type of connector can only be added once.
Add Output Topics¶
Click on the task’s output connector node that you have added and add output topics by dragging the topics from the “Topic Browser” onto the task’s “Topics” content panel. If you don’t see any topics listed then connector topics may be read-only or no connector topics have been created yet. Please check the topic configuration in the relevant connector.
Add Processors¶
Click on the task’s “Processing” node to add processors by dragging the processor from the “Processor Browser” onto the task’s “Processor” content panel. Click on the added processor node to configure the processor.
Start Tasks¶
Select the “Tasks” node in the navigation panel and select one or more tasks from the list in the content panel. Press the “Start” button to start the selected task(s).
Tip
You can also use the right-click menu to start individual tasks.
Inspect the message panel for any error relating to the tasks. If anything goes wrong, then error messages will inform about the cause and the task icon will indicate an error.
Tasks may be in a state of:
- Stopped: The task has not been started.
- Running: The task is running normally.
- Interrupted: The task is not running due to an error but may recover and continue to run later.
- Aborted: The task has encountered an error and cannot continue to run.
Monitor Tasks¶
Expand the task and drilldown to topic level to inspect live values as they flow through the task stages. Note that “Live Values” must be ticked in the application’s “View” menu.
Challenge 6: Accompanying the transformation¶
Artificial Intelligence (AI) is both a technological and social innovation, since it brings with it all the benefits and complexities that can radically transform society, including the public sector. An innovation, therefore, that can contribute to improving the quality of the services offered and to reinforcing the relationship of trust between administration and citizen.
The opportunities offered by the AI concern both the increase in efficiency of administration operations and user satisfaction. To exploit them to the fullest and to ensure that citizens fully understand their advantages and potential, it is also necessary to deal with issues concerning governance, the use of new technologies and the ability to manage data.
An aspect not to be underestimated in our country is related to the existence of a fullbodied role of “intermediaries” in the relationship between citizens/businesses and the Public Administration, combined with a culture of “delegation” that often introduces a real barrier in the relationship between users and institutions.
In this sense, it will be desirable to invest heavily in the cultural change necessary to create a substrate on which to reposition, in the key of simplification and use of digital, the new relationship between citizens/businesses and the Public Administration.
Furthermore, the transformation process we are witnessing involves the creation of a culture within Public Administration that includes capacity building activities, both with respect to the presence of a leadership that promotes the use of artificial intelligence, as well as the capacity of public officials to implement them.
Within the scope of public services, AI can be used to optimise the internal resources of PA to increase the use of online services, supporting, for example, a series of activities such as:
- completing complex tasks;
- dispatching citizens’ requests and answering their questions;
- effectively managing large amounts of data;
- combining information from different datasets;
- providing faster answers based on predictive scenarios;
- automating repetitive processes;
- analysing data that includes text/audio and video information [1].
Finally, the integration of AI can contribute to increasing the capacity of public employees, as a tool to support decision-making and without ever replacing human judgment. The immediately perceptible benefit, together with the possibility of having systems that learn to accompany decisions in an accurate and personalised way, is the possibility of saving time for employees who can dedicate themselves to more specialised activities or that require greater creativity and empathy. In this way, services become more efficient, relations with citizens are improved and the level of trust in institutions is increased.
The introduction of AI in people’s lives requires the design of processes that facilitate the understanding and acceptance of technologies by the user, not only through the use of experimentation but also through collaboration mechanisms that allow citizens to participate in the design of AI platforms.
Thanks to the co-creation approach, as happens in design thinking, users perceive technology as their own and show a greater propensity to use it. Moreover, where issues or problems in its use are found, citizens show a greater propensity to actively participate in their solution [2].
In facilitating the vicinity and engagement of citizens towards new AI-based public services, design itself plays a key role. In fact, it represents the meeting point between technology and people.
Designers will have to design interfaces that do not just mimic human actions, since this mechanism can generate alienation, but that are able to establish a relationship of trust with citizens, using a language that is understandable and that puts them at ease [3].
The challenge will be to build flexible systems able to provide answers that adapt to the user’s contingent needs, thus ensuring better and more efficient services. A peculiar characteristic of AI is indeed that of correlating continuously evolving data coming from multiple sources and extracting dynamic response models from it.
Another area on which the designers will have to focus will be the design of AI systems able to anticipate the needs of citizens without having an invasive approach that could compromise the user experience.
Another crucial element for introducing AI in a structured manner in the administration concerns the ability to manage data and to exploit the great wealth of information that PA possesses, facilitating not only interoperability, but also transparency and reliability.
In light of this, it is desirable that the application of AI technologies to public administration aims at adopting shared ontologies in line with the internal organisation of PA and with the types of services to be provided, developing controlled vocabularies able to interpret and interoperate the databases of national interest to the fullest [4].
In this regard, knowledge, representation, and self-learning systems can be a valuable aid in increasing the accountability of the models. The adoption of collaborative methods can further ensure that the models adopted are compatible with PA and remain consistent with the regulatory framework.
In addition to the potential described above, some criticalities in the adoption of AI in the public sector can be identified: in general, AI systems can be implemented successfully only if high data quality is guaranteed.
In terms of governance, the transformation process we are witnessing also involves the evolution of relations between public and private players.
Benefiting from AI in public services does not necessarily mean developing new solutions from scratch. On the contrary, it is possible to look at what has already been adopted by other governments, or draw on technologies already established on the market.
An area of collaboration between the public and private sectors is that of procurement. In this sense, AgID, for example, has recently initiated comparison and experimentation with new scenarios for the dissemination of PCP (pre-commercial procurement).
The program deals with issues of significant social impact and public innovation: from autism to protection from environmental risks, to food safety and quality, as well as innovative technological solutions applied to healthcare and e-government. Not only large companies but also start-ups, small businesses and venture capitalists have the opportunity to present innovative ideas and proposals. The PCP is therefore a fertile ground for experimentation and research aimed at meeting social needs even with innovative tools related to AI. An example in this sense is the “Technologies for Autism” contract aimed at identifying Virtual Reality and Augmented Reality technologies typified for people with an autism spectrum condition (ASC).
Footnotes | https://libro-bianco-ia.readthedocs.io/en/latest/doc/capitolo_3_sfida_6.html | 2020-10-20T00:46:28 | CC-MAIN-2020-45 | 1603107867463.6 | [] | libro-bianco-ia.readthedocs.io |
Display options (for Shiny app)
- Text size : text size of the waterfall graph (by default at 12)
- Contextual help precision: Out of 10, this ladder allows to adapt the level of detail available in the contextual help (executive summary, info bubble,…). By default the contextual help is at 5.
Appeon Workspace provides several ways to add applications.
Step 1: Use one of the following methods to open the Add App screen.
Tap the add icon (
) on the left of the titlebar; or
Tap the Add App icon (
) (only available when there are no apps installed) in the data area; or
Tap the settings icon (
) on the right of the titlebar, then tap the Settings option on the popup menu, and then tap the Add App option.
The Add App screen appears, as shown in the following figure.
Step 2: In the App URL text box, you
can either manually type the application URL (a typical application URL
looks similar to), or tap the scanner icon
(
) to scan the QR code and automatically get the
application URL.
If you are a server administrator, you might be interested to know how to generate the QR code (see the section called “QR code generation for mobile application(s)” in PowerServer Configuration Guide for .NET or PowerServer Configuration Guide for J2EE).
Both HTTP and HTTPs are supported in Appeon Workspace running on mobile devices.
Step 3: Tap the Test Connection button to test the app URL connection. Make sure that the connection is successful.
Step 4: Tap the back icon (
) on the left of the titlebar to save the information
and return to the previous screen.
Once the application information is saved successfully, the application will appear in the data area on the Appeon Workspace home screen, and will start the download and install process immediately, as shown in the following figure.
The newly added applications will queue up and will be automatically downloaded and installed once the previous application is finished.
During the download and install process, you can pause and resume the
process by tapping the
icon. For example, suppose you want to download and
install application_2 first instead of application_1, you can pause the
process for application_1 and tap application_2 to start the download and
install process for application_2. | https://docs.appeon.com/pb2017r3/appeon_workspace_user_guide/ch03s01.html | 2020-10-19T23:59:47 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.appeon.com |
-
-
-
-
-
-
-
-
-
-
Role-based access control
Multi-tenancy - Provide exclusive management environment to your tenants!
Role-based access control
Citrix ADM based on specific conditions. In Citrix ADM, creating roles and policies are specific to the RBAC feature in Citrix ADC. must have read-only access for system administration operations. An application administrator should be able to access only the resources within the scope.
Example:
Chris, the ADC group head, is the super administrator of Citrix ADM in his organization. Chris multitenant.
Citrix ADM Citrix ADM > Instances, Citrix ADM, Citrix ADC instance even if they don’t have access to that instance.
Orchestration - RBAC is not supported for Orchestration.
Role-based access. | https://docs.citrix.com/en-us/citrix-application-delivery-management-software/current-release/access-control/role-based-access-control.html | 2020-10-20T01:29:01 | CC-MAIN-2020-45 | 1603107867463.6 | [array(['/en-us/citrix-application-delivery-management-software/media/rbac-usecases.jpg',
'localized image'], dtype=object) ] | docs.citrix.com |
An email address can look right but still be wrong. Real email uses in depths address validation to check if emails really exist without sending any messages.
From the Real Email dashboard you can upload a csv file to interactively validate any email address in it. This can be used to filter email group lists or mail out lists.
Many programs allow for importing and exporting csv files, including form spreadsheets and email sending platforms like MailChimp and SendInBlue. For more control you may like to process the files using the api and shell scripts.
If you would like more control you can validate a csv file using your Real Email api key and the bellow bash shell script. First login to and get your API key and place it into the Authorization header.
Given an example file like emails-input.csv
If you run that through the bellow script.
#!/bin/bashwhile IFS='' read -r line || [[ -n "$line" ]]; doecho $line, `curl "" -H 'Authorization: bearer xxx' | jq .status`;done < emails-input.csv > emails-output.csv
Then It will give you the output file that looks like this. | https://docs.isitarealemail.com/bulk-csv-file-email-validation | 2020-10-20T00:29:47 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.isitarealemail.com |
The API Credentials interface allows you view, create, edit, delete, activate, and deactivate API credentials from a single screen in the mParticle UI.
The API Credentials interface is available to users with the Admin or Compliance Admin role, and can be accessed via the API Credentials tab in the Settings screen. You can learn more about roles in the platform guide.
When creating new credentials, you must specify which APIs the credentials have access to, along with permissions and scope (which Accounts and Workspaces the access and permissions apply to) of the credentials. These can all be edited after you have issued your credentials.
First, select the API(s) you would like the credentials to have access to.
Next, select the permissions you would like to associate with these credentials. The available permissions are dependent on API. For example, the Profile API only allows for Read-Only permissions, while the Platform API allows for Admin or User permissions.
The last step in configuring your credentials is to select the scope. Different mParticle APIs apply to different levels in the mParticle hierarchy of Org -> Account -> Workspace, and will impact your available selections. For example, the Platform API applies only to the Account-level, and thus the scope is selected for you. The Profile API reads from the Workspace level, and so you must select which Workspaces the credentials should have access to.
After clicking “Save” in your modal, you will be issued a Client ID and a Client Secret. This is the only time you will have access to your Client Secret in the mParticle UI, while the Client ID is always available by clicking into the associated credential.
You must copy your Client Secret directly from the New Credentials modal before closing the modal. The Client Secret is not accessible or recoverable after closing the modal. If for some reason you don’t copy your Client Secret, you can always issue new keys later on.
The APIs, permissions, and scope associated with any credentials can be edited at any time in the modal that is instantiated by clicking on the row of the credential in the API Credentials tab. In this modal, you can also activate/deactive any credentials.
Credentials can be permanently deleted by clicking on the trash can icon in the row of that credential. This cannot be undone.
Was this page helpful? | https://docs.mparticle.com/developers/credential-management/ | 2020-10-20T00:32:41 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.mparticle.com |
Game:
- Use mainly transformations, such as move, rotate, scale, and skew.
- Create additional drawing swaps when needed.
- If you use Curve and Envelope deformers or Morphing, you’ll need to bake out the drawings for export. Be careful when doing this, as you may want to keep the number of drawings small. Don’t bake out an entire sequence, just selected drawings. You don't need to bake the Game Bone deformers.
- The bigger the drawings are in the Drawing view, the more pixels they will occupy in the texture size on the sprite sheet. When setting up your rig, make sure to not scale individual layers by using a keyframe with the Transform tool. If you want to scale things up or down, use the Select tool. This will keep things the same relative size on the sprite sheet. When you export the sprite sheets, in the script you can also set the resolution of the sprite sheet so the drawings can be scaled down for smaller devices.
- Only drawings which are exposed in the scene will be exported to the sprite sheet. For example, if you have 10 drawings in your Library view, but only two of them are showing in your scene, only those two will be exported. This keeps the sprite sheet as tight as possible.
Animating Multiple Sequences
You will always have multiple animations for your characters. For example, an idle sequence, a run sequence, an action sequence, and so on. You need to work in a specific structure so you can export all of these animations to a single sprite sheet.
There are two different work flows that you can use:
- Workflow 1: Separate Scenes
- Workflow 2: Separating Using Scene Markers
Workflow 1: Separate Scenes <![CDATA[ ]]>:
- Idle
- Run
- Jump
- Shoot.
Workflow 2: Separating Using Scene Markers
You can also create all of your character animations in a single scene, one after the other, such as idle, run, jump and shoot. Then use scene markers to mark and separate the individual animations, see—Creating Scene Markers on Frame Ranges
When you are marking individual animation frame ranges, be sure that they start and end with a keyframe. Do not create scene markers for a range of frames that starts or ends in the middle of an interpolated movement.
When exporting your sprite sheet, in the Export To Sprite Sheet dialog box, be sure to check the Use Scene Markers to Export Clips option.
The animated clips are divided and listed in the stage.xml in the same way that they would appear if you had exported each animated sequence from separate scenes to the same file location. | https://docs.toonboom.com/help/harmony-14/premium/gaming/concept-game-animation-tip.html | 2020-10-20T00:50:02 | CC-MAIN-2020-45 | 1603107867463.6 | [array(['../Resources/Images/HAR/Gaming/HAR14_scene-marker-keyframes.png',
None], dtype=object)
array(['../Resources/Images/HAR/Gaming/HAR14_spritesheet-scene-marker-option.png',
None], dtype=object) ] | docs.toonboom.com |
Sysinfo collection¶
Avocado comes with a
sysinfo plugin, which automatically gathers some
system information per each job or even between tests. This is very useful
when later we want to know what caused the test’s failure. This system
is configurable but we provide a sane set of defaults for you.
In the default Avocado configuration (
/etc/avocado/avocado.conf) there
is a section
sysinfo.collect where you can enable/disable the sysinfo
collection as well as configure the basic environment. In
sysinfo.collectibles section you can define basic paths of where
to look for what commands/tasks should be performed before/during
the sysinfo collection. Avocado supports three types of tasks:
- commands - file with new-line separated list of commands to be executed before and after the job/test (single execution commands). It is possible to set a timeout which is enforced per each executed command in [sysinfo.collect] by setting “commands_timeout” to a positive number.
- files - file with new-line separated list of files to be copied
- profilers - file with new-line separated list of commands to be executed before the job/test and killed at the end of the job/test (follow-like commands)
Additionally this plugin tries to follow the system log via
journalctl
if available.
By default these are collected per-job but you can also run them per-test by
setting
per_test = True in the
sysinfo.collect section.
The sysinfo can also be enabled/disabled on the cmdline if needed by
--sysinfo on|off.
After the job execution you can find the collected information in
$RESULTS/sysinfo of
$RESULTS/test-results/$TEST/sysinfo. They
are categorized into
pre,
profile folders and
the filenames are safely-escaped executed commands or file-names.
You can also see the sysinfo in html results when you have html
results plugin enabled.
Warning
If you are using avocado from sources, you need to manually place
the
commands/
files/
profilers into the
/etc/avocado/sysinfo
directories or adjust the paths in
$AVOCADO_SRC/etc/avocado/avocado.conf. | https://avocado-framework.readthedocs.io/en/61.0/Sysinfo.html | 2020-10-19T23:58:14 | CC-MAIN-2020-45 | 1603107867463.6 | [] | avocado-framework.readthedocs.io |
Secure WordPress
Install the Wordfence Security plugin via the WordPress dashboard and run a scan of your WordPress installation, as follows:
- Log in to your WordPress dashboard.
- Select the “Plugins -> Add New” option.
- Type “wordfence” in the search box.
Install the “Wordfence Security” plugin by clicking the “Install Now” button.
Click the “Activate plugin” link. A new entry should now appear in the left navigation menu.
Click the “Wordfence -> Scan” menu item and then the “Start New Scan” button.
Wait until the scan ends. | https://docs.bitnami.com/aws/apps/wordpress/troubleshooting/enforce-security/ | 2020-10-20T00:38:04 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.bitnami.com |
This class represents Inference Engine Core entity. More...
#include <ie_core.hpp>
This class represents Inference Engine Core entity.
It can throw exceptions safely for the application, where it is properly handled.
Registers extension.
Registers extension for the specified plugin.
Create a new shared context object on specified accelerator device using specified plugin-specific low level device API parameters (device handle, pointer, etc.)
Returns devices available for neural networks inference.
Gets configuration dedicated to device behaviour.
The method is targeted to extract information which can be set via SetConfig method.
Get a pointer to default(plugin-supplied) shared context object for specified accelerator device.
Gets general runtime metric for dedicated hardware.
The method is needed to request common device properties which are executable network agnostic. It can be device name, temperature, other devices-specific values.
Returns plugins version information.
Creates an executable network from a previously exported network.
Creates an executable network from a previously exported network.
Creates an executable network from a previously exported network within a specified remote context.
Creates an executable network from a network object.
Users can create as many networks as they need and use them simultaneously (up to the limitation of the hardware resources)
Creates an executable network from a network object within a specified remote context.
Query device if it supports specified network with specified configuration.
Reads IR xml and bin files.
Reads IR xml and bin (with the same name) files.
Register new device and plugin which implement this device inside Inference Engine.
Registers plugin to Inference Engine Core instance using XML configuration file with plugins description.
XML file has the following structure:
nameidentifies name of device enabled by plugin
locationspecifies absolute path to dynamic library with plugin. A path can also be relative to inference engine shared library. It allows to have common config for different systems with different configurations.
SetConfigmethod.
AddExtensionmethod.
Sets configuration for device, acceptable keys can be found in ie_plugin_config.hpp.
Sets logging callback.
Logging is used to track what is going on inside the plugins, Inference Engine library
Unloads previously loaded plugin with a specified name from Inference Engine The method is needed to remove plugin instance and free its resources. If plugin for a specified device has not been created before, the method throws an exception. | https://docs.openvinotoolkit.org/2020.3/classInferenceEngine_1_1Core.html | 2020-10-20T00:21:51 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.openvinotoolkit.org |
Single Diode Equation¶
This section reviews the solutions to the single diode equation used in pvlib-python to generate an IV curve of a PV module.
pvlib-python supports two ways to solve the single diode equation:
- Lambert W-Function
- Bishop’s Algorithm
The
pvlib.pvsystem.singlediode() function allows the user to choose the
method using the
method keyword.
Lambert W-Function¶
When
method='lambertw', the Lambert W-function is used as previously shown
by Jain, Kapoor [1, 2] and Hansen [3]. The following algorithm can be found on
Wikipedia: Theory of Solar Cells, given the basic single
diode model equation.
Lambert W-function is the inverse of the function \(f \left( w \right) = w \exp \left( w \right)\) or \(w = f^{-1} \left( w \exp \left( w \right) \right)\) also given as \(w = W \left( w \exp \left( w \right) \right)\). Defining the following parameter, \(z\), is necessary to transform the single diode equation into a form that can be expressed as a Lambert W-function.
Then the module current can be solved using the Lambert W-function, \(W \left(z \right)\).
Bishop’s Algorithm¶
The function
pvlib.singlediode.bishop88() uses an explicit solution [4]
that finds points on the IV curve by first solving for pairs \((V_d, I)\)
where \(V_d\) is the diode voltage \(V_d = V + I*Rs\). Then the voltage
is backed out from \(V_d\). Points with specific voltage, such as open
circuit, are located using the bisection search method,
brentq, bounded
by a zero diode voltage and an estimate of open circuit voltage given by
We know that \(V_d = 0\) corresponds to a voltage less than zero, and we can also show that when \(V_d = V_{oc, est}\), the resulting current is also negative, meaning that the corresponding voltage must be in the 4th quadrant and therefore greater than the open circuit voltage (see proof below). Therefore the entire forward-bias 1st quadrant IV-curve is bounded because \(V_{oc} < V_{oc, est}\), and so a bisection search between 0 and \(V_{oc, est}\) will always find any desired condition in the 1st quadrant including \(V_{oc}\).
References¶
[1] “Exact analytical solutions of the parameters of real solar cells using Lambert W-function,” A. Jain, A. Kapoor, Solar Energy Materials and Solar Cells, 81, (2004) pp 269-277. DOI: 10.1016/j.solmat.2003.11.018
[2] “A new method to determine the diode ideality factor of real solar cell using Lambert W-function,” A. Jain, A. Kapoor, Solar Energy Materials and Solar Cells, 85, (2005) 391-396. DOI: 10.1016/j.solmat.2004.05.022
[3] “Parameter Estimation for Single Diode Models of Photovoltaic Modules,” Clifford W. Hansen, Sandia Report SAND2015-2065, 2015 DOI: 10.13140/RG.2.1.4336.7842
[4] “Computer simulation of the effects of electrical mismatches in photovoltaic cell interconnection circuits” JW Bishop, Solar Cell (1988) DOI: 10.1016/0379-6787(88)90059-2 | https://pvlib-python.readthedocs.io/en/stable/singlediode.html | 2020-10-20T00:01:46 | CC-MAIN-2020-45 | 1603107867463.6 | [] | pvlib-python.readthedocs.io |
.
Browsee gives you the flexibility to not record a specific element on your page. You can exclude or suppress specific elements from a recording by making a small change in your HTML code.
You just need to attach this attribute to any element that you do not wish to record, store, and replay. For example, if you do not wish to record the credit card numbers, you can simply add an attribute "data-br-exclude" to your element as shown below:
<input data-br-exclude
With this attribute on any element, that element will not be recorded in any of the sessions.
Note that this attribute is not specific to an input field, say you have an area where you show the email id of a user inside a div tag,
<div data-br-exclude>{{email}}</div>
Placing this tag will not record the value in that element or any subsequent changes to it.
This tag will also work in a recursive manner. So, for example, if you do not wish to record all the input fields or elements inside a form or a div. You can just place this tag at the parent element and all the children element will not be saved or recorded.
<form data-br-exclude><label for="fname">First Name</label><input type="text" id="fname" name="fname"><label for="lname">Last Name</label><input type="text" id="lname" name="lname"><label for="pass">Password</label><input type="text" id="pass" name="pass"><label for="cars">Choose a car:</label><select name="cars" id="cars"><option value="volvo">Volvo</option><option value="saab">Saab</option><option value="mercedes">Mercedes</option><option value="audi">Audi</option></select></form>
In this above-mentioned example, all the children elements including label texts and select options will not be recorded. | https://docs.browsee.io/data-privacy/privacy | 2021-02-25T05:18:24 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.browsee.io |
Property
Changed Event Manager Class
Definition
Provides a WeakEventManager implementation so that you can use the "weak event listener" pattern to attach listeners for the PropertyChanged event.
public ref class PropertyChangedEventManager : System::Windows::WeakEventManager
public class PropertyChangedEventManager : System.Windows.WeakEventManager
type PropertyChangedEventManager = class inherit WeakEventManager
Public Class PropertyChangedEventManager Inherits WeakEventManager
- Inheritance
- PropertyChangedEventManager
Remarks
In order to be listeners in this pattern, your listener objects must implement IWeakEventListener. You do not need to implement IWeakEventListener on the class that is the source of the events. | https://docs.microsoft.com/en-us/dotnet/api/system.componentmodel.propertychangedeventmanager?view=net-5.0 | 2021-02-25T05:58:47 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.microsoft.com |
Use comments to troubleshoot a search
You can use the inline commenting feature.
index=_internal source=*license* type=usage | stats sum(b) BY idx
(Thanks to Splunk user Runals for this example.)
For more information about adding inline comments to your searches, see Help reading searches.
This documentation applies to the following versions of Splunk® Enterprise: 8.1.0, 8.1.1, 8.1.2
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/Splunk/8.1.1/Search/Addcommentstosearches | 2021-02-25T05:49:07 | CC-MAIN-2021-10 | 1614178350717.8 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Data Exchange Layer client This page provides information about the Data Exchange Layer client for McAfee ePO. Option definitions Option Definition Connection State Shows whether the DXL client is connected to a broker. Current Broker Shows information about the connected broker. If the client is not connected to a broker, this field doesn't appear. Broker Keepalive Interval Determines how often a ping occurs between the DXL client and the broker. The default is 10 minutes. Client UID The unique identifier for the DXL client. Registered Services Services supported by McAfee ePO. Subscriptions Displays information about the topics that the DXL client subscribes to. Click Details to see additional information. | https://docs.mcafee.com/bundle/data-exchange-layer-5.0.x-interface-reference-guide/page/GUID-AE123197-0215-4B2E-9435-FDAAD594D7CC.html | 2021-02-25T05:05:06 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.mcafee.com |
You create at least an aggregate on each of the new nodes to store the volumes you want to move from the original nodes. You must identify an aggregate for each volume and move each volume individually.
Data protection mirror relationships must have been initialized before you can move a volume.
Find a Data Protection Guide for your version of Data ONTAP 8
Both the new aggregate and the old aggregate from which the volume will be moved must be in the same SVM.
vs1SVM and the
node0node:
cluster::> volume show -vserver vs1 -node node0 Vserver Volume Aggregate State Type Size Available Used% --------- ------------ ------------ ---------- ---- ---------- ---------- ----- vs1 clone aggr1 online RW 40MB 37.87MB 5% vs1 vol1 aggr1 online RW 40MB 37.87MB 5% vs1 vs1root aggr1 online RW 20MB 18.88MB 5% 3 entries were displayed.
user_maxvolume on the
vs2SVM can be moved to any of the listed aggregates:
cluster::> volume move target-aggr show -vserver vs2 -volume user_max Aggregate Name Available Size Storage Type -------------- -------------- ------------ aggr2 467.9GB FCAL node12a_aggr3 10.34GB FCAL node12a_aggr2 10.36GB FCAL node12a_aggr1 10.36GB FCAL node12a_aggr4 10.36GB FCAL 5 entries were displayed | https://docs.netapp.com/platstor/topic/com.netapp.doc.hw-upgrade-controller/GUID-AFE432F6-60AD-4A79-86C0-C7D12957FA63.html?lang=en | 2021-02-25T05:57:23 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.netapp.com |
System and User Requirements¶
There are many free (and paid) resources that you can use to learn more about SQL and Oracle. Before we can talk about optimization of SQL queries and PL/SQL scripts, however, you have to understand the basics of SQL. This manual is written for people with at least some experience with SQL or PL/SQL on Oracle databases in a production environment, in particular (aspiring) developers. Should you have only ever looked at the sample databases, you are not likely to gain much by reading these pages; we shall briefly cover a bunch of database basics but they are mainly meant to refresh people’s memories.
It is recommended that you read the manual in its entirety, as we have tried our best to gradually go from easy to more advanced optimization techniques. Because this manual is an ongoing work in progress, more content will be added as time advances. Professional developers obviously do not need to read everything; we hope that there is something of interest to professional developers too, but there are no guarantees.
Should you wish to contribute to this project, head on over to the public repository to contact us. And if you ever have a question or comment? Just send us your query. | https://oracle.readthedocs.io/en/latest/intro/prerequisites.html | 2021-02-25T04:52:33 | CC-MAIN-2021-10 | 1614178350717.8 | [] | oracle.readthedocs.io |
DXGK_SEGMENTDESCRIPTOR structure (d3dkmddi.h)
The DXGK_SEGMENTDESCRIPTOR structure contains information about a segment that the driver supports.
Syntax
typedef struct _DXGK_SEGMENTDESCRIPTOR { PHYSICAL_ADDRESS BaseAddress; PHYSICAL_ADDRESS CpuTranslatedAddress; SIZE_T Size; UINT NbOfBanks; SIZE_T *pBankRangeTable; SIZE_T CommitLimit; DXGK_SEGMENTFLAGS Flags; } DXGK_SEGMENTDESCRIPTOR;
BaseAddress
[out] The base address of the segment, as determined by the graphics processing unit (GPU). The physical address of an allocation that the video memory manager paged in the segment is assigned a GPU address that is offset from the base address that BaseAddress specifies.
The video memory manager ignores the base address of AGP-type aperture segments (where the Agp bit-field flag is specified in the Flags member) and instead uses the actual physical address of the segment within the AGP aperture, as determined on the bus where the GPU is located. In this situation, the driver can use addresses that the video memory manager generated for allocation directly without requiring translation.
CpuTranslatedAddress
[out] The base address of the segment, relative to the bus that the GPU is connected on. For example, when the GPU is connected on the PCI bus, CpuTranslatedAddress is the base address of the usable range that is specified by a PCI base-address register (BAR). The driver specifies this address only if it specifies a CPU-accessible segment by setting the CpuVisible bit-field flag in the Flags member.
This member is ignored for aperture segments, including the AGP-type aperture segment. The only exception occurs when the user-mode display driver has not set up an alternate virtual address for a primary allocation (that is, when the driver has not set UseAlternateVA in the Flags member of the D3DDDICB_LOCKFLAGS structure during a call to the pfnLockCb function).
Before the video memory manager maps a virtual address to the physical range, the video memory manager translates this physical address based on the CPU view of the bus and informs the driver about the operation so the driver can set up an aperture to access the content of the segment at the given location.
Size
[out] The size, in bytes, of the segment. This size must be a multiple of the native host page size (for example, 4 KB on the x86 architecture).
For AGP-type aperture segments (where the Agp bit-field flag is specified in the Flags member), the video memory manager allocates as much aperture space as possible, so this member is ignored.
NbOfBanks
[out] The number of banks in the segment, if banking is used (that is, if the UseBanking bit-field flag is set in the Flags member).
pBankRangeTable
[out] An array of values that indicates the ranges that delimit each bank in the segment. The array specifies the end addresses of the first bank through the n−1 bank (that is, the end offsets into the segment for each bank). Note the following:
- Banks are contiguous.
- The first bank starts at offset zero of the segment.
- The last bank ends at the end of the segment, so the driver is not required to specify the end address of the last bank.
- The driver specifies this array only if it also sets the UseBanking bit-field flag in the Flags member.
CommitLimit
[out] The maximum number of bytes that can be committed to the segment. For a memory segment, the commit limit is always the same as the size of the segment, which is specified in the Size member. For an aperture segment, the driver can limit the amount of memory that can be committed to the segment on systems with small amounts of physical memory.
Flags
[out] A DXGK_SEGMENTFLAGS structure that identifies properties, in bit-field flags, for the segment.
Note that for an AGP-type aperture segment, the driver must exclusively set the Agp member of the structure in the union that DXGK_SEGMENTFLAGS contains. Although the AGP-type aperture segment is an aperture and is accessible to the CPU, if any other members are set, the adapter fails to initialize. | https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/d3dkmddi/ns-d3dkmddi-_dxgk_segmentdescriptor | 2021-02-25T06:46:11 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.microsoft.com |
The NOV is a description of the tasks and activities, operational elements, and information exchanges required to accomplish NATO missions. The NOV contains graphical and textual products that comprise an identification of the nodes, their assigned tasks and activities, and dependencies between nodes. It defines the types of information exchanged, which tasks and activities are supported by the information exchanges, and the operational details of information exchanges.
The views of this viewpoint are described in the following sections:
No Magic, Inc.
Copyright © 1998 – 2021 No Magic, Incorporated, a Dassault Systèmes company – All Rights Reserved.
Company
Resources
Connect | https://docs.nomagic.com/display/UPDM2P190SP4/NATO+Operational+Viewpoint | 2021-02-25T05:16:16 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.nomagic.com |
PATRIC Team Presents Workshop at Duke and UNC¶
Rebecca Wattam, Maulik Shukla, and Joe Gabbard presented a day-long workshop for over 50 researchers from UNC’s Department of Microbiology and Immunology, Duke’s Department of Molecular Genetics and Microbiology, and North Carolina State’s College of Veterinary Medicine. The workshop showcased PATRIC’s comparative genomics tools, as well as new transcriptomics analysis capabilities. The workshop also allowed the PATRIC team to engage in one-one-one interactions with participants, helping them analyze their data and understand how to best leverage PATRIC for their research.
| https://docs.patricbrc.org/news/2012/20121011-patric-team-presents-workshop-at-duke-and-unc.html | 2021-02-25T04:15:49 | CC-MAIN-2021-10 | 1614178350717.8 | [array(['../../_images/IMAG0514.jpg', '../../_images/IMAG0514.jpg'],
dtype=object) ] | docs.patricbrc.org |
Limitations and Usage Requirements
“Flash Actions” in the PrestaBay module use ebay feature named “Large Merchant Services”. This service has very important limits that you need to be aware of start using it.
Performing any of “Flash Actions” required data submission to ebay. Every data submission named “Data file”.
Every ebay account has a strong daily limit of how many data files could be sent.
- Send — 4 data files in 24 hours
- Relist — 2 data files in 24 hours
- Full Revise — 4 data files in 24 hours
- QTY & Price Revise — 96 data files in 24 hours
- End — 96 data files in 24 hours
Additionally, ebay limits the size of the data file to 15 Mb. Exactly the number of products in this data file depends on the number of information that needs to be transferred. For “Send” and “Full Revise” this could be equivalent to 1500 products. For QTY/Price revise around 10`000 of items.
“Send and “Full Revise” limitation. When you are using “Send” or “Full Revise” as Flash Actions only a single photo of the product could be transferred to ebay. This is referred to as normal and variation products.
QTY & Price Revise limitation. This operation allowed by ebay executed every 15 minutes. But for performing the stock update with variation products it’s required unique “SKU” for each of variation combination.
Blocked Products
When any of Flash Actions performed with PrestaBay items they are blocked before flash actions are not finished execution. In most cases, you will not possibly revise/relist/list any of these items. | https://docs.salest.io/article/78-limitations-and-usage-requirements | 2021-02-25T04:14:46 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.salest.io |
A Shedmake Package is nothing more than a fodler with a handful of files that provide recipes for patching, compiling and archiving a piece of software. The only files required to be present are
package.txt, which contains metadata about the packaged software, and
build.sh, a shell script invoked to compile it. Bare versions of these files are provided in
shedmake's default template and are copied over automatically when using its
create action:
shedmake create my-new-package
A complete list of supported package files and folders, and a explanation of the fields in
package.txt follows.
During the build process,
shedmake generates folders to receive build artifacts that can be accessed using the following script variables: | https://docs.shedbuilt.net/packaging/shedmake-packages | 2021-02-25T05:43:34 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.shedbuilt.net |
You can create a UDF containing one or more parameters and return type that are a JSON data type. The JSON type is supported for the following types of UDFs:
- Scalar and aggregate UDFs, table functions, and table operators written in C, C++, or Java.
- SQL UDFs
For SQL UDFs, the RETURN clause can be an SQL statement that evaluates to the JSON data type.
You can specify the JSON type with the STORAGE FORMAT specification to pass binary JSON data to an external routine. . | https://docs.teradata.com/r/HN9cf0JB0JlWCXaQm6KDvw/RS5PViB4YiSwa1_0cZwk9g | 2021-02-25T05:29:35 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.teradata.com |
Before the regular process starts, the module makes a call to the DataDome API using a KeepAlive connection.
Depending on the API response, the module will either block the query or enable the regular process to proceed.
The module has been developed to protect the users' experience: If any errors were to occur during the process, or if the timeout is reached, the module will automatically disable its blocking process and allow those hits.
Compatibility
The module is plain Java 1.5 and allows two different integrations:
- Java Servlet Filter: Tested on Jetty, Tomcat and should work with Jboss as well as other servers supporting Servlet API.
- Vert.x-Web: Route handler for Vert.x HTTP servers. Please refer to the dedicated documentation for compatibility and implementation details.
How to install
Please refer to the dedicated documentation below:
- Tomcat/Jetty: Module Java/Tomcat-Jetty
- Vert.x-Web: Module Java/Vert.x-Web
Updated about a month ago | https://docs.datadome.co/docs/module-java | 2021-02-25T04:54:45 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.datadome.co |
This FAQ answers common general questions about OnCommand Insight.
OCI is one of the most mature infrastructure monitoring products in the industry today with over a decade in active development. Formerly known as Onaro or SANscreen, the SANScreen name was changed when joining the OnCommand Portfolio suite of products and is now referred to as OnCommand Insight, or more commonly Insight or OCI.
OCI is simply a software download. Software is installed on two dedicated virtual or physical servers. Typical installations can be performed in as little as 2 hours and inventory, capacity and performance data will begin to be provided almost immediately. Any additional performance and best practice policies, user annotation, and cost awareness setup will require additional planning discussions.
OCI is 100% agentless and does not require the use of agents, taps or probes. All device discovery is read only, performed out of band, and over IP.
OCI setup leverages the native APIs and protocols often already present in the data center environment, with no need of agents or probes. SSH, HTTP, SMIS and CLI are just a few examples. Where device element managers already exist (such as EMC’s Unisphere, for example), OCI will communicate to the element manager(s) to capture the existing environmental data. Most device discoveries require only an IP address and read-only username and password. These device discoveries can be “one-to-many”, such as with OCI’s VMware data source. By discovering the VMware vCenter, OCI in turn discovers all of its ESXi hosts and their associated VM’s, all with a single IP address and credential.
For moderately-sized environments we recommend Professional Services for deployment, configuration, and integrations, as well as a wide variety of custom reporting and data validation possibilities. A short discussion with the OCI team and account engagement manager can help determine what services will benefit you the most.
Product updates and Service Packs are available for multiple versions of OCI. Major or minor releases are typically provided every few months, with service packs including new device support and firmware released more frequently. Both are available on the support.netapp.com download site. Certain updates such as new disk models that come out more frequently from manufacturers are pushed out automatically to the OCI software. Additionally, OCI data source device collection can be patched on site immediately after a development fix or update.
OCI’s Product Management team actively tracks all customer enhancement and interoperability feature requests (IFR’s). Each request is detailed, evaluated for feasibility and prioritized based on customer demand and overall strategic business impact. Once accepted, requests are sized based on level of effort and scheduled for future development. The agile nature of the OCIs development process routinely allows for new data sources to be made available outside regular scheduled release cycles. NetApp account representatives can assist in customer inquiries and in submitting new requests on your behalf. Data sources can be patched on site, without the need to upgrade OCI.
Yes, OCI supports several flavors of Linux as well as Windows. Be aware that Cognos (IBM's reporting tool used by OCI in conjunction with the Data Warehouse) is only supported on Windows, so if you are using OCI for reporting, you will need to run the reporting tool on a Windows server. The OCI Installation Guide lists the server requirements and supported operating systems for each OCI component.
Yes, OCI is used by the top 10 Fortune 500 companies and by leading banking, healthcare, research and government agencies around the world today. OCI provides support for US military common access cards (CAC) and offers solutions for geographically-dispersed or heavily-firewalled environments.
OnCommand Unified Manager operates at the storage array “device management” layer, providing in-depth incident and event-based analysis of Clustered Data ONTAP (cDOT) arrays and their cluster interconnects. OCI provides a holistic view of on-premise and globally-dispersed environments consisting of 7-mode, Clustered Data ONTAP and other 3rd party arrays. Its end-to-end visibility, from VM to spindle, allows for historical trending and forecasting of capacity, performance and cost modeling that promotes a proactive service quality approach to data center management.
The "Secondary ETL" requirement referenced in some OnCommand ETL scripting or data validation needs, please contact your NetApp Sales Representative and discuss how NetApp's Professional Services can assist you. | https://docs.netapp.com/oci-73/topic/com.netapp.doc.oci-faq/GUID-9FDD6D89-5282-49A0-9087-656C739B6BB9.html?lang=en | 2021-02-25T05:46:47 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.netapp.com |
1.0.0.RELEASE
Table of Contents
This section provides a brief overview of Spring IO Platform reference documentation.
Spring IO Platform reference documentation. pom>1.0.0.RELEASE</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> … </project>
Alternatively, rather than importing the Platform’s pom,>1.0.0.RELEASE</version> <relativePath/> </parent> … < configure your build to apply
Spring Boot’s Gradle plugin and, in the
versionManagement
configuration that it provides, declare a dependency on the Platform’s
platform-versions
artifact:
buildscript { repositories { mavenCentral() } dependencies { classpath 'org.springframework.boot:spring-boot-gradle-plugin:1.1.1.RELEASE' } } apply plugin: 'spring-boot' repositories { mavenCentral() } dependencies { versionManagement 'io.spring.platform:platform-versions:1.0.0.RELEASE@properties' }. | http://docs.spring.io/platform/docs/1.0.0.RELEASE/reference/htmlsingle/ | 2014-12-18T00:42:27 | CC-MAIN-2014-52 | 1418802765093.40 | [] | docs.spring.io |
$Date: 2004/09/09 01:31:36 $(UTC)
Table of Contents
This collection contains examples of formatting specifications that can be followed and transformation products and technologies.
The formatting specifications referenced below point to various layouts for the presentation of the information found in UBL instances. Two sets of layouts are simplified presentations of the example Office and Joinery scenarios. The third set of layouts is intended to conform to the United Nations Layout Key for printed business documents, though no attempt has been made to measure their conformance or completeness.
A formatting specification is a recipe for a stylesheet, but is not in and of itself a transformation script. Writers of stylesheets, programs, or any other open and proprietary transformation technologies rely on formatting specifications for direction regarding content identification and layout.
These formatting specifications follow the documentation conventions described below to indicate which content in an instance of a UBL document type belongs on the page for the sample layout. PDF files are used to illustrate the sample layouts, though in these non-normative samples there are no detailed specifications of layout dimensions and formatting properties per se.
The following collection of formatting specifications describes candidate renderings for the following UBL document types:
The following is an example of the documentation found in a formatting specification for a given field of a form on the rendered output.
The box above includes two XML Path Language (XPath) addresses that document cbc:InvoicedQuantity element that is a child of the cac:InvoiceLine element that is a child of the document element in:Invoice. In the second of the two examples, the item being addressed is the quantityUnitCode attribute of the cbc:InvoicedQuantity element.
The namespace prefixes are arbitrary but follow the documentary conventions used throughout UBL. Each formatting specification enumerates the prefixes used for that document type and the associated normative namespace URI strings. indicated location on the page.
A number of example implementations must not be considered as reference implementations of UBL formatting specifications or as normative components of the UBL delivery; they are merely examples from what will probably be many available UBL stylesheet libraries.
See FS-implementations.html for a list of known implementations of UBL Formatting Specifications at the time of publication.
The UBL committees welcome your input. We need feedback on the utility of the form layouts themselves and on the choice of XPath addresses for information that belongs in the form.
If you have any comments regarding these formatting specifications, please do not hesitate to contact the UBL committees following the directions on the UBL home page.
All of the documentation files in these fs/ directories are produced in HTML by formatting source documents edited in XML. These XML source documents can all be found in the fs/xml/ directory and were all validated using the DocBook 4.2 XML Document Type Definition (DTD).
The enhancement stylesheet fs/xml/fsdb2db.xsl transforms a UBL formatting specification written in DocBook to produce an enhanced DocBook instance with some rearranged information for XPath addresses and committee editorial notes.
Off-the-shelf DocBook stylesheets are then used on the enhanced instance to create the resulting HTML, invoked with only the section.autolabel=1 parameter to engage the section numbering.
See for more information on DocBook and for the stylesheets.
Some readers of these specifications have reported display problems with the presentation of the XPath addresses. Reports typically indicate a mystery box-like character is being displayed after every oblique in the XPath address. This is caused by the browser not correctly interpreting the Unicode character entity for a zero-width space. When the browser correctly recognizes this character, it is interpreted as a space for the purposes of line-breaking, but the zero-width of the character gives the appearance of the surrounding characters being adjacent (when it is properly interpreted).
To illustrate, the following is the sequence "AB": "AB" (which should appear as "AB"). | http://docs.oasis-open.org/ubl/cd-UBL-1.0/fs/ | 2014-12-18T00:40:16 | CC-MAIN-2014-52 | 1418802765093.40 | [] | docs.oasis-open.org |
17 Creating Languages.. | http://docs.racket-lang.org/guide/languages.html | 2014-12-18T00:39:07 | CC-MAIN-2014-52 | 1418802765093.40 | [] | docs.racket-lang.org |
The Edge Collider 2D component is a Collider for use with 2D physics. The Collider’s shape is defined by a freeform edge made of line segments, so you can adjust it to fit the shape of the Sprite graphic with great precision. Note that this Collider’s. | https://docs.unity3d.com/es/2017.4/Manual/class-EdgeCollider2D.html | 2021-07-23T20:27:21 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.unity3d.com |
rust-c
Forked from rust-cpp version 0.1.0 for situations where a full C++ compiler isn't available or warranted. Pretty hacky implementation, intended to get a few bits and pieces bootstrapped until Rust matures enough that some C code can be replaced. I couldn't have written this myself from scratch, so hats off to @mystor. One day this crate may disappear and be subsumed into rust-cpp.
Comments that follow are from rust-cpp with minor changes of C++ and
cpp to C and
c, etc.
rust-c is a build tool & macro which enables you to write C code inline in your rust code.
NOTE: This crate works on stable rust, but it is not stable itself. You can use this version all you want, but don't be surprised when a 0.2 release is made which completely breaks backwords compatibility. I view this crate as more of an experiment than a product.
As the tools come into stable rust to make this more practical to use, I expect that it will stabilize. Namely, I do not expect that this module will have a stable-ish interface until we get a stable procedural macro system.
Setup
Add
c as a dependency to your project. It will need to be added both as a
build dependency, and as a normal dependency, with different flags. You'll also
need a
build.rs set up for your project.
[package] # ... build = "build.rs" [build-dependencies] # ... c = { version = "0.1.0", features = ["build"] } [dependencies] # ... c = { version = "0.1.0", features = ["macro"] }
You'll also then need to call the
c build plugin from your
build.rs. It
should look something like this:
extern crate c; fn main() { c::build("src/lib.rs", "crate_name", |cfg| { // cfg is a gcc::Config object. You can use it to add additional // configuration options to the invocation of the C compiler. }); }
Usage
In your crate, include the cpp crate macros:
#[macro_use] extern crate c;
Then, use the
c! macro to define code and other logic which you want shared
between rust and C. The
c! macro supports the following forms:
c! { // Include a C header into the C shim. Only the `#include` directive // is supported in this context. #include <stdlib.h> #include "foo.h" // Write some logic directly into the shim. Either a curly-braced block or // string literal are supported raw { #define X 10 struct Foo { uint32_t x; }; } raw r#" #define Y 20 "# // Define a function which can be called from rust, but is implemented in // C. Its name is used as the C function name, and cannot collide with // other C functions. The body may be defined as a curly-braced block or // string literal. // These functions are unsafe, and can only be called from unsafe blocks. fn my_function(x: i32 as "int32_t", y: u64 as "uint32_t") -> f32 as "float" { return (float)(x + y); } fn my_raw_function(x: i32 as "int32_t") -> u32 as "uint32_t" r#" return x; "# // Define a struct which is shared between C and rust. In C-land its // name will be in the global namespace (there's only one)! In rust it will be located // wherever the c! block is located struct MyStruct { x: i32 as "int32_t", y: *const i8 as "const char*", } // Define an enum which is shared between C and rust. In C-land it // will be defined in the global namespace as an `enum` (there's only one)!. In rust, // it will be located wherever the c! block is located. enum MyEnum { A, // Known in C as `A` B, C, D, } }
c also provides a header which may be useful for interop code. This header
includes
<stdint.h>. This header,
rust_types.h, can be included with:-
c! { #include "rust_types.h" }
The full body of
rust_types.h is included below.
#ifndef _RUST_TYPES_H_ #define _RUST_TYPES_H_ #include <stdint.h> typedef int8_t i8; typedef int16_t i16; typedef int32_t i32; typedef int64_t i64; typedef intptr_t isize; typedef uint8_t u8; typedef uint16_t u16; typedef uint32_t u32; typedef uint64_t u64; typedef uintptr_t usize; typedef float f32; typedef double f64; typedef u8 bool_; typedef uint32_t char_; #endif
Warning about Macros
rust-cpp cannot identify and parse the information found in cpp! blocks which
are generated with macros. These blocks will correctly generate rust code, but
will not generate the corresponding C++ code, most likely causing your build to
fail with a linker error. Do not create
cpp! {} blocks with macros to avoid
this. | https://docs.rs/crate/rust_c/0.1.1 | 2021-07-23T18:26:25 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.rs |
Connecting to GitHub, Gitlab or BitBucket
With Wappler you can connect seamlessly your Wappler project to Github, Gitlab or BitBucket repositories. You can push and pull your local project to the selected remote repository.
Creating a local repository
First you need to setup a local repository.
This is quite an easy task, just open the Git Manager Panel:
And click the Create Repository button:
Select the uncommitted changes and commit them:
We are done setting up the local repository, so let’s setup a remote one.
Creating a remote repository
You can connect to Github, GitLab and BitBucket repositories directly from Wappler, as Wappler is now an officially registered app for these 3 providers!
In our example we will show you how to create and connect to a Github repository, the process is the same for the other two providers.
First go to and log in. Then click the New button in the Repositories section:
Add a name for your repository:
Select whether this repository should be a public or a private one:
It is important not to select anything under Initialize this repository with. Simply skip this step:
And click the Create repository button:
Copy your repository address, as you will need it in the next steps:
Working with remote repositories
Switch back to Wappler.
Connecting to a remote repository
Click the Connect to remote repository button, located in the Git Manager:
Paste your remote repository address:
Then add a name for this repository. This name is used in Wappler UI so that you know which remote repository is selected. It can be any random name. Click Connect:
Authorizing Wappler access
An authorize dialog will appear. You need to authorize GitHub Access for Wappler. There are two options available - authorize Wappler for just the current Github/Gitlab/Bitbucket repository and/or store the authorization globally for all your Wappler projects. We select both, so that we don’t have to do this for each of our projects:
Click the Authorize button:
You will be taken to the app authorization page in your browser. Click the Authorize Wappler button:
And click the Open Wappler button in the alert which appears:
You will be taken back to Wappler, where you can see the remote repository in the dropdown:
Pushing content to the remote repository
In order to upload local repository content to a remote repository, we use the Push option. Pushing is how you transfer commits from your local repository to a remote repository.
Click the Push to Remote button in order to do this:
And you are done! Your local repository content is now transferred to your remote repository. We can see this in GitHub:
Controlling Wappler Git Account Authorizations in Global options
You can control the Git Account Authorizations for Wappler in the global options. There you can set up them globally so you don’t have to authorize Wappler every time for each of your projects.
Click the Options button:
Select Git and you will see the connect options for Github, Gitlab and BitBucket. Here you can enable or disable global authorization for Wappler:
Enabling each of them will take you to the App Authorization pages of each of the providers. Follow the instructions in your browser to authorize Wappler. | https://docs.wappler.io/t/connecting-to-github-gitlab-or-bitbucket/25994 | 2021-07-23T18:54:51 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.wappler.io |
Configuring IPv6 Through A Tunnel Broker Service¶
A location that doesn’t have access to native IPv6 connectivity may obtain it using a tunnel broker service such as Hurricane Electric. A core site with IPv6 can deliver IPv6 connectivity to a remote site by using a VPN or GIF tunnel.
This section describes the process of connecting pfSense® with Hurricane Electric (often abbreviated to HE.net or HE) for IPv6 transit.
Before a tunnel can be created, ICMP echo requests must be allowed to the WAN. A rule to pass ICMP echo requests from a source of any is a good temporary measure. Once the tunnel endpoint for HE.net has been chosen, the rule can be made more specific.
To get started on HE.net, sign up at https://tunnelbroker.net. The /64 networks are allocated after registering and selecting a regional IPv6 tunnel server. A summary of the tunnel configuration can be viewed on HE.net’s website as seen in Figure HE.net Tunnel Config Summary. It contains important information such as the user’s Tunnel ID, Server IPv4 Address (IP address of the tunnel server), Client IPv4 Address (the firewall’s external IP address), the Server and Client IPv6 Addresses (representing the IPv6 addresses inside the tunnel), and the Routed IPv6 Prefixes.
The Advanced tab on the tunnel broker site has two additional notable options: An MTU Slider and an Update Key for updating the tunnel address. If the WAN used for terminating the GIF tunnel is PPPoE or another WAN type with a low MTU, move the slider down as needed. For example, a common MTU for PPPoE lines with a tunnel broker would be 1452. If the WAN has a dynamic IP address, note the Update Key for later use in this section.
Once the initial setup for the tunnel service is complete, configure pfSense to use the tunnel.
Allow IPv6 Traffic¶
On new installations of pfSense after 2.1, IPv6 traffic is allowed by default. If the configuration on the firewall has been upgraded from older versions, then IPv6 would still be blocked. To enable IPv6 traffic, perform the following:
Navigate to System > Advanced on the Networking tab
Check Allow IPv6 if not already checked
Click Save
Allow ICMP¶
ICMP echo requests must be allowed on the WAN address that is terminating the tunnel to ensure that it is online and reachable. If ICMP is blocked, the tunnel broker may refuse to setup the tunnel to the IPv4 address. Edit the ICMP rule made earlier in this section, or create a new rule to allow ICMP echo requests. Set the source IP address of the Server IPv4 Address in the tunnel configuration as shown in Figure Example ICMP Rule to ensure connectivity.
Create and Assign the GIF Interface¶
Next, create the interface for the GIF tunnel in pfSense. Complete the fields with the corresponding information from the tunnel broker configuration summary.
Navigate to Interfaces > Assignments, GIFs tab.
Click Add to create a new GIF tunnel.
Set the Parent Interface to the IPv4 WAN that terminates the tunnel (the interface holding the Client IPv4 Address).
Set the GIF Remote Address to the Server IPv4 Address from the tunnel configuration.
Set the GIF tunnel local address to the Client IPv6 Address from the tunnel configuration.
Set the GIF tunnel remote address to the Server IPv6 Address from the tunnel configuration, along with its prefix length (e.g. 64).
Enter a Description and click Save.
See Figure Example GIF Tunnel.
If this tunnel is being configured on a WAN with a dynamic IP, see Updating the Tunnel Endpoint for information on how to keep the tunnel’s endpoint IP updated with HE.net.
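As a quick illustration of what that update involves: HE.net accepts a DynDNS-style HTTPS request that re-points the tunnel’s Client IPv4 Address at whatever address the request came from. The sketch below is indicative only and uses placeholder values; the account name, per-tunnel Update Key, and numeric Tunnel ID must come from the tunnelbroker.net configuration page, and pfSense’s built-in Dynamic DNS client (which includes an HE.net tunnel type) is normally the simpler way to automate this.

   # Placeholders -- substitute real values from the tunnelbroker.net account
   USERNAME="he-account-name"       # HE.net account name
   UPDATE_KEY="tunnel-update-key"   # per-tunnel Update Key (not the account password)
   TUNNEL_ID="123456"               # numeric Tunnel ID

   # The tunnel's client endpoint is taken from the source address of the request
   curl -4 -s "https://${USERNAME}:${UPDATE_KEY}@ipv4.tunnelbroker.net/nic/update?hostname=${TUNNEL_ID}"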
Once the GIF tunnel has been created, it must be assigned:
Navigate to Interfaces > Assignments, Interface Assignments tab.
Select the newly created GIF under Available Network Ports.
Click Add to add it as a new interface.
Configure the New OPT Interface¶
The new interface is now accessible under Interfaces > OPTx, where x depends on the number assigned to the interface.
Navigate to the new interface configuration page. (Interfaces > OPTx)
Check Enable Interface.
Enter a name for the interface in the Description field, for example WANv6.
Leave IPv6 Configuration Type as None.
Click Save
Click Apply Changes.
Setup the IPv6 Gateway¶
Navigate to Status > Gateways to view the gateway status. The gateway will show as “Online” if the configuration is successful, as seen in Figure Example Tunnel Gateway Status.
Setup IPv6 DNS¶
The DNS servers likely answer DNS queries with AAAA results already. Entering the DNS servers supplied by the tunnel broker service under System > General Setup is recommended. Enter at least one IPv6 DNS server or use Google’s public IPv6 DNS servers at 2001:4860:4860::8888 and 2001:4860:4860::8844. If the DNS Resolver is used in non-forwarding mode, it will talk to IPv6 root servers automatically once IPv6 connectivity is functional.
Setup LAN for IPv6¶
Once the tunnel is configured and online, the firewall itself has IPv6 connectivity. To ensure clients can access the internet over IPv6, the LAN must also be configured.
One method is to set LAN as dual stack IPv4 and IPv6.
Navigate to Interfaces > LAN
Select IPv6 Configuration Type as Static IPv6
Enter an IPv6 address from the Routed /64 in the tunnel broker configuration with a prefix length of 64. For example, 2001:db8:1111:2222::1 for the LAN IPv6 address if the Routed /64 is 2001:db8:1111:2222::/64.
Click Save
Click Apply Changes
A /64 from within the Routed /48 is another available option.
Setup DHCPv6 and/or Router Advertisements¶
To assign IPv6 addresses to clients automatically, setup Router Advertisements and/or DHCPv6. This is covered in detail in IPv6 Router Advertisements.
A brief overview is as follows:
Navigate to Services > DHCPv6 Server/RA
Check Enable
Enter a range of IPv6 IP addresses inside the new LAN IPv6 subnet
Click Save.
Switch to the Router Advertisements tab
Set the Mode to Managed (DHCPv6 only) or Assisted (DHCPv6+SLAAC)
Click Save.
Modes are described in greater detail at Router Advertisements (Or: “Where is the DHCPv6 gateway option”).
To assign IPv6 addresses to LAN systems manually, use the firewall’s LAN IPv6 address as the gateway with a proper matching prefix length, and pick addresses from within the LAN subnet.
Add Firewall Rules¶
Once LAN addresses have been assigned, add firewall rules to allow the IPv6 traffic to flow.
Navigate to Firewall > Rules, LAN tab.
Check the list for an existing IPv6 rule. If a rule to pass IPv6 traffic already exists, then no additional action is necessary.
Click Add to add a new rule to the bottom of the list
Set the TCP/IP Version to IPv6
Enter the LAN IPv6 subnet as the Source
Pick a Destination of Any.
Click Save
Click Apply Changes
For IPv6-enabled servers on the LAN with public services, add firewall rules on the tab for the IPv6 WAN (the assigned GIF interface) to allow IPv6 traffic to reach the servers on required ports.
Try It!¶
Once firewall rules are in place, check for IPv6 connectivity. A good site to test with is test-ipv6.com. An example of the output results of a successful configuration from a client on LAN is shown here Figure IPv6 Test Results.
Updating the Tunnel Endpoint¶
For a dynamic WAN, such as DHCP or PPPoE, HE.net can still be used as a tunnel broker. pfSense includes a DynDNS type that will update the tunnel endpoint IP address whenever the WAN interface IP changes.
If DynDNS is desired, it may be configured as follows:
Navigate to Services > DynDNS
Click Add to add a new entry.
Set the Service Type to be HE.net Tunnelbroker.
Select WAN as the Interface to Monitor.
Enter the Tunnel ID from the tunnel broker configuration into the Hostname field.
Enter the Username for the tunnel broker site.
Enter either the Password or Update Key for the tunnel broker site into the Password field.
Enter a Description.
Click Save and Force Update.
If and when the WAN IP address changes, pfSense will automatically update the tunnel broker. | https://docs.netgate.com/pfsense/en/latest/recipes/ipv6-tunnel-broker.html | 2021-07-23T19:06:22 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.netgate.com |
The following ROSCon talks have been given on ROS 2 and provide information about the workings of ROS 2 and various demos:
Title
Links
Accelerating Innovation with ROS: Lessons in Healthcare
video
Panel: Software Quality in Robotics
Panel: ROS Agriculture
Achieving Generality and Robustness in Semantic Segmentation
Navigation2: The Next Generation Navigation System
CHAMP Quadruped Control
Kiwibot: ROS2 in the atoms delivery industry
MoveItWorld
OpenCV
ROBOTIS TurtleBot3
Autoware
Dronecode
FIWARE
Lightning Talks and Sponsor Videos 1
Lightning Talks and Sponsor Videos 2
Lightning Talks and Sponsor Videos 3
Lightning Talks and Sponsor Videos 4
Migrating a large ROS 1 codebase to ROS 2
slides / video
The New Architecture of Gazebo Wrappers for ROS 2
Migrating to ROS 2: Advice from Rover Robotics
ROS 2 on VxWorks
Navigation2 Overview
Launch Testing - Launch description and integration testing for ros2
ROS 2 for Consumer Robotics: : The iRobot use-case
Composable Nodes in ROS2
Concurrency in ROS 1 and ROS 2
A True Zero-Copy RMW Implementation for ROS2
ROS2 Real-Time Behavior: Static Memory Allocation
PackML2: State Machine Based System Programming, Monitoring and Control in ROS2
Quality of Service Policies for ROS2 Communications
Micro-ROS: ROS2 on Microcontrollers
ROS2 on Large Scale Systems: Discovery Server
Bridging Your Transitions from ROS 1 to ROS 2
Markup Extensions for ROS 2 Launch
Hands-on ROS 2: A Walkthrough
Launch for ROS 2
The ROS 2 vision for advancing the future of robotics development
ROS 2 Update - summary of alpha releases, architectural overview
Evaluating the resilience of ROS2 communication layer
State of ROS 2 - demos and the technology behind
ROS 2 on “small” embedded systems
Real-time control in ROS and ROS 2
Why you want to use ROS 2
Next-generation ROS: Building on DDS | https://docs.ros.org/en/rolling/ROSCon-Content.html | 2021-07-23T18:57:35 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.ros.org |
Differences from older platform¶
s3 bucket naming constraints¶
In earlier setups we were running with rgw_relaxed_s3_bucket_names set to true. This allowed a few more characters in bucket names but could cause issues with clients and solutions expecting the stricter standard bucket naming constraints. To avoid such issues in the future we are now running with the default constraints, which can be seen here.
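For example (illustrative names): a bucket called My_Bucket.2021 is rejected under the strict rules (uppercase letters and underscores are not allowed), while my-bucket-2021 is accepted (3–63 characters of lowercase letters, digits, hyphens and dots, starting and ending with a letter or digit).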
Using Globals in Server Connect
Overview
We added new Global options in the Server Connect panel. The Globals allow you to define and re-use Database Connections and other Settings (1), Variables / Input Parameters (2) and API Action Steps (3) across all your API Actions in the project:
So it’s no longer needed to add these Database Connection or Mailer Setup steps in every server action you need to use a database connection or send an email.
Defining Global Variables / Input Parameters
You can easily define a variable and access it from all your API Actions. One of the most common use cases is session variables - that’s what our example will show you.
Select Globals and right click $_SESSION:
Create a session variable and add a name for it:
And you are done! Save the changes we made to Globals:
Now you can access the session we defined in your API Actions. Let’s create one - right click API:
Add a name for your API Action and right click steps:
Add a step, which will need the session variable value. We add a Set Value step, just to show you how it works, but this can be any other step:
Click the dynamic data button to select a dynamic value:
You will see the session variable which we defined globally, select it and you are done:
Defining Global Settings
The other Global Settings include Database Connections, Security Provider, Mailer, Oauth2 Provider, S3 Storage and JWT Signing. These can be globally defined and then reused across all your API Actions.
Database Connections
Defining a global database connection is really easy. You can define as many global database connections as your project needs. They will be available for any component which requires a database connection in any of your API Actions.
First, right click the Database Connections under Globals:
And add a database connection:
Add a name for it and then click the Connection Options button:
Enter the database connection details per selected target and click Save:
If you have multiple targets using different database connections, define Connection Options per each of the targets. Just change the active target from the Target dropdown (1) and click the Connection Options button (2):
Then Save the changes:
And you can use this Database Connection in your API Actions.
Right click Steps in your API Action:
And you can directly add a Database Query step:
Select your database connection from the Connection dropdown:
And you are done! You can now setup your query.
Other Global Settings
The other Global Settings can be defined the same way:
Just setup the setting you need and it’s ready to be used across your API Actions:
Defining Global Steps
The Global Steps are steps which will be executed for each of the API Actions in your project. The Global Steps will be executed before your API Action step, so their data will always be available in the API Action Steps:
One useful use case is providing the logged user identity to every server action. So after you’ve setup your global Security Provider:
Just select Globals and right click Steps:
Open the Security Provider menu and select Security Identify:
Select your global Security Provider from the dropdown:
And enable/disable the Output option. Output is useful if you want to access the logged user Identity on your pages - to conditionally show/hide parts of the page for example:
The output, however, is not required if you want to access the user identity in your API Actions - to filter database queries for example.
Click Save and you are done:
The Identify step will run with every API Action and its value will be available for them in the dynamic data picker:
Conclusion
These are the basics of using Globals in Server Connect - try them and let us know what you think.
| https://docs.wappler.io/t/using-globals-in-server-connect/27308 | 2021-07-23T19:50:44 | CC-MAIN-2021-31 | 1627046150000.59 | [array(['https://community.wappler.io/images/emoji/apple/slight_smile.png?v=9',
Setup 101
- Setting up Leaky Paywall
- Caching with Leaky Paywall (i.e WP Engine)
- How to test a subscriber's access
- How to redirect a subscriber back to the page they came from after login or subscribing
- Customizing your subscription messages
- How To: Set up PayPal as a Payment Gateway
- What if someone tries to get around the paywall?
- How To Tell Leaky Paywall to Ignore Editors and Other User Roles
- Recommended plugins/apps to help manage your publication
- How to restrict individual articles (Visibility setting)
- How to change the subscription details on a subscribe card (and registration form)
- Leaky Paywall MailChimp Addon
- How to set up a free registration wall to build your email list
- Testing Leaky Paywall on a live site
- How to set up renewal reminders
- How to create multiple subscription levels in Leaky Paywall
- How to restrict categories, tags, edition articles, and more in Leaky Paywall
- How do I download my plugins and get my license keys?
- Create custom registration pages with shortcodes
- Use your subscriber's email address as their username | https://docs.zeen101.com/category/40-getting-started | 2021-07-23T18:04:18 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.zeen101.com |
The Benefits of Buying Used and Refurbished Medical Equipment
Medical equipment promises individuals better healthcare through innovative technology. A common question that most hospitals and clinics face when in the market for medical equipment is whether they should buy new or used. If you are a large hospital facility, a small start-up clinic, or a private medical practice, having the correct line of medical equipment is a very important part of your practice. Running any medical business is expensive, and maintaining a cost-efficient budget is as important as anything else in your practice. There are many benefits of buying used and refurbished medical equipment:
Cost Savings:
There’s no doubt that over the past few years the cost of healthcare has risen significantly. It is an obvious benefit that buying used or refurbished medical equipment will keep medical providers within budget and will allow them to concentrate more on quality of healthcare and expansion of their practice.
Reliability and support:
Reliability and support are some of the most important factors when acquiring and using any medical devices in your practice and therefore, it’s very important to know that the company you’re dealing with will stand behind their product 100% and will be reachable for assistance anytime.
Eco-Friendly
Some medical equipment releases certain toxins into the environment and takes ages to degrade. Instead, consider buying used/refurbished medical equipment to reduce the carbon footprint. Finding ways to be eco-friendly is difficult when it comes to medical equipment, but by acquiring refurbished medical equipment you will both save money and act in an eco-friendly way.
Avatar Configurator
After successful capturing and uploading, the data will be processed fully automatically by the Pictofit Content Service. The result is an avatar in the form of a .hav3d file that you can configure in terms of body pose and shape using the RRAvatar3DUserAdjustmentsLayout class. After editing, the avatar can be converted to 2D using the RRAvatarConverter, which can then be used for mix & match.
To edit an avatar, you will need to create an RRAvatar3D object from a .hav3d file. You can now instantiate an RRAvatar3DUserAdjustmentsLayout object that you then set as the layout property of an RRGLRenderView instance. The following code snippet shows how to initialize the configurator:
func initializeHeadAvatarEditor(avatar: RRAvatar3D) {
    // Instantiate the editor layout
    self.editorLayout = RRAvatar3DUserAdjustmentsLayout()
    // Set the avatar you want to edit
    self.editorLayout!.avatar = avatar
    // Attach the layout to an RRGLRenderView instance you are showing on screen
    self.renderView.layout = self.editorLayout
}
See the API docs for more details on how you can apply poses and modify the body shape using the RRAvatar3DUserAdjustmentsLayout class. To create your own pose and shape presets you will need Pictofit Studio to create a custom RRAvatar3DUserAdjustmentsLayoutConfig file. The SDK already comes with a set of default poses and shape modifiers.
Saving and Loading the Configurator State
The
RRAvatar3DUserAdjustmentsLayout class provides functionality to save and load the state of an edited avatar. This is required if you want to allow users to modify the avatar at a later point again. In that case, you also have to store the configurator state alongside with the
.hav3d file. The following code snippet shows how you can implement the loading and saving of the editor state:
func loadState(statePath: String) {
    let layoutState = RRAvatar3DUserAdjustmentsLayoutState.load(fromFile: statePath)!
    self.editorLayout = RRAvatar3DUserAdjustmentsLayout(state: layoutState,
                                                        avatar: self.avatar!,
                                                        poseAndShapeDataProvider: nil)
    self.renderView.layout = self.editorLayout
}

func saveState(statePath: String) {
    let layoutState = self.editorLayout!.layoutState
    try! layoutState.save(toFile: statePath)
}
Converting to 2D for Mix & Match
After users have finished adjusting their avatar within the configurator, it can be converted to an instance of the
RRAvatar class using the
RRAvatarConverter class. The converted avatar is then immediately usable for 2D mix & match. All you are going to need is a properly set up
RRAvatar3DUserAdjustmentsLayout that is attached to an
RRGLRenderView instance and to load the avatar you want to convert. The resulting 2D avatar will then look exactly the same as seen in the configurator. The following code snippet shows how to do it:
func convertAvatar() -> RRAvatar {
    let avatarConverter = RRAvatarConverter.init(renderView: self.renderView, layout: self.editorLayout!)
    let avatar = try! avatarConverter.convertTo2DAvatar()
    return avatar
}
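Taken together, a typical flow simply chains the calls shown above (a sketch; myAvatar and statePath are placeholders for your own avatar object and file path):

// 1. Show the configurator for a loaded RRAvatar3D
initializeHeadAvatarEditor(avatar: myAvatar)

// 2. Persist the user's adjustments so the avatar can be edited again later
saveState(statePath: statePath)

// 3. Convert the adjusted avatar to 2D for mix & match
let avatar2D = convertAvatar()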
Advanced
All components around avatar capturing are already preconfigured so that users have a great experience. Still our SDK allows advanced customisation which is explained in the subsequent sections.
Automatic Pose Readjustment
This feature makes sure that the pose of an avatar changes naturally when users adjust properties like the weight. Automatic Pose Readjustment is enabled by default using a config file that is part of the SDK. You can easily disable it by calling the disablePoseReadjustment method of the RRAvatar3DUserAdjustmentsLayout class.
Depending on the range of certain shape properties and the body pose, it might be necessary to use a different configuration than the default one. When given a customized configuration file, the following code will enable the behaviour:
func enablePoseReadjustments(configPath: String) {
    let config = RRPoseReadjustmentConfig.load(fromConfigFile: configPath)!
    try! self.editorLayout!.enablePoseReadjustment(config)
}
Editor Layout Config File
The
RRAvatar3DUserAdjustmentsLayoutConfig includes necessary information for using the
RRAvatar3DUserAdjustmentsLayout class covering the following things:
- A set of adjustable morphs
- A set of pose presets
- A set of shape presets
For starters, we recommend not providing any config file to the RRAvatar3DUserAdjustmentsLayout class. In that case, the layout uses a default configuration that comes with the SDK. You can create a template JSON file that shows all possible settings of the config file using the method createTemplateConfigFile() of the RRAvatar3DUserAdjustmentsLayoutConfig class.
Signal
GtkDragSource::drag-cancel
Declaration
gboolean drag_cancel ( GtkDragSource self, GdkDrag* drag, GdkDragCancelReason* reason, gpointer user_data )
Description [src]
Emitted on the drag source when a drag has failed.
The signal handler may handle a failed drag operation based on the type of error. It should return TRUE if the failure has been handled and the default “drag operation failed” animation should not be shown.
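For example, a handler might suppress the default animation only for a specific cancel reason. A minimal sketch in C, using the conventional GObject signal handler shape (variable names and the chosen enum value are illustrative):

static gboolean
on_drag_cancel (GtkDragSource *source,
                GdkDrag       *drag,
                GdkDragCancelReason reason,
                gpointer       user_data)
{
  if (reason == GDK_DRAG_CANCEL_NO_TARGET)
    return TRUE;   /* handled: skip the default "drag failed" animation */

  return FALSE;    /* let GTK run its default handling */
}

/* ... */
g_signal_connect (drag_source, "drag-cancel", G_CALLBACK (on_drag_cancel), NULL);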
NOTES:
- OpenCL* custom layer support is available in the preview mode.
- This section assumes you are familiar with developing kernels using OpenCL.
To customize your topology with an OpenCL layer, follow the steps below:
Write and compile your OpenCL C code with the standalone offline OpenCL compiler (clc).
Write a configuration file that binds the compiled kernel to the topology file (.xml) of the model IR.
NOTE: OpenCL compiler, targeting Intel® Neural Compute Stick 2 for the SHAVE* processor only, is redistributed with OpenVINO.
OpenCL support is provided by ComputeAorta*, and is distributed under a license agreement between Intel® and Codeplay* Software Ltd.
The OpenCL toolchain for the Intel® Neural Compute Stick 2 supports offline compilation only, so first compile OpenCL C code using the standalone clc compiler. You can find the compiler binary at <INSTALL_DIR>/deployment_tools/tools/cl_compiler.
NOTE: By design, custom OpenCL layers support any OpenCL kernels written with 1.2 version assumed. It also supports half float extension and is optimized for this type, because it is a native type for Intel® Movidius™ VPUs.
SHAVE_MA2X8XLIBS_DIR=<INSTALL_DIR>/deployment_tools/tools/cl_compiler/lib/
SHAVE_LDSCRIPT_DIR=<INSTALL_DIR>/deployment_tools/tools/cl_compiler/ldscripts/
SHAVE_MYRIAD_LD_DIR=<INSTALL_DIR>/deployment_tools/tools/cl_compiler/bin/
SHAVE_MOVIASM_DIR=<INSTALL_DIR>/deployment_tools/tools/cl_compiler/bin/
Use the --strip-binary-header option to make an OpenCL runtime-agnostic binary runnable with the Inference Engine.
To tie the topology IR for a layer you customize, prepare a configuration file, so that the Inference Engine can find parameters for your kernel and the execution work grid is described. For example, given the following OpenCL kernel signature:
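For illustration, assume a signature along these lines (parameter names here are placeholders):

__kernel void reorg_nhwc(__global const half* restrict src,
                         __global       half* restrict out,
                         int W, int H, int C, int stride)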
Configuration file for this kernel might be the following:
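A minimal sketch of such a binding, using the nodes and attributes described below (the layer name, file name, formats, sources, and work sizes are illustrative):

<CustomLayer name="ReorgYolo" type="MVCL" version="1">
    <Kernel entry="reorg_nhwc">
        <Source filename="reorg.bin"/>
    </Kernel>
    <Parameters>
        <Tensor arg-name="src"    type="input"  port-index="0" format="BYXF"/>
        <Tensor arg-name="out"    type="output" port-index="0" format="BYXF"/>
        <Scalar arg-name="W"      type="int"    source="I.X"/>
        <Scalar arg-name="H"      type="int"    source="I.Y"/>
        <Scalar arg-name="C"      type="int"    source="I.F"/>
        <Scalar arg-name="stride" type="int"    source="stride"/>
    </Parameters>
    <WorkSizes dim="input,0" global="X,Y,1" local="1,1,1"/>
</CustomLayer>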
Each custom layer is described with the
CustomLayer node. It has the following nodes and attributes:
CustomLayercontains the following attributes:
name– (Required) The name of the Inference Engine layer to bind the kernel with.
typeand
version– (Required) Reserved for future use. Set them to
MVCLand
1respectively.
max-shaves– (Optional) The maximum number of SHAVE cores that should be dedicated for the layer. It is useful for debugging concurrency issues or for resource saving that memory bound kernel does not scale well with the number of cores, so more resources can be left for the rest of a topology.
Kernelmust contain the following attributes:
entry– The name of your kernel function as you defined it in a source file. In the example above, it is
reorg_nhwc.
Sourcemust contain the following attributes:
filename– The path to a compiled binary relative to the
.xmlbinding file.
Parameters– Describes parameters bindings. For more information, see the description below.
WorkSizes– Describes local and global work group sizes and the source for dimension deduction as a pair
direction,port. In the example above, the work group is described relatively to the dimension of the input tensor that comes through port 0 in the IR.
globaland
localwork group configurations support any simple math expressions with +,-,*,/, and () from
B(batch),
Y(height),
X(width) and
F(channels).
Where– Allows to customize bindings with the
key="value"attribute. For example, to substitute only 3x3 convolutions, write
<Where kernel="3,3"/>in the binding xml.
Parameter description supports
Tensor of one of tensor types such as
input,
output,
input_buffer,
output_buffer or
data,
Scalar, or
Data nodes and has the following format:
Tensornode of
inputor
outputtype must contain the following attributes:
arg-name– The name of a kernel parameter in the kernel signature.
type– Node type:
inputor
outputas in the IR.
port-index– A number of input/output ports as in the IR.
format– The channel order in the tensor. Optional conversion layers are generated if the custom layer format is not compatible with formats of neighboring layers.
BFXY,
BYXF, and
ANYformats are supported currently.
Each
Tensor node of
input_buffer or
output_buffer type must contain the following attributes:
arg-name– The name of a kernel parameter in the kernel signature.
type– Node type:
input_bufferor
output_buffer. Use the appropriate type to bind multiple kernels that correspond to different stages of the same layer.
port-index– The unique identifier to bind by.
dim– The dim source with the same
direction,portformat used for
WorkSizesbindings.
size– Amount of bytes needed. Current expression syntax supports only expression over dimensions of over selected input/output tensor or constants and might be expended in the future.
Here is an example of multi-stage MVN layer binding:
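The full binding is not reproduced here; the relevant part is how the intermediate buffer is declared once as an output_buffer of the producing kernel and once as an input_buffer of the consuming kernel, tied together by the same port-index (a sketch with illustrative names and sizes):

<!-- stage 1: writes partial results -->
<Tensor arg-name="partial_sums" type="output_buffer" port-index="0" dim="input,0" size="Y*F*2"/>

<!-- stage 2: reads them back -->
<Tensor arg-name="partial_sums" type="input_buffer" port-index="0" dim="input,0" size="Y*F*2"/>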
Tensornode that has the type
datamust contain the following attributes:
source– A name of the blob as it is in the IR. Typical example is
weightsfor convolution.
format– Specifies the channel order in the tensor. Optional conversion layers are generated if the custom layer format is not.
Scalarnode must contain the following attributes:
arg-name– The name of a kernel parameter in the kernel signature.
type–
intor
floatvalue. It is used for correct argument extraction from IR parameters.
source– Contains the name of the parameter in the IR file or input/output (
I/
O,
In/
On, where
nis a port number) followed by dimension
B(batch),
Y(height),
X(width), or
F(channels).
Datanode must contain the following attributes:
arg-name– The name of a kernel parameter in the kernel signature.
type– Node type. Currently,
local_datais the only supported value, which defines buffer allocated in fast local on-chip memory. It is limited to 100KB for all
__localand
__privatearrays defined inside the kernel as well as all
__localparameters passed to the kernel. Note that a manual-DMA extension requires double buffering. If the custom layer is detected to run out of local memory, the inference fails.
dim– The dim source with the same
direction,portformat used for
WorkSizesbindings.
size– Amount of bytes needed. The current expression syntax supports only expression over dimensions of over selected input/output tensor or constants and may be extended in the future. The example binding below illustrates a kernel with two local buffers passed to the kernel.
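A sketch of such a binding (names and sizes are illustrative; each buffer holds one line of half values for the selected tensor):

<Data arg-name="src_line" type="local_data" dim="input,0"  size="X*F*2"/>
<Data arg-name="dst_line" type="local_data" dim="output,0" size="X*F*2"/>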
NOTE: If both native and custom layer implementations are present, the custom kernel has a priority over the native one.
Before loading the network that features the custom layers, provide a separate configuration file and load it using the InferenceEngine::Core::SetConfig() method with the PluginConfigParams::KEY_CONFIG_FILE key and the configuration file name as a value:
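A minimal sketch in C++ (the device name is assumed to be MYRIAD for the Intel® Neural Compute Stick 2, and the path is a placeholder):

InferenceEngine::Core core;
core.SetConfig({{InferenceEngine::PluginConfigParams::KEY_CONFIG_FILE,
                 "<path_to>/customLayers.xml"}},
               "MYRIAD");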
Optionally, set a path to a custom layers description with a pair of VPU_CUSTOM_LAYERS and /path/to/your/customLayers.xml as a network configuration:
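For example (a sketch; the key is passed as a plain string here and the path is a placeholder):

InferenceEngine::ExecutableNetwork execNet =
    core.LoadNetwork(network, "MYRIAD",
                     {{"VPU_CUSTOM_LAYERS", "/path/to/your/customLayers.xml"}});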
This section provides optimization guidelines on writing custom layers with OpenCL for VPU devices. Knowledge about general OpenCL programming model and OpenCL kernel language is assumed and not a subject of this section. The OpenCL model mapping to VPU is described in the table below.
Note that by the OpenCL specification, the work group execution order is not specified. This means that it is your responsibility to ensure that race conditions among work groups are not introduced. Custom layer runtime spits evenly work grid among available compute resources and executes them in an arbitrary order. This static scheduling approach works best if the load is evenly spread out across work groups, which is a typical case for Deep Learning kernels. The following guidelines are recommended to use for work group partitioning:
max-shavesattribute for the
CustomLayernode. This keeps more resources for the rest of topology. It is also useful if the kernel scalability reached its limits, which may happen while optimizing memory bound kernels or kernels with poor parallelization.
BFXY/
BYXF) for the kernel if it improves work group partitioning or data access patterns. Consider not just specific layer boost, but full topology performance because data conversion layers would be automatically inserted as appropriate.
Offline OpenCL compiler (clc) features automatic vectorization over get_global_id(0) usage, if uniform access is detected. For example, the kernel below could be automatically vectorized:
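A trivial illustration of such a pattern (not the original sample from this guide):

__kernel void add_bias(__global const half* restrict src,
                       __global       half* restrict dst,
                       half bias)
{
    int idx = get_global_id(0);
    dst[idx] = src[idx] + bias;   // uniform, unit-stride access indexed by get_global_id(0)
}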
However, this work-group based vectorizer (WGV) conflicts with the default LLVM vectorizer based on superword level parallelism (SLP) for the current compiler version. Manual vectorization is recommended to provide the best performance for non-uniform code patterns. WGV works if and only if vector types are not used in the code.
Here is a short list of optimization tips:
Use restrict where possible; see the ocl_grn kernel in the example below.
Add restrict markers for kernels with manually vectorized code. In the ocl_grn kernel below, the unrolled version without restrict is up to 20% slower than the most optimal one, which combines unrolling and restrict.
ocl_grnkernel below, the unrolled version without
restrictis up to 20% slower than the most optimal one, which combines unrolling and
restrict.
Add #pragma unroll N to your loop header. The compiler does not trigger unrolling by default, so it is your responsibility to annotate the code with pragmas as appropriate. The ocl_grn version with #pragma unroll 4 is up to 50% faster, most of which comes from unrolling the first loop, because LLVM, in general, is better at scheduling 3-stage loops (load-compute-store), while the first loop variance += (float)(src_data[c*H*W + y*W + x] * src_data[c*H*W + y*W + x]); is only 2-stage (load-compute). Pay attention to unrolling such cases first. The unrolling factor is loop-dependent. Choose the smallest number that still improves performance as an optimum between the kernel size and execution speed. For this specific kernel, changing the unroll factor from 4 to 6 results in the same performance, so an unrolling factor of 4 is the optimum. For Intel® Neural Compute Stick 2, unrolling is conjugated with the automatic software pipelining for load, store, and compute stages:
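The following sketch reconstructs the ocl_grn kernel discussed above (the accumulation line is quoted from the text; the remaining details are illustrative and may differ from the original sample):

__kernel void ocl_grn(__global const half* restrict src_data,
                      __global       half* restrict dst_data,
                      int C, float bias)
{
    int x = get_global_id(0);
    int W = get_global_size(0);
    int y = get_global_id(1);
    int H = get_global_size(1);

    float variance = bias + 1e-9f;
    #pragma unroll 4
    for (int c = 0; c < C; c++)
        variance += (float)(src_data[c*H*W + y*W + x] * src_data[c*H*W + y*W + x]);

    half norm = convert_half(native_rsqrt(variance));
    #pragma unroll 4
    for (int c = 0; c < C; c++)
        dst_data[c*H*W + y*W + x] = src_data[c*H*W + y*W + x] * norm;
}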
reqd_work_group_sizekernel attribute to ask the compiler to unroll the code up to the local size of the work group. Note that if the kernel is actually executed with the different work group configuration, the result is undefined.
halfcompute if it keeps reasonable accuracy. 16-bit float is a native type for Intel® Neural Compute Stick 2, most of the functions
half_*are mapped to a single hardware instruction. Use the standard
native_*function for the rest of types.
convert_halffunction over
vstore_halfif conversion to 32-bit float is required.
convert_halfis mapped to a single hardware instruction. For the
cvtf32f16kernel above, the line
outImage[idx] = convert_half(inImage[idx]*scale+bais);is eight times slower than the code with
vstore_half.
clccompiler due to conflicts with the auto-vectorizer. The generic advice would be to setup local size by
xdimension equal to inputs or/and outputs width. If it is impossible to define the work grid that exactly matches inputs or/and outputs to eliminate checks, for example,
if (get_global_id(0) >= width) return, use line-wise kernel variant with manual vectorization. The kernel example below demonstrates the impact of early exits on kernel performance.
reorgkernel is auto-vectorizable, but an input for YOLO v2 topology is
NCHW=<1,64,26,26>and it is not multiple of vector width, which is
8for
halfdata type. As a result, the Inference Engine does not select the auto-vectorized kernel. To compare performance of auto-vectorized and scalar version of the kernel, change the input size to
NCHW=<1,64,26,32>. This enables the auto-vectorized version to be selected by the Inference Engine and can give you about 30% uplift. Since the auto-vectorized version is faster, it makes sense to enable it for the YOLO v2 topology input size by setting the local size multiple of vector, for example, 32, and adjust global sizes accordingly. As a result, the execution work grid exceeds actual input dimension, so out-of-bound checks should be inserted. See the updated kernel version below:
w = min(w, W-1);with
if (w >= W) return;, runtime increases up to 2x against to code without branching (initial version).
__localmemory.
reorgkernel unrolled by
stride:
scrdata in this case loaded only once. As the result, the cycle count drops up to 45% against the line-wise version.
__dlobalto
__localor
__privatememory if the data is accessed more than once. Access to
__dlobalmemory is orders of magnitude slower than access to
__local/
__privatedue to statically scheduled pipeline, which stalls completely on memory access without any prefetch. The same recommendation is applicable for scalar load/store from/to a
__blobalpointer since work-group copying could be done in a vector fashion.
__dma_preloadand
__dma_postwrite intrinsics. This means that instead of one kernel, a group of three kernels should be implemented:
kernelName,
__dma_preload_kernelName, and
__dma_postwrite_kernelName.
__dma_preload_kernelNamefor a particular work group
nis guaranteed to be executed before the
n-th work group itself, while
__dma_postwrite_kernelNameis guaranteed to be executed after a corresponding work group. You can define one of those functions that are intended to be used to copy data from-to
__globaland
__localmemory. The syntactics requires exact functional signature match. The example below illustrates how to prepare your kernel for manual-DMA.
Alternatively, use async_work_group_copy, which is also mapped to a DMA call.
Here is the list of supported functions:
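At minimum, the standard OpenCL 1.2 built-ins of this family are available, for example:

event_t async_work_group_copy(__local T *dst, const __global T *src, size_t num_elements, event_t event);
event_t async_work_group_copy(__global T *dst, const __local T *src, size_t num_elements, event_t event);
event_t async_work_group_strided_copy(__local T *dst, const __global T *src, size_t num_elements, size_t src_stride, event_t event);
event_t async_work_group_strided_copy(__global T *dst, const __local T *src, size_t num_elements, size_t dst_stride, event_t event);
void wait_group_events(int num_events, event_t *event_list);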
where
T can be
uchar,
char,
short,
ushort,
int,
uint,
long,
ulong,
half or
float.
Modified version of the GRN kernel could be the following:
Note the
get_local_size and
get_local_id usage inside the kernel. 21x speedup is expected for a kernel on enet-curbs setup because it was completely limited by memory usage.
An alternative method of using DMA is to use work item copy extension. Those functions are executed inside a kernel and requires work groups equal to single work item.
Here is the list of supported work item functions:
where
T can be
uchar,
char,
short,
ushort,
int,
uint,
long,
ulong,
half or
float. | https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_Extensibility_DG_VPU_Kernel.html | 2021-07-23T19:06:15 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.openvinotoolkit.org |
Last modified: May 13, 2020
Overview
The /usr/local/cpanel/bin/autossl_check script allows you to manually check the SSL status for a single user, or for all users. The script can help you troubleshoot any AutoSSL issues via the command line. The script checks whether a user’s certificates have expired, and if there is SSL coverage for a user’s domains. The system uses this script when you select the Run AutoSSL for All Users setting in WHM’s Manage AutoSSL interface (WHM >> Home >> SSL/TLS >> Manage AutoSSL).
The system calls this script daily via a cron job in the /etc/cron.d file.
Run the script
To run this script on the command line, use the following format:
/usr/local/cpanel/bin/autossl_check [options]
Options
Use the following options with this script: | https://docs.cpanel.net/whm/scripts/the-autossl_check-script/ | 2021-07-23T20:03:01 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.cpanel.net |
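For example (flag names as documented for recent cPanel versions; "example" is a hypothetical account name — check the script's own help output on your system):

# Check AutoSSL status for a single cPanel account:
/usr/local/cpanel/bin/autossl_check --user=example

# Check AutoSSL status for every account on the server:
/usr/local/cpanel/bin/autossl_check --all-users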
Adds a command onto the commandbuffer to draw the VR Device's occlusion mesh to the current render target.
Commands in the rendering command buffer are lower-level graphics operations that can be sequenced in scripting. This command in particular is used to render an occlusion mesh provided by the active VR Device.
Call this method before other rendering methods to prevent rendering of objects that are outside the VR device's visible regions.
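A minimal sketch in C# (the command buffer name, camera event, and viewport values are illustrative):

CommandBuffer cmd = new CommandBuffer { name = "VR Occlusion" };
// Draw the device's occlusion mesh over the full camera viewport
cmd.DrawOcclusionMesh(new RectInt(0, 0, camera.pixelWidth, camera.pixelHeight));
camera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, cmd);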
See Also: XRSettings.useOcclusionMesh and XRSettings.occlusionMaskScale | https://docs.unity3d.com/ja/2019.4/ScriptReference/Rendering.CommandBuffer.DrawOcclusionMesh.html | 2021-07-23T18:17:25 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.unity3d.com |
.
- The Pre-Extend Wizard will prompt you to enter the following information.
- After receiving the message that the pre-extend checks were successful, click Next.
- Depending upon the hierarchy being extended, LifeKeeper will display a series of information boxes showing the Resource Tags to be extended, which cannot be edited. Click Extend.
Basler Product Documentation#
For the latest version of the documentation, visit docs.baslerweb.com.
The Area Scan Cameras section includes general maintenance and safety instructions, model specifications, and task-oriented feature descriptions. A handy filter feature allows you to tailor the content to match your specific camera model.
There are also sections about Basler's embedded vision kits and frame grabber and vision component portfolios. The software section covers Basler's host software, the pylon Camera Software Suite, used for configuring your camera, as well as Visual Applets, a development environment for programming frame grabbers.
User's manuals for cameras not included in the Basler Product Documentation as well as installation guides and application notes can be found under Other Documentation. | https://docs.baslerweb.com/?filter=Camera:a2A4504-5gcPRO | 2020-10-23T21:48:06 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.baslerweb.com |
Does Crobox offer opt-out functionality?
.
Do you want to earn additional income?
We're partnering with Upwork to connect Azure certified freelancers with customers. Sign up with Upwork.
Why get certified
Certifications give you a professional edge with evidence of industry recognized role-based skills.
Benefits of Microsoft Certifications. Download Whitepaper
38% of IT positions will be cloud related by 2021.
35% of IT professionals say they are more influential during cloud deployments than counterparts during deployments of other technologies.
36.9% of IT professionals claimed certification helped them perform complex tasks more confidently.
MB-310: Microsoft Dynamics 365 Finance
Languages: English
Retirement date: none
This exam measures your ability to accomplish the following technical tasks: set up and configure financial management; implement and manage accounts payable and expenses; implement accounts receivable, credit, collections, and revenue recognition; and manage budgeting and fixed assets.
Price based on the country in which the exam is proctored. | https://docs.microsoft.com/en-us/learn/certifications/exams/mb-310?WT.mc_id=thomasmaurer-blog-thmaure | 2020-10-23T22:35:34 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.microsoft.com |
As your team grows, it becomes very difficult to keep track of all changes in the project, so we have created the ability to track any activity (delete, edit, create, etc.) performed by specific users.
To track user activity, simply click the Activity Log icon on the menu.
You can track the time and type of each activity. You can also see which user made the changes in the project.
- Create your DataKeeper volumes and your cluster. See Creating a DataKeeper Volume Resource in WSFC for reference.
- From the Windows Failover Cluster Manager UI (cluadmin.msc), select Configure Role and navigate to the screen to select the iSCSI target role.
- Select iSCSI Target Server role and select Next.
- The Client Access Point page appears. Type the Client Access Point name and IP address for the iSCSI Target Server instance.
- On the Select Storage dialog, select your DataKeeper volume(s).
- With the next set of screens, you should be able to complete the configuration.
- Following setup, from the Failover Cluster UI, add dependencies for the DataKeeper volume(s).
a. Click on Roles in the left pane, then click on the iSCSI Target Server resource in the top center pane.
b. In the lower center pane, select the Resources tab, then right-click on the Name: <client access point name> under the Server Name heading and select Properties.
c. Select the Dependencies tab and add the appropriate DataKeeper volume(s) as dependencies.
- Setup is complete. Proceed to the iSCSI Virtual Disks configuration.
Configuring the version 2 Converse Web Client (deprecated)
Note: Version 3 is now available. We recommend that you use version 3, and that you upgrade any existing implementations to this version as soon as possible.
This is a standalone HTML application intended for customers who want to integrate Converse into their website, while keeping their website's look and feel.
This example shows the standard elements and style:
On the web page that will use the bot, you need to complete these steps:
- Add the scripts and style sheets required to run the bot.
- Add the JavaScript snippet for the bot. You can copy this from the bot itself.
- Configure the display options for the bot, such as the title, buttons and so on.
- Add a container for the bot to the HTML of the web page.
- Set the size of the container and, optionally, change the style to match your brand.
See also Web Client code example (version 2 deprecated). | https://docs.converse.engageone.co/Integration_web_client_v2/Concept_Web_Client.html | 2020-10-23T21:45:11 | CC-MAIN-2020-45 | 1603107865665.7 | [array(['../images/Web_client_example.png', None], dtype=object)] | docs.converse.engageone.co |
SQL Statements
This list of SQL statements provide a record of SQL queries and other operations for each table, including insert, update, and delete. These SQL statements are linked to a query plan, and this link provides the option to freeze this query plan.
The system creates an SQL Statement for each SQL DML operation. This provides a list of SQL management (DML) operations include queries against the table, and insert, update, and delete operations. Each data management (DML) operation (both Dynamic SQL and Embedded SQL) creates an SQL Statement when the operation is executed.
Dynamic SQL SELECT commands create an SQL Statement when the query is prepared. In addition, an entry is created in the Management Portal Cached Queries listing.
Embedded SQL cursor-based SELECT commands create an SQL Statement when the OPEN command invokes a DECLARED query. No separate entry is created in the Management Portal Cached Queries listing.
If a query references more than one table, a single SQL Statement is created in the namespace’s SQL Statements listing that lists all of the referenced tables in the Table/View/Procedure Name(s) column, and for each individual referenced table the Table’s SQL Statements listing contains an entry for that query.
An SQL Statement is created when the query is prepared for the first time. If more than one client issues the same query only the first prepare is recorded. For example, if JDBC issues a query then ODBC issues an identical query the SQL Statement index would only have information about the first JDBC client and not the ODBC client.
Most SQL Statements have an associated Query Plan. When created, this Query Plan is unfrozen; you can subsequently designate this Query Plan as a frozen plan. SQL Statements with a Query Plan include DML commands that involve a SELECT operation. SQL Statements without a Query Plan are listed in the “Plan State” section below.
SQL Statements only lists the most recent version of an SQL operation. Unless you freeze the SQL Statement, InterSystems IRIS® data platform replaces it with the next version. Thus rewriting and invoking the SQL code in a routine causes the old SQL code to disappear from SQL Statements.
Other SQL Statement Operations
The following SQL commands perform more complex SQL Statement operations:. Note that this listing of SQL Statements can contain stale (no longer valid) listings., but must follow statement text punctuation whitespace (name , age, not name,age). If a query references more than one table, the Filter includes the SQL Statement if it selects for any referenced table in the Table/View/Procedure Name(s) column. The Filter option is user customized.
The Max rows option defaults to 1,000. The maximum value is 10,000. the minimum value is 10. To list more than 10,000 SQL Statements, use INFORMATION_SCHEMA.STATEMENTS. The Page size and Max rows options are user customized. InterSystems IRIS). Executing the OPEN command for a declared CURSOR generates an SQL Statement with an associated Query Plan. Embedded SQL statements that use that cursor (FETCH cursor, UPDATE...WHERE CURRENT OF cursor, DELETE...WHERE CURRENT OF cursor, and SQL statement generation normalizes lettercase and whitespace. Other differences are as follows:
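To look at statements beyond what the interface can display, you can query the catalog table directly; a minimal example (the column list is reduced to * to keep it illustrative):

SELECT TOP 25 * FROM INFORMATION_SCHEMA.STATEMENTS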
If you issue a query from the Management Portal interface or the SQL Shell interface, the resulting SQL Statement differs from the query by preceding the SELECT statement with DECLARE QRS CURSOR FOR (where “QRS” default schema name..
When SQL statements are prepared via xDBC, SQL statement generation appends SQL Comment Options (#OPTIONS) to the statement text if the options are needed to generate the statement index hash. This is shown in the following example:
DECLARE C CURSOR FOR SELECT * INTO :%col(1) , :%col(2) , :%col(3) , :%col(4) , :%col(5) FROM SAMPLE . COMPANY /*#OPTIONS {"xDBCIsoLevel":0} */
Stale SQL Statements
When a routine or class associated with an SQL Statement is deleted, the SQL Statement listing is not automatically deleted. This type of SQL Statement listing is referred to as Stale. Because it is often useful to have access to this historic information and the performance statistics associated with the SQL Statement, these stale entries are preserved in the Management Portal SQL Statement listing.
You can remove these stale entries by using the Clean Stale button. Clean Stale removes all non-frozen SQL Statements for which the associated routine or class (table) is no longer present or no longer contains the SQL Statement query. Clean Stale does not remove frozen SQL Statements.
You can perform the same clean stale operation using the $SYSTEM.SQL.Statement.Clean() method.
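For example, from an ObjectScript session (optional arguments are omitted here; see the class reference for the available parameters):

 DO $SYSTEM.SQL.Statement.Clean()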
If you delete a table (persistent class) associated with an SQL Statement, the Table/View/Procedure Name(s) column is modified, as in the following example: SAMPLE.MYTESTTABLE - Deleted??; the name of the deleted table is converted to all uppercase letters and is flagged as “Deleted??”. Or, if the SQL Statement referenced more than one table: SAMPLE.MYTESTTABLE - Deleted?? Sample.Person.
For a Dynamic SQL query, when you delete the table the Location(s) column is blank because all cached queries associated with the table have been automatically purged. Clean Stale removes the SQL Statement.
For an Embedded SQL query, the Location(s) column contains the name of the routine used to execute the query. When you change the routine so that it no longer executes the original query, the Location(s) column is blank. Clean Stale removes the SQL Statement. When you delete a table used by the query, the table is flagged as “Deleted??”; Clean Stale does not remove the SQL Statement.
A system task is automatically run once per hour in all namespaces to clean up indices for any SQL Statements that might be stale or have stale routine references. This operation is performed to maintain system performance. This internal clean-up is not reflected in the Management Portal SQL Statements listings.. Note that these operations do not change the SQL Statements listings; you must use Clean Stale to update the SQL Statements listings.
Data Management (DML) SQL Statements
The Data Management Language (DML) commands that create an SQL Statement are: INSERT, UPDATE, INSERT OR UPDATE, DELETE, TRUNCATE TABLE, SELECT, and OPEN cursor for a declared cursor-based SELECT. You can use Dynamic SQL or Embedded SQL to invoke a DML command. A DML command can be invoked for a table or a view, and InterSystems IRIS creates a corresponding SQL Statement.
The system creates an SQL Statement when Dynamic SQL is prepared or when an Embedded SQL cursor is opened, not when the DML command is executed. The SQL Statement timestamp records when this SQL code invocation occurred, not when (or if) the query was executed. Thus an SQL Statement may represent a change to table data flags the corresponding SQL Statement for Clean Stale deletion. Purging a frozen cached query removes the Location value for the corresponding SQL Statement. Unfreezing the SQL Statement flags it for Clean Stale deletion.
Executing.
Opening a cursor-based Embedded SQL Data Management Language (DML) routine creates an SQL Statement with a Query Plan. Associated Embedded SQL statements .
SELECT Commands
Invoking
There are two ways to display the SQL Statement Details:
From the SQL Statements tab, select an SQL Statement by clicking the Table/View/Procedure Name(s) link in the left-hand column. This displays the SQL Statement Details in a separate tab. This interface allows you to open multiple tabs for comparison. It also provides a Query Test button that displays the SQL Runtime Statistics page.
From the table’s Catalog Details tab (or the SQL Statements tab), select an SQL Statement by clicking the Statement Text link in the right-hand column. This displays the SQL Statement Details in a pop-up window.
You can use either SQL Statement Details display to view the Query Plan and to freeze or unfreeze the query plan.
SQL Statement Details provides buttons to Freeze or Unfreeze the query plan. It also provides a Clear SQL Statistics button to clear the Performance Statistics, an Export button to export one or more SQL Statements to a file, as well as a buttons to Refresh and to Close the page.
The SQL Statement Details display contains the following sections. Each of these sections can be expanded or collapsed by selecting the arrow icon next to the section title:). Occasionally, what appear to be identical SQL statements may have different statement hash entries. Any difference in settings/options that require different code generation of the SQL statement result in a different statement hash. This may occur with different client versions or different platforms that support different internal optimizations. InterSystems IRIS version under which the plan was created. If the Plan state is Frozen/Upgrade, this is an earlier version of InterSystems IRIS. When you unfreeze a query plan, the Plan state is changed to Unfrozen and the Version is changed to the current InterSystems IRIS an InterSystems IRIS.
Frozen plan different: if you freeze the plan, this additional field is displayed, displaying whether the frozen plan is different from the unfrozen plan. When you freeze the plan, the Statement Text and Query Plan displays the frozen plan and the unfrozen plan side-by-side for easier comparison.
This section also includes five.
InterSystems IRIS does not separately record performance statistics for %PARALLEL subqueries. %PARALLEL subquery statistics are summed with the statistics for the outer query. Queries generated by the implementation to run in parallel do not have their performance statistics tracked individually.
InterSystems IRISML commands this can be set using #SQLCompile Select; the default is Logical. If #SQLCompile Select=Runtime, a call to the SelectMode option of the $SYSTEM.SQL.Util.SetOption() method can change the query result set display, but does not change the Select Mode value, which remains Runtime.
Default schema(s): the default schema name that were set when the statement was compiled. This is commonly the default schema in effect when the command was issued, though SQL may have resolved the schema for unqualified names using a schema search path (if provided) rather than the default schema name. However, if the statement is a DML command in Embedded SQL using one or more #Import macro directives, the schemas specified by #Import directives are listed here.
Schema path: the schema path defined when the statement was compiled. (persistent class) was last compiled.
Classname: the classname associated with the table.
This section includes a Compile Class. InterSystems IRIS SQL Statement Details page Export button. From the Management Portal System Explorer SQL interface, select the SQL Statements tab and click on a statement to open up the SQL Statement Details page. Select the Export button. This opens a dialog box, allowing you to select to export the file not selected by default.
Browser: Exports the file statementexport.xml to a new page in the user’s default browser. You can specify another name for the browser export file, or specify a different software display option.
Use the $SYSTEM.SQL.Statement.ExportFrozenPlans() method.
Export all SQL Statements in the namespace:
Use the Export All Statements Action from the Management Portal. From the Management Portal System Explorer SQL interface, select the Actions drop-down list. From that list select Export All Statements. This opens a dialog box, allowing you to export all SQL Statements in the namespace selected by default. This is the recommended setting when exporting all SQL Statements. When Run export in the background is checked, you are provided with a link to view the background list page where you can see the background job status.
Browser: Exports the file statementexport.xml to a new page in the user’s default browser. You can specify another name for the browser export file, or specify a different software display option.
Use the $SYSTEM.SQL.Statement.ExportAllFrozenPlans() method.
Importing SQL Statements
Import an SQL Statement or multiple SQL Statements from a previously-exported file:
Use the Import Statements Action from the Management Portal. From the Management Portal System Explorer SQL interface, select the Actions drop-down list. From that list select Import Statements. This opens a dialog box, allowing you to specify the full path name of the import XML file.
The Run import in the background check box is selected by default. This is the recommended setting when importing a file of SQL Statements. When Run import in the background is checked, you are provided with a link to view the background list page where you can see the background job status.
Use the $SYSTEM.SQL.Statement.ImportFrozenPlans() method.
Viewing and Purging Background Tasks
From the Management Portal System Operation option, select Background Tasks to view the log of export and import background tasks. You can use the Purge Log button to clear this log. | https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GSQLOPT_sqlstmts | 2020-10-23T21:40:06 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.intersystems.com |
Tutorial: Azure Active Directory single sign-on (SSO) integration with WhosOnLocation
In this tutorial, you'll learn how to integrate WhosOnLocation with Azure Active Directory (Azure AD). When you integrate WhosOnLocation with Azure AD, you can:
- Control in Azure AD who has access to WhosOnLocation.
- Enable your users to be automatically signed-in to WhosOnLocation.
To get started, you need an Azure AD subscription and a WhosOnLocation single sign-on (SSO) enabled subscription.
Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
WhosOnLocation supports SP initiated SSO
Once you configure WhosOnLocation you can enforce session control, which protect exfiltration and infiltration of your organization’s sensitive data in real-time. Session control extend from Conditional Access. Learn how to enforce session control with Microsoft Cloud App Security.
Adding WhosOnLocation from the gallery
To configure the integration of WhosOnLocation into Azure AD, you need to add WhosOnLocation WhosOnLocation in the search box.
- Select WhosOnLocation from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
Configure and test Azure AD SSO for WhosOnLocation
Configure and test Azure AD SSO with WhosOnLocation using a test user called B.Simon. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in WhosOnLocation.
To configure and test Azure AD SSO with WhosOnLocation, complete the following building blocks:
- Configure Azure AD SSO - to enable your users to use this feature.
- Configure WhosOnLocation SSO - to configure the single sign-on settings on application side.
- Create WhosOnLocation test user - to have a counterpart of B.Simon in WhosOnLocation that is linked to the Azure AD representation of user.
- Test SSO - to verify whether the configuration works.
Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
In the Azure portal, on the WhosOnLocation application integration page, perform the following steps.
a. In the Sign on URL text box, type a URL using the following pattern:<CUSTOM_ID>
b. In the Identifier (Entity ID) text box, type a URL using the following pattern:<CUSTOM_ID>
c. In the Reply URL text box, type a URL using the following pattern:<CUSTOM_ID>
Note
These values are not real. Update these values with the actual Sign on URL, Reply URL and Identifier. Contact the WhosOnLocation support team to get these values.
In the Azure portal, select Enterprise Applications, and then select All applications.
In the applications list, select WhosOnLocation.

Configure WhosOnLocation SSO
In a different browser window, sign on to your WhosOnLocation company site as administrator.
Click on Tools -> Account.
In the left side navigator, select Employee Access.
Perform the following steps in the following page.
a. Change Single sign-on with SAML to Yes.
b. In the Issuer URL textbox, paste the Entity ID value which you have copied from the Azure portal.
c. In the SSO Endpoint textbox, paste the Login URL value which you have copied from the Azure portal.
d. Open the downloaded Certificate (Base64) from the Azure portal into Notepad and paste the content into the Certificate textbox.
e. Click on Save SAML Configuration.
Create WhosOnLocation test user
In this section, you create a user called B.Simon in WhosOnLocation. Work with WhosOnLocation support team to add the users in the WhosOnLocation platform. Users must be created and activated before you use single sign-on.
Test SSO
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the WhosOnLocation tile in the Access Panel, you should be automatically signed in to the WhosOnLocation WhosOnLocation with Azure AD
What is session control in Microsoft Cloud App Security?
How to protect WhosOnLocation with advanced visibility and controls | https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/whos-on-location-tutorial | 2020-10-23T22:38:06 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.microsoft.com |
Microsoft Power Platform Fundamentals
Learn the business value and product capabilities of Power Platform. Create a simple Power App, connect data with Common Data Service, build a Power BI Dashboard, and automate a process with Power Automate.
Prerequisites
None
Modules in this learning path
Learn about the components of Power Platform, the business value for customers, and security of the technology.
Learn about the basics of Common Data Service and explore ways you can connect to it and customize your data connections.
Learn about the value of Power Apps portals and how you can leverage them to allow internal and external audiences to view and interact with data from Common Data Service or Dynamics 365.
Learn how you can leverage Power Virtual Agents to quickly and easily create powerful bots using a guided, no-code graphical experience. | https://docs.microsoft.com/en-us/learn/paths/power-plat-fundamentals/ | 2020-10-23T22:13:11 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.microsoft.com |
Rutgers Division of Continuing Studies (DoCS) is the only university-wide department devoted to the needs of the lifelong learner.
Serving Lifelong Learners
DoCS celebrates and serves learners from preschool into retirement.
Serving the University
- DoCS supports faculty in advancing the academic mission of Rutgers.
- Continuing Education Coordinating Council
- Instructional Design
- Assessing Educational Outcomes
- Game Research and Design
- DIY Video Production
- iTV Studio
- Makerspace
- Meeting and Conference Space
- Event Planning
Leading Online Learning
- Coordinating online education at Rutgers is one of the two core responsibilities assigned to DoCS in its 1996 founding charter.
- Teaching and Learning with Technology
- Fully Online Degrees
- Online Certificates and Certifications
- iTV Studio
- Game Research and Immersive Design
AppDynamics switched from Semantic Versioning to Calendar Versioning starting in February 2020 for some agents and March 2020 for the entire product suite.
Using the Controller UI, you can configure:
- how mobile requests are named
- the thresholds that cause network request snapshots to be considered slow, very slow or stalled
- percentile levels you would like to display, if any
- which network requests are sent to the Event Service
- if the IP address from which the request comes should be stored
To configure Mobile RUM from the Controller UI, your user account must belong to a role that has the Configure EUM permission. See End User Monitoring Permissions for more information.
To access mobile request configuration:
- Open the mobile application in which you are interested.
- From the left-hand navigation menu, click Configuration.
- From the Configuration page, click Mobile App Group Configuration >.
public interface MBeanInfoAssembler
Interface to be implemented by all classes that can create management interface metadata for a managed resource.
Used by the MBeanExporter to generate the management interface for any bean that is not an MBean.

See Also:
MBeanExporter
ModelMBeanInfo getMBeanInfo(Object managedBean, String beanKey) throws JMException

Parameters:
managedBean - the bean that will be exposed (might be an AOP proxy)
beanKey - the key associated with the managed bean

Throws:
JMException - in case of errors
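As a rough illustration of where an assembler plugs in, the following sketch wires one of the provided implementations (MethodNameBasedMBeanInfoAssembler) into an MBeanExporter. The bean name, object name, and method names are made-up placeholders, not part of this interface's contract.

    import java.util.Collections;

    import org.springframework.jmx.export.MBeanExporter;
    import org.springframework.jmx.export.assembler.MethodNameBasedMBeanInfoAssembler;

    public class JmxConfig {

        public MBeanExporter exporter(Object cacheManager) {
            // Expose only the listed methods of the managed bean
            MethodNameBasedMBeanInfoAssembler assembler = new MethodNameBasedMBeanInfoAssembler();
            assembler.setManagedMethods(new String[] {"getCacheSize", "setCacheSize"});

            MBeanExporter exporter = new MBeanExporter();
            exporter.setAssembler(assembler);
            exporter.setBeans(Collections.singletonMap("bean:name=cacheManager", cacheManager));
            return exporter;
        }
    }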
The effect descriptions on this page refer to the default effects found within the post-processing stack.
Bloom is an effect used to reproduce an imaging artifact of real-world cameras. The effect produces fringes of light extending from the borders of bright areas in an image, contributing to the illusion of an extremely bright light overwhelming the camera or eye capturing the scene.
When rendering in HDR, Bloom can also apply a fullscreen layer of smudges or dust to diffract the Bloom effect. This is commonly used in modern first person shooters.
Ubuntu.Components.Popups.PopupBase
The base class for all dialogs, sheets and popovers. Do not use directly. More...
Properties
- dismissArea : Item
- grabDismissAreaEvents : bool
Methods
Detailed Description
Examples: See subclasses.
Property Documentation
The property holds the area used to dismiss the popups, the area from where mouse and touch events will be grabbed. By default this area is the Popup itself.
The property specifies whether to forward or not the mouse and touch events happening outside of the popover. By default all events are grabbed.
Method Documentation
Hide the popup. Only use this function if you handle memory management. Otherwise use PopupUtils.close() to do it automatically.
Make the popup visible. Reparent to the background area object first if needed. Only use this function if you handle memory management. Otherwise use PopupUtils.open() to do it automatically. | https://phone.docs.ubuntu.com/en/apps/api-qml-current/Ubuntu.Components.Popups.PopupBase | 2020-10-23T20:53:40 | CC-MAIN-2020-45 | 1603107865665.7 | [] | phone.docs.ubuntu.com |
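For context, a popup subclass such as Dialog is normally opened and closed through PopupUtils rather than by calling show()/hide() directly. The sketch below is a minimal, assumed usage; the component ids, button labels, and import versions are placeholders for illustration.

    import QtQuick 2.4
    import Ubuntu.Components 1.3
    import Ubuntu.Components.Popups 1.3

    Item {
        Component {
            id: confirmDialog
            Dialog {
                id: dialog
                title: "Confirm"
                Button {
                    text: "Close"
                    onClicked: PopupUtils.close(dialog)
                }
            }
        }

        Button {
            text: "Open"
            onClicked: PopupUtils.open(confirmDialog)
        }
    }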
QtMultimedia.CameraRecorder
Controls video recording with the Camera. More...
Properties
- actualLocation : string
- audioBitRate : int
- audioChannels : int
- audioCodec : string
- audioEncodingMode : enumeration
- audioSampleRate : int
- duration : int
- errorCode : enumeration
- errorString : string
- frameRate : qreal
- mediaContainer : string
- muted : bool
- outputLocation : string
- recorderState : enumeration
- recorderStatus : enumeration
- resolution : size
- videoBitRate : int
- videoCodec : string
- videoEncodingMode : enumeration
Methods
- record()
- setMetadata(key, value)
- stop()
Detailed Description
CameraRecorder allows recording camera streams to files, and adjusting recording settings and metadata for videos.
It should not be constructed separately, instead the
videoRecorder property of a Camera should be used.
Camera { videoRecorder.audioEncodingMode: CameraRecorder.ConstantBitrateEncoding; videoRecorder.audioBitRate: 128000 videoRecorder.mediaContainer: "mp4" // ... }
There are many different settings for each part of the recording process (audio, video, and output formats), as well as control over muting and where to store the output file.
See also QAudioEncoderSettings and QVideoEncoderSettings.
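As a small usage sketch (the ids, output path, and input handling are assumptions, not part of the API reference), recording is typically toggled through record() and stop() while watching recorderState:

    import QtQuick 2.4
    import QtMultimedia 5.4

    Item {
        Camera {
            id: camera
            videoRecorder.outputLocation: "/tmp/clip.mp4"   // assumed writable path
        }

        MouseArea {
            anchors.fill: parent
            onClicked: {
                // Toggle recording on tap
                if (camera.videoRecorder.recorderState === CameraRecorder.RecordingState)
                    camera.videoRecorder.stop()
                else
                    camera.videoRecorder.record()
            }
        }
    }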
Property Documentation
This property holds the actual location of the last saved media content. The actual location is usually available after the recording starts, and reset when new location is set or the new recording starts.
This property holds the audio bit rate (in bits per second) to be used for recording video.
This property indicates the number of audio channels to be encoded while recording video (1 is mono, 2 is stereo).
This property holds the audio codec to be used for recording video. Typically this is
aac or
amr-wb.
See also whiteBalanceMode.
The type of encoding method to use when recording audio.
This property holds the sample rate to be used to encode audio while recording video.
This property holds the duration (in milliseconds) of the last recording.
This property holds the last error code.
This property holds the description of the last error.
This property holds the framerate (in frames per second) to be used for recording video.
This property holds the media container to be used for recording video. Typically this is
mp4.
This property indicates whether the audio input is muted during recording.
This property holds the destination location of the media content. If the location is empty, the recorder uses the system-specific place and file naming scheme.
This property holds the current state of the camera recorder object.
The state can be one of these two:
- CameraRecorder.StoppedState - the camera is not recording video.
- CameraRecorder.RecordingState - the camera is recording video.
This property holds the current status of media recording.
This property holds the video frame dimensions to be used for video capture.
This property holds the bit rate (in bits per second) to be used for recording video.
This property holds the video codec to be used for recording video. Typically this is
h264.
This property holds the type of encoding method to be used for recording video.
The following are the different encoding methods used:
Method Documentation
Starts recording.
Sets metadata for the next video to be recorded, with the given key being associated with value.
Stops recording. | https://phone.docs.ubuntu.com/en/apps/api-qml-development/QtMultimedia.CameraRecorder | 2020-10-23T22:06:32 | CC-MAIN-2020-45 | 1603107865665.7 | [] | phone.docs.ubuntu.com |
startX, startY : real - These properties hold the starting position of the path.
In a PowerBuilder database connection, a Transaction object is a special nonvisual object that functions as the communications area between a PowerBuilder application and the database. The Transaction object specifies the parameters that PowerBuilder uses to connect to a database. You must establish the Transaction object before you can access the database from your application, as shown in the following figure:
Figure: Transaction object to access database
Communicating with the database
In order for a PowerBuilder application to display and manipulate data, the application must communicate with the database in which the data resides.
To communicate with the database from your PowerBuilder application:
Assign the appropriate values to the Transaction object.
Connect to the database.
Assign the Transaction object to the DataWindow control.
Perform the database processing.
Disconnect from the database.
For information about setting the Transaction object for a DataWindow control and using the DataWindow to retrieve and update data, see DataWindow Programmers Guide.
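To make the sequence concrete, here is a minimal PowerScript sketch using the default SQLCA Transaction object. The DBMS, DBParm values, and DataWindow control name are placeholders for illustration only; your connection parameters depend on your database interface.

    // 1. Assign connection values to the Transaction object (placeholder values)
    SQLCA.DBMS   = "ODBC"
    SQLCA.DBParm = "ConnectString='DSN=MyDSN;UID=dba;PWD=secret'"

    // 2. Connect to the database
    CONNECT USING SQLCA;

    IF SQLCA.SQLCode = 0 THEN
        // 3. Assign the Transaction object to the DataWindow control
        dw_orders.SetTransObject(SQLCA)

        // 4. Perform the database processing
        dw_orders.Retrieve()
    ELSE
        MessageBox("Connect failed", SQLCA.SQLErrText)
    END IF

    // 5. Disconnect from the database
    DISCONNECT USING SQLCA;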
Default Transaction object
When you start executing an application, PowerBuilder creates a global default Transaction object named SQLCA (SQL Communications Area). You can use this default Transaction object in your application or define additional Transaction objects if your application has multiple database connections.
Transaction object properties
Each Transaction object has 15 properties, of which:
Ten are used to connect to the database.
Five are used to receive status information from the database about the success or failure of each database operation. (These error-checking properties all begin with SQL.)
The following table describes each Transaction object property. For each of the ten connection properties, it also lists the equivalent field in the Database Profile Setup dialog box that you complete to create a database profile in the PowerBuilder development environment.
Transaction object properties for your PowerBuilder database interface
For the Transaction object properties that apply to your PowerBuilder database interface, see Transaction object properties and supported PowerBuilder database interfaces.
For information about the values you should supply for each connection property, see the section for your PowerBuilder database interface in Connecting to Your Database.
The Transaction object properties required to connect to the database are different for each PowerBuilder database interface. Except for SQLReturnData, the properties that return status information about the success or failure of a SQL statement apply to all PowerBuilder database interfaces.
The following table lists each supported PowerBuilder database interface and the Transaction object properties you can use with that interface.
* UserID is optional for ODBC. (Be careful specifying the UserID property; it overrides the connection's UserName property returned by the ODBC SQLGetInfo call.)
# PowerBuilder uses the LogID and LogPass properties only if your ODBC driver does not support the SQL driver CONNECT call. | https://docs.appeon.com/pb2019/application_techniques/ch12s01.html | 2020-10-23T21:23:07 | CC-MAIN-2020-45 | 1603107865665.7 | [array(['images/datrn005.gif', None], dtype=object)] | docs.appeon.com |
…
Forum: Fedora Magazine Discourse forum | https://docs.fedoraproject.org/ru/fedora-magazine/ | 2020-10-23T22:27:00 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.fedoraproject.org |
Debug Information Files
Debug information files allow Sentry to extract stack traces and provide more information about crash reports for most compiled platforms. Information stored in debug files includes original function names, paths to source files and line numbers, source code context, or the placement of variables in memory. Sentry can use some of this information and display it on the issue details page.
Each major platform uses different debug information files. We currently support the following formats:
- dSYM files for iOS, iPadOS, tvOS, watchOS, and macOS
- ELF symbols for Linux and Android (NDK)
- PDB files for Windows
- Breakpad symbols for all platforms
- ProGuard mappings for Java and Android
Source maps, while also being debug information files, are handled differently in Sentry. For more information see Source Maps in sentry-cli.
Sentry requires access to debug information files of your application as well as system libraries to provide fully symbolicated crash reports. You can either upload your files to Sentry or put them on a compatible Symbol Server to be downloaded by Sentry when needed.
Debug information files can be managed on the Debug Files section in Project Settings. This page lists all uploaded files and allows to configure symbol servers for automatic downloads.
Debug Information
Sentry differentiates between four kinds of debug information. The remainder of this section describes the file formats in detail.
At the moment, Sentry supports Native PDBs only.
ProGuard Mappings
ProGuard mapping files allow Sentry to resolve obfuscated Java classpaths and method names into their original form. In that sense, they act as debug information files for Java and Android applications.
Debug Identifiers
Each debug information file specifies a unique identifier. Crash reports declare these identifiers to allow debuggers and crash reporting systems to resolve the correct files. Sentry distinguishes two kinds of identifiers:
Code Identifier: The unique identifier of the executable or dynamic library -- the code file. The contents of this identifier are platform-dependent: MachO files use a UUID, ELF files a SHA hash, PE files use a concatenation of certain header attributes.
Debug Identifier: The unique identifier of the debug companion file. In contrast to the code identifier, Sentry enforces the same structure on all platforms. On Windows, this is the actual unique id of the PDB file; on all other platforms this is a lossy transformation of the code identifier.
When uploading debug information files to Sentry, the CLI and server will always compute a Debug Identifier for each uploaded file. This identifier is associated with executables and libraries as well as debug companions to ensure that they can be uniquely located via one common mechanism.
Debug information does not have to be associated with releases. The unique debug identifier is what matches files to the crash reports that need them.
For native events, the issue details page displays a list of Loaded Images. This list contains the executable and all loaded dynamic libraries including their debug identifiers. You can copy this identifier and search for the exact files that match it in the Debug Files settings screen.
sentry-cli can help to print properties of debug information files like their
debug identifier. See Checking Debug Information Files for more information.
GNU Build Identifiers
For ELF files on Linux, Sentry uses the GNU build identifier to compute the debug identifier.
The identifier needs to be present and identical in the binary as well
as stripped debug information files. If the ID is missing for some reason,
upload the files before stripping so that
sentry-cli can compute a stable
identifier from the unstripped file.
PDB Age Mismatches
Microsoft PDBs compose their identifiers from two parts: A unique signature and an age field. The signature is generated when the PDB is written initially and usually changes with every build. The age is a counter that is incremented every time the PDB is modified.
PE files, such as executables and dynamic libraries, specify the full identifier of the corresponding PDB in their header. This includes the age. If the PDB is modified after the PE has been generated, however, its age might diverge. This can lead to different identifiers:
PE:  3003763b-afcb-4a97-aae3-28de8f188d7c-2
PDB: 3003763b-afcb-4a97-aae3-28de8f188d7c-4
sentry-cli can detect these differences during the upload process and
associates the same identifier to both files. However, this requires that both
files are uploaded in the same invocation of the upload command. Otherwise, the
identifiers diverge and Sentry might not be able to resolve the correct file
for symbolication.
ProGuard UUIDs
Unlike other debug information files, ProGuard files do not have an intrinsic
unique identifier. Sentry CLI assigns them a SHA1 UUID based on the checksum of
the file. You can use
sentry-cli difutil check on a ProGuard file to see the
generated UUID.
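For example, to print the identifier that would be assigned to a mapping file (the path is a placeholder):

    sentry-cli difutil check /path/to/proguard/mapping.txt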
Uploading

Debug information files are uploaded to Sentry with sentry-cli. A typical upload and its output look like this:

    sentry-cli upload-dif -o <org> -p <project> /path/to/files

    > Found 2 debug information files
    > Prepared debug information files for upload
    > Uploaded 2 missing debug information files
    > File processing complete:

      PENDING 1ddb3423-950a-3646-b17b-d4360e6acfc9 (MyApp; x86_64 executable)
      PENDING 1ddb3423-950a-3646-b17b-d4360e6acfc9 (MyApp; x86_64 debug companion)
For all available options and more information refer to Uploading Debug Information.
Always ensure that debug files are uploaded before deploying or releasing your application, so that incoming crash reports can be symbolicated right away.
Reprocessing
Sentry can suspend incoming crash reports until all required debug information files have been uploaded. This feature is called Reprocessing. It can be configured in Project Settings > Processing Issues. By default, this feature is disabled.
If enabled, crash reports with missing debug files will not be displayed in the issues stream. Instead, you will receive a warning that events cannot be processed until all debug files have been uploaded.
Once an issue is shown in the issues stream, it is no longer processed. Even with enabled reprocessing, new file uploads will not effect such events.
At the moment, this feature only applies to iOS crashes sent with the Cocoa SDK and is not compatible with Symbol Servers.
Symbol Servers
Sentry can download debug information files from external repositories. This allows you to stop uploading debug files and instead configure a public symbol server or run your own. It is also possible to configure external repositories and upload debug files at the same time.
To configure external repositories, go to Project Settings > Debug Files. Above the list of uploaded files, there are two settings to configure external repositories:
Custom Repositories: Configures custom repositories containing debug files. You can choose from configuring an HTTP symbol server, Amazon S3 bucket or Google Cloud Storage bucket. This requires a Business or Enterprise plan.
Built-In Repositories: Allows to select from a list of pre-configured symbol servers. By default, iOS and Microsoft are enabled.
Sentry queries external repositories for debug information files in the order they are configured. If custom repositories are configured, those are probed first. Only debug information files that are not found on one of the custom repositories are queried from the built-in ones.
Built-In Repositories
To enable a built-in repository, select it from the dropdown list. This immediately adds the repository and uses its debug information files to symbolicate new crash reports. Likewise, any built-in repository can be disabled by clicking on the X next to the name.
Adding or removing external repositories applies immediately. As a result, events may group differently with the new information and create new issues. Beware that these cause notifications to your team members.
Custom Repositories
Note
Custom repositories are available for organizations on the Business and Enterprise plans.
Independent of the internal format, Sentry supports three kinds of custom repositories:
HTTP Symbol Server: An HTTP server that serves debug files at a configurable path. Lookups in the server should generally be case-insensitive, although an explicit casing can be configured in the settings.
Amazon S3 Bucket: Either an entire S3 bucket or a subdirectory. This requires
s3:GetObject, and optionally
s3:ListBucketpermissions for the configured Access Key. Lookups in the bucket are case sensitive, which is why we recommend storing all files lower-cased.
Google Cloud Storage Bucket: Either an entire GCS bucket or a subdirectory. This requires
storage.objects.getand
storage.objects.listpermissions for the configured service account. Lookups in the bucket are case sensitive, which is why we recommend storing all files lower-cased.
Apart from authentication configuration, all types have common config parameters:
Name: A name to identify the repository.
Path Casing: Overrides which casing Sentry uses to query for debug information files. The default is a mixed case, which will use the case described in the next section. When overridden, all access is either lowercased or uppercased. Defaults to "mixed case".
Directory Layout: The internal structure of the bucket, or the protocol of the symbol server. There are three layouts to choose from which are discussed in the next section. Defaults to "Platform Specific".
Directory Layouts
Sentry supports multiple layouts for external repositories. Based on the selected layout and the file type, we try to download files at specific paths.
The following table contains a mapping from the supported layouts to file path schemas applied for specific files:
The path schemas in the table above are defined as follows:
- Breakpad
- Path:
<DebugName>/<BREAKPADid>/<SymName>
Breakpad always uses a Breakpad ID to store symbols. These identifiers can be computed from Debug Identifiers by removing dashes and applying the following casing rules:
- The signature part of the id (first 32 characters) are uppercase.
- The age part of the id (remaining characters) are lowercase.
The name of the symbol file is platform dependent. On Windows, the file
extension (Either
.exe,
.dll or
.pdb) is replaced with
.sym. On all
other platforms, the
.sym extension is appended to the full file name
including potential extensions.
Examples:
wkernel32.pdb/FF9F9F7841DB88F0CDEDA9E1E9BFF3B51/wkernel32.sym
MyFramework.dylib/5E012A646CC536F19B4DA0564049169B/MyFramework.dylib.sym
- LLDB
- Path:
XXXX/XXXX/XXXX/XXXX/XXXX/XXXXXXXXXXXX[.app]
The LLDB debugger on macOS can read debug symbols from File Mapped UUID Directories. The UUID is broken up by splitting the first 20 hex digits into 4 character chunks, and a directory is created for each chunk. In the final directory, LLDB usually expects a symlink named by the last 12 hex digits, which it follows to the actual dSYM file.
This is not actually an LLVM feature. This is in fact a feature of
CoreFoundation and exclusively implemented on macOS on top of spotlight.
Spotlight indexes these paths and the private
DBGCopyFullDSYMURLForUUID API
is used by lldb to locate the symbols. macOS uses the symlinks of those
locations.
Since the executable or library shares the same UUID as the dSYM file, the
former are distinguished with a
.app suffix.
The hex digits are uppercase, the app suffix is lowercase.
Examples:
5E01/2A64/6CC5/36F1/9B4D/A0564049169B(debug companion)
5E01/2A64/6CC5/36F1/9B4D/A0564049169B.app(executable or library)
- BuildID
- Path:
nn/nnnnnnnnnnnnnnnn...[.debug]
GDB supports multiple lookup methods, depending on the way the debug info file is specified. Sentry uses the Build ID Method: Assuming that a GNU build ID note or section has been written to the ELF file, this specifies a unique identifier for the executable which is also retained in the debug file.
The GNU build ID is a variable-length binary string, usually consisting of a
20-byte SHA1 hash of the code section (
.text). The lookup path is
pp/nnnnnnnn.debug, where pp are the first 2 hex characters of the build ID
bit string, and nnnnnnnn are the rest of the hex string. To look up
executables, the
.debug suffix is omitted.
Examples:
b5/381a457906d279073822a5ceb24c4bfef94ddb(executable or library)
b5/381a457906d279073822a5ceb24c4bfef94ddb.debug(stripped debug file)
- SSQP
- Path:
<file_name>/<prefix>-<identifier>/<file_name>
SSQP Key Conventions are an extension to the original Microsoft Symbol Server protocol for .NET. It specifies lookup paths for PE, PDB, MachO and ELF files. The case of all lookup paths is generally lowercase except for the age field of PDB identifiers which should be uppercase.
For MachO files and ELF files, SSQP specifies to use the same identifiers as used in the LLDB and GNU build id method, respectively. See the sections above for more information. This results in the following paths for all possible file types:
<code_name>/<timestamp><size_of_image>/<code_name>(PE file)
<debug_name>/<signature><AGE>/<debug_name>(PDB file)
<code_name>/elf-buildid-<buildid>/<code_name>(ELF binary)
_.debug/elf-buildid-sym-<buildid>/_.debug(ELF debug file)
<code_name>/mach-uuid-<uuid>/<code_name>(MachO binary)
_.dwarf/mach-uuid-sym-<uuid>/_.dwarf(MachO binary)
SSQP specifies an additional lookup method by SHA1 checksum over the file contents, commonly used for source file lookups. Sentry does not support this lookup method.
libc-2.23.so/elf-buildid-b5381a457906d279073822a5ceb24c4bfef94ddb/libc-2.23.so
_.debug/elf-buildid-sym-b5381a457906d279073822a5ceb24c4bfef94ddb/_.debug
CoreFoundation/mach-uuid-36385a3a60d332dbbf55c6d8931a7aa6/CoreFoundation
_.dwarf/mach-uuid-sym-36385a3a60d332dbbf55c6d8931a7aa6/_.dwarf
- SymStore
- Path:
<FileName>/<SIGNATURE><AGE>/<FileName>
The public symbol server provided by Microsoft used to only host PDBs for the Windows platform. These use a signature-age debug identifier in addition to the file name to locate symbols. File paths are identical to SSQP, except for the default casing rules:
- Filenames are as given
- The signature and age of a PDB identifier are uppercase.
- The timestamp of a PE identifier is uppercase, but the size is lowercase.
Since the original Microsoft Symbol Server did not serve ELF or MachO files, we do not recommend using this convention for these types. However, Sentry will support the SSQP conventions with adapted casing rules when this layout is selected.
- Index2
- Path:
<Fi>/<FileName>/<SIGNATURE><AGE>/<FileName>
This layout is identical to SymStore, except that the first two characters of the file name are prepended to the path as an additional folder.
Examples:
wk/wkernel32.pdb/FF9F9F7841DB88F0CDEDA9E1E9BFF3B5A/wkernel32.pdb
KE/KERNEL32.dll/590285E9e0000/KERNEL32.dll
Compression of Debug Files
Sentry supports the following compression methods when downloading debug information files from external sources: Gzip, zlib (both with and without header), Zstandard, and Cabinet (CAB).
The convention on Microsoft's Symbol Server protocol is to store such files with
the last character of the file extension replaced with
_. A full example would
be:
KERNEL32.dll/590285E9e0000/KERNEL32.dl_. This is not required on your own
repositories, as Sentry detects compression on all paths.
Source. | https://docs.sentry.io/platforms/android/data-management/debug-files/ | 2020-10-23T22:31:20 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.sentry.io |
Create KPI threshold time policies in ITSI
KPI threshold time policies have threshold values that change over time. Use time policies to apply different threshold values at different times of the day or week.
ITSI stores thresholding information at the KPI level in the KV store. Any updates you make to a KPI threshold template are applied to all KPIs using that template, overriding any changes made to those KPIs.
You can only have one active time policy at any given time. When you create a new time policy, the previous time policy is overwritten and cannot be recovered.
Available KPI threshold templates
ITSI provides a set of 32 predefined KPI threshold templates. When configuring a KPI, select a thresholding template, such as 3-hour blocks every day (adaptive/stdev). Selecting an adaptive template automatically enables Adaptive Thresholding and Time Policies. The Preview Aggregate Thresholds window opens, and ITSI backfills the preview with aggregate data from the past 7 days.

To create a custom threshold template, click Configure > KPI Threshold Templates, then:
- Click Create Threshold Template.
If your role does not have write access to the Global team, you will not see the Create Threshold Template button.
- (Optional) For Enable Adaptive Thresholding, click Yes to enable time varying thresholds that update periodically based on historical KPI data.
- For Training Window select the time period over which to evaluate KPI data for adaptive thresholding.
EditorSpatialGizmo¶
Inherits: SpatialGizmo < Reference < Object
Custom gizmo for editing Spatial objects.
Description¶
Custom gizmo that is used for providing custom visualization and editing (handles) for 3D Spatial objects. See EditorSpatialGizmoPlugin for more information.
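For orientation, gizmos like this are usually populated from an EditorSpatialGizmoPlugin, which is then registered from a tool-mode EditorPlugin via add_spatial_gizmo_plugin(). The GDScript sketch below is a minimal, assumed example; the plugin name, material color, and node check are illustrative choices only.

    extends EditorSpatialGizmoPlugin

    func _init():
        # Register a material that redraw() can look up later
        create_material("main", Color(1, 0, 0))

    func get_name():
        return "MyGizmo"

    func has_gizmo(spatial):
        # Only attach this gizmo to MeshInstance nodes (illustrative choice)
        return spatial is MeshInstance

    func redraw(gizmo):
        gizmo.clear()
        var lines = PoolVector3Array([
            Vector3(-1, 0, 0), Vector3(1, 0, 0),
            Vector3(0, -1, 0), Vector3(0, 1, 0)
        ])
        gizmo.add_lines(lines, get_material("main", gizmo))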
Method Descriptions¶
- void add_collision_segments ( PoolVector3Array segments )
Adds the specified
segments to the gizmo's collision shape for picking. Call this function during redraw.
- void add_collision_triangles ( TriangleMesh triangles )
Adds collision triangles to the gizmo for picking. A TriangleMesh can be generated from a regular Mesh too. Call this function during redraw.
- void add_handles ( PoolVector3Array handles, Material material, bool billboard=false, bool secondary=false )
Adds a list of handles (points) which can be used to deform the object being edited.
There are virtual functions which will be called upon editing of these handles. Call this function during redraw.
- void add_lines ( PoolVector3Array lines, Material material, bool billboard=false, Color modulate=Color( 1, 1, 1, 1 ) )
Adds lines to the gizmo (as sets of 2 points), with a given material. The lines are used for visualizing the gizmo. Call this function during redraw.
- void add_mesh ( ArrayMesh mesh, bool billboard=false, SkinReference skeleton=null, Material material=null )
Adds a mesh to the gizmo with the specified
billboard state,
skeleton and
material. If
billboard is
true, the mesh will rotate to always face the camera. Call this function during redraw.
- void add_unscaled_billboard ( Material material, float default_scale=1, Color modulate=Color( 1, 1, 1, 1 ) )
Adds an unscaled billboard for visualization. Call this function during redraw.
- void clear ( )
Removes everything in the gizmo including meshes, collisions and handles.
Commit a handle being edited (handles must have been previously added by add_handles).
If the
cancel parameter is
true, an option to restore the edited value to the original is provided.
Gets the name of an edited handle (handles must have been previously added by add_handles).
Handles can be named for reference to the user when editing.
Gets actual value of a handle. This value can be anything and used for eventually undoing the motion when calling commit_handle.
- EditorSpatialGizmoPlugin get_plugin ( ) const
Returns the EditorSpatialGizmoPlugin that owns this gizmo. It's useful to retrieve materials using EditorSpatialGizmoPlugin.get_material.
Returns the Spatial node associated with this gizmo.
Returns
true if the handle at index
index is highlighted by being hovered with the mouse.
Sets the gizmo's hidden state. If
true, the gizmo will be hidden. If
false, it will be shown.
Sets the reference Spatial node for the gizmo.
node must inherit from Spatial. | https://docs.godotengine.org/fr/latest/classes/class_editorspatialgizmo.html | 2020-10-23T21:41:48 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.godotengine.org |
Synfig 1.3.14 Release Notes¶
Bugfixes¶
- Fixed crash when removing a Spline vertex using “Remove item (Smart)” (issue #1102).
- Fixed incorrect placement of width points on outline when loading old files (see details here and here).
- Fixed crash when Shade Layer has no sublayers (issue #1272).
- Fixed popup menu disappearing right after button release for Widget Gradient/Spline (issue #1274).
- Fixed hang when opening a second .sif file from Explorer on Windows (issue #291).
- Fixed behavior “Local Time” parameter of Time Loop layer (issue #479).
- Fixed wrong percentage displayed when exporting a subset of frames (issue #1304).
- Fixed crash when undoing deletion of Group Layer (issue #1070).
- Fixed TimeTrack not updating when new waypoints added to bone (issue #1342).
- Fixed importing of 16-bit PNG files (issues #1160 and #1371).
- Fixed some memory leaks (PR #1292, #1293, #1319). | https://synfig.readthedocs.io/en/latest/releases/development/1.3.14.html | 2020-10-23T20:52:12 | CC-MAIN-2020-45 | 1603107865665.7 | [] | synfig.readthedocs.io |
Installation
Anaconda modularization effort
Anaconda is going through a large scale effort to modularize its internals. (Note that this effort is separate from the Fedora modularization effort described in [select-modularity].) The following changes have been made to Anaconda:
Modules have been added which are separate Python processes. They are connected to the main Anaconda process using DBus.
Most Kickstart commands are now processed on separate modules, and used in the UI in many places as backend data sources.
The modularization effort has no visible impact on the user experience. To read more about the installer internals, see the installer team’s blog.
Changes in boot options
New boot options: inst.xtimeout= - Specifies a timeout period (in seconds) the installer will wait before starting the X server.
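For example, appending the option to the installer's kernel command line might look like this (the 60-second value is just an illustration):

    inst.xtimeout=60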
Reduced initial setup redundancy
When you install the Workstation variant of Fedora 28 using the live image, you will now be prompted to create a user account by GNOME Initial Setup instead of Anaconda. Additionally, you will no longer be prompted to set a root password during the installation of Fedora Workstation using the live image. The root account will be disabled by default; you can enable it by setting a password for it using the sudo passwd command. If you do not reenable the root account, you can still perform any administrative tasks using sudo and your user password.
These changes have been made in order to reduce redundancy between Anaconda, Initial Setup, and GNOME Initial Setup, which could previously all ask you to perform the same setup tasks.
Miscellaneous changes
Former dependencies of the anaconda package itself have been separated into the anaconda-install-env-deps package.
The number of dependencies for Initial Setup has been massively decreased, which enables it to run on systems with a significantly lower amount of memory than before.
Anaconda now enables hibernation by default on AMD and Intel (x86) systems.
The progress bar displayed while the installation is in progress is now more accurate.
The pykickstart and blivet libraries, which are used to handle Kickstart processing and storage configuration respectively, have both been upgraded to version 3.
Payload is now usable in InstallClass.

You can now add sources in InstallClass.

InstallClass is now selected at runtime based on the .buildstamp file.

InstallClass can now specify a Fedora variant which it should run on. This is defined by the Variant item in the .buildstamp file.
6.4 Plain Text
InqScribe can import and export plain text files. These files are encoded using UTF-8 by default.
You can find general guidance for importing and exporting data elsewhere.
6.4.1 Importing Plain Text
Plain text files are imported into your transcript with no modification (other than making sure any timecodes are converted to your transcript's current frame rate). Note that the Speaker Delimiter and Place Timecodes on Separate Lines settings do not apply to plain text, because plain text is not imported as a set of records.
6.4.2 Exporting Plain Text
Exporting as plain text creates a text file containing the exact contents of your transcript. You have the option of filtering out the timecodes.
Note that in some cases, it's easier to copy and paste the text of your transcript instead of exporting as plain text. | http://docs.inqscribe.com/2.2/format_text.html | 2020-02-16T22:49:30 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.inqscribe.com |
For example, you can filter the lookup so that it returns only the CSV records that have a CustID value greater than 500 and a CustName value beginning with the letter P.
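A hypothetical transforms.conf sketch of such a filter might look like the following; the stanza name, file name, and exact filter syntax are assumptions to illustrate the idea, not copied from the original page.

    [customer_lookup]
    filename = customers.csv
    filter = CustID > 500 AND CustName="P*"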
2. Replication¶
Replication is an incremental one way process involving two databases (a source and a destination).
The aim of replication is that at the end of the process, all active documents in the source database are also in the destination database and all documents that were deleted in the source database are also deleted in the destination database (if they even existed).
The replication process only copies the last revision of a document, so all previous revisions that were only in the source database are not copied to the destination database.
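As a quick illustration (the database names and host are placeholders, and you should add authentication as required by your installation), a one-off replication between two databases can be triggered through the HTTP API:

    curl -X POST http://127.0.0.1:5984/_replicate \
         -H "Content-Type: application/json" \
         -d '{"source": "recipes", "target": "recipes-backup", "create_target": true}'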
Phones and devices qualified for Lync
IP phones
The Compatible Devices Program is used by vendors to qualify their devices for use with the Lync client on Windows-based machines.
Qualified products table
It is important to ensure that both the IP phone and the firmware version are tested and qualified for Lync (refer to the table that follows).
Note
If phone vendors offer a firmware version that is newer than the qualified version, be aware that it is not supported by Microsoft and should be used for beta and evaluation purposes only. We recommend that you visit each vendor's website for the latest information about product specifications, country support, and documentation including release notes and known issues.
Contact the vendors for more information about these products.
Table - Qualified IP Phones | https://docs.microsoft.com/en-us/SkypeForBusiness/lync-cert/ip-phones?redirectedfrom=MSDN | 2020-02-16T21:48:33 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.microsoft.com |
Automating add-in/app installation to SharePoint sites using CSOM
“How do I install apps/add-ins from the store, app catalog or directly to the site?” is a relatively common question across partners and customers. We are getting this question or request quite often at the Office 365 Developer Patterns and Practices (PnP) Yammer group or through other forums. Frank Marasco from the Office Dev PnP core team blogged about this already a while back with the title of “SideLoading Guideance” (Finglish is contagious), but due to the amount of questions, thought that it would be worthwhile to have another blog post on the same topic.
Before heading to the actual meat of the post, let's have a short look at the different options for getting your add-in/app deployed, so that we understand the different options and requirements around the request.
Add-in/app scoping
There are two different ways for an add-in/app to exist in SharePoint or to be visible in the sites, and it's important to understand the difference so that we can decide the right deployment mechanism for add-ins/apps.
Web scoped add-ins/apps
This is more typical model where add-in/app is located directly in the context of specific site. Add-in/app is installed directly to the site and is visible in the site contents view. Web scoped add-ins/apps can install app parts, app custom actions and app installed events are firing for provider hosted add-in when they are added to the site. Following drawing explains the model in practice.
- End users are adding add-ins/apps to SharePoint sites
- App is created or instantiated directly to the site and it can have optional app/add-in web, which is created directly under the specific site. Add-in/app webs are basically just sub sites in the site collections with specific template. App-in/app installed events are fired in the case of the provider hosted add-ins/apps and optional app parts, including optional custom actions, are being visible in the host web where the add-in/app was installed
- User accesses the add-in from the site contents or from add-in/app part
- Content is rendered from the add-in/app. Could come from provider hosted environment or from the add-in/app web
Tenant scoped add-ins/apps
This is capability also know as “app stapling” where you install add-in to app catalog site collection and then use “Deployment” mechanism to push the add-in/app available for other sites. Add-in/app is only installed once to the app catalog site collection and in other sites there will be just links pointing to this centralized instance under app catalog. This means that custom actions and app parts are being deployed only to app catalog and not to the other sites where the app is being visible. Also app install, uninstall and upgrade events are only executed on the app catalog site context, since that’s the actual location where the app is being installed.
Richard diZerega has great blog post on this few years back called SharePoint 2013 app deployment through “app stapling”.
- Add-ins/apps are being installed by app catalog owner
- Add-in/app is being installed on the app catalog. Possible app events, app parts, custom actions are being visible in the app catalog site after the initial creation of the app. This is the only location where the app actually is being installed
- Users access the add-in/app from their browsers at the normal sites
- Add-ins/apps have been pushed to the normal sites with the “Deployment” mechanism, which makes them visible in the site contents view, but does not really install the add-in/app to the sites
How to automate web scoped add-in/app installation?
Coming back to the actual topic of the post. Since you cannot use the tenant scoped deployment model to get add-in parts or custom actions to be visible in the sites, it's a relatively common ask to get web scoped add-ins/apps deployed to sites, for example as part of the initial site provisioning. This is actually possible, but has a few limitations, due to security challenges and store processes. We cannot achieve this for add-ins/apps in the store, but we can actually make this work within enterprise development scenarios, where you have control of the add-ins/apps which are being installed.
Pre-requisites
There are a few different pre-requisites for installing add-ins/apps directly to the sites using the web scoped approach. Here's a short list of the pre-requisites.
- The add-in/app which is being installed has to request tenant level permissions, which is the reason why this is not a suitable model for store add-ins/apps
- The add-in/app which is being installed has to be instantiated or specifically trusted by the tenant administrator once before it can be pushed to sites
- The add-in/app which is being installed has to be pushed by uploading the app file directly to the site where it's being added
- You cannot push add-ins/apps from the store or from the app catalog to the sites
- When you install the add-in/app to the site, you will need to enable the so-called side loading feature
How does it work in practice?
Let’s have a look on the process and code in practice with the following video. This shows quickly the app-stapling process and also what are the steps getting add-ins/apps installed on any SharePoint using CSOM, with app part and custom action support. To be able to install add-in/apps directly to the site using CSOM, you’ll need to perform following steps
- Install add-in/app once on the tenant level to grant the needed permissions for the add-in/app
- Connect to SharePoint site
- Enable the side loading feature at site collection level
- Install add-in/app using Web.LoadAndInstallApp method
- Deactivate side loading feature at site collection level
You can also find this video from the Office 365 Developer Patterns and Practices video blog at Channel 9.
CSOM code to install an app/add-in to a SharePoint site
Here’s the full code of the console application, which is used to connect to SharePoint online and install add-in/app to the site.
// Unique ID for side loading feature
Guid sideloadingFeature = new Guid("AE3A1339-61F5-4f8f-81A7-ABD2DA956A7D");

// Prompt for URL
string url = GetUserInput("Please provide URL for the site where app is being installed: \n");

// Prompt for Credentials
Console.WriteLine("Enter Credentials for {0}", url);
string userName = GetUserInput("SharePoint username: ");
SecureString pwd = GetPassword();

// Get path to the location of the app file in file system
string path = GetUserInput("Please provide full path to your app package: \n");

// Create context for SharePoint online
ClientContext ctx = new ClientContext(url);
ctx.AuthenticationMode = ClientAuthenticationMode.Default;
ctx.Credentials = new SharePointOnlineCredentials(userName, pwd);

// Get variables for the operations
Site site = ctx.Site;
Web web = ctx.Web;

try
{
    // Make sure we have side loading enabled.
    // Using PnP Nuget package extensions.
    site.ActivateFeature(sideloadingFeature);

    try
    {
        // Load .app file and install that to site
        var appstream = System.IO.File.OpenRead(path);
        AppInstance app = web.LoadAndInstallApp(appstream);
        ctx.Load(app);
        ctx.ExecuteQuery();
    }
    catch
    {
        throw;
    }

    // Disable side loading feature using
    // PnP Nuget package extensions.
    site.DeactivateFeature(sideloadingFeature);
}
catch (Exception ex)
{
    Console.ForegroundColor = ConsoleColor.Red;
    Console.WriteLine("Exception! {0}", ex.ToString());
    Console.WriteLine("Press any key to continue.");
    Console.Read();
}
Q&A
Can I automate app/add-in installation for apps in the SharePoint store?
No. This is not possible, since store apps have licensing implications, so they cannot be just installed using simple API calls.
Can I automate app/add-in installation from the app catalog?
Slightly depends on the exact objective. You can push add-ins/apps from the catalog using “app-stapling” model, but that has limitation around app parts, custom actions and app installation events. With app-stapling you create instance of the add-in/app to the app catalog site collection and then push links to specific sites. You cannot deploy isolated instances of the add-in/app to different sites and site collections. You can only achieve this with the code and process explained in this blog post.
Can I use an app-only/add-in-only token to install the app on the site using this code?
No. This does not work, since app-only means that you are basically running in the system context and SharePoint cannot associate app/add-in to the right person in the app service application. Code will execute, but installation will fail in the UI in the end like with following picture.
Video shows deployment of provider hosted add-in/app to SharePoint sites, but would same process work with SP hosted add-in?
No. Model where you actually create own app instances to specific sites with LoadAndInstallApp() does only work for provider hosted add-ins/apps. App stapling from app catalog works for SP hosted add-ins and provider hosted add-ins.
Add-in/app has to request tenant level permissions, so it can only then be used by tenant administrator?
No. Trust operation has to be done by tenant administrator, but actual API calls in the add-in/app do not require that high permissions. Add-in/app could for example perform read or write operations to host web, which would only require that level of permission from the end user using the app/add-in. Alternatively add-in/app which has been installed, could use app-only permissions which basically means same as classic elevation of privileges in server side code.
Is there any other way to make this work?
Yes. You could always fall back on the so-called http post pattern, but that's not a recommended approach either, since it means that you'd mimic browser operations for the "Trust" operation by replaying the needed http post traffic. This does work, but if there are any changes in the UI of the targeted operation, your code could get broken. We do not intentionally make UI changes which would break these, but we also cannot guarantee that changes which would impact your code will not be done.
Office 365 Developer Patterns and Practices
Techniques showed in this blog post are part of the Core.SideLoading sample in the Office 365 Developer Patterns and Practices guidance, which contains more than 100 samples and solutions demonstrating different patterns and practices related on the app model development together with additional documentation related on the app model techniques.
Check the details around PnP from dev.office.com.
Get-Counter
Applies To: Windows PowerShell 2.0
Gets performance counter data from local and remote computers.
Syntax
Get-Counter [-Counter] <string[]> [-ComputerName <string[]>] [-Continuous] [-MaxSamples <Int64>] [-SampleInterval <int>] [<CommonParameters>]

Get-Counter -ListSet <string[]> [-ComputerName <string[]>] [<CommonParameters>]
-SampleInterval <int>

Specifies the time between samples in seconds. The minimum value and the default value are 1 second.
Example 1
Description
-----------
This command gets all of the counter sets on the local computer.

C:\PS> get-counter -ListSet *

Because many of the counter sets are protected by access control lists (ACLs), to see all counter sets, open Windows PowerShell with the "Run as administrator" option before using the Get-Counter command.
Example 2
Description
-----------
This command gets the current "% Processor Time" combined values for all processors on the local computer. It collects data every two seconds until it has three values.

C:\PS> get-counter -Counter "\Processor(_Total)\% Processor Time" -SampleInterval 2 -MaxSamples 3
Example 3
Description
-----------
This command gets an alphabetically sorted list of the names of all of the counter sets on the local computer.

C:\PS> get-counter -listset * | sort-object countersetname | format-table countersetname
Example 4
Description
-----------
These commands get the path names of the performance counters in the Memory counter set on the local computer. The first command gets all of the counter paths.

C:\PS> (get-counter -listset memory).paths

\Memory\Page Faults/sec
\Memory\Available Bytes
\Memory\Committed Bytes
\Memory\Commit Limit
\Memory\Write Copies/sec
\Memory\Transition Faults/sec
\Memory\Cache Faults/sec
\Memory\Demand Zero Faults/sec
\Memory\Pages/sec
\Memory\Pages Input/sec
...

The second command gets the path names that include "cache".

C:\PS> (get-counter -listset memory).paths | where {$_ -like "*cache*"}

\Memory\Cache Faults/sec
\Memory\Cache Bytes
\Memory\Cache Bytes Peak
\Memory\System Cache Resident Bytes
\Memory\Standby Cache Reserve Bytes
\Memory\Standby Cache Normal Priority Bytes
\Memory\Standby Cache Core Bytes
Example 5
Description
-----------
These commands get the Disk Reads/sec counter data from the Server01 and Server02 computers.

The first command saves the Disk Reads/sec counter path in the $diskreads variable.

C:\PS> $diskreads = "\LogicalDisk(C:)\Disk Reads/sec"

The second command uses a pipeline operator (|) to send the counter path in the $diskreads variable to the Get-Counter cmdlet. The command uses the MaxSamples parameter to limit the output to 10 samples.

C:\PS> $diskreads | get-counter -computer Server01, Server02 -maxsamples 10
Example 6
Description
-----------
This command gets the correctly formatted path names for the PhysicalDisk performance counters, including the instance names.

C:\PS> (get-counter -list physicaldisk).pathswithinstances
Example 7
Description
-----------
The first command uses the Get-Random cmdlet to select 50 computer names at random from the contents of the Servers.txt file. It saves the names in the $servers variable.

C:\PS> $servers = get-random (get-content servers.txt) -count 50

The second command saves the counter path to the "% DPC Time" counter in the $counter variable. The counter path includes a wildcard character in the instance name to get the data on all of the processors on each of the computers.

C:\PS> $counter = "\Processor(*)\% DPC Time"

The third command uses the Get-Counter cmdlet to get the counter values. It uses the Counter parameter to specify the counters and the ComputerName parameter to specify the computers saved in the $servers variable.

C:\PS> get-counter -Counter $counter -computername $servers
Example 8
Description
-----------
These commands get a single value for all of the performance counters in the memory counter set on the local computer.

The first command gets the counter paths and saves them in the $memCounters variable.

C:\PS> $memCounters = (get-counter -list memory).paths

The second command uses the Get-Counter cmdlet to get the counter data for each counter. It uses the Counter parameter to specify the counters in $memCounters.

C:\PS> get-counter -counter $memCounters
Example 9
Description
-----------
This example shows the property values in the PerformanceCounterSample object that represents each data sample.

The first command saves a counter path in the $counter variable.

C:\PS> $counter = "\\SERVER01\Process(Idle)\% Processor Time"

The second command uses the Get-Counter cmdlet to get one sample of the counter values. It saves the results in the $data variable.

C:\PS> $data = get-counter $counter

The third command uses the Format-List cmdlet to display all the properties of the CounterSamples property of the sample set object as a list.

C:\PS> $data.countersamples | format-list -property *

You can use the properties of the CounterSamples object to examine, select, sort, and group the data.
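For instance, here is a minimal sketch of the kind of follow-on work that last sentence describes (this is not part of the original example; the per-process counter path, the three-sample collection, and the 5 percent threshold are arbitrary choices for illustration):

    # Take three sample sets of per-process processor time.
    $sets = Get-Counter '\Process(*)\% Processor Time' -MaxSamples 3

    # Flatten to the individual CounterSamples, keep only the busier instances,
    # then group the samples by process (instance) name and show the group sizes.
    $busy = $sets | ForEach-Object { $_.CounterSamples } | Where-Object { $_.CookedValue -gt 5 }
    $busy | Group-Object -Property InstanceName |
        Sort-Object -Property Count -Descending |
        Format-Table -Property Name, Count -AutoSize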
Example 10
Description
-----------
The command runs a Get-Counter command as a background job. For more information, see Start-Job.

C:\PS> $counters = "\LogicalDisk(_Total)\% Free Space"

C:\PS> start-job -scriptblock {get-counter -counter $counters -maxsamples 1000}
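Note that $counters is defined in the calling session, and a background job runs in a separate session that cannot see that variable, so the script block above would not receive the counter path as written. A sketch of one way to wire this up and later collect the results (the -ArgumentList approach and the variable names are illustrative, not part of the original example):

    # Pass the counter path into the job's own scope, then start the long-running collection.
    $job = Start-Job -ScriptBlock {
        param($counterPath)
        Get-Counter -Counter $counterPath -MaxSamples 1000
    } -ArgumentList "\LogicalDisk(_Total)\% Free Space"

    # Later: wait for the job to finish and retrieve the accumulated sample sets.
    Wait-Job -Job $job | Out-Null
    $results = Receive-Job -Job $job
    $results | Select-Object -First 5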
Example 11
Description
-----------
This command uses the Get-Counter and Get-Random cmdlets to find the percentage of free disk space on 50 computers selected randomly from the Servers.txt file.

C:\PS> get-counter -computername (get-random (get-content servers.txt) -count 50) -counter "\LogicalDisk(*)\% Free Space"
Example 12
Description
-----------
These commands get the "% Free Space" counter values from two remote computers and show how to examine the data.

The first command gets the counter values from the S1 and S2 computers and saves them in the $a variable.

C:\PS> $a = get-counter "\LogicalDisk(_Total)\% Free Space" -computerName s1, s2

The second command displays the results in the $a variable. All of the data is stored in the object, but it is not easy to see it in this form.

C:\PS> $a

The next commands display the CounterSamples property of the object in $a, first as a table and then with all of the properties of a single sample.

C:\PS> $a.countersamples | format-table -auto

C:\PS> $a.countersamples[0] | format-table -property *

The last command uses the Where-Object cmdlet to select the samples with a CookedValue of less than 15 (percent free space).

C:\PS> $a.countersamples | where {$_.cookedvalue -lt 15}

Path                                 InstanceName   CookedValue
----                                 ------------   -----------
\\s2\\logicaldisk(_total)\% free...  _total         3.73238142733405
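Because the samples in $a come from two computers, it can help to see which machine produced each one. A small sketch (an illustration, not part of the original example) that pulls the computer name out of each sample's Path:

    # Each sample's Path starts with \\<computername>\..., so splitting on the
    # backslashes leaves the computer name at index 2.
    $a.CounterSamples |
        Group-Object -Property { ($_.Path -split '\\')[2] } |
        Format-Table -Property Name, Count -AutoSize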
Example 13
Description
-----------
This example shows how to sort the performance counter data that you retrieve. The example finds the processes on the computer that are using the most processor time during the sampling.

The first command gets the "Process\% Processor Time" counter for all the processes on the computer. The command saves the results in the $p variable.

C:\PS> $p = get-counter '\Process(*)\% Processor Time'

The second command gets the CounterSamples property of the sample set object in $p and sorts the samples in descending order based on the cooked value of the sample. The command uses the Format-Table cmdlet and its AutoSize parameter to position the columns in the table.

C:\PS> $p.CounterSamples | sort-object -property CookedValue -Descending | format-table -auto
Example 14

Description
-----------
These commands find the processes on the computer with the largest working sets. They list the processes in descending order based on their working set sizes.

The first command gets one sample of the "Working Set - Private" counter for every process and saves it in the $ws variable.

C:\PS> $ws = get-counter "\Process(*)\Working Set - Private"

The second command sorts the samples in descending order of cooked value and displays the instance name and cooked value of each sample in a table.

C:\PS> $ws.countersamples | sort-object -property cookedvalue -descending | format-table -property InstanceName, CookedValue -auto

InstanceName   CookedValue
------------   -----------
_total           162983936
svchost           40370176
powershell        15110144
explorer          14135296
svchost           10928128
svchost            9027584
...
Example 15
Description
-----------
This command gets a series of samples of the Processor\% Processor Time counter at the default one-second interval. To stop the command, press CTRL+C.

C:\PS> get-counter -counter "\processor(_total)\% processor time" -continuous
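If the samples need to be captured rather than just watched, one option is to bound the collection and pipe it to Export-Counter, which is listed under See Also below (a sketch; the one-minute duration and the output path are arbitrary choices):

    # Collect one minute of one-second samples and write them to a binary performance log (.blg).
    Get-Counter -Counter "\Processor(_Total)\% Processor Time" -SampleInterval 1 -MaxSamples 60 |
        Export-Counter -Path "$env:TEMP\ProcessorTime.blg"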
See Also
Concepts
Import-Counter
Export-Counter | https://docs.microsoft.com/en-us/previous-versions/dd367892(v=technet.10)?redirectedfrom=MSDN | 2020-02-16T21:37:18 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.microsoft.com |
7. Output¶
The output directory structure contains ten major sub-directories when all modules are turned on. In addition to these main directories, EDGE generates a final report in portable document format (PDF), a process log, and an error log file in the project's main directory.
- AssayCheck
- AssemblyBasedAnalysis
- HostRemoval
- HTML_Report
- JBrowse
- QcReads
- ReadsBasedAnalysis
- ReferenceBasedAnalysis
- Reference
- SNP_Phylogeny
In the graphical user interface, EDGE generates an interactive output webpage which includes summary statistics, taxonomic information, and more. The easiest way to interact with the results is through this web interface. If a project run was finished through the command line, the user can open the report HTML file in the HTML_Report subdirectory offline. When a project run is finished, the user can click on the project ID in the menu and the interactive HTML report will be generated on the fly. The user can browse the data structure by clicking the project link, visualize the results via the JBrowse links, download the PDF files, and so on.
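For example, if a copy of a finished project's output directory is available on a local Windows machine, it could be inspected along these lines (a sketch only; the project path and the report file name are assumptions, and on the EDGE server itself the equivalent shell commands would be used):

    # List the major output sub-directories of a finished EDGE project.
    $project = 'C:\edge_output\my_project'   # hypothetical local copy of a project directory
    Get-ChildItem -Path $project -Directory | Select-Object -ExpandProperty Name

    # Open the off-line report from the HTML_Report sub-directory in the default browser.
    Invoke-Item -Path (Join-Path $project 'HTML_Report\report.html')   # report file name is an assumption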
7.1. Example Output¶
See
Note
The example link is just an example of the graphic output. The JBrowse and other links are not accessible from the example.
User-Defined Detection Rules
In some cases, it may prove useful to add a signature for attack detection manually or to create a so-called virtual patch. Wallarm does not use regular expressions to detect attacks by default, but it does allow users to add additional signatures based on regular expressions.
Adding a New Detection Rule
To do this, you need to create the rule Define a request as an attack based on a regular expression and fill in the fields:
Regex: regular expression (signature). If the value of the following parameter matches the expression, that request is detected as an attack. Note that the system supports a limited subset of the regular expression syntax.
Attack: the type of attack that will be detected when the parameter value in the request matches the regular expression.
Experimental: this flag allows you to safely check the triggering of a regular expression without blocking requests. The requests won't be blocked even when the filter node is set to the blocking mode. These requests will be considered as attacks detected by the experimental method. They can be accessed using the search query experimental attacks.
in this part of request: determines the point in the request where the system should detect the corresponding attacks.
Example: Blocking All Requests with an Incorrect X-Authentication Header
If the following conditions take place:
- the application is accessible at the domain example.com
- the application uses the X-Authentication header for user authentication
- the header format is 32 hex symbols
Then, to create a rule for rejecting incorrect format tokens:
- Go to the Rules tab
- Find the branch for example.com/**/*.* and click Add rule
- Select Define as an attack on the basis of a regular expression
- Set Regex ID value as 42
- Set Regex value as [^0-9a-f]|^.{33,}$|^.{0,31}$ (a quick local sanity check of this expression is sketched just after this list)
- Choose Virtual patch as the type of Attack
- Set the point to Header X-AUTHENTICATION
- Click Create
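The regular expression above flags any X-Authentication value that is not exactly 32 lowercase hexadecimal characters. A quick, purely local way to sanity-check it before saving the rule (a sketch in PowerShell, which uses the .NET regex engine rather than Wallarm's limited regex subset, so treat it only as a rough check):

    # The rule's regular expression: any character outside 0-9a-f, or any length other than 32.
    $pattern = '[^0-9a-f]|^.{33,}$|^.{0,31}$'

    # Sample X-Authentication values: one well-formed 32-character lowercase hex token, two malformed ones.
    $tokens = @(
        '0123456789abcdef0123456789abcdef'    # valid format: should not be flagged
        '0123456789ABCDEF0123456789abcdef'    # uppercase characters: flagged as incorrect format
        'tooshort'                            # wrong length: flagged as incorrect format
    )

    foreach ($t in $tokens) {
        # -cmatch is the case-sensitive match operator, mirroring a case-sensitive regex engine.
        "{0,-36} flagged: {1}" -f $t, ($t -cmatch $pattern)
    }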
Partial Disabling of a New Detection Rule
If the created rule should be partially disabled for a particular branch, this can easily be done by creating the rule Ignore regular expression with the following fields:
- Regex ID: identifiers of the previously created regular expressions that must be ignored.
- in this part of request: indicates the parameter that requires setting up an exception.
Getting an ID of a Regular Expression
The identifier is generated automatically when you add a new regular expression rule. To get the ID of a regular expression, follow these steps:
- In the Rules tab click the button All rules and select Define a request as an attack based on a regular expression from the drop-down list.
- Select the branch for which the desired regular expression was set.
- Select the group of rules which contains the desired regular expression.
- Click the desired regular expression entry.
- The Regex ID field on the panel that appears contains the desired regular expression identifier. Click the button next to the field to copy it to the clipboard.
Example: Permitting an Incorrect X-Authentication Header for a Designated URL
Let's say you have a script at example.com/test.php, and you want to change the format of the tokens for it.
To create the relevant rule:
- Go to the Rules tab
- Find or create the branch for example.com/test.php and click Add rule
- Choose Ignore regular expressions
- Enter the ID of the rule that you want to disable into the Regex ID field
- Set the point to Header X-AUTHENTICATION
- Click Create
[Figure: Regex rule first example]
[Figure: Getting an ID of a regular expression]
[Figure: Regex rule second example]
Panel.GroupingText Property
Definition
Gets or sets the caption for the group of controls that is contained in the panel control.
public: virtual property System::String ^ GroupingText { System::String ^ get(); void set(System::String ^ value); };
public virtual string GroupingText { get; set; }
member this.GroupingText : string with get, set
Public Overridable Property GroupingText As String
Property Value
The caption text for the child controls contained in the panel control. The default is an empty string ("").
Remarks. | https://docs.microsoft.com/en-us/dotnet/api/system.web.ui.webcontrols.panel.groupingtext?view=netframework-4.8 | 2019-09-15T10:47:33 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.microsoft.com |