Introduction to languages
Languages provides an interface to install, import, and export various language dictionaries. A language dictionary is mapped to a standard IETF language code. The default language dictionary that is packaged with Appspace is EN-US (English US). Language dictionaries are modular and can exist for every module. Translations for each dictionary can either be:
- Full: Every phrase has been completely translated.
- Partial: Some phrases are complete, and other phrases may be blank.
- None: None of the phrases have been translated.
Language Packs
Language packs can be added or removed from the Appspace server through the use of some quick-launch icons (Add, Import, Delete). Additionally, the search bar allows users to search languages by name.
Note
You can only import language packs that were previously exported from Appspace.
Languages Interface
The following illustrates the interface of Languages.
Overview Sub-Tab
The left pane lists all the language dictionaries that are currently available. Select a specific language to display the translation status for each Appspace feature.
Translation Sub-Tab
The Translation sub-tab displays the complete list of phrases translated in the selected language. You can manually edit a phrase by clicking on the individual phrase. The Module drop-down menu provides quick access to different modules available. You can also use the Search bar to search for a specific key word.
Advanced
In case your Sophos UTM cannot send emails directly, you can configure a smarthost to send the emails. Proceed as follows:
1. Enable External SMTP server status on the Management > Notifications > Advanced tab.
Click the toggle switch. The toggle switch turns amber and the External SMTP Server area becomes editable.
2. Enter your smarthost.
You can use drag-and-drop. The port is preset to the default SMTP port 25.
3. Specify the authentication settings.
If the smarthost requires authentication, check the Authentication checkbox and enter the corresponding username and password.
4. Click Apply.
Your settings will be saved. The toggle switch turns green.
Table of contents
Introduction: Estimated time to complete, what you need before you start, and what we'll show you how to create
Key concepts: Video about the Deadline Funnel webhook and email links
Core setup: Create a Deadline Funnel campaign, create a zap in Zapier, use Deadline Funnel email links
Next steps: Email timers, sales tracking, and testing
Pages: Add a Deadline Funnel countdown timer to your Kajabi pages
Introduction
In this guide, we will show you how to create an evergreen campaign in Deadline Funnel that connects to your automation in Kajabi.
We recommend starting with this guide if you are using Kajabi as your email provider and want to create an evergreen campaign.
Before you start, you will need the following in Kajabi:
Write at least two or three emails in the automation
Create a special offer page on your website that will be specifically for subscribers going through this automation
Create an expired page on your website that subscribers will be redirected to after their deadline expires (You can also redirect your subscribers to a different/regular offer after their deadline expires)
🏗️ We'll show you how to create:
An evergreen Deadline Funnel campaign using the Email Sequence + Special Offer blueprint (required)
Zapier integration between Deadline Funnel and Kajabi (required)
One or more Deadline Funnel email links in your automation emails (required)
One or more Deadline Funnel email timers in your automation emails (optional)
Key concepts
Please watch this quick video about how Deadline Funnel integrates with Kajabi:
⏰ THE WEBHOOK: The trigger that starts each subscriber's deadline
Each subscriber who is tagged or submits a form in Kajabi (depending on which action you choose in Zapier), will trigger the webhook via the zap in Zapier and will be added to your Deadline Funnel campaign.
Example: In a 3-day evergreen campaign, if someone is tagged on Monday, their deadline will be Thursday. Each subscriber's deadline starts when they were tagged in Kajabi and triggered the Deadline Funnel webhook via Zapier.
Core setup: Deadline Funnel + Kajabi
Create a zap in Zapier that will trigger the Deadline Funnel webhook and start each subscriber's unique deadline
After you've created your Deadline Funnel campaign, set up the integration between Deadline Funnel, Zapier and Kajabi
This integration allows you to trigger the deadline based on when a specific tag is applied to a contact or when a specific form is submitted.
➡️ Please visit our guide for more details about how to trigger the deadline by creating a zap in Zapier.
Adding the Countdown timer to your Kajabi pages
How you add the Deadline Funnel countdown timer to your Kajabi page(s) depends on the type of page you're using:
How to integrate with a Landing Page
How to integrate with a Sales Page
How to integrate with a Checkout page
If you have any questions, please reach out to us via the Messenger.
Date: Wed, 08 May 2013 17:54:55 +0300 From: "Zyumbilev, Peter" <[email protected]> To: "[email protected]" <[email protected]> Cc: [email protected] Subject: Re: small fanless mini-pc for home router/firewall? Message-ID: <[email protected]> In-Reply-To: <CAHcg-UEKJ-e7Q0E3CE6sm5wA6TvFvmMkGBL4PaYvvTYPVRYBcg@mail.gmail.com> References: <CAHcg-UEKJ-e7Q0E3CE6sm5wA6TvFvmMkGBL4PaYvvTYPVRYBcg@mail.gmail.com>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
Hi,

I currently run this one: with pfsense 2 (it is freebsd too)
Works great :) The only problems I see so far is when I push it at 90+ Mb/s it start to have issues with load but if do not plan such high speeds it work like charm.. Kind of expensive though...

Peter

On 08/05/2013 17:10, [email protected] wrote:
> What is the best option out there for a mini-pc to run FreeBSD as a home
> router/firewall? (needs to have 2 nic's)
> _______________________________________________
> [email protected] mailing list
>
> To unsubscribe, send any mail to "[email protected]"
>
Identity & Single Sign-On
Functional overview¶
The expectations for consumers are set by webshops like Coolblue, Apple, Amazon and Google. Insurance companies can now start to impress users and meet higher demands from business and regulation. Onegini's Identity & Single Sign-On solution - as a part of the Onegini Identity Cloud - contains components you can easily add to your existing enterprise architecture. The picture below shows the high-level functionality the platform provides.
Functional flows¶
CIM handles the complete spectrum of capabilities related to delivering a seamless and secure customer experience.
(JIT) Migration¶
Thanks to Just In Time migrations, you can let users automatically migrate to 1 standard across all of your platforms. The customer may not even be aware of it at first.
Onboarding / User Registration¶
Secure business transactions based on levels of assurance¶
- Configuring identifications required per level.
- Configuring required level of assurance per service provider.
- Configuring level of assurance per identity provider.
- Configuring level of assurance for Two-Factor authenticators like for example text/SMS, mobile or Google Authenticator.
- Configuring required level of assurance for changing attributes like for example SMS, name, birthdate, and more.
Device registration for second factor login¶
Authentication / Login¶
Yesterday customers logged in with username/password or social, today they want to log in with a mobile device. The consumer decides the preferred login and wants to change preferences over time. CIM supports all that.
Self service¶
Self-service is critical. If your procedures are unclear and little self-service is available, more than 30% of the calls from your helpdesk might be related to this.
Delegated Administration for Business Partners¶
If you are dealing with intermediaries and have problems with distributing authorizations among them, DABP is for you. You can let your intermediaries take care of the user and authorization management themselves.
Managing and monitoring Consumer Identities¶
CIM is the digital front door of your organization for consumers and partners. Of course, you require full audit trail, event trail, and monitoring capabilities.
Customer Self Styling¶
What is Customer Self Styling?¶
Customer Self Styling is functionality within the Onegini Identity Cloud that enables front-end developers to create templates and upload templates with a company-specific format to their Onegini environments. It allows Onegini customers to match the look-and-feel of their Onegini applications with their existing websites by changing company logos, using specific colors, changing email templates, etc.
Prerequisites¶
Front-end developers must have sufficient knowledge of HTML/CSS/images and Git to work with Onegini's Customer Self Styling.
otMessageSettings Struct Reference
API > Message
This structure represents message settings.
#include <include/openthread/message.h>
Public Attributes
- bool mLinkSecurityEnabled: TRUE if the message should be secured at Layer 2.
- otMessagePriority mPriority: The message priority level.
The documentation for this struct was generated from the following file: include/openthread/message.h
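As a rough illustration of how these attributes are used, the sketch below fills in the structure and passes it to a message allocator such as otUdpNewMessage(). The instance pointer, helper name, and chosen priority are assumptions for the example, not part of this reference.

#include <openthread/instance.h>
#include <openthread/message.h>
#include <openthread/udp.h>

// Illustrative helper: allocate a UDP message whose payload is secured at
// Layer 2 and queued with normal priority. Error handling is omitted.
otMessage *AllocateSecuredMessage(otInstance *instance)
{
    otMessageSettings settings;
    settings.mLinkSecurityEnabled = true;                  // secure the message at Layer 2
    settings.mPriority            = OT_MESSAGE_PRIORITY_NORMAL;

    // Allocators such as otUdpNewMessage() accept the settings, or NULL for defaults.
    return otUdpNewMessage(instance, &settings);
}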
SchedulerStorage.ResourcesChanged Event
Namespace: DevExpress.Xpf.Scheduler
Assembly: DevExpress.Xpf.Scheduler.v21.1.dll
Declaration
public event PersistentObjectsEventHandler ResourcesChanged
Public Event ResourcesChanged As PersistentObjectsEventHandler
Event Data
The ResourcesChanged event's data class is PersistentObjectsEventArgs.
Remarks
The ResourcesChanged event is fired when a property or custom field of a Resource object is modified. The resources whose properties have been changed are identified by the event parameter's PersistentObjectsEventArgs.Objects property.
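A minimal sketch of handling this event is shown below. The schedulerStorage variable, the cast to Resource, and the Caption property are assumptions made for illustration; only the handler signature and the PersistentObjectsEventArgs.Objects property come from this topic.

// Subscribe to the event on an existing SchedulerStorage instance (assumed to exist).
schedulerStorage.ResourcesChanged += OnResourcesChanged;

void OnResourcesChanged(object sender, PersistentObjectsEventArgs e)
{
    // e.Objects identifies the resources whose properties or custom fields changed.
    foreach (object item in e.Objects)
    {
        if (item is Resource resource)
            Console.WriteLine("Resource changed: " + resource.Caption);
    }
}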
Components that should only ever have a single active instance should begin with the The prefix, to denote that there can be only one.
Component names should always be PascalCase
Follow style guide
The include statement takes the code in the specified file and copies it into the file that uses the include statement.
{section-folder}/section.lh
<div class="lh-container"><div>Your first setting is {{settings.first}}</div><ProductForm /></div>
For the above example, the file "ProductForm.lh" is placed in the "components" folder
If you love LayoutHub, could you consider posting a review? That would be awesome and really help us to grow our business, here is the link.
Uploading Segments
Last updated on November 12, 2020
The Segments screen allows you to upload your subscriber or customer segments to use for audience creation. Once uploaded, the segment appears in the Your Segments table along with its attributes, such as addressable MAU reach and type. To upload a segment, go to the Your Segments screen and click
located on the top right.
The Upload Segment dialog appears. Complete the following fields:
- Segment name: Type your new segment’s name.
- Data: Click the file selection button and select your data file.
Click the upload button.
Your data file is now uploaded and listed in the Your Segments screen. The status of the data file goes from Processing to Ready. Once Ready, the segment is available for audience creation.
Overview
Service Pack 3 of version 7.1 includes bug fixes and improvements to enhance the user experience and GIS capability of the application, as well as general bug fixes.
Supported Third Party Applications and Versions
Supported Platforms and Third Party Applications for Version 7.1 SP3
New Features and Enhancements
- N/A
Dropped/Replaced Features
- N/A
Bug Fixes and Other Improvements
- Added: Map Filter as a parameter that can used when creating or updating Jasper Map Reports
- Added: The ability to export a map as an image file (png format)
- Added: Validation in the naming of maps, layers and folders to prevent entry of unsupported special characters: / # % ; . ? \
- Added: When a layer is added to a map, the layer is turned on by default
- Fixed: Issue where the display order of feature offsets is reversed, now a positive offset value offsets a feature to the right and a negative value offsets the feature to the left
- Fixed: Issue where the display order of a layers legend is displayed in reverse order on the map, now the display order of the legend is as specified in the layer panel
- Fixed: Issue where an 'Others' category in the Range styling tab could not be excluded from a set of range definitions, now the 'Others' category can be excluded or included
- Fixed: Issue where in certain instances, an error message is displayed when a user selects to show a jasper report in an html format
- Fixed: Issue where the year value is not displayed in the event comparison reports, in the Roads & Highways interface
- Added: A reason field (reason for a given invalid record) to the Invalid Reports generated from the Roads & Highways interface
- Fixed: Issue where the google maps usage was not properly reported to Google, when utilizing an enterprise license for Google Maps
- Fixed: Issue where in certain instances, a refresh of the interface was required to insert a similar record using the insert like functionality, after inserting and saving a record
Known Issues, Limitations & Restrictions
- In the new GIS Interface, when you export a map as an image using Firefox as your web browser, the file is downloaded as map.png.pdf. You would have to manually edit out '.pdf' from the file name after the download
- This release was not tested with a customer schema that has polygon based GEOMs in their data set
- The loading and display of maps and associated styles in IE 11 is not as performant as Chrome, Firefox and Safari
- In the GIS Interface, when you select to print a displayed map and then select a print template, the displayed map zooms out rather than staying at the previously set zoom level and map extent
Builder to create a 'KilledByPlayer' condition.
This condition will pass if and only if the target of the loot table has been killed by the player, either directly or indirectly (such as with arrows).
It might be required for you to import the package if you encounter any issues (like casting an Array), so better be safe than sorry and add the import at the very top of the file.
ZenScript
import crafttweaker.api.loot.conditions.vanilla.KilledByPlayer;
KilledByPlayer implements the following interfaces. That means all methods defined in these interfaces are also available in KilledByPlayer.
1. In the Deadline Funnel admin, navigate to Edit Campaign > Emails > Email Timer Code and click the the Image URL to copy it:
2. In your Sendinblue email select the 'Image' block in the left-hand menu and drag it to the spot in your email where you would like to display the timer:
3. Click on the 'Image' block to edit it, select 'From URL' at the bottom, paste the Image URL you copied from Deadline Funnel in the URL box, then click 'Save & Quit':
4. You'll be taken to a Preview of your email where you'll see the timer displayed:
And you're all set!
Note: Sometimes we hear from clients that when they're testing Deadline Funnel's countdown image the time in their test email doesn't match the actual time they expected to see.
If this is happening to you, this article explains why: Facts to know about the email countdown timer
If you have any questions, please let us know at [email protected].
Machine learning tutorial
Preview
This feature is in Public Preview.
The fastest way to get started with machine learning in Databricks is to open the model training tutorial notebook. On the start page, click Start guide at the upper right. The notebook illustrates many of the benefits of using Databricks for machine learning, including tracking model development with MLflow and parallelizing hyperparameter tuning runs. The notebook steps through loading data, training and tuning a model, comparing and analyzing model performance, and using the model for inference.
For additional machine learning and deep learning tutorials, see 10-minute tutorials: Get started with machine learning on Databricks.
Mule Application Development
You create Mule applications to perform system integrations. Typically, the application reads data from internal and external sources, processes and transforms data to the required formats or structures, and writes that output to the systems and servers where you store or use the transformed data.
Mule applications are configured to run in Mule runtime engine (Mule). A request to a Mule application triggers Mule to encode the request and data in a Mule event and to pass it to either single or multiple threads.
Getting Started with Mule Application Development
To get started with Mule application development, you can follow the steps in these tutorials:
Build a Mule application that interacts with a user in a simple HTTP request-response flow.
Mule Application Development Tutorial
Build a Mule application that retrieves data from a database and transforms the to a new structure.
Building Blocks of a Mule Application
Mule applications use connectors, modules, and components to read, write, and process data.
Anypoint connectors provide components, such as listeners, that interact with external API endpoints and act on data in the Mule application. Modules, such as Validation, Java, Spring, and OAuth, provide operations that act on data in a Mule application without providing a direct connection to an endpoint.
Core components support flow control, error handling, logging, batch, and other programmatic operations on data that flows through your application.
DataWeave Language
DataWeave is the primary language used for formulating expressions in Mule. Connectors, modules, and components support the use of DataWeave to access, manipulate, and transform data structures and output formats, and to extract data that is processed within the Mule application. A short example follows the list below.
At runtime, Mule evaluates DataWeave expressions while executing a flow to:
Extract data needed to process the current message
Set or manipulate a value in the message
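For example, the short DataWeave 2.0 script below reads fields from the message payload and produces a new JSON structure; the field names firstName, lastName, and items are placeholders rather than a required schema.

%dw 2.0
output application/json
---
{
  fullName: payload.firstName ++ " " ++ payload.lastName,
  itemCount: sizeOf(payload.items default [])
}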
Mule Flows
Understanding basic flow architecture is key to understanding a Mule application. Essentially, every Mule flow contains a series of Mule components that receive or process messages:
At the simplest level, flows are sequences of processors. A message that enters a flow can pass through a variety of processors. In a typical flow, a Mule application receives a message through a source (such as an HTTP Listener component), transforms that message into a new format, and processes any business logic before writing the processed message to an external system in a format that the system can read.
To separate processing into more manageable units, Mule applications often contain multiple, interrelated flows instead of just a single flow. One flow can call another flow as a direct reference.
For more information about this topic, see Flows and Subflows and Mule Components.
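As a rough sketch of the flow just described, the Mule configuration XML below defines a flow whose source is an HTTP Listener and whose processors set and log a payload. Namespace declarations are omitted for brevity, and the configuration name HTTP_Listener_config is an assumption for the example.

<flow name="helloFlow">
  <http:listener config-ref="HTTP_Listener_config" path="/hello" doc:name="Listener"/>
  <set-payload value="#['Hello, ' ++ (attributes.queryParams.name default 'world')]"/>
  <logger level="INFO" message="#[payload]"/>
</flow>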
Sources
A source component (or trigger) is the first component in a flow. It receives a triggering event, creates a corresponding Mule event, and forwards that event for processing by the next component in the flow.
External clients can trigger processing in a Mule flow through several communication protocols and methods, such as JMS, HTTP, FTP, JDBC, or File. Mule translates these communication protocols and methods into a standard message format, which passes through the flow’s processors.
Sources in Mule can connect to specific external sources, either through a standard protocol or a third-party API. It is also possible to set a Scheduler component. Some schedulers can poll for specific changes to external resources, such as new files or table rows in an external resource. Examples of listeners and connector operations that can trigger a flow include:
HTTP, JMS, and VM listeners in their associated connectors
On Table Row operation in the Database connector
On New or Updated File operation in the File and FTP connectors
Scheduler
Processors
After a flow is triggered through the source component, subsequent components process the data as it travels through the flow. By default, each processor that receives a Mule event returns a new Mule message, typically with a set of attributes and the message payload that the processor returns. The processor forwards the new message as output to the next processor in the flow.
Processors available to Mule applications include:
Components from modules and connectors
Examples include operations that read from and write to an external resource and that validate data in the Mule application. Some operations can make client requests to external resources and services (including external databases and systems, such as Salesforce, Workday, ServiceNow, and many others) and to other Mule applications. Others can run your custom code, support OAuth configurations, and manage communication through asynchronous queues, for example. Many other operations are available.
Core components
Core components can route data, perform data transformations, handle errors that might occur when processing the event, and perform other tasks in a Mule application.
Transformers (such as the Transform Message, Set Variable, and others) are key to exchanging data between nodes. Transformers enable Mule to convert message data in the Mule event to a format that another application or service can read.
Mule also enables content enrichment of messages (through Target Variables) so that you can retrieve additional data and attach it to the message.
Security
Development Environments
You can develop a Mule application using Anypoint Studio (an Eclipse-based IDE), Flow Designer (a cloud-based application in Design Center, on Anypoint Platform), or, if you are an advanced developer, in your own IDE.
For example, in Studio, you build and design a Mule application in a project that contains one or more XML-based files. A Mule project supports all the dependencies required for development. The Package Explorer view in Studio provides access to the project folders and files that make up a Mule project. Studio provides a design-time environment in which you can also build, run, and test your Mule application. Flow Designer supports a cloud-based version of a Mule project.
Mule Versioning
The Mule version you use determines what your Mule application, domain, or policy can do and what features and products are compatible with Mule. For example, Core components, which process the Mule event as it travels through flows in a Mule application, are part of a Core module that is bundled with and shares the same version as Mule. Modules, connectors, the DataWeave language, and several MuleSoft products have their own versioning system but are compatible with specific versions of Mule. For example, DataWeave 2.0 and Studio 7.x are compatible with Mule 4.x runtime engines, while DataWeave 1.0 and Studio 6.x are compatible with Mule 3.x runtime engines. You need to make sure the connector or module you use in a Mule application is compatible with your Mule version.
# Keyboard & Clipboard Settings
Updated: 9/7/2021, 1:14:21 PM
Created: 9/7/2021, 1:14:21 PM
Last Updated By: Mike Street
Read Time: 1 minute(s)
The Keyboard Settings panel is used to specify keyboard options and open the keyboard programming window.
Most terminals provide commands that allow the host to program the function keys. If you would like the host to be able to reset and reprogram the keys, select the Unlocked – host can reset or reprogram keys option. If you would like the host to be able to reprogram keys, but not reset them to their default values, select the Locked – host cannot reset keys to defaults option. If you would like to prevent the host from resetting or reprogramming keys, select the Locked – host cannot reset or reprogram keys option.
Paste Options: end-of-line. When the clipboard is "pasted" to an AccuTerm session, AccuTerm transmits the clipboard text to the host computer. The end of line options determine what AccuTerm does at the end of each line: send CR (default), send LF, send CR+LF, send TAB, do nothing, or send a user-defined character. To specify a user-defined character, select the user-defined option and enter the ASCII code of the character to be sent at the end of each line.
Monitor the Armory Agent with Prometheus
Learn how to configure Prometheus to get metrics from the Armory Agent and display them in a Grafana dashboard.
Configure
Import a Grafana dashboard
You can import this Grafana dashboard definition to use with Prometheus.
Release Date
June 27, 2018
This version uses the Arnold 5.1.1.1 core.
DOWNLOADS
MtoA 3.0.1.1 is a hotfix release, including the following fixes:
- Fixed random crashes when switching to Arnold Viewport (AVP)
- Added support for multiple light AOVs in the Arnold denoiser UI
- Fixed random crashes with XGen nodes
- Fixed random freezes in ARV
- Fixed bug causing ARV to block Maya rig visibility changes
- OCIO looks were ignored in Maya 2016
- Ensure shaders referenced by operators are properly exported
- Frame padding in the Arnold denoiser UI wasn't properly supported
- Batch render with Render Setup AOV overrides was failing
- Fixed wrong transforms with Particle instances of Stand-ins
Overview (Read Me)
Overview
Skill level: Novice
The following is a guide for integrating fmQBO with RCC’s FM Starting Point CRM template for FileMaker ( FMSP ). This guide will walk you through the process of integrating FMSP Accounts, Invoices and Products Modules with Quickbooks Online equivalent tables.
These tables will be suitable for 95% of use cases. We recommend that the remaining tables stay local to the fmQBO file.
You should be comfortable editing layouts, tables, scripts and value lists, making basic layout changes and configuring external data sources.
Requirements:
fmQBO version 2.4 or above (Todd to verify due to GIT #94)
FileMaker Starting Point version <<xxx>> and above.
Drop in version: 4.7 (Todd to verify) and above
Table Mapping Reference
*FMSP allows invoices to be associated with both an Account and a Contact. QBO associates Invoices with Customers.
This integration will connect the QBO 'Customer' to a corresponding FMSP 'Account' and the FMSP primary contact.
@Connector(name = "connector")
public abstract class MyConnector {

    @ConnectionStrategy
    private HttpBasicAuthStrategy strategy;

    @Processor
    @RestCall(uri = "", method = HttpMethod.POST)
    public abstract String method(@RestPostParam("parameter") String param);
}

@HttpBasicAuth(configElementName = "http-ba-config", friendlyName = "HTTP Basic Auth")
public class HttpBasicAuthStrategy {

    @Configurable
    @BasicAuthUsername
    private String username;

    @Configurable
    @BasicAuthPassword
    private String password;
}
API configuration¶
Configure API access¶
Onegini Access offers several APIs to integrate Onegini Access processes with existing systems. Access to the APIs can be managed via API clients. For every API client we need to configure a client ID and its authentication method. For now only client secret basic and private key JWT are supported.
The API clients can be configured in the admin console: Configuration → System → API clients.
Per API client can be specified which API(s) can be accessed. This gives the opportunity to provide external systems using Onegini Access APIs only access to a certain function. Currently, access can be granted to the following APIs:
- Admin API
- Config API
- End user
- Events API
- Insights: communication between Onegini Insights and Onegini Access to retrieve statistics data.
- Mobile authentication
- Payload encryption policy: communication between the Onegini Security Proxy and Onegini Access to exchange payload encryption settings.
- Token introspection
- User registration: Custom Registration
These release notes include the following topics:
Benefits and Features
The vSphere Data Protection 6.0 Administration Guide provides information about benefits and features of vSphere Data Protection.
Supported Environments
VMware Interoperability Matrix provides information about supported environments.
- Root login to vSphere Data Protection 5.8 is disabled (1309755)
Root login to vSphere Data Protection 5.8 is disabled because it is not a best practice. An option is needed to enable ssh root login. If no activity occurs in the shell, the shell is set to time out after 15 minutes. An option is needed to disable the shell timeout. These improvements will help with the customer support experience during troubleshooting and while using the shell.
Workaround
Log in by using the admin account, and then use the sudo command to perform root functions.
If no activity occurs in the bash shell, it (in the /etc/profile directory) is set for a 15-minute timeout. You can add the console timeout value using the hardening script. For example:
TMOUT=900
export TMOUT
- Powering on a VM fails when the backed up VM was connected to DVS and restored to another ESXi host (222475)
When powering on a VM, the log indicates a log gap error even though the database has no log gap. Note that this issue has not been observed when restoring a backup of a primary replica.
The issue is AlwaysOn nodes synchronization-related and occurs during the creation of SQL metadata file in the backup workflow, in cases where the backup is taken on a secondary AlwaysOn replica.
This issue results in backups with a corrupted metadata file, so users could not perform a Log tail recovery or some additional differential / incremental backup could not be linked to this backup chain.
There is no data loss.
Workaround
Perform a full backup. Ensure the "Force incremental backup after full backup" backup option is unchecked.
- (55795)
Workaround
Edit the verification job and select the appropriate destination path when you perform the verification job.
Fixed Problems
The following table lists the problems that have been fixed in this release of vSphere Data Protection:
Enforce Keyfile Access Control in a Replica Set
Overview
If Cloud Manager or Ops Manager is managing your deployment, see the Cloud Manager manual or the Ops Manager manual for enforcing access control.
Considerations
IP Binding
This tutorial uses the mongod program. Windows users should use the mongod.exe program instead.
Keyfile Security
Keyfiles are bare-minimum forms of security and are best suited for testing or development environments. For production environments we recommend using x.509 certificates.
Users
The following procedure for enforcing access control requires downtime. For a procedure that does not require downtime, see Enforce Keyfile Access Control in a Replica Set without Downtime instead.
Enforce Keyfile Access Control on Existing Replica Set
Create a keyfile.
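As a minimal sketch, a keyfile is typically generated with OpenSSL and restricted to owner-only permissions, roughly as follows; the path is a placeholder you should replace with your own:

openssl rand -base64 756 > /srv/mongodb/keyfile   # generate random keyfile content
chmod 400 /srv/mongodb/keyfile                    # restrict permissions to the owner

Each mongod in the replica set is then started with this keyfile (for example via the --keyFile option or the security.keyFile configuration setting) so that all members share the same key.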
The Blast Furnace is a type of IRecipeManager and implements all the methods that are available to IRecipeManager's, such as removeRecipe() and removeAll(). Along with the Blast Furnace is the Blast Furnace Fuel, which is also a type of IRecipeManager and implements all the methods that are available to IRecipeManager's, such as removeRecipe() and removeAll().
The following script will add a recipe that will output Charcoal, and a piece of String (as Slag), after 1000 ticks when any Item from the Wool Tag is given to the Blast Furnace.
ZenScript
// <recipetype:immersiveengineering:blast_furnace>.addRecipe(string recipePath, IIngredient ingredient, int time, IItemStack output, @Optional(<item:minecraft:air>) IItemStack slag)
<recipetype:immersiveengineering:blast_furnace>.addRecipe("wool_to_charcoal", <tag:items:minecraft:wool>, 1000, <item:minecraft:charcoal>, <item:minecraft:string>);
The following script will add a Fuel to the Blast Furnace that will take a Golden Sword with the name "Sword of the Sungod" and will burn for 100000 ticks.
ZenScript
// <recipetype:immersiveengineering:blast_furnace_fuel>.addFuel(string name, IIngredient fuel, int burnTime)
<recipetype:immersiveengineering:blast_furnace_fuel>.addFuel("the_sungods_sword_can_burn", <item:minecraft:golden_sword>.withTag({display: {Name: "{\"text\":\"Sword of the Sungod\"}" as string}}), 100000);
The following script will remove all recipes from the Blast Furnace that outputs Charcoal.
ZenScript
// <recipetype:immersiveengineering:blast_furnace>.removeRecipe(IItemStack output)
<recipetype:immersiveengineering:blast_furnace>.removeRecipe(<item:minecraft:charcoal>);
The following script will remove Charcoal as a Fuel for the Blast Furnace.
ZenScript
// <recipetype:immersiveengineering:blast_furnace_fuel>.removeFuel(IItemStack fuel)
<recipetype:immersiveengineering:blast_furnace_fuel>.removeFuel(<item:minecraft:charcoal>);
Create your Deadline Funnel campaign
Before you can add the Deadline Funnel timer to your Webflow page, you will need to create your first Deadline Funnel campaign.
Check out our guide here on how to create a Deadline Funnel campaign
Once you've created your Deadline Funnel campaign, you can follow the steps below.
We're here to help! Contact us on live chat (bottom right corner of the screen) Monday - Friday, 6am-6pm Eastern. Or you can shoot us an email any time at [email protected]. 🙂
Adding the countdown to a Webflow page
1. In the Deadline Funnel admin, click on the Tracking Code tab at the top of the far left hand menu. A lightbox will appear with the Tracking Code for your account, go ahead and click 'Copy the tracking code' to copy it to your clipboard:
2. Go into your Webflow account, navigate to Project Settings > Custom Code, and paste the Deadline Funnel Tracking Code into the Header box:
3. Now navigate back to Deadline Funnel and go into Edit Campaign > Pages. Click 'Add New Page' at the top and paste the URL of your Webflow page into the 'Before the Deadline' box. You can also add a redirect URL here, if you want to send people away from the replay after the deadline has expired:
4. And you're done! The Floating Bar timer will now appear on your page. Here's an example of what the Floating Bar looks like:
If you have any questions, please reach out to us in chat or shoot us an email at [email protected]
Resizing a load balancer lets you adjust its performance to its workload. You can resize existing load balancers to any available size.
To resize a load balancer from the DigitalOcean Control Panel, click Networking, then click Load Balancers to go to the load balancer overview page. Click on the name of the load balancer you want to resize to go to its index page, then click Settings.
In the Size section of the Settings page, click Resize. Select the new size for the load balancer, then click Save. The load balancer drops all open connections and then resizes.
Once the load balancer has resized, the Size section of the Settings page now reflects the load balancer's new size.
Kubernetes isn't a CI per-se, but it can serve as the underpinning for many modern CI systems. As such, this example serves as a bare-bones example to base your implementations on.
earthly has been tested with the all-in-one earthly/earthly mode, and works as long as the pod runs in a privileged mode.
It has also been tested with a single remote earthly/buildkitd running in privileged mode, and an earthly/earthly pod running without any additional security concerns. This configuration is considered experimental. See these additional instructions.
Multi-node earthly/buildkitd configurations are currently unsupported.
This is the recommended approach when using Earthly within Kubernetes. Assuming you are following the steps outlined in the overview, here are the additional things you need to configure:
Your Kubernetes cluster needs to allow privileged mode pods. It's possible to use a separate instance group, along with Taints and Tolerations, to effectively segregate these pods.
The default image from earthly/earthly should be sufficient. If you need additional tools or configuration, you can create your own runner image.
In some instances, notably when using Calico within your cluster, the MTU of the cluster's network may end up mismatched with the internal CNI network, preventing external communication. You can set this through the CNI_MTU environment variable to force a match.
earthly/earthly currently requires the use of privileged mode. Use this in your container spec to enable it:
securityContext:
  privileged: true
The earthly/earthly container will operate best when provided with decent storage for intermediate operations. Mount a volume like this:
volumeMounts:
  - mountPath: /tmp/earthly
    name: buildkitd-temp
...
volumes:
  - name: buildkitd-temp
    emptyDir: {} # Or other volume type
The location within the container for this temp folder is configurable with the EARTHLY_TMP_DIR environment variable.
The earthly/earthly image will expect to find the source code (with Earthfile) rooted in /workspace. To configure this, ensure that the SRC_DIR environment variable is set correctly. In the case of the example, we are building a remote target, so mounting a dummy volume is needed.
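Putting the snippets above together, the manifest below is a minimal sketch of a Job that runs the all-in-one earthly/earthly image. The names, image tag, and build target are placeholders; adjust them to your cluster and repository.

apiVersion: batch/v1
kind: Job
metadata:
  name: earthly-build
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: earthly
          image: earthly/earthly:latest        # pin a real version in practice
          args: ["--ci", "github.com/earthly/ci-examples/kubernetes+hello"]  # remote target (placeholder)
          securityContext:
            privileged: true
          env:
            - name: SRC_DIR
              value: /workspace
          volumeMounts:
            - mountPath: /tmp/earthly
              name: buildkitd-temp
            - mountPath: /workspace            # dummy workspace, since the target is remote
              name: workspace
      volumes:
        - name: buildkitd-temp
          emptyDir: {}
        - name: workspace
          emptyDir: {}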
It is possible to run multiple earthly/buildkitd instances in Kubernetes, for larger deployments. Follow the configuration instructions for using the earthly/earthly image above.
There are some caveats that come with this kind of a setup, though:
Some local cache is not available across instances, so it may take a while for the cache to become "warm".
Builds that occur across multiple instances simultaneously may fail in odd ways. This is not supported.
The TLS configuration needs to be shared across the entire fleet.
To mitigate some of the issues, it is recommended to run in a "sticky" mode to keep builds pinned to a single instance for the duration. You can see how to do this in our example:
# Use session affinity to prevent "roaming" across multiple buildkit instances; if needed.
sessionAffinity: ClientIP
sessionAffinityConfig:
  clientIP:
    timeoutSeconds: 600 # This should be longer than your build.
This example is not production ready, and is intended to showcase configuration needed to get Earthly off the ground. If you run into any issues, or need help, don't hesitate to reach out!
You can find our Kubernetes examples here.
To run it yourself, first you will need to install some prerequisites on your machine. This example requires kind and kubectl to be installed on your system. Here are some links to installation instructions:
When you are ready, clone the ci-examples repository, and then run (from the root of the repository):
earthly ./kubernetes+start
Running this target will:
Create a kind cluster named earthlydemo-aio
Create & watch a job that runs an earthly build
When the example is complete, the cluster is left up and intact for exploration and experimentation. If you would like to clean up the cluster when complete, run (again from the root of the repository):
earthly ./kubernetes+clean
What's New in V2
What’s New
- Improved .NET Intelli-sense documentation
- Improved organization of the source code
- Improved ability to add new startup parameters and features in the future
- Improved development experience for configuration for Photino.NET users
- Multi-window support
- Programmatically close native windows
- Most properties can be set before and/or after native window initialization
- Extended fluent methods for initialization and ability to call them after initialization as well
- Handlers can now be established prior to initializing the native window
- Native browser control’s context menu and developer tools access can now be set in code
- New setting to automatically enable permissions for browser control access to local hardware such as camera and microphone without prompting the user
- New settings to use OS default window position and size for native window
- Programmatically set the native browser control’s Zoom level
- Minimize and maximize native windows programmatically
- Center native windows before and/or after initialization
- Chrome-less native windows allows developers to create custom title bars, footers and menus
- Improved support for Full Screen (Kiosk) mode
- Improved support for Topmost (Always on Top) native windows
- Improved logging
- Ability to retrieve values from native windows as well as set them via expanded set of properties
New Properties
- Centered (bool)
- Chromeless (bool)
- ContextMenuEnabled (bool)
- DevToolsEnabled (bool)
- FullScreen (kiosk mode) (bool)
- GrantBrowserPermissions (bool)
- IconFile (string)
- Maximized (bool)
- Minimized (bool)
- StartupString (string)
- StartupUrl (string)
- TemporaryFilesPath (string)
- Topmost (bool)
- UseOSDefaultLocation (bool)
- UseOSDefaultSize (bool)
- WebMessageReceivedHandler (EventHandler)
- WindowClosingHandler (NetClosingDelegate)
- WindowCreatingHandler (EventHandler)
- WindowCreatedHandler (EventHandler)
- WindowLocationChangedHandler (EventHandler)
- WindowSizeChangedHandler (EventHandler)
- Zoom (int)
New Methods
- RegisterWindowCreatingHandler()
- RegisterWindowCreatedHandler()
- RegisterWindowClosedHandler()
- SetChromeless()
- SetContextMenuEnabled()
- SetDevToolsEnabled()
- SetGrantBrowserPermissions()
- SetHeight()
- SetLeft()
- SetSize()
- SetLocation()
- SetLogVerbosity()
- SetTemporaryFilesPath()
- SetTitle()
- SetTopMost()
- SetWidth()
- SetZoom()
- SetUseOsDefaultLocation()
- SetUseOsDefaultSize()
Windows
- The Edge browser control’s temporary files are created in AppData\Local by default and the location can now be set in code.
Linux
- n/a
Mac
- n/a
Bugs Fixed
- WindowClosing event now works properly
- Fixed several issues related to custom schemes (xxx://)
Windows
- Memory leak injecting JavaScript code
- Now supports multiple windows
- Fixed pixelated window and taskbar icons
Linux
- Fixed crash when creating multiple windows
Mac
- Fixed crash when calling SetIconfile()
- Fixed crash on closing
Breaking Changes
In order to solve some technical issues, we had to make a few breaking changes in V2. The biggest change has been how PhotinoWindow is initialized. The constructor now takes a single parameter; an instance of the parent window if one exists. Properties and/or fluent methods are used configure nearly every setting prior to initialization of the native window and browser control. The only property the developer is required to set is either StartUrl or StartString allowing the native browse control to immediately render the UI. This requirement is necessary to enable multiple window support in Windows as well as provide a better experience for developers. The native window and browser control are automatically initialized and displayed when WaitForClose() is called. Nearly all properties and fluent methods can also be called after initialization of the native window and browser control to change settings at runtime, providing a consistent developer experience.
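As a rough sketch of that pattern, the snippet below uses names taken from the property and method lists above; the PhotinoNET namespace, the URL, and the exact signatures are assumptions, so treat it as illustrative rather than definitive.

using System;
using PhotinoNET; // namespace name is an assumption

class Program
{
    [STAThread]
    static void Main()
    {
        var window = new PhotinoWindow()           // constructor takes only an optional parent window
            .SetTitle("Hello Photino")             // fluent configuration before initialization
            .SetUseOsDefaultLocation(true)
            .SetUseOsDefaultSize(true);

        window.StartupUrl = "https://example.com"; // either StartupUrl or StartupString must be set

        window.WaitForClose();                     // initializes and shows the native window, then blocks
    }
}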
Renamed Properties
- IsOnTop –> Topmost
Renamed Methods
- FullScreen() –> SetFullScreen() (or use FullScreen property)
- Hide() –> Minimize() (or use Minimized property)
- UserCanResize() –> SetResizable() (or use Resizeable property)
Deprecated Properties
- Children
- WasShown
Deprecated Methods
- AddChild()
- Dispose()
- RemoveChild()
- Show()
Other Changes
- PhotinoWindow no longer implements IDsposable
- PhotinoWindowOptions initialization parameter has been removed. Use XXXHandler properties and/or ResgieterXXHandler() fluent methods and/or Events directly instead. Use of RegisterXXHandler() methods is preferred.
Known Issues
- ShowState() doesn’t pop up in Alert window.
Windows
- n/a
Linux
- Access to local hardware from native browser control is not yet supported by the browser control (out of our control)
- Reading the minimized (iconified) state of a native window doesn’t work in GTK 3 (out of our control)
- Calling GetSize() and SetSize() in rapid succession does not work properly. E.g. :
.Height = .Height + 5;
.Width = .Width + 5;
Mac
- SendWebMessage() is not working (root cause of custom schemes issue as well?)
- .NET Application doesn’t exit when all native windows are closed
- Closing the main window does not close child windows
- Center property and method make window ‘prominent’ per macOS / Cocoa standards and don’t exactly center the window (not a bug)
- Access to local hardware from native browser control is not yet supported by the browser control (out of our control)
- Visual error messages from .Native app not implemented as they are in Windows and Linux
- UseOsDefaultLocation sets location to 0,0 as macOS has no default window location (not a bug)
- UseOsDefaultSize is 800x600 as macOS has no default window size (not a bug)
- ContextMenuEnabled cannot be toggled after initialization as it’s not supported in WKWebKit (out of our control)
- DevToolsEnabled cannot be toggled after initialization as it’s not supported in WKWebKit (out of our control)
- GetScreenDpi always returns 72 as it's not supported on macOS (out of our control)
Change the Size of the Oplog
New in version 3.6.
This procedure changes the size of the oplog on each member of a replica set using the replSetResizeOplog command, starting with the secondary members before proceeding to the primary.
Important
You can only run replSetResizeOplog on replica set members running with the WiredTiger storage engine.
Perform these steps on each secondary replica set member first. Once you have changed the oplog size for all secondary members, perform these steps on the primary.
A. Connect to the replica set member
Connect to the replica set member using the mongo shell:
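For example (the port is a placeholder for your member's port):

mongo --port 27017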
Note
If the replica set enforces authentication, you must authenticate as a user with privileges to modify the local database, such as the clusterManager or clusterAdmin role.
B. (Optional) Verify the current size of the oplog
To view the current size of the oplog, switch to the local database and run db.collection.stats() against the oplog.rs collection. stats() displays the oplog size as maxSize.
The maxSize field displays the collection size in bytes.
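Following the description above, the shell commands look roughly like this:

use local
db.oplog.rs.stats().maxSize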
C. Change the oplog size of the replica set member
To change the size, run the replSetResizeOplog command, passing the desired size in megabytes as the size parameter. The specified size must be greater than 990, or 990 megabytes.
The following operation changes the oplog size of the replica set member to 16 gigabytes, or 16000 megabytes.
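In the mongo shell connected to that member, this corresponds to:

db.adminCommand({ replSetResizeOplog: 1, size: 16000 })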
D. (Optional) Compact oplog.rs to reclaim disk space
Reducing the size of the oplog does not automatically reclaim the disk space allocated to the original oplog size. You must run compact against the oplog.rs collection in the local database to reclaim disk space. There are no benefits to running compact on the oplog.rs collection after increasing the oplog size.
Important
The replica set member cannot replicate oplog entries while the compact operation is ongoing. While compact runs, the member may fall so far behind the primary that it cannot resume replication. The likelihood of a member becoming "stale" during the compact procedure increases with cluster write throughput, and may be further exacerbated by the reduced oplog size.
Consider scheduling a maintenance window during which writes are throttled or stopped to mitigate the risk of the member becoming “stale” and requiring a full resync.
Do not run compact against the primary replica set member. Connect a mongo shell to the primary and run rs.stepDown(). If successful, the primary steps down and closes all open connections. Reconnect the mongo shell to the member and run the compact command on the member.
The following operation runs the compact command against the oplog.rs collection:
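In the mongo shell, that is:

use local
db.runCommand({ "compact": "oplog.rs" })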
For clusters enforcing authentication, authenticate as a user with the compact privilege action on the local database and the oplog.rs collection. For complete documentation on compact authentication requirements, see compact Required Privileges.
Project reference
You can simply refer to the github project:
GROBID (2008-2017)
(please do not include a particular person name to emphasize the project and tool!)
Presentations on Grobid
GROBID in 30 slides (2015).
GROBID in 20 slides (2012).
Papers on Grobid
GROBID: Combining Automatic Bibliographic Data Recognition and Term Extraction for Scholarship Publications. P. Lopez. Proceedings of the 13th European Conference on Digital Library (ECDL), Corfu, Greece, 2009.
Automatic Extraction and Resolution of Bibliographical References in Patent Documents. P. Lopez. First Information Retrieval Facility Conference (IRFC), Vienna, May 2010. LNCS 6107, pp. 120-135. Springer, Heidelberg (2010).
Automatic Metadata Extraction The High Energy Physics Use Case. Joseph Boyd. Master Thesis, EPFL, Switzerland, 2015.
Evaluation.
Phil Gooch and Kris Jack, How well does Mendeley’s Metadata Extraction Work?
Articles on CRF for bibliographical extraction
Accurate Information Extraction from Research Papers using Conditional Random Fields. Fuchun Peng and Andrew McCallum. Proceedings of Human Language Technology Conference and North American Chapter of the Association for Computational Linguistics (HLT-NAACL), 2004.
Isaac G. Councill, C. Lee Giles, Min-Yen Kan. (2008) ParsCit: An open-source CRF reference string parsing package. In Proceedings of the Language Resources and Evaluation Conference (LREC), Marrakesh, Morrocco.
Other similar Open Source tools
CiteSeerX page on Scholarly Information Extraction which list many tools and related information. | http://grobid.readthedocs.io/en/latest/References/ | 2018-03-17T14:20:54 | CC-MAIN-2018-13 | 1521257645177.12 | [] | grobid.readthedocs.io |
How to configure Foursquare API¶
Foursquare requires that you create an external application linking your website to their API. Application id and secret (also sometimes referred as Consumer key and secret or Client id and secret) are what we call an application credentials. This application will link your website example.com to Foursquare API and these credentials are needed in order for Foursquare users to access your website.
These credentials may also differ in format, name and content depending on the social network.
To enable authentication with this provider and to register a new Foursquare API Application, follow the steps:
Step 1¶
First Go to
Step 6¶
Copy and insert API into API fields in Magento Admin > Social Login > Settings > Foursquare
And that’s it!
If for some reason you still can’t manage to create an application for Foursquare, you can ask for support.
Expert’s recommendations
Tip
Must-have extensions for your Magento stores | https://docs.mageplaza.com/social-login-m2/how-to-configure-foursquare-api.html | 2018-03-17T14:37:49 | CC-MAIN-2018-13 | 1521257645177.12 | [array(['https://cdn.mageplaza.com/media/general/lwxnGfl.png',
'https://cdn.mageplaza.com/media/general/lwxnGfl.png'],
dtype=object) ] | docs.mageplaza.com |
Create a local user account
Updated: March 1, 2012
Applies To: Windows 7, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012
Administrators is the minimum group membership required to complete this procedure. Review the details in "Additional considerations" in this topic.
To create a local user account.
The use of strong passwords and appropriate password policies can help protect your computer from attack.
Additional references
Why you should not run your computer as an administrator
Assign a logon script to a local user account
Assign a home folder to a local user account | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc770642(v=ws.11) | 2018-03-17T15:22:57 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.microsoft.com |
How to configure Yahoo API¶
Yahoo requires that you create an external application linking your website to their API. Application id and secret (also sometimes referred as Consumer key and secret or Client id and secret) are what we call an application credentials. This application will link your website example.com to Yahoo API and these credentials are needed in order for Yahoo users to access your website.
These credentials may also differ in format, name and content depending on the social network.
To enable authentication with this provider and to register a new Yahoo API Application, follow the steps:
Step 1¶
First go to:.
Step 3¶
Fill out Application Name, Home Page URl, Callback Domain. In API Permissions (Profiles (Social Directory) choosen Read/Write Public and Private).
Step 5¶
Copy and insert Client Id and Client Secret into API fields in Magento Admin.
And that’s it!
If for some reason you still can’t manage to create an application for Facebook, you can ask for support.
Expert’s recommendations
Tip
Must-have extensions for your Magento stores | https://docs.mageplaza.com/social-login-m2/how-to-configure-yahoo-api.html | 2018-03-17T14:37:52 | CC-MAIN-2018-13 | 1521257645177.12 | [array(['https://cdn.mageplaza.com/media/general/EVAaJk8.png',
'https://cdn.mageplaza.com/media/general/EVAaJk8.png'],
dtype=object)
array(['https://cdn.mageplaza.com/media/general/9NqnIsT.png',
'https://cdn.mageplaza.com/media/general/9NqnIsT.png'],
dtype=object) ] | docs.mageplaza.com |
Discovery Configuration Console Use the Discovery Configuration Console to manage what kind of CIs and CI information you want to discover. By default, Discovery finds all the information on your network that is specified in probes and patterns. Use the controls in this console to prevent Discovery from finding data that your organization does not need.. Requirements and accessibility If you use Internet Explorer, you must use version 8 or later. The configuration console supports keyboard navigation and screen readers. Console overview The console is divided into these sections: Devices: network devices such a printers, storage devices such as storage switches, and Unix and Windows computers. Applications: automation applications such as Puppet, databases such as MSSQL, and web servers such as Tomcat. Software Filter: Unix and Windows applications that include or exclude keywords that you enter.. The probes that belong to this classifier, including the Horizontal Pattern probe that launches patterns, never launch. Software discovery Filtering out the discovery of software based on the keywords they contain affects all software packages on Windows and UNIX computers. A keyword can be any term that is present in a software package name. Keywords for filtering software are stored in the Software Filter Keys [discovery_spkg_keys] table. The following default keywords are provided for Windows default: Hotfix Language Pack Security Update Note: The system does not provide default filtering keywords for UNIX software. Affects switch to turn off the CIs you do not want to discover. The instance creates an update set record for any change you make to the console.. Switch off the software filter for either UNIX or Windows. When the filter is disabled for an operating system, all discovered software packages for that operating system are added to the CMDB, with no filtering applied. The instance creates an update set record for any change you make to the console. | https://docs.servicenow.com/bundle/kingston-it-operations-management/page/product/discovery/concept/c_DiscoveryConfigurationConsole.html | 2018-03-17T14:48:38 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.servicenow.com |
Background Information
A sample log entry with a message containing the
filter chain halted... message is:
Processing HomeController#server-status (for 127.0.0.1 at 2008-01-23 02:45:12) [GET] Session ID: e7e5b4552107447afd2ada002040ac65 Parameters: {"action"=>"server-status", "controller"=>"home", "auto"=>nil} Redirected to Filter chain halted as [#<ActionController::Filters::ClassMethods::SymbolFilter:0xb6e7ff84 @filter=:curtain_authorize>] returned false. Completed in 0.00076 (1309 reqs/sec) | DB: 0.00000 (0%) | 302 Found []
Answer
These requests are not from the load balancer, but from the monitoring tool that tries to get status information about Apache so you can view it in the Monitoring tab of the instance in your Dashboard. If these requests are getting all the way to Rails, it means that the Apache handler for server status is not correctly configured (i.e. these requests should be directly answered by Apache, rather than forwarded to Rails). Since RightScale ServerTemplates automatically configure this by default, an error would be caused by a user misconfiguration. In a nutshell, you have to make sure that
ExtendedStatus is
ON and that you have the
/server-status url enabled and handled by Apache's server-status handler. For example, you could put this snippet into
/etc/httpd/conf.d/status.conf:
ExtendedStatus On <location server- </location> <location server-SetHandler server-status </location> <location server-Order deny,allow </location> <location server-Deny from all </location> <location server-Allow from localhost </location>
and then restart apache:
service httpd graceful | http://docs.rightscale.com/faq/What_does_Filter_chain_halted_mean_in_my_logs.html | 2018-03-17T14:27:06 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.rightscale.com |
Convert a release team to a group (only for existing customers having release teams) Convert an existing release team to an assignment group of type Agile Team. Before you beginRole required: scrum_user, scrum_admin About this taskAgile development 2.0 does not use Release Team. Existing customers who have created release teams must convert the existing teams to assignment groups to assign them to a product or a release. Procedure Navigate to Agile Development > Groups. Click the Convert Release Teams to Groups related link. Select the team that you want to convert to an assignment group. Click Convert to Group. Result The release team is available as assignment group at Agile Development > Groups. The members of the release team are copied to the assignment group. | https://docs.servicenow.com/bundle/kingston-it-business-management/page/product/agile-development/task/convert-release-team-to-group.html | 2018-03-17T14:17:44 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.servicenow.com |
Tanium: Get Result Data from Response activity The Tanium: Get Result Data from Response workflow activity processes the response body from the result data and outputs an array of JSON objects representing the results from Tanium. The Tanium: Get Result Data from Response activity can be used with any workflow to retrieve result data to use in the workflow. Results Possible results for this activity are: Table 1. Results Result Description Success Retrieved result data. Failure No data retrieved. More error information is available in the activity output error. Input variables Input variables determine the initial behavior of the activity. Variable Description response_body Encrypted SOAP response contents implementation_id Implementation identifier. affected_ci Configuration item affected. Output variables The output variables contain data that can be used in subsequent activities. Table 2. Output variables Variable Description result_data Array Element type of API variables. Each array contains key-value pairs composed of the column and values returned from the server. If no data is received from the server, the output is an empty array. output Formatted return data on running processes used by the abstract workflow. | https://docs.servicenow.com/bundle/kingston-security-management/page/product/secops-integration-tanium/reference/tanium-get-result-data-activity.html | 2018-03-17T14:16:16 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.servicenow.com |
Troubleshooting¶
This page is going to be a collection of common issues Django MongoDB Engine users faced. Please help grow this collection – tell us about your troubles!
SITE_ID issues¶
AutoField (default primary key) values must be strings representing an ObjectId on MongoDB (got u'1' instead). Please make sure your SITE_ID contains a valid ObjectId string.
This means that your
SITE_ID setting (What’s SITE_ID?!) is incorrect –
it is set to “1” but the site object that has automatically been created has an
ObjectId primary key.
If you add
'django_mongodb_engine' to your list of
INSTALLED_APPS, you
can use the
tellsiteid command to get the default site’s ObjectId and update
your
SITE_ID setting accordingly:
$ ./manage.py tellsiteid The default site's ID is u'deafbeefdeadbeef00000000'. To use the sites framework, add this line to settings.py: SITE_ID=u'deafbeefdeadbeef00000000'
Creating/editing user in admin causes
DatabaseError¶
DatabaseError at /admin/auth/user/deafbeefdeadbeef00000000/ [...] This query is not supported by the database.
This happens because Django tries to execute JOINs in order to display a list of groups/permissions in the user edit form.
To workaround this problem, add
'djangotoolbox' to your
INSTALLED_APPS
which makes the Django admin skip the groups and permissions widgets.
No form field implemented for <class ‘djangotoolbox.fields.ListField’>¶
See | http://django-mongodb-engine.readthedocs.io/en/latest/troubleshooting.html | 2018-03-17T14:10:23 | CC-MAIN-2018-13 | 1521257645177.12 | [] | django-mongodb-engine.readthedocs.io |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Container for the parameters to the GetTelemetryMetadata operation. Information about the data that is collected for the specified assessment run.
Namespace: Amazon.Inspector.Model
Assembly: AWSSDK.Inspector.dll
Version: 3.x.y.z
The GetTelemetryMetadataRequest type exposes the following members
Information about the data that is collected for the specified assessment run.
var response = client.GetTelemetryMetadata(new GetTelemetryMetadataRequest { AssessmentRunArn = "arn:aws:inspector:us-west-2:123456789012:target/0-0kFIPusq/template/0-4r1V2mAw/run/0-MKkpXXPE" }); List
telemetryMetadata = response.TelemetryMetadata;
| https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Inspector/TGetTelemetryMetadataRequest.html | 2018-03-17T14:59:09 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.aws.amazon.com |
" ] )
Best Match Adapter¶
chatterbot.logic.
BestMatch(*tein_distance",
Low Confidence Response Adapter¶
This adapter returns a specified default response if a response can not be determined with a high amount of confidence.
chatterbot.logic.
LowConfidenceAdapter(**kwargs)[source]¶
Returns a default response with a high confidence when a high confidence response is not known.
Low confidence response example¶
# -*- coding: utf-8 -*- from chatterbot import ChatBot # Create a new instance of a ChatBot bot = ChatBot( 'Default Response Example Bot', storage_adapter='chatterbot.storage.SQLStorageAdapter', logic_adapters=[ { 'import_path': 'chatterbot.logic.BestMatch' }, { 'import_path': 'chatterbot.logic.LowConfidenceAdapter', 'threshold': 0.65, 'default_response': 'I am sorry, but I do not understand.' } ], trainer='chatterbot.trainers.ListTrainer' ) # Train the chat bot with a few responses bot.train([ 'How can I help you?', 'I want to create a chat bot', 'Have you read the documentation?', 'No, I have not', 'This should help get you started:' ]) # Get a response for some unexpected input response = bot.get_response('How do I make an omelette?') print(response)
Specific Response Adapter¶
If the input that the chat bot receives, matches the input text specified for this adapter, the specified response will be returned.
chatterbot.logic.
SpecificResponseAdapter(**kwargs)[source]¶
Return a specific response to a specific input.
Specific response example¶
# -*- coding: utf-8 -*-:' } ], trainer='chatterbot.trainers.ListTrainer' ) # Get a response given the specific input response = bot.get_response('Help me!') print(response) | https://chatterbot.readthedocs.io/en/stable/logic/index.html | 2018-02-18T05:01:00 | CC-MAIN-2018-09 | 1518891811655.65 | [] | chatterbot.readthedocs.io |
You are viewing documentation for version 3 of the AWS SDK for Ruby. Version 2 documentation can be found here.
Class: Aws::Pinpoint::Types::GetImportJobRequest
- Defined in:
- gems/aws-sdk-pinpoint/lib/aws-sdk-pinpoint/types.rb
Overview
Note:
When making an API call, you may pass GetImportJobRequest data as a hash:
{ application_id: "__string", # required job_id: "__string", # required } | https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/Pinpoint/Types/GetImportJobRequest.html | 2018-02-18T05:24:56 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.aws.amazon.com |
Bulk uploading S3 backups using the AWS CLI
Use the AWS CLI instead of the AWS SDK when bulk loading backups to Amazon S3 locations. Using the S3 CLI is a Labs feature that must be enabled.
Use the AWS CLI instead of the AWS SDK when bulk loading backups to Amazon S3 locations. Using the AWS CLI rather than the AWS SDK can result in a performance increase, with a noticeable decrease in the time it takes to complete a backup. This is an OpsCenter Labs feature (that is, under ongoing development but available for use). The feature is available in OpsCenter versions 6.1.3 and later.
For more information, see AWS CLI in the Amazon documentation.
Prerequisites
- Install the AWS CLI package on every node. DataStax recommends using the Amazon bundled installer method and upgrading to the latest version of AWS CLI if it is already installed. See Install the AWS CLI using the bundled installer in the Amazon documentation for installation procedures.Tip: As a recommended best practice for OpsCenter, install the AWS CLI bundle using APT as follows:
sudo apt-get install -y unzip curl '' -o awscli-bundle.zip unzip awscli-bundle.zip sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/awsImportant: Regardless of the install procedure used, make sure that the AWS CLI package is installed in the PATH of the
cassandrauser, or whichever user the DataStax agent runs as.
- Add an S3 location for backups. option:
[labs] use_s3_cli = True
- Save the configuration file or files.
- Restart the OpsCenter daemon.
- Optional: If you made changes to address.yaml, restart the DataStax agents. | https://docs.datastax.com/en/opscenter/6.1/opsc/online_help/services/s3cli.html | 2018-02-18T04:59:34 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.datastax.com |
Reading from Greenplum Database into Spark
Reading a Greenplum Database table into Spark loads all of the table rows into a Spark DataFrame. You can read a Greenplum Database table that you created with the
CREATE TABLE SQL command using the Spark Scala API or within the
spark-shell interactive shell..
Greenplum-Spark Connector Data Source
A Spark data source provides an access point to structured data. Spark provides several pre-defined data sources to support specific file types and databases. You specify a Spark data source using its fully qualified name.
The Greenplum-Spark Connector exposes a Spark data source named
io.pivotal.greenplum.spark.GreenplumRelationProvider to access and read data from Greenplum Database into a Spark
DataFrame.
Use the
.format(datasource: String) Scala method to identify the data source. You must provide the fully qualified Greenplum-Spark Connector data source name to the
.format() method. For example:
spark.read.format("io.pivotal.greenplum.spark.GreenplumRelationProvider")
Connector Read Options
You provide the Greenplum Database connection and read options required by the
GreenplumRelationProvider data source via generic key-value
String pairs.
GreenplumRelationProvider supports the read options identified in the table below. Each option is required unless otherwise specified.
You can specify
GreenplumRelationProvider options individually or in an options
Map. The option-related
DataFrameReader class methods of interest for the Greenplum-Spark Connector are:
.option(key: String, value: String)
for specifying an individual option, and
.options(options: Map[String, String])
for specifying an options map.
To specify an option individually, provide <option_key> and <value> strings to the
DataFrameReader.option() method. For example, to provide the
user option:
.option("user", "gpdb_role_name")
To construct a
scala.collection.Map comprising more than one option, you provide the <option_key> and <value> strings for each option. For example:
val gscOptionMap = Map( "url" -> "jdbc:postgresql://gpdb-master:5432/testdb", "user" -> "gpadmin", "password" -> "changeme", "dbtable" -> "table1", "partitionColumn" -> "id" )
To provide an options map to the data source, specify it in the
DataFrameReader.options() method. For example, to provide the
gscOptionMap map created above to the data source:
.options(gscOptionMap)
Specifying Partition OptionsPerSegment.
partitionColumn
The
partitionColumn option that you specify must have the
integer,
bigint,
serial, or
bigserial Greenplum Database data type. The
partitionColumn you identify need not be the column specified with the
DISTRIBUTED BY (<column>) clause when you created the Greenplum Database table.
partitionsPerSegment
By default, the Greenplum-Spark Connector creates one Spark partition per Greenplum Database segment. You can set the
partitionsPerSegment option to specify a larger number of Spark partitions.
Spark partitions have a 2 GB size limit. If you are using the Connector to move more than 2 GB of data per Greenplum Database segment, you must increase the
partitionsPerSegment option value appropriately.
Reading Greenplum Data
When you read a Greenplum Database table into Spark, you identify the Greenplum-Spark Connector data source, provide the read options, and invoke the
DataFrameReader.load() method. For example:
val gpdf = spark.read.format("io.pivotal.greenplum.spark.GreenplumRelationProvider") .options(gscOptionMap) .load() you can perform on the returned
DataFrame include:
- Viewing the contents of the table with
.show()
- Counting the number of rows with
.count()
- Filtering the data using
.filter()
- Grouping the data using
. | https://greenplum-spark.docs.pivotal.io/110/read_from_gpdb.html | 2018-02-18T05:10:47 | CC-MAIN-2018-09 | 1518891811655.65 | [] | greenplum-spark.docs.pivotal.io |
Acqu get started with building, delivering, and managing your websites.
Key Acquia Cloud Site Factory features:
- Multisite PaaS for flexible experience delivery (Platform as a service) - You control your own site platform codebase, using your configured Drupal distribution, with contributed and custom modules of your choice to build, provision, and operate groups of websites. You can also use multiple codebases with Factory Stacks and multiple installation profiles.
- Hosting and monitoring - Acquia Cloud Site Factory provides a high-availability hosting and monitoring infrastructure across multiple web servers, with Varnish caching and replicated database servers. An operations team is on call 24/7 to test your website and keep it up-to-date with all the relevant updates and security patches.
All of this is based on the same Acquia Cloud Enterprise infrastructure that powers and protects some of the largest and most active Drupal websites in the world. With Acquia Cloud Site Factory, you can stop worrying about website speed, website traffic, disk space, uptime, and backups.
- Multisite governance - Define, group, and manage content and website functionality, policies, and standards based on digital experiences, locations, and organization structure. Websites are built, delivered, and managed from the same and different Drupal distributions.
- Single sign-on, centralized multisite management - Instead of signing in to multiple websites, Acquia Cloud Site Factory users can sign in once to Acquia Cloud Site Factory to sign in to all of them. This helps users reduce the number of accounts and passwords they have to keep track of, and it enables Acquia Cloud Site Factory website administrators to view and manage all of their websites from a single interface.
- Unified management console - Centralized website delivery and governance across all Sites and Factory Stacks to provide visibility, trust, and control across all websites. Includes role-based user access and site permissions.
- Multiple Factory Stacks - Develop and share multiple Drupal distributions with dedicated infrastructure to independently manage groups of digital experience websites. Each Factory Stack is managed from the same central management console.
- Acquia Support - Acquia Cloud Site Factory provides you with access to Acquia support, providing you with professional Drupal support and guidance, even for custom CSS. Accounts are also entitled to support tickets, where they can have one-on-one private support conversations.
Getting started with Acquia Cloud Site Factory
Acquia recommends these Acquia Academy videos (sign-in required) for new or prospective users to learn about Acquia Cloud Site Factory concepts, vision, and governance: | https://docs.acquia.com/node/12571 | 2018-02-18T05:14:46 | CC-MAIN-2018-09 | 1518891811655.65 | [array(['/sites/default/files/doc/2014/feb/logo-site-factory.png',
'Acquia Cloud Site Factory logo'], dtype=object) ] | docs.acquia.com |
This page is tagged because it NEEDS REVIEW. You can help the Joomla! Documentation Wiki by contributing to it.
More pages that need help similar to this one are here. NOTE-If you feel the need is satistified, please remove this notice.
Select Components → Banner → Banners from the drop-down menu on the back-end of your Joomla! installation. Or select the 'Banners' link from the Banner Client Manager or the Banner Categories Manager.. | http://docs.joomla.org/index.php?title=Help15:Screen.banners.15&oldid=5400 | 2014-07-10T14:13:58 | CC-MAIN-2014-23 | 1404776417380.9 | [] | docs.joomla.org |
"> <filename>index.html</filename> <filename>helloworld.php</filename> </files> </administration> </extension>! | http://docs.joomla.org/index.php?title=User:Rvsjoen/tutorial/Developing_an_MVC_Component/Part_01&oldid=64520 | 2014-07-10T14:09:59 | CC-MAIN-2014-23 | 1404776417380.9 | [] | docs.joomla.org |
To create a new Single Article Menu Item:
To edit an existing Single Article Menu Item, click its Title in Menu Manager: Menu Items.
Used to show one article in the front end of the site. An example Single Article Layout has the following required settings:
The Archived Articles Layout has the following Article Options, as shown below. These options determine how the articles will show in the layout. | http://docs.joomla.org/index.php?title=Help25:Menus_Menu_Item_Article_Single_Article&oldid=66381 | 2014-07-10T13:17:07 | CC-MAIN-2014-23 | 1404776417380.9 | [] | docs.joomla.org |
Subpackage Installer/1.6 - Revision history 2014-07-10T13:13:56Z Revision history for this page on the wiki MediaWiki 1.21.5 There is no edit history for this page. 2014-07-10T13:13:56Z <p>The requested page does not exist. It may have been deleted from the wiki, or renamed. Try <a href="/Special:Search" title="Special:Search">searching on the wiki</a> for relevant new pages. </p> | http://docs.joomla.org/index.php?title=Subpackage_Installer/1.6&feed=atom&action=history | 2014-07-10T13:13:56 | CC-MAIN-2014-23 | 1404776417380.9 | [] | docs.joomla.org |
Source code for spack.analyzers
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) """This package contains code for creating analyzers to extract Application Binary Interface (ABI) information, along with simple analyses that just load existing metadata. """ from __future__ import absolute_import import llnl.util.tty as tty import spack.paths import spack.util.classes mod_path = spack.paths.analyzers_path analyzers = spack.util.classes.list_classes("spack.analyzers", mod_path) # The base analyzer does not have a name, and cannot do dict comprehension analyzer_types = {} for a in analyzers: if not hasattr(a, "name"): continue analyzer_types[a.name] = a[docs]def list_all(): """A helper function to list all analyzers and their descriptions """ for name, analyzer in analyzer_types.items(): print("%-25s: %-35s" % (name, analyzer.description)) | https://spack.readthedocs.io/en/v0.17.0/_modules/spack/analyzers.html | 2022-06-25T13:21:06 | CC-MAIN-2022-27 | 1656103035636.10 | [] | spack.readthedocs.io |
Boards sample app (preview)
[This article is pre-release documentation and is subject to change.]
In this tutorial, you'll learn about configuring and using the Boards sample app.
Overview
The Boards template app for Microsoft Teams provides a simple way to connect and share with people in your organization with similar interests.
Benefits of using the Boards app:
- Pin relevant information in one place.
- Organize pins by topic.
- Discover items pinned by colleagues.
Note
- Before you can use this app, you may be asked for your permissions to use the connection. More information: Allow connections in sample apps
- This app is available in three different Teams themes: Default, Dark and High contrast. When you change the theme in Teams, the app automatically updates to match the selected theme. More information: Get the Teams theme using the Teams integration object
Important
- This is a preview feature.
- Preview features aren’t meant for production use and may have restricted functionality. These features are available before an official release so that customers can get early access and provide feedback.
Prerequisites
Before using this app:
- Install the app by side-loading the manifest for the app into Teams. You can get the manifest from. For more information and help with installing this app, read the documentation available with the app manifest.
- Set up the app for the first use.
Using Boards
In this section, you'll learn about the following capabilities in the Boards app:
Open the Boards app
To open the Boards app:
Select the Team in which the app was installed.
Select the channel where you installed the Boards app.
Select the Boards tab.
Select Allow if the app asks for your permissions to use the connectors.
You can learn more about extending this app's capabilities on the splash screen. Select Got it to close the screen, and go to the app. To hide this message while opening this app again, select Don't show this again before you select Got it.
Understanding the Boards app interface
The Boards app displays boards grouped by category. A Board is a collection of pinned items for a topic.
Add a Board
In Teams, go to the team in which Boards is installed
Select the Boards tab
In right corner, select Add a board.
Enter the title, category, and a description of the board.
Select Save.
New board is created.
Open a board
From the Boards app, you can search for board topics that interest you and open them to view the pinned items. For example, if you're interested in hiking, you could search for hiking-related boards and see what your colleagues have pinned regarding hiking.
In the Boards app in Teams, select the Find a Board search field.
Type the name of the board or category you want to find.
Boards in your organization that match the search words will be displayed. Select the board you want.
The selected board will be displayed. From this screen you can see information about the websites, Teams, channels, and conversation chats related to the board.
Search results are organized by category, or you can select All to view all returned results.
Pin an item to a board
If you want to share an item with your colleagues, you can easily pin it to the appropriate board for the item category.
For example, you might want to share a link to a book written by an author with the Book Club.
Open the board you want. In this example, we open the sample board Book Club.
Select Add to board.
Select the appropriate item type.
Add a link to the website.
Enter title and description
You'll see a preview of what the card will look like.
Select Save.
Sort boards
You can sort the order in which board categories are displayed on the main Boards screen.
Select the Sort button.
From the pop-up, select arrows up or down to make categories sorted and displayed in that order.
To hide a category from the boards screen, select the visibility button.
Select Apply.
Boards matching the selected sort order will be displayed.
Edit a board
You can Edit the boards under your organization under the Boards app.
To edit the boards:
In Teams, go to the team in which Boards is installed.
Select the Boards tab.
Select a Board.
Select Edit.
Change the Title, Category, and Description.
Add relevant Image.
Select Save.
Add categories
You can add the categories under your organization under the Boards app.
To add categories:
In Teams, go to the team in which Boards is installed.
Select the Boards tab.
Select Settings gear in upper right corner.
Select Add Category.
Change the title, category, and description.
Add an image.
Select Save.
Edit the Boards app in Power Apps
In Teams, add the Power Apps app from the Teams store by selecting the ellipses in the app menu, searching for Power Apps, and then selecting install.
Right-click on Power Apps icon and select Pop out app to open the app in a new window. This pop-out action will ensure that you don't lose your changes if you browse somewhere else in Teams.
Select Build tab.
Select the team in which the Boards app is installed, then select Installed apps.
In the Boards tile, select the Boards app to open it in Power Apps in Team.
You may get prompted to authorize the app's connectors to connect. Select Allow.
You can now customize the app.
See also
Feedback
Submit and view feedback for | https://docs.microsoft.com/en-us/power-apps/teams/boards | 2022-06-25T15:13:35 | CC-MAIN-2022-27 | 1656103035636.10 | [array(['media/boards/main-boards-screen.png',
'Main Boards screen The main screen of the Boards app.'],
dtype=object)
array(['media/boards/power-apps-studio.png',
'Power Apps studio Boards Power App in Power Apps studio in Teams.'],
dtype=object) ] | docs.microsoft.com |
DDL Triggers.
In This Section
Understanding DDL Triggers
Explains DDL triggers, how they work, when to use them, and how they differ from regular triggers.
Designing DDL Triggers
Explains DDL trigger scope and which Transact-SQL DDL statements can raise a DDL trigger.
Implementing DDL Triggers
Explains the steps that are required to create a DDL trigger.
Getting Information About DDL Triggers
Contains links to catalog views that can be used to get information about DDL triggers. | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms190989(v=sql.105)?redirectedfrom=MSDN | 2022-06-25T15:26:48 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.microsoft.com |
Octavia Base Image¶
Launchpad blueprint:
Octavia is an operator-grade reference implementation for Load Balancing as a Service (LBaaS) for OpenStack. The component of Octavia that does the load balancing is known as amphora. Amphora may be a virtual machine, may be a container, or may run on bare metal. Creating images for bare metal amphora installs is outside the scope of this 0.5 specification but may be added in a future release.
Amphora will need a base image that can be deployed by Octavia to provide load balancing.
Problem description¶
Octavia needs a method for generating base images to be deployed as load balancing entities.
Proposed change¶
Leverage the OpenStack diskimage-builder project [1] tools to provide a script that builds qcow2 images or a tar file suitable for use in creating containers. This script will be modeled after the OpenStack Sahara [2] project’s diskimage-create.sh script.
This script and associated elements will build Amphora images. Initial support with be with an Ubuntu OS and HAProxy. The script will be able to use Fedora or CentOS as a base OS but these will not initially be tested or supported. As the project progresses and/or the diskimage-builder project adds support for additional base OS options they may become available for Amphora images. This does not mean that they are necessarily supported or tested.
The script will use environment variables to customize the build beyond the Octavia project defaults, such as adding elements.
The initial supported and tested image will be created using the diskimage-create.sh defaults (no command line parameters or environment variables set). As the project progresses we may add additional supported configurations.
Command syntax:
Container support
The Docker command line required to import a tar file created with this script is [3]:
$ docker import - image:amphora-x64-haproxy < amphora-x64-haproxy.tar
Alternatives¶
Deployers can manually create an image or container, but they would need to make sure the required components are included.
Data model impact¶
None
REST API impact¶
None
Security impact¶
None
Notifications impact¶
None
Other end user impact¶
None
Performance Impact¶
None
Other deployer impact¶
This script will make creating an Octavia Amphora image or container simple.
Developer impact¶
None
Implementation¶
Assignee(s)¶
Michael Johnson <johnsom>
Work Items¶
Write diskimage-create.sh script based on Sahara project’s script.
Identify the list of packages required for Octavia Amphora.
Create required elements not provided by the diskimage-builder project.
Create unit tests
Dependencies¶
This script will depend on the OpenStack diskimage-builder project.
Testing¶
Initial testing will be completed using the default settings for the diskimage-create.sh tool.
- Unit tests with tox
Validate that the image is the correct size and mounts via loopback
Check that a valid kernel is installed
Check that HAProxy and all required packages are installed
tempest tests | https://docs.openstack.org/octavia/latest/contributor/specs/version0.5/base-image.html | 2022-06-25T14:44:54 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.openstack.org |
Daily Log Rotation
Log file rotation keeps log files manageable and organized, as old log files are automatically deleted when they pass a given threshold, instead of staying on the system and eventually running the system out of disk space. Log file rotation still allows you to access previous logs, but its unlikely that you would need a log file older than a couple weeks.
Enabling daily rotation
By default a size rotation of 2mb is used for logs in Payara Server Enterprise, meaning no log files will be deleted until the size limit is reached and a new log is made at midnight.
Payara Server Enterprise also has a number of rotation conditions which can be changed in the admin console.
Time - Daily, weekly, monthly or even hourly log rotation.
Size - Logs are rotated when they exceed a certain limit.
Number - Maximum number of enteries in a log file.
Which allows you to change how the logs are rotated to your needs and can be combined with daily log rotation. Enabling daily log rotation and setting a limit on the number of logs to keep will keep a certain number of days of logs before the oldest log file gets deleted at midnight. | https://docs.payara.fish/enterprise/docs/5.38.0/documentation/payara-server/logging/daily-log-rotation.html | 2022-06-25T13:37:18 | CC-MAIN-2022-27 | 1656103035636.10 | [array(['../../../_images/logging/daily-log-rotation.png',
'Daily rotation config'], dtype=object)
array(['../../../_images/logging/log_rotation_settings.png',
'Log rotation settings'], dtype=object) ] | docs.payara.fish |
The Vivado IP catalog includes IP available for purchase from select Alliance Partners. These IP are signified by the blue color disk.
When you select an Alliance Partner IP, a dialog box opens that provides you with a link to where you can purchase the IP. The option to Customize IP is greyed-out until you purchase and install the IP. | https://docs.xilinx.com/r/2021.2-English/ug896-vivado-ip/Partner-Alliance-IP | 2022-06-25T13:44:52 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.xilinx.com |
Optimize.
Each vCPU is a thread of a CPU core, except for T2 instances and instances powered by Amazon Graviton2 processors.
In most cases, there is an Amazon EC2 instance type that has a combination of memory and number of vCPUs to suit your workloads. However, you can specify the following CPU options to optimize your instance for specific workloads or business needs:
Number of CPU cores: You can customize the number of CPU cores for the instance. You might do this to potentially optimize the licensing costs of your software with an instance that has sufficient amounts of RAM for memory-intensive workloads but fewer CPU cores.
Threads per core: You can disable multithreading by specifying a single thread per CPU core. You might do this for certain workloads, such as high performance computing (HPC) workloads.
You can specify these CPU options during instance launch. There is no additional or reduced charge for specifying CPU options. You're charged the same as instances that are launched with default CPU options.
Contents | https://docs.amazonaws.cn/en_us/AWSEC2/latest/UserGuide/instance-optimize-cpu.html | 2022-06-25T14:23:49 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.amazonaws.cn |
Install with MSI Installer
Overview
The easiest way to install Aspose.BarCode for Reporting Services is using MSI installer. The installer proceeds the following steps:
- Installing library binary files and utilities;
- Coping proper versions of the library to Report Servers and Visual Studio Report Extensions;
- Add changes to configuration files of Report Servers and Visual Studio Report Extensions.
After this you need to do two additional steps:
- Install license by ConfigLicense utility;
- Add Aspose.BarCode for Reporting Services visual component to Visual Studio Toolbox.
Installation with MSI installer
- Run installer and wait of library binary files installing and running ConfigTool utility.
- Choose “Configure Aspose Barcode for Reporting Services”.
- On next screen you can select SQL Server Reporting Services version where Aspose.BarCode for Reporting Services library will be installed.
- And on this screen, you can select Visual Studio versions with SQL Server Data Tools installed. The same version of Visual Studio can contain different versions of SSRS engine. As an example, Visual Studio 2017 can contain SSRS 14.x and 15.x version. Utility analyzes SSRS libraries and, in main case, selects right version of Aspose.BarCode for Reporting Services library.
Package uninstallation
At any time, you can uninstall the package the same way as any other program. The binary files of the package are removed as any other with exception of applied library files and configurations settings which should are removed from Report Servers and Visual Studio Report Extensions by ConfigTool utility which is run as final step of deinstallation. To remove installed configuration and binary files you need to do the following steps:
- Select “Remove Aspose Barcode for Reporting Services configuration”.
- On next screen you can select SQL Server Reporting Services versions where Aspose.BarCode for Reporting Services library will be removed.
- And on this screen, you can select Visual Studio versions with SQL Server Data Tools installed, where Aspose.BarCode for Reporting Services library will be removed.
| https://docs.aspose.com/barcode/reportingservices/install-with-msi-installer/ | 2022-06-25T14:36:33 | CC-MAIN-2022-27 | 1656103035636.10 | [array(['installer_config_01.png', 'ConfigTool utility'], dtype=object)
array(['installer_config_02.png',
'ConfigTool utility Configure operation'], dtype=object)
array(['installer_config_03.png',
'ConfigTool utility Report Servers selection'], dtype=object)
array(['installer_config_04.png',
'ConfigTool utility Visual Studio selection'], dtype=object)
array(['uninstaller_config_01.png', 'ConfigTool utility Remove operation'],
dtype=object)
array(['uninstaller_config_02.png',
'ConfigTool utility Report Servers remove'], dtype=object)
array(['uninstaller_config_03.png',
'ConfigTool utility Visual Studio remove'], dtype=object)] | docs.aspose.com |
memoQ TM Search tool
When you write original documents, you may want to make it easy to translate. One of the ways to this is to use phrases, sentences that are already in the translation memory.
You may also want to interpret documents in another language, and use your translation memories as an aid to do this.
To find words and phrases in translation memories outside memoQ, use the memoQ TM Search tool.
memoQ must be installed and licensed: The memoQ TM Search tool is installed with memoQ. It works only if you have a valid memoQ license on your computer.
Cannot use the same translation memories at the same time in memoQ and in the TM Search tool: If memoQ is running while you use the TM Search tool, you cannot use the translation memories that are used in the project that is open in memoQ. If you need to use the same translation memories, close memoQ.
Local translation memories only: The memoQ TM Search tool can use translation memories from your computer. It cannot access translation memories on a memoQ server.
How to get here
No matter where you are in Windows, press Ctrl+Shift+Q.
The memoQ TM Search window opens.
To change this keyboard shortcut:
- Open the Windows Start menu, and type "tm search".
- Right-click the memoQ <version> TM Search menu item, and choose Open file location. A Windows Explorer window opens.
- Right-click the memoQ <version> TM Search icon, and choose Properties.
In the Properties window, click the Shortcut key box, and press the shortcut combination you want to use.
To turn off the keyboard shortcut: press Delete on your keyboard.
You may need to switch accounts: You need to use an administrator account in Windows to change the shortcut.
What can you do?
When you start the memoQ TM Search tool for the first time, the TM settings tabs appear.
Check the check box for every translation memory that memoQ TM Search should use.
If there are too many translation memories, you can filter them by language. From the Source language and the Target language drop-down boxes, choose the languages you need.
After you check the check boxes for the translation memories, click OK.
Before you press Ctrl+Shift+Q, copy text to the clipboard. memoQ TM Search searches for the text that it finds on the clipboard.
Or, you can type or paste an expression in the Search phrase box. Then click Search or press Enter.
memoQ TM Search will list all the hits in a table. Source-language phrases are on the left, target-language phrases are on the right.
Next to each phrase, there is a Copy
icon. To copy the phrase to the clipboard, click the Copy icon next to it. You can paste the phrase anywhere.
The hits are sorted by match rate, with the highest match rate at the top.
If memoQ TM Search is open, it will automatically look up a new phrase when you copy it to the clipboard.
When you press Ctrl+C anywhere in Windows, memoQ TM Search will wake up automatically and search for the text you copied.
To turn this off: Clear the Activate the TM search tool whenever I press Ctrl+C check box.
At the bottom of the memoQ TM Search window, click Search settings. The settings tabs appear.
Use the TMs to search tab to change the translation memories that you search.
Check the check box for every translation memory that memoQ TM Search should use.
If there are too many translation memories, you can filter them by language. From the Source language and the Target language dropdowns, choose the languages you need.
To clear all the check boxes: Below the list, click Select all, then click Select all again. (Do not double-click; click twice, slowly.)
To set the minimum match rate and apply penalties, click the Search settings tab.
You can use the following settings:
- Minimum match rate to display hits (%): memoQ TM Search will display those matches where the match rate is the same or higher than the percentage entered here. Normally, it's zero - all matches appear. To use a different match rate, type it here.
- Alignment penalty: memoQ TM Search will return a lower match rate for translation units that come from alignment.
- Inline tag strictness for exact matches: Choose how strict memoQ TM Search should be about exact matches. You can choose Strict, Medium or Permissive. Strict means that tags, numbers, punctuation marks must be exactly the same in the search phrase and in the translation memory to get an exact match. With the other settings, some parts - not the text - of the phrase can be different, and an exact match will still appear. However, if you usually search for just phrases, exact matches will be very rare anyway.
You can apply penalties to individual translation memories. Under TM penalties, memoQ TM Search lists all the translation memories you use for searching. You can type a number next to the name of each translation memory. When there is a number, and there is a match from that translation memory, memoQ TM Search will take away this number from the original match rate, and sort the list as if the match rate were lower.
To return to searching: Click OK.
When you finish
If you do not need memoQ TM Search any longer, close it. You can open it again by pressing Ctrl+Shift+Q. | https://docs.memoq.com/current/en/Places/memoq-tm-search-tool.html?Highlight=memoQ%20TM%20search%20tool | 2022-06-25T14:30:43 | CC-MAIN-2022-27 | 1656103035636.10 | [array(['../Images/m-q/memoq-tm-search.png', 'memoq-tm-search'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Images/m-q/memoq-tm-search-tms2search.png',
'memoq-tm-search-tms2search'], dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Images/m-q/memoq-tm-search.png', 'memoq-tm-search'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Images/m-q/memoq-tm-search-tms2search.png',
'memoq-tm-search-tms2search'], dtype=object)
array(['../Images/m-q/memoq-tm-search-search-settings.png',
'memoq-tm-search-search-settings'], dtype=object) ] | docs.memoq.com |
Introduction
This section of the Sitecore Content Hub™ documentation set provides the information you need to manage data models and web pages. This includes extending the domain model, managing the sitemap, extending Content Hub with editorial pages (for example, adding pages for the brand guidelines), and building pages that allow users to display content stored in the database (such as the most recent assets for a specific brand).
You access the features and functions used to manage Content Hub from the Manage menu. To access this menu:
Open the Content Hub application.
On the menu bar, click Manage .
On the Manage page, click the required tile (for example, Schema, from which you can manage your data models).
Important
Only superusers can access the Manage menu.
Can we improve this article ? Provide feedback | https://docs.stylelabs.com/contenthub/4.2.x/content/user-documentation/manage/introduction.html | 2022-06-25T14:42:56 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.stylelabs.com |
If you have a problem with the DMT132, first read the following sections concerning the behavior and error indications of the transmitter:
Typical causes of errors include the following:
- Incorrect supply voltage
- Incorrect wiring
- Physical damage to the transmitter, especially the sensor elements
- Contamination or condensation on the sensors
Some problems can be solved by simply resetting the transmitter. You can reset the transmitter by disconnecting the power or issuing the RESET command using the service port.
If resetting does not help, and if the problem is related to user calibration or transmitter software, you can restore the factory configuration of the transmitter by issuing the FRESTORE command.
If you cannot solve the problem and return the transmitter to the normal state, please contact Vaisala technical support.
If repair is needed, you can have the transmitter serviced by a Vaisala Service Center. | https://docs.vaisala.com/r/M211289EN-B/en-US/GUID-0A33E1DC-E031-4F73-96FF-60E38B2BF63B | 2022-06-25T14:20:44 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.vaisala.com |
Implement 1904 Date System
Microsoft Excel supports two date systems: 1900 date system (the default date system implemented in Excel for Windows) and 1904 date system. The 1904 date system is normally used to provide compatibility with Macintosh Excel files and is the default system if you are using Excel for Macintosh. You can set the 1904 date system for Excel files using Aspose.Cells.
To implement 1904 date system in Microsoft Excel (for example Microsoft Excel 2003):
From the Tools menu, select Options, and select the Calculation tab.
Select the 1904 date system option.
Click OK.
Selecting 1904 date system in Microsoft Excel
See the following sample code on how to achieve this using Aspose.Cells APIs. | https://docs.aspose.com/cells/java/implement-1904-date-system/ | 2022-06-25T14:26:51 | CC-MAIN-2022-27 | 1656103035636.10 | [array(['implement-1904-date-system_1.png', 'todo:image_alt_text'],
dtype=object) ] | docs.aspose.com |
Inserting Documents Dynamically
You can insert contents of outer documents to your reports dynamically using
doc tags. A
doc tag denotes a placeholder within a template for a document to be inserted during runtime.
Syntax of a
doc tag is defined as follows.
<<doc [document_expression]>>
Note – A
doc tag can be used almost anywhere in a template document except textboxes and charts.
An expression declared within a
doc tag is used by the engine.
Note – If an expression declared within a
doc tag returns a stream object, then the stream is closed by the engine>> (see “Using Contextual Object Member Access” for more information)
- Known external types (see “Setting up Known External Types” for more information) | https://docs.aspose.com/words/net/inserting-documents-dynamically/ | 2022-06-25T13:37:24 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.aspose.com |
Center of Excellence (CoE) command line interface (CLI) overview
The Center of Excellence (CoE) command line interface provides automation for common tasks of the CoE starter toolkit. It's been designed to provide a set of commands that meet the needs of different personas across the organization.
It can be used to automate the process of installing CoE CLI components covering Azure, Azure DevOps, and Power Platform.
Unified administration
The CoE CLI provides a set of commands that can be used to automate the end-to-end deployment for CoE solutions.
Comparing and contrasting the CoE CLI to other CLI or APIs:
The CoE CLI aims to automate the end-to-end deployment of components across the Microsoft cloud.
The Azure CLI is aimed at automating Azure resources, and via extensions, Azure DevOps. The CoE CLI uses the Azure CLI for authentication and managing Azure related resources.
The Power Platform CLI is a simple, one-stop developer CLI that empowers developers and independent software vendors (ISVs) to perform various operations in Microsoft Power Platform related to environment lifecycle features, and to authenticate and work with Microsoft Dataverse environments, solution packages, portals, and code components. As new features are added to the cross platform Power Platform CLI, the CoE CLI will use the Power Platform CLI features.
The Azure DevOps services REST API provides a REST based set of commands to interact with Azure DevOps. The CoE CLI makes use of these APIs to build aggregate commands.
What next
As you consider an enterprise deployment the following sections outline the key concepts, you will need to understand:
Install CoE CLI - How to install the CoE CLI using local host computer or via a Docker container.
Set up ALM Accelerator for Power Platform (AA4PP) components - Use CLI commands to set up and configure an environment for ALM Accelerator to enable them to achieve more within your organization.
Getting started
Once the CoE CLI has been installed, you can use
-h argument to see help options.
coe -h
Authentication for tasks is managed using the Azure CLI. Using standard Azure CLI commands you can log in, log out, and select accounts. For example:
az login coe alm install -c aad az logout
Getting help
You can get short descriptions of any command by adding
--help to the command line. To get more detailed help, you can use the help command. For example, to get help on the ALM Accelerator for Power Platform use the following command.
coe help alm install
Read more in the help articles for detailed descriptions for each command.
Learn more
- CoE CLI upgrade - How to upgrade to a new version of the CoE CLI install.
How does it work
Interested in learning how the CoE CLI works or want to extend the functionality? The CLI development overview provides the best place to start.
Feedback
Submit and view feedback for | https://docs.microsoft.com/en-us/power-platform/guidance/coe/cli/overview | 2022-06-25T13:46:12 | CC-MAIN-2022-27 | 1656103035636.10 | [array(['media/cli-unified-process.png', 'CLI Unified Process'],
dtype=object) ] | docs.microsoft.com |
Compiler Error CS0117
'type' does not contain a definition for 'identifier'
- This error occurs in some situations when a reference is made to a member that does not exist for the data type.
Example
The following sample generates CS0117:
// CS0117.cs public class BaseClass { } public class base021 : BaseClass { public void TestInt() { int i = base.someMember; //CS0117 } public static int Main() { return 1; } } | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/c4aad8at(v=vs.90)?redirectedfrom=MSDN | 2022-06-25T14:04:07 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.microsoft.com |
Overview
The TIM Edge solution delivers TIM's main functionality packaged to be deployed on the edge. As such, it is suitable for on-premise deployments without requiring an active internet connection. The solution is accompanied by a simple and straightforward REST API, which enables users to quickly evaluate forecasts made with a prebuilt TIM forecasting model. In other words, users can iterate over the modeling process until they find the optimal configuration for their use case, build a model with this configuration, and 'deploy' this model to production by using the TIM Edge solution to make forecasts with it on the latest available data.
The documentation of the TIM Edge solution is comprised of:
- Installation of TIM Edge: an overview of how to install the TIM Edge solution,
- The API: Swagger documentation of the solution's REST API, and
- A Notebook example, available for download. | https://docs.tangent.works/docs/TIM-Clients/TIM-Edge/Overview | 2022-06-25T14:06:34 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.tangent.works |
Permission is granted to copy, distribute and/or modify this document under the terms of the Creative Commons Attribution-Share Alike 2.0 UK: England & Wales as published by Creative Commons.
This version of the Yocto Project Mega-Manual is for the 3 \ pylint3-2.6.2 yocto-2.7 yocto_1.5_M5.rc8
For this example, check out the branch based on the yocto-3.0.4 release:
$ git checkout tags/yocto-3.0.4 -b my-yocto-3.0.4 Switched to a new branch 'my-yocto-3.0.4'
The previous Git checkout command creates a local branch named my-yocto-3.0.4. The files available to you in that branch exactly match the repository's files in the "zeus" development branch at the time of the Yocto Project yocto-3.0.4.7.3, 3.0.4, (BSP).
For further introductory information on the Yocto Project, you might be interested in this article by Drew Moseley and in this short introductory video.
The remainder of this section overviews advantages and challenges tied to the Yocto Project. names.
Layers typically have names that begin with the string
meta-. to help build,
test, and package software within the eSDK.
You can use the tool to optionally integrate what you
build into an image built by the OpenEmbedded build
system.
The
devtool command command
updates an existing recipe so that you can build it
for an updated set of source files.
You can read about the
devtool workflow
commands emulation),.
Poky is the Yocto Project reference distribution. It contains the Open-Embedded.
You can read more about Poky in the "Reference Embedded Distribution (Poky)" section.® Windows™ or macOS®)..
Toaster: Regardless of what your Build Host is running, you can use Toaster to develop software using the Yocto Project. Toaster is a web interface to the Yocto Project's Open-Embedded.
"Poky", which is pronounced Pock-ee, is the name of the Yocto Project's reference distribution or Reference OS Kit. Poky contains the OpenEmbedded Build System (BitBake and OpenEmbedded.
pokyGit repository, see the "Top-Level Core Components" section in the Yocto Project Reference Manual.
The following figure illustrates what generally comprises Poky:.
To use the Yocto Project tools, you can use Git to clone (download) the Poky repository then use your local copy of the reference distribution to bootstrap your own distribution..
The OpenEmbedded build system uses a "workflow" to accomplish image and SDK generation. The following figure overviews that workflow:-Embedded
make
tool.
meta directory (BSP) makes
"Submitting a Change to the Yocto Project"
section in the Yocto Project Development Tasks Manual..
As mentioned briefly in the previous section and also in the "Git Workflows and the Yocto Project" section, that repository
"Locating Yocto Project Source Files"
section in the Yocto Project Development Tasks Manual.
It is important to understand that Git tracks content change and
not files.
Git uses "branches" to organize different development efforts.
For example, the
poky repository has
several branches that include the current "zeus" upstream source Git repository. in other words, you can define your local Git environment to work on any development branch in the repository. To help illustrate, consider the following example Git commands:
$ cd ~ $ git clone git://git.yoctoproject.org/poky $ cd poky $ git checkout -b zeus origin/zeus
In the previous example after moving to the home directory, the
git clone command creates a
local copy of the upstream
poky Git repository.
By default, Git checks out the "master" branch for your work.
After changing the working directory to the new local repository
(i.e.
poky), the
git checkout command creates
and checks out a local branch named "zeus", which
tracks the upstream "origin/zeus" branch.
Changes you make while in this branch would ultimately affect
the upstream "zeus" branch of the
poky repository.
It is important to understand that when you create and checkout a local working branch based on a branch name, your local environment matches the "tip" of that particular development branch at the time you created your local branch, which could be different from the files in the "master" branch of the upstream repository. In other words, creating and checking out a local branch based on the "zeus" branch name is not the same as checking out the "master" branch in the repository. Keep reading to see how you create a local snapshot of a Yocto Project Release.
Git uses "tags" to mark specific changes in a repository branch
structure.
Typically, a tag is used to mark a special point such as the final
change (or commit) before a project is released.
You can see the tags used with the
poky Git
repository by going to and
clicking on the
[...]
link beneath the "Tag" heading.
Some key tags for the
poky repository are
jethro-14.0.3,
morty-16.0.1,
pyro-17.0.0, and
zeus-22.0.4.
These tags represent Yocto Project releases.
When you create a local copy of the Git repository, you also have access to all the tags in the upstream fetch --tags $ git checkout tags/rocko-18.0.0 -b my_rocko-18.0.0
In this example, the name of the top-level directory of your
local Yocto Project repository is
poky.
After moving to the
poky directory, the
git fetch command makes all the upstream
tags available locally in your repository.
Finally, the
git checkout command
creates and checks out a branch named "my-rocko-18.0.0" that is
based on the upstream branch whose "HEAD" matches the
commit in the repository associated with the "rocko-18.0.0" tag.
The files in your repository now exactly match that particular
Yocto Project release as it is tagged in the upstream Git
repository.
It is important to understand that when you create and
checkout a local working branch based on a tag, your environment
matches a specific point in time and not the entire development
branch (i.e. from the "tip" of the branch backwards). repository.
commandkpackage.
This chapter provides explanations for Yocto Project concepts that go beyond the surface of "how-to" information and reference (or look-up) material. Concepts such as components, the OpenEmbedded build system workflow, cross-development toolchains, shared state cache, and so forth are explained.
The BitBake task executor together with various types of configuration files form the OpenEmbedded ,
where
packagename
packagename is the name of the
package.
Layers are repositories that contain related metadata (i.e. sets of instructions) that tell the OpenEmbedded build system how to build a target. Yocto Project's
configuration
command, BitBake sorts out the configurations to ultimately
define your build environment.
It is important to understand that the
OpenEmbedded build system
reads the configuration files in a specific order:
target
Poky Reference Distribution directory that
contains distro configuration files (e.g.
poky.conf
that contain many policy configurations for the
Poky distribution.
The following figure shows an expanded representation of these three layers from the general workflow figure:."
BitBake uses the
conf/bblayers.conf file,
which is part of the user configuration, to find what layers it
should be using as part of the build.
The distribution layer provides policy configurations for
your distribution.
Best practices dictate that you isolate these types of
configurations into their own layer.
Settings you provide in
conf/distro/ override
similar settings that BitBake finds in your
distro.conf/),
and any distribution-wide include files.
distro.conf
recipes-*:
Recipes and append files that affect common
functionality across the distribution.
This area could include recipes and append files
to add distribution-specific configuration,
initialization scripts, custom image recipes,
and so forth.
Examples of
recipes-*
directories are
recipes-core
and
recipes-extra.
Hierarchy and contents within a
recipes-* directory can vary.
Generally, these directories contain recipe files
(
*.bb), recipe append files
(
*.bbappend), directories
that are distro-specific for configuration files,
and so forth./)
and, of course, the layer
(
machine.conf
conf/layer.conf).
The remainder of the layer is dedicated to specific recipes
by function:
recipes-bsp,
recipes-core,
recipes-graphics,
recipes-kernel, and so forth.
Metadata can exist for multiple formfactors, graphics
support systems, and so forth.
recipes- recipes, append files, and patches, that your project needs. in the Yocto Project Reference Manual..
The OpenEmbedded build system uses BitBake to produce images and Software Development Kits (SDKs). You can see from the general workflow the
Build Directory.
file://) that is part of a recipe's
SRC_URIstatement, the OpenEmbedded build system takes a checksum of the file for the recipe and inserts the checksum into the signature for the
do_fetchtask. If any local file has been modified, the
do_fetchtask directory. the name
of the recipe.
WORKDIR:
The location where the OpenEmbedded build system
builds a recipe (i.e. does the work to create the
package).
PV:
The version of the recipe used to build the
package.
PR:
The revision of the recipe used to build the
package.
S:
Contains the unpacked source files for a given
recipe.
BPN:
The name of the recipe used to build the
package.
The
BPN variable is
a version of the
PN
variable but with common prefixes and
suffixes removed.
PV:
The version of the recipe used to build the
package.
PACKAGE_ARCH) and one based on a machine (i.e.
MACHINE). The underlying structures are identical. The differentiator being what the OpenEmbedded build system is using as a build target (e.g. general architecture, a build host, an SDK, or a specific machine).
Once source code is fetched and unpacked, BitBake locates patch files and applies them to the source files:.
After source code is patched, BitBake executes tasks that configure and compile the source code. Once compilation occurs, the files are copied to a holding area (staged) in preparation for packaging:
This step in the build process consists of the following tasks:
do_prepare_recipe_sysroot:
This task sets up the two sysroots in
${
WORKDIR
}
(i.e.
recipe-sysroot.
After source code is configured, compiled, and staged, the build system analyzes the results and splits the output into packages:_package task build system uses BitBake to generate the root filesystem image:.
The OpenEmbedded build system uses BitBake to generate the Software Development Kit (SDK) installer scripts for both the standard SDK and the extensible SDK (eSDK):
do_populate_sdktask,.
For each task that completes successfully, BitBake writes a
stamp file into the
STAMPS_DIR
directory.
The beginning of the stamp file's filename is determined
by the
STAMP
variable, and the end of the name consists of the task's
name and current
input checksum.
BB_SIGNATURE_HANDLER.
The description of tasks so far assumes that BitBake needs to build everything and no available prebuilt objects exist. BitBake does support skipping tasks if prebuilt objects are available. These objects are usually made available in the form of a shared state (sstate) cache.
SSTATE_DIRand
SSTATE_MIRRORSvariables.
The idea of a setscene task (i.e
do_
taskname
_setscene)
is a version of the task where
instead of building something, BitBake can skip to the end
result and simply place a set of files into specific
locations as needed.
In some cases, it makes sense to have a setscene task
variant (e.g. generating package files in the
do_package_write_*
task).
In other cases, it does not make sense (e.g. a
do_patch
task or a
do_unpack
task) since the work involved would be equal to or greater
than the underlying task.
In the build system, the common tasks that have setscene
variants are
do_package,
do_package_write_*,
do_deploy,
do_packagedata,
and
do_populate_sysroot.
Notice that these tasks represent most of the tasks whose
output is an end result.
The build system has knowledge of the relationship between
these tasks and other preceding tasks.
For example, if BitBake runs
do_populate_sysroot_setscene for
something, it does not make sense to run any of the
do_fetch,
do_unpack,
do_patch,
do_configure,
do_compile, and
do_install tasks.
However, if
do_package needs to be
run, BitBake needs to run those other tasks.
It becomes more complicated if everything can come
from an sstate cache because some objects are simply
not required at all.
For example, you do not need a compiler or native tools,
such as quilt, if nothing exists to compile or patch.
If the
do_package_write_* packages
are available from sstate, BitBake does not need the
do_package task data.
To handle all these complexities, BitBake runs in two phases. The first is the "setscene" stage. During this stage, BitBake first checks the sstate cache for any targets it is planning to build. BitBake does a fast check to see if the object exists rather than a complete download. If nothing exists, the second phase, which is the setscene stage, completes and the main build proceeds.
If objects are found in the sstate cache, the build system works backwards from the end targets specified by the user. For example, if an image is being built, the build system first looks for the packages needed for that image and the tools needed to construct an image. If those are available, the compiler is not needed. Thus, the compiler is not even downloaded. If something was found to be unavailable, or the download or setscene task fails, the build system then tries to install dependencies, such as the compiler, from the cache.
The availability of objects in the sstate cache is
handled by the function specified by the
BB_HASHCHECK_FUNCTION
variable and returns a list of available objects.
The function specified by the
BB_SETSCENE_DEPVALID
variable is the function that determines whether a given
dependency needs to be followed, and whether for any given
relationship the function needs to be passed.
The function returns a True or False value.:
The build process writes images out to the
Build Directory
inside the
tmp/deploy/images/
folder as shown in the figure.
This folder contains any files expected to be loaded on the
target device.
The
machine/ or
*.bz2 files). directory.
to create the SDK, a set of default packages apply.
This variable allows you to add more packages.
imagename.
The Yocto Project does most of the work for you when it comes to creating cross-development toolchains..
gcc-cross-canadiansince this SDK ships a copy of the OpenEmbedded build system and the sysroot within it contains
gcc-cross.
The chain of events that occurs when
gcc-cross is
bootstrapped is as follows:
gcc -> binutils-cross -> gcc-cross-initial -> linux-libc-headers -> glibc-initial -> gl.
glibc-initial:
An initial version of the Embedded GNU C Library
(GLIBC) needed to bootstrap
glibc.
glibc:
The GNU C Library..
By design, the OpenEmbedded build system builds everything from scratch unless BitBake can determine that parts do not need to be rebuilt. Fundamentally, building from scratch is attractive as it means all parts are built fresh and no possibility of stale data exists that can cause problems. When developers hit problems, they typically default back to building from scratch so they have a know to be rebuilt.
The Yocto Project implements shared state code that supports incremental builds. The implementation of the shared state code answers the following questions that were fundamental roadblocks within the OpenEmbedded incremental build support system:
What pieces of the system have changed and what pieces have not changed?
How are changed pieces of software removed and replaced?
How are pre-built components that do not need to be rebuilt from scratch used when they are available? build system does not maintain
PR
information as part of the shared state packages.
Consequently, considerations exist that affect
maintaining shared state feeds.
For information on how the build system works with
packages and can track incrementing
PR information, see the
"Automatically Incrementing a Binary Package Revision Number"
section in the Yocto Project Development Tasks Manual.
The code in the build system that supports incremental builds is not simple code. For techniques that help you work around issues related to shared state code, see the "Viewing Metadata Used to Create the Input Signature of a Shared State Task" and "Invalidating Shared State to Force a Task to Run" sections both in the Yocto Project Development Tasks Manual.
The rest of this section goes into detail about the overall incremental build architecture, the checksums (signatures), and shared state.
When determining what parts of the system need to be built,
BitBake works on a per-task basis rather than a per-recipe
basis.
You might wonder why using a per-task basis is preferred over
a per-recipe basis.
To help explain, consider having the IPK packaging backend
enabled and then switching to DEB.
In this case, the
do_install
and
do_package
task outputs are still valid.
However, with a per-recipe approach, the build would not
include the
.deb files.
Consequently, you would have to invalidate the whole build and
rerun it.
Rerunning everything is not the best solution. work directory changes because it
should not affect the output for target packages.
Also, the build process has the objective of making native
or cross packages relocatable.
The checksum therefore needs to exclude
WORKDIR.
The simplistic approach for excluding the work, solutions for shell scripts exist. situations, you need to add dependencies BitBake is not able to find. You can accomplish this by using a line like the following:
PACKAGE_ARCHS[vardeps] = "MACHINE"
This example explicitly adds the
MACHINE
variable as a dependency for
PACKAGE_ARCHS.
As an example, consider a case with in-line Python, the question of a task's indirect inputs still exits - items, a variety of ways exist by which both the basehash and the dependent task hashes can be influenced. Within the BitBake configuration file, you can give BitBake some extra information to help it construct the basehash. The following statement effectively results in a list of global variable dependency excludes (i.e. variables never included in any checksum):
WORKDIR
since that variable, a dummy "noop" signature handler is of supporting a shared state. The other half of the problem is being able to use checksum information during the build and being able to reuse or rebuild specific components.
The
sstate
class is a relatively generic implementation of how to
"capture" a snapshot of a given task.
The idea is that the build process does not care about the
source of a task's output.
Output could be freshly built or it could be downloaded and
unpacked from somewhere.
In other words, the build process does not need to worry about
its origin.
Two types of output exist.
One type class.
From a user's perspective, adding shared state wrapping to a
task is as simple as this
do_deploy
example taken from the
deploy
class:
DEPLOYDIR = "${WORKDIR}/deploy-${PN}" SSTATETASKS += "do_deploy" do_deploy[sstate-inputdirs] = "${DEPLOYDIR}" do_deploy[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}" python do_deploy_setscene () { sstate_setscene(d) } addtask do_deploy_setscene do_deploy[dirs] = "${DEPLOYDIR} ${B}" do_deploy[stamp-extra-info] = "${MACHINE_ARCH}"
The following list explains the previous example:
Adding "do_deploy" to
SSTATETASKS
adds some required sstate-related processing, which is
implemented in the
sstate
class, to before and after the
do_deploy
task.
The
do_deploy[sstate-inputdirs] = "${DEPLOYDIR}"
declares that
do_deploy places its
output in
${DEPLOYDIR} when run
normally (i.e. when not using the sstate cache).
This output becomes the input to the shared state cache.
The
do_deploy[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"
line causes the contents of the shared state cache to be
copied to
${DEPLOY_DIR_IMAGE}.
do_deployis not already in the shared state cache or if its input checksum (signature) has changed from when the output was cached, the task runs to populate the shared state cache, after which the contents of the shared state cache is copied to
${DEPLOY_DIR_IMAGE}. If
do_deployis in the shared state cache and its signature indicates that the cached output is still valid (i.e. if no relevant task inputs have changed), then the contents of the shared state cache copies directly to
${DEPLOY_DIR_IMAGE}by the
do_deploy_setscenetask instead, skipping the
do_deploytask.
The following task definition is glue logic needed to make the previous settings effective:
python do_deploy_setscene () { sstate_setscene(d) } addtask do_deploy_setscene
sstate_setscene() takes the flags
above as input and accelerates the
do_deploy task through the
shared state cache if possible.
If the task was accelerated,
sstate_setscene() returns True.
Otherwise, it returns False, and the normal
do_deploy task runs.
For more information, see the
"setscene"
section in the BitBake User Manual.
The
do_deploy[dirs] = "${DEPLOYDIR} ${B}"
line creates
${DEPLOYDIR} and
${B} before the
do_deploy task runs, and also sets
the current working directory of
do_deploy to
${B}.
For more information, see the
"Variable Flags"
section in the BitBake User Manual.
sstate-inputdirsand
sstate-outputdirswould be the same, you can use
sstate-plaindirs. For example, to preserve the
${PKGD}and
${PKGDEST}output from the
do_packagetask, use the following:
do_package[sstate-plaindirs] = "${PKGD} ${PKGDEST}"
The
do_deploy[stamp-extra-info] = "${MACHINE_ARCH}"
line appends extra metadata to the
stamp file.
In this case, the metadata makes the task specific
to a machine's architecture.
See
"The Task List"
section in the BitBake User Manual for more
information on the
stamp-extra-info
flag.
sstate-inputdirs and
sstate-outputdirs can also be used
with multiple directories.
For example, the following declares
PKGDESTWORK and
SHLIBWORK as shared state
input directories, which populates the shared state
cache, and
PKGDATA_DIR and
SHLIBSDIR as the corresponding
shared state output directories:
do_package[sstate-inputdirs] = "${PKGDESTWORK} ${SHLIBSWORKDIR}" do_package[sstate-outputdirs] = "${PKGDATA_DIR} ${SHLIBSDIR}"
These methods also include the ability to take a lockfile when manipulating shared state directory structures, for cases where file additions or removals are sensitive:
do_package[sstate-lockfile] = "${PACKAGELOCK}"
Behind the scenes, the shared state code works by looking in
SSTATE_DIR
and
SSTATE_MIRRORS
for shared state files.
Here is an example:
SSTATE_MIRRORS ?= "\ file://.*;downloadfilename=PATH _package task of each
recipe, all pkg-config modules
(
*.pc files) installed by the recipe
are located.
For each module, the package that contains the module is
registered as providing the module.
The resulting module-to-package mapping is saved globally in
PKGDATA_DIR by the
do_packagedata task..
pcdepsmechanism most often infers dependencies between
-devpackages.
depchains:
If a package
foo depends on a package
bar, then
foo-dev
and
foo-dbg are also made to depend on
bar-dev and
bar-dbg, respectively.
Taking the
-dev packages as an
example, the
bar-dev package might
provide headers and shared library symlinks needed by
foo-dev, which shows the need
for a dependency between the packages.
The dependencies added by
depchains are in the form of
RRECOMMENDS.
foo-devalso has an
RDEPENDS-style dependency on
foo, because the default value of
RDEPENDS_${PN}-dev(set in
bitbake.conf) includes "${PN}".
To ensure that the dependency chain is never broken,
-dev and
-dbg
packages.
pokyRepository
This chapter provides procedures related to getting set up across a large team.. Keep in mind, the procedure here is a starting point. You can build off these steps and customize the procedure to fit any particular working environment and set of practices.
Determine Who is Going to be Developing: You need to understand who is going to be doing anything related to the Yocto Project and determine their roles. Making this determination is essential to completing steps two and three,. Not all environments) or it can be a machine (Linux, Mac, or Windows) that uses CROPS, which leverages Docker Containers. and access the source files that ship with the Yocto Project. You establish and use these local files to work on projects..0.4 to
view files associated with the Yocto Project 3.0.4
release (e.g.
poky-zeus-22.0. zeus, warrior,/pyro remotes/origin/pyro-next remotes/origin/rocko remotes/origin/rocko-next remotes/origin/sumo remotes/origin/sumo-next remotes/origin/thud remotes/origin/thud-next remotes/origin/warrior
Checkout the Branch: Checkout the development branch in which you want to work. For example, to access the files for the Yocto Project 3.0.4 Release (Zeus), use the following command:
$ git checkout -b zeus origin/zeus Branch zeus set up to track remote branch zeus from origin. Switched to a new branch 'zeus'
The previous command checks out the "zeus" development branch and reports that the branch is tracking the upstream "origin/zeus" branch.
The following command displays the branches that are now part of your local poky repository. The asterisk character indicates the branch that is currently checked out for work:
$ git branch master * zeus.0.4 -b my_yocto_3.0.4 Switched to a new branch 'my_yocto_3.0.4' $ git branch master * my_yocto_3.0.4
The previous command creates and checks out a local
branch named "my_yocto_3.0.4", which is based on
the commit in the upstream poky repository that has
the same tag.
In this example, the files you have available locally
as a result of the
command are a snapshot of the
"zeus" development branch at the point
where Yocto Project 3.0.4 was released..0.4.bbappend must apply to
someapp_3.0s defer ./ | https://docs.yoctoproject.org/3.0.4/mega-manual/mega-manual.html | 2022-06-25T14:36:21 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.yoctoproject.org |
String.Compare
From Xojo Documentation
Method
String.Compare(other As String, Optional compare As ComparisonOptions = comparisonOptions.CaseInsensitive, Optional locale As Locale = Nil) As Integer
Supported for all project types and targets.
Supported for all project types and targets.
Compares a String value with another string value. A non-empty String is always greater than an empty String. By default, a case-insensitive comparison is done. Returns a negative integer if the value is less than other, 0 if the two values are equal, and a positive integer if the value is greater than other.
Parameters
Notes
By default this performs a case-insensitive comparison. To do a case-sensitive comparison, supply the ComparisonOptions.CaseSensitive enum value to the compare parameter.
By default comparisons are done in an invariant locale (i.e. not dependent on the user's preferences). The locale parameter can be used to specify an explicit locale to do comparisons in.
Exceptions
- RuntimeException when the specified options are invalid.
Sample Code
Compare two String values: | http://docs.xojo.com/index.php?title=String.Compare&oldid=76783 | 2022-06-25T13:07:17 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.xojo.com |
Tooz – Distributed System Helper Library¶
The Tooz project aims at centralizing the most common distributed primitives like group membership protocol, lock service and leader election by providing a coordination API helping developers to build distributed applications. [1]
Contents¶
- Installation
- Drivers
- Compatibility
- Using Tooz in Your Application
- Developers | https://docs.openstack.org/tooz/ocata/ | 2022-06-25T14:23:39 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.openstack.org |
Obsidian Language Basics¶
Contracts, transactions, and main contracts¶
Obsidian is object-oriented. A
contract is like a class: it can be instantiated as many times as needed. Each contract supports operations; each one is called a
transaction. Transactions are akin to methods in traditional object-oriented languages. However, unlike methods, transactions either completely finish or revert. If a transaction reverts (via the
revert statement), then all changes that the transaction made will be discarded.
Main contracts¶
A
main contract may be instantiated on the blockchain. The contract’s transactions are then available for clients to invoke. Only
main contracts can be deployed directly, and the Obsidian compiler expects every program to have one
main contract.
Constructors¶
Constructors must initialize all the fields of their contracts. In addition, construtors of contracts that have defined states must transition the object to a particular state. It is good practice for constructors to specify a specific state that the object will be in if possible; otherwise, generally you should declare constructors to return an
Owned reference. For example:
contract LightSwitch { state On; state Off; LightSwitch@Off() { // the resulting object will be in Off state ->Off; } } | https://obsidian.readthedocs.io/en/latest/reference/basics.html | 2022-06-25T14:06:09 | CC-MAIN-2022-27 | 1656103035636.10 | [] | obsidian.readthedocs.io |
How to troubleshoot CPIX documents using cpix_verify¶
Table of Contents
Introduction¶
This tutorial showcases the use of our
cpix_verify command-line tool to
troubleshoot problems you might encounter with CPIX documents. The
cpix_verify CLI tool is automatically installed when you install alongside
our software. For more detail on the different options of
cpix_verify,
please refer to Checking CPIX documents with cpix_verify.
Using print-cpix to catch the basic errors¶
One of the most basic usages of the verify tool is parsing your CPIX file
against syntax errors. As the CPIX files are basic XML files, changing
parameters or different text editors might cause some unexpected problems. In
this case,
cpix_verify outputs the incorrect line from the document with an
error message.
#!/bin/bash $ cpix_verify cpixverify_tutorial1.cpix print-cpix cpix_verify: FMP4_400: not well-formed (invalid token) @ line 13 col 0
This is the
CPIX file which generates
one of these errors. It's a misspelled element on line 13 which stops the verify
process. This also means the verify tool stop scanning on the first detected
error. For a complete scan you must correct the file and run the verify once
more to continue.
Another good example would be that one of the CPIX requirements is using base64 characters. This type of errors mostly caused by an XML creator script. In this example the error also tells the process is aborted once the unencoded characters had been found.
#!/bin/bash $ cpix_verify tutorial2.cpix print-cpix cpix_verify: FMP4_415: Invalid base64 character. parsing aborted @ line 9 col 62
Using build-evaluator for more complicated cases¶
Even if print-cpix is fast and successful for the basic level problems, it might not be the solution for all cases. Especially, for the big content providers use multi DRM solutions. Also, this type of content contains multi audio, video and subtitle tracks.
The next use case includes a CPIX file does not output any errors from printing command. Nevertheless, it still needs to be fixed so the content can be played by the players.
Please take a look at Trans DRM page if you want to test it all scenario
on your end and then use the
tutorial3.cpix file
instead the given example.
#!/bin/bash ubuntu@ip-172-31-32-140:~$ cpix_verify drm.cpix evaluate-tracks tears-of-steel.ism enc-tears-of-steel-aac-128k.isma track id=1 timescale=48000 lang=en soun/mp4a dref=1 bitrate=128002/0 tag=255 samplerate=48000 channels=2 sample_size=16 packet_size=4 scheme=cenc: cpix_verify: FMP4_500: Multiple content keys (4E2D509A-753F-5E26-B253-CB7D21C3BF05 and 80964B5A-22DC-5C93-B18D-8C68B9FB8FC0) found for track id=1 timescale=48000 lang=en soun/mp4a dref=1 bitrate=128002/0 tag=255 samplerate=48000 channels=2 sample_size=16 packet_size=4 scheme=cenc (time: 00:00:00.000000(0/1))
The error shows that the problem is related to multiple key usage for the track1. Please take a look at the DRM with multiple keys page to learn more about how to use the multiple keys. In our example ContentKeyUsageRule wasn't embedded in ContentKeyUsageRuleList. The error will be fixed as updating this relevant part of the document. | https://beta.docs.unified-streaming.com/tutorials/drm/cpix_verify.html | 2022-06-25T14:30:30 | CC-MAIN-2022-27 | 1656103035636.10 | [] | beta.docs.unified-streaming.com |
Plenum 2021-06-29 Minutes
Venue
20:39 Open
Credentials
- Present: Roland, Valentine, Martin, Joyce, Elise
- Apologies: Andrei appears to have been felled by the publishing of the wrong Zoom link) Find out and update dates for each of the three filings: date of last filing, date of next filing, status of next filing before next plenum.
Finance
Cash
- $21.5K in the StanChart+PayPal as of 1 May 2021
- 0.66914519 in BTC
Accrual (up/down $?/month)
- Including predicted upcoming costs amortised monthly, we are down $1.37K a month.
Significant changes to regular expenses (if any)
Memberships/Contributions
- (Valentine, Jen) Update on new members and contributions (if any).
- downgrades
- 1 x 128 -> 0
- upgrades
- Current membership/contribution breakdown by tiers.
Facilities
- (All) Drive-by updates on facility maintenance and issues.
- Aircon was cleaned/maintained on 15 June 2021 (thanks Huiren).
- Fuji Xerox laser printer toner has been purchased from Taobao during 6/18 sales and will arrive in mid July.
- 4x6" label printer has been set up; pending wireless printing setup and idiot-proofing.
Ongoing business
Broken and orphaned items (Old Hardware Pattern (OHP))
- (Valentine, Andrei, Roland) Updates; review if Old Hardware Pattern works.
- Adrian will collect his printer some time in the future.
- Roland expresses thanks for the use of the WordPress tote bags for packaging old hardware claimed for removal.
- Jen has processed most of the laptops and gifted or absorbed them (will consider a donation).
Storage boxes
- (Valentine) Updates.
- 9 x 45L boxes bought during 6/6 sales.
- To implement (HSG's own) Bounded Storage Pattern for members and community organisations.
- Timeline (proposed, not adopted):
- End-July for Phase 1 (members should start putting most items into labelled boxes)
- August to September for Phase 2 (OHP acceleration)
- Start-October for intended end state (all items should be individually labelled or in boxes).
- Additional 45L boxes will be purchased during 7/7 sales.
- Roland suggests immediate OHP for all clutter currently located in the space. The process is built with clear protection for personal property for those who wish to assert ownership over items left in the space, and a fair basis for resolving dibs claims of abandonned property, plus a predictable timeline for the clutter to leave the space. This being the case, inserting additional steps to protect these interests appears excessive effort, prolongs the problem, and noting the potential for an imminent move, risks complicating that too. If it does turn out that members have a problem with the process then that's great, that's an opportunity to improve the process, but delay doesn't achieve that for us, it just prolongs the problem.
Back room water issue / Shifting of Hardware Room
- GitHub issue:
- (Valentine) Updates, if any.
- No further update from MCST
- No further water ingress
KGB premises lease renewal / Potential reboot
- (Valentine) Should we ask to renew for 2+2 years early? (Keeping in mind back room issue.)
- Curent lease expires 2021-11-14
- Valentine is taking point on planning
- (Valentine) For next plenum: alternatives, action plans ( (a) stay, (b) move), market price estimates, (recruit someone(s) to help)
Governance Reboot
- (Valentine) Finalise Mission Statement, Values, and Vision by July 2
- (Valentine) Report on non-profit options (identify, state pros and cons from a high-level perspective) by July 9
- (Valentine) Determine organisational needs (with aid of survey) by July 16
- (Valentine) Set 2 years' quarterly goals, with a plan, and communicate to membership by July 30
- (Jen) Create engagement survey by July 2
- (Jen) Run survey until July 11
- (Jen) Send reminder to Kheng Hui (icedwater) re Social Enterprise Presentation for comment, then submit to "the social enterprise grant people" by end July.
- Note that the social enterprise grant is only relevant while we remain a Pte. Ltd. The thinking is to move forward until/unless we commit to reorganising.
House Rules
- (Valentine) KIV for July: Write/adopt house rules
Website revamp
- (Valentine) Updates, if any.
- No progress at this point.
Recruiting additional members/sponsors
- (Roland) KIV: Roland is concerned that this is not a primary area of activity.
Large groups during Phase 3
- (Valentine) KIV till Singapore "opens up".
Upcoming events that HackerspaceSG might be interested in
- (Roland, all) Updates, if any.
- Science Centre is contemplating a maker festival in late 2021.
New business
- (none scheduled)
Any Other Business
- (none)
Next Meeting
- 2021-07-20 20:00 tentative (tentatively scheduled for 3rd Tuesday of every month, but subject to change depending on the schedule of key people) | https://docs.hackerspace.sg/plenum/2021-06-29/ | 2022-06-25T13:00:27 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.hackerspace.sg |
architecture
Ceph is based on Reliable Autonomic Distributed Object Store (RADOS). RADOS distributes objects across the storage cluster and replicates objects for fault tolerance. RADOS contains the following major components:.
Note
If using Btrfs, ensure that you use the correct version (see Ceph Dependencies).
For more information about usable file systems, see ceph.com/ceph-storage/file-system/.
To store and access your data, you can use the following storage systems:
Ceph exposes RADOS; you can access it through the following interfaces:
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. | https://docs.openstack.org/newton/config-reference/block-storage/drivers/ceph-rbd-volume-driver.html | 2022-06-25T13:31:11 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.openstack.org |
great_expectations.core.usage_statistics.anonymizers.anonymizer¶
Module Contents¶
Classes¶
- class
great_expectations.core.usage_statistics.anonymizers.anonymizer.
Anonymizer(salt=None)¶
Anonymize string names in an optionally-consistent way.
anonymize_object_info(self, anonymized_info_dict, ge_classes, object_=None, object_class=None, object_config=None, runtime_environment=None)¶
- static
_is_parent_class_recognized(classes_to_check, object_=None, object_class=None, object_config=None)¶
Check if the parent class is a subclass of any core GE class. This private method is intended to be used by anonymizers in a public is_parent_class_recognized() method. These anonymizers define and provide the core GE classes_to_check. :returns: The name of the parent class found, or None if no parent class was found | https://legacy.docs.greatexpectations.io/en/latest/autoapi/great_expectations/core/usage_statistics/anonymizers/anonymizer/index.html | 2022-06-25T14:34:12 | CC-MAIN-2022-27 | 1656103035636.10 | [] | legacy.docs.greatexpectations.io |
community.docker.docker_volume module – Manage Docker volumes_volume.
Synopsis
Create/remove Docker volumes.
Performs largely the same function as the
docker volumeCLI subcommand.
Requirements
The below requirements are needed on the host that executes this module.)
The docker server >= 1 a volume community.docker.docker_volume: name: volume_one - name: Remove a volume community.docker.docker_volume: name: volume_one state: absent - name: Create a volume with options community.docker.docker_volume: name: volume_two driver_options: type: btrfs device: /dev/sda2
Return Values
Common return values are documented here, the following are the fields unique to this module:
Collection links
Issue Tracker Repository (Sources) Submit a bug report Request a feature Communication | https://docs.ansible.com/ansible/latest/collections/community/docker/docker_volume_module.html | 2022-06-25T14:11:11 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.ansible.com |
You can monitor the network usage of devices and operating systems for a specific Edge.
Clickto view the following:
>.
To view drill-down reports with more details, click the links displayed in the metrics column.
The following image shows a detailed report of top clients.
Click the arrows displayed next to Top Applications to navigate to the Applications tab. | https://docs.vmware.com/en/VMware-SD-WAN/4.0/VMware-SD-WAN-by-VeloCloud-Administration-Guide/GUID-0E2010D4-EB43-4EE7-906B-8CAAC5B5DEB4.html | 2022-06-25T15:11:42 | CC-MAIN-2022-27 | 1656103035636.10 | [array(['images/GUID-3A597248-3E38-49AB-8932-6514FF3063AF-low.png', None],
dtype=object)
array(['images/GUID-C24A9BC8-F7AD-467C-905E-6EEF0F0EC9E5-low.png', None],
dtype=object) ] | docs.vmware.com |
The Pipeline Configuration File¶
Table of Contents
In this section, you will learn the generic structure of a .yaml Pipeline Configuration file and the meaning of each section.
We are going to refer to the following example configuration file:
mpd: - manifest_edit.plugins.mpd.utc_remove: - '*' - manifest_edit.plugins.mpd.utc_add: ntp:
The Pipeline Configuration File is written in yaml format. It is a human-readable data representation format, similar to JSON but easier to use. If you're not familiar with its syntax, you can have a look at this basic introduction.
If you're totally new to .yaml, you most likely will be able to follow this chapter by keeping in mind the following basic facts about yaml:
- indentation matters and is what defines hierarchy
- always indent using spaces and not TABs
- a .yaml file is essentially a collection of key and value pairs (as in
name: Joe).
Let's breakout the example file section by section:
Manifest format¶
At the root of the Pipeline Configuration File is a section specifying the specific manifest format that this pipeline will handle.
mpd:
In this specific case, this line specifies that the pipeline will only handle mpd (Dash) and will completely ignore any manifest it may receive having a different format.
The two supported format at the moment are mpd and m3u8_main. Further manifest formats will be introduced in future versions of Manifest Edit.
Pipeline definition¶
At the next level of hierarchy in the Pipeline Configuration File comes the pipeline definition section.
mpd: - manifest_edit.plugins.mpd.utc_remove: [...] - manifest_edit.plugins.mpd.utc_add: [...]
For each manifest format, in fact, you must specify the list of plugins you want to activate in the processing pipeline. This example describes a two-stages pipeline: the first stage will apply the logic implemented by the utc_remove plugin; the second stage will apply the logic implemented by the utc_add plugin.
This is a visual representation of the pipeline built by Manifest Edit when provided this configuration file: two plugins will be picked from the Library (utc_remove and utc_add) and will be put in the pipeline one after the other
When building your Pipeline Configuration File, keep in mind the following two important aspects:
- the pipeline is ordered. Each plugin in fact will operate on the manifest resulting from the modifications already applied by previous plugins in the pipeline. This means that changing the order of appearance of plugins in the pipeline configuration file will in general lead to different results.
- strings such as
manifest_edit.plugins.mpd.utc_removeare directly related to the name and path of the python module of the specified plugins. Refer to the Plugins Library documentation to know the correct string for each plugin.
Plugins configuration¶
The last element in the Pipeline Configuration File hierarchy is dedicated to the individual plugin configuration. A plugin in fact can implement a quite generic logic (i.e. "add an element"); in order to apply this logic to a real-world use case, you need to provide to the plugin additional information (i.e. "what" should be added, "where" should it be added). This is the section responsible for that.
Let's focus on the utc_add part of the example Pipeline Configuration File:
mpd: - manifest_edit.plugins.mpd.utc_add: ntp:
In this case the entire section
ntp:
represents the configuration of the utc_add plugin. As such, it is specific to it and, in general, you have to consult the plugin documentation to understand what needs to be specified in this section.
This part of the Pipeline Configuration File is thus very variable and its content depends on the particular plugin you have activated.
Example Pipeline Configuration files¶
Manifest Edit comes with a list of example Pipeline Configuration files,
covering the usage of every plugin from the Library. You can find these files
in the
/usr/share/manifest-edit folder in the supported *nix-based
operating systems, or in
C:\Program Files\Unified Streaming\ManifestEditConf on Windows. Comments
and examples embedded in the example files will drive you trough the
individual configuration options available for each plugin. We also encourage
you to check the chapter Included Use Cases for additional use cases.
Warning
When you are ready to create your own .yaml file, it is recommended that you create a copy of the example file you want to start from and edit the copy. Do not modify the provided example files: they will be overwritten when a new manifest-edit package is installed, and you will lose all your local modifications! | https://beta.docs.unified-streaming.com/documentation/manifest-edit/pipeline_config_file/index.html | 2022-06-25T13:27:32 | CC-MAIN-2022-27 | 1656103035636.10 | [] | beta.docs.unified-streaming.com |
Multi-DRM protected HLS and DASH from a shared CMAF source¶
Table of Contents
This tutorial describes how to create the 'holy grail' of packaging: using the same media segments to serve both DASH and HLS, with content protection by all three major DRM systems (FairPlay, PlayReady and Widevine).
The output is a set of encrypted CMAF media tracks, along with HLS and DASH manifests that reference these tracks with all DRM signaling included. The tracks are encrypted with different keys to allow for different levels of DRM protection for HD or 4K content.
This tutorial makes use of functionality that was added in CPIX 2.2, for which support was added in version 1.10.16.
Demo content¶
For this tutorial you will need the Tears of Steel content used for evaluation: Your own Video on Demand demo.
Requirement: CPIX¶
The use of CPIX is required for this workflow, as it is the only way in which specifying the 'cbcs' encryption scheme for DASH output is supported. See CPIX Document Requirements for more information.
The CPIX document shown below uses keys that work with the publicly available Widevine and PlayReady test servers. As there is no easily available FairPlay test server, this setup uses a simple key server that provides the decryption key in the clear for Safari and iOS clients (technically making the stream SAMPLE-AES instead of FairPlay protected, but the set up process is exactly the same).
To work with below CPIX document, save
multi-format-drm.cpix in the
same location as your
tears-of-steel directory with the demo content.
Packaging media files as CMAF¶
With the CPIX document and your demo content in place, the first step is to encrypt and package all of the media files as CMAF.
To do this, specify the CPIX document each time you package a media track as
CMAF. This will make
mp4split read the CPIX document and use the
appropriate key to encrypt each track.
As the content has a frame rate of 24fps and a GOP length of 96 frames, you should specify this as the fragment duration (or choose a multiple of it):
#!/bin/bash # audio mp4split -o tears-of-steel-aac-64k.cmfa \ --fragment_duration 96/24 \ --cpix multi-format-drm.cpix \ tears-of-steel/tears-of-steel-aac-64k.mp4 # video mp4split -o tears-of-steel-avc1-400k.cmfv \ --fragment_duration 96/24 \ --cpix multi-format-drm.cpix \ tears-of-steel/tears-of-steel-avc1-400k.mp4 mp4split -o tears-of-steel-avc1-750k.cmfv \ --fragment_duration 96/24 \ --cpix multi-format-drm.cpix \ tears-of-steel/tears-of-steel-avc1-750k.mp4 mp4split -o tears-of-steel-avc1-1000k.cmfv \ --fragment_duration 96/24 \ --cpix multi-format-drm.cpix \ tears-of-steel/tears-of-steel-avc1-1000k.mp4 mp4split -o tears-of-steel-avc1-1500k.cmfv \ --fragment_duration 96/24 \ --cpix multi-format-drm.cpix \ tears-of-steel/tears-of-steel-avc1-1500k.mp4
Creating HLS Media and Master Playlists¶
For each of the CMAF tracks you need to create an HLS Media Playlist, then from those you can create the Master Playlist.
Media Playlists¶
Make sure that you specify the CPIX document each time you create an (audio or video) Media Playlist.
Because a 6 seconds segment duration is recommended for HLS, you may want to
use the
--fragment_duration option when creating the Media Playlists, too.
Note that in such a case the length that you specify for your HLS segments
should be a multiple of the fragment duration of your CMAF source (in this
particular case, 8 seconds):
#!/bin/bash mp4split -o tears-of-steel-avc1-400k.m3u8 \ --fragment_duration 192/24 \ --cpix multi-format-drm.cpix \ tears-of-steel-avc1-400k.cmfv mp4split -o tears-of-steel-avc1-750k.m3u8 \ --fragment_duration 192/24 \ --cpix multi-format-drm.cpix \ tears-of-steel-avc1-750k.cmfv mp4split -o tears-of-steel-avc1-1000k.m3u8 \ --fragment_duration 192/24 \ --cpix multi-format-drm.cpix \ tears-of-steel-avc1-1000k.cmfv mp4split -o tears-of-steel-avc1-1500k.m3u8 \ --fragment_duration 192/24 \ --cpix multi-format-drm.cpix \ tears-of-steel-avc1-1500k.cmfv mp4split -o tears-of-steel-aac-64k.m3u8 \ --fragment_duration 192/24 \ --cpix multi-format-drm.cpix \ tears-of-steel-aac-64k.cmfa
Master Playlist¶
Specifying the CPIX document or a fragment duration when creating the Master Playlist is not necessary. The command-line is very straightforward:
#!/bin/bash mp4split -o master.m3u8 \ tears-of-steel-avc1-400k.m3u8 \ tears-of-steel-avc1-750k.m3u8 \ tears-of-steel-avc1-1000k.m3u8 \ tears-of-steel-avc1-1500k.m3u8 \ tears-of-steel-aac-64k.m3u8
Creating DASH client manifest (MPD)¶
For DASH only a single client manifest is required. For this you can enable the
--mpd.inline_drm option to make sure that explicit DRM information is
included in the MPD:
#!/bin/bash mp4split -o dash-cbcs.mpd \ --cpix multi-format-drm.cpix \ --mpd.inline_drm \ tears-of-steel-aac-64k.cmfa \ tears-of-steel-avc1-400k.cmfv \ tears-of-steel-avc1-750k.cmfv \ tears-of-steel-avc1-1000k.cmfv \ tears-of-steel-avc1-1500k.cmfv
Testing playback¶
As we used keys obtained from the Widevine test server we can use the test license server to get a decryption license.
To test playback just upload the files to an https webserver, as this is a requirement for most DRM systems.
As an example, we have added the packaged assets to our demo site:
The SAMPLE-AES encrypted HLS can be tested in the Safari or iOS native players.
Widevine DRM can be tested in Chrome or Firefox:
- DASH.js reference player (the license server URL must be set manually)
- Shaka DASH
- Shaka HLS | https://beta.docs.unified-streaming.com/documentation/package/multi-format-drm.html | 2022-06-25T13:40:06 | CC-MAIN-2022-27 | 1656103035636.10 | [] | beta.docs.unified-streaming.com |
improved
HTTP Responses by Endpoint
about 2 months ago by Andriy Mysyk
We updated our API Reference to show a full list of HTTP responses that you might receive for each endpoint. With the list of responses, you can build better quality integrations with Aurora by proactively handling error states in your code.
| https://docs.aurorasolar.com/changelog/http-responses-by-endpoint | 2022-06-25T13:32:34 | CC-MAIN-2022-27 | 1656103035636.10 | [array(['https://files.readme.io/5bfe401-Screen_Shot_2022-05-04_at_4_54_45_PM.png',
'Screen_Shot_2022-05-04_at_4_54_45_PM.png'], dtype=object)
array(['https://files.readme.io/5bfe401-Screen_Shot_2022-05-04_at_4_54_45_PM.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/ea3bfc1-Screen_Shot_2022-05-04_at_4_55_16_PM.png',
'Screen_Shot_2022-05-04_at_4_55_16_PM.png'], dtype=object)
array(['https://files.readme.io/ea3bfc1-Screen_Shot_2022-05-04_at_4_55_16_PM.png',
'Click to close...'], dtype=object) ] | docs.aurorasolar.com |
.
A resource data sync helps you view data from multiple sources in a single location. Amazon Web Services Systems Manager offers two types of resource data sync:
SyncToDestination and
SyncFromSource .
You can configure Systems Manager Inventory to use the
SyncToDestination type to synchronize Inventory data from multiple Amazon Web Services Regions to a single Amazon Simple Storage Service (Amazon S3) bucket. For more information, see Configuring resource data sync for Inventory in the Amazon Web Services Systems Manager User Guide .
You can configure Systems Manager Explorer to use the
SyncFromSource type to synchronize operational work items (OpsItems) and operational data (OpsData) from multiple Amazon Web Services Regions to a single Amazon S3 bucket. This type can synchronize OpsItems and OpsData from multiple Amazon Web Services accounts and Amazon Web Services Regions or
EntireOrganization by using Organizations. For more information, see Setting up Systems Manager Explorer to display data from multiple accounts and Regions in the Amazon Web Services Systems Manager User Guide .
A resource data sync is an asynchronous operation that returns immediately. After a successful initial sync is completed, the system continuously syncs data. To check the status of a sync, use the ListResourceDataSync .
Note
By default, data isn't encrypted in Amazon S3. We strongly recommend that you enable encryption in Amazon S3 to ensure secure data storage. We also recommend that you secure access to the Amazon S3 bucket by creating a restrictive bucket policy.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
create-resource-data-sync --sync-name <value> [--s3-destination <value>] [--sync-type <value>] [--sync-source <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--sync-name (string)
A name for the configuration.
--s3-destination (structure)
Amazon S3 configuration details for the sync. This parameter is required if the
SyncTypevalue is SyncToDestination.
BucketName -> (string)The name of the S3 bucket where the aggregated data is stored.
Prefix -> (string)An Amazon S3 prefix for the bucket.
SyncFormat -> (string)A supported sync format. The following format is currently supported: JsonSerDe
Region -> (string)The Amazon Web Services Region with the S3 bucket targeted by the resource data sync.
AWSKMSKeyARN -> (string)The ARN of an encryption key for a destination in Amazon S3. Must belong to the same Region as the destination S3 bucket.
DestinationDataSharing -> (structure)
Enables destination data sharing. By default, this field is
null.
DestinationDataSharingType -> (string)The sharing data type. Only
Organizationis supported.
Shorthand Syntax:
BucketName=string,Prefix=string,SyncFormat=string,Region=string,AWSKMSKeyARN=string,DestinationDataSharing={DestinationDataSharingType=string}
JSON Syntax:
{ "BucketName": "string", "Prefix": "string", "SyncFormat": "JsonSerDe", "Region": "string", "AWSKMSKeyARN": "string", "DestinationDataSharing": { "DestinationDataSharingType": "string" } }
--sync-type (string) Amazon Web Services accounts and Amazon Web Services Regions, as listed in Organizations for Explorer. If you specify
SyncFromSource, you must provide a value for
SyncSource. The default value is
SyncToDestination.
--sync-source (structure)
Specify information about the data sources to synchronize. This parameter is required if the
SyncTypevalue is SyncFromSource.
SourceType -> (string)The type of data source for the resource data sync.
SourceTypeis either
AwsOrganizations(if an organization is present in Organizations) or
SingleAccountMultiRegions.
AwsOrganizationsSource -> (structure)
Information about the
AwsOrganizationsSourceresource data sync source. A sync source of this type can synchronize data from Organizations.
OrganizationSourceType -> (string)If an Amazon Web Services organization is present, this is either
OrganizationalUnitsor
EntireOrganization. For
OrganizationalUnits, the data is aggregated from a set of organization units. For
EntireOrganization, the data is aggregated from the entire Amazon Web Services organization.
OrganizationalUnits -> (list)
The Organizations organization units included in the sync.
(structure)
The Organizations organizational unit data source for the sync.
OrganizationalUnitId -> (string)The Organizations unit ID data source for the sync.
SourceRegions -> (list)
The
SyncSource Amazon Web Services Regions included in the resource data sync.
(string)
IncludeFutureRegions -> (boolean)Whether to automatically synchronize and aggregate data from new Amazon Web Services Regions when those Regions come online.
EnableAllOpsDataSources -> (boolean)When you create a resource data sync, if you choose one of the Organizations options, then Systems Manager automatically enables all OpsData sources in the selected Amazon Web Services Regions for all Amazon Web Services accounts in your organization (or in the selected organization units). For more information, see About multiple account and Region resource data syncs in the Amazon Web Services Systems Manager User Guide .
JSON Syntax:
{ "SourceType": "string", "AwsOrganizationsSource": { "OrganizationSourceType": "string", "OrganizationalUnits": [ { "OrganizationalUnitId": "string" } ... ] }, "SourceRegions": ["string", ...], "IncludeFutureRegions": true|false, "EnableAllOpsDataSources": true data sync
This example creates a resource data sync. There is no output if the command succeeds.
Command:
aws ssm create-resource-data-sync --sync-name "ssm-resource-data-sync" --s3-destination "BucketName=ssm-bucket,Prefix=inventory,SyncFormat=JsonSerDe,Region=us-east-1" | https://docs.aws.amazon.com/cli/latest/reference/ssm/create-resource-data-sync.html | 2022-06-25T15:23:34 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.aws.amazon.com |
Ensure ECS task definition variables do not expose secrets
Error: ECS task definition variables expose secrets
Bridgecrew Policy ID: BC_AWS_SECRETS_4
Severity: HIGH
ECS task definition variables expose secrets
Description
ECS task definition variables are metadata definitions, which usually contain small configurations that define the ECS cluster execution parameters. These variables can be accessed by any entity with the most basic read-metadata-only permissions, and can't be encrypted.
We recommend you remove secrets from unencrypted places, especially if they can be easily accessed, to reduce the risk of exposing data to third parties.
Fix - Runtime
Guidance
ECS enables storing sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters. For additional guidance, see the Amazon ECS documentation on specifying sensitive data in task definitions.
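As an illustrative sketch (the container name, image, secret name, and ARN below are placeholders, not values from this policy), a container definition can reference a Secrets Manager secret through the secrets field instead of exposing the value in environment:

"containerDefinitions": [
  {
    "name": "app",
    "image": "myrepo/app:latest",
    "secrets": [
      { "name": "DB_PASSWORD", "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/db-password" }
    ]
  }
]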
CLI Command
To see the secret, run the following CLI command:
aws ecs describe-task-definition --region <REGION> --task-definition <TASK_DEFINITION_NAME> --query taskDefinition.containerDefinitions[*].environment
| https://docs.bridgecrew.io/docs/bc_aws_secrets_4 | 2022-06-25T13:08:35 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.bridgecrew.io
Decision notification listener
Description
The DECISION notification listener sends a stream of all feature flag decisions. It is most commonly used to send decisions to an analytics provider or to an internal data warehouse to join it with other data that you have about your users. Note that if you only want impression-emitting decisions, you can use the deprecated ACTIVATE notification listener.
To track feature usage:
- Sign up for an analytics provider of your choice (for example, Segment)
- Set up a DECISION notification listener.
- Follow your analytics provider's documentation and send events from within the decision listener callback
Steps 1 and 3 aren't covered in this documentation. However, the DECISION notification listener is covered below.
Parameters
The following tables show details about the information provided to the listener when it is triggered, and about the parameters for the DECISION notification listener.
type parameter
The type parameter populates with different values depending on which method triggers the DECISION notification listener.
decision info parameter
The decision info parameter populates depending on the decision type provided to the notification when triggered.
Examples
For example code, see the notification listener topic in your SDK language.
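As a rough illustration (not taken from this page; the import paths and callback signature shown are assumptions based on the Python SDK), registering a DECISION listener could look like this:

from optimizely import optimizely
from optimizely.helpers import enums

optimizely_client = optimizely.Optimizely(datafile)  # datafile obtained elsewhere

def on_decision(decision_type, user_id, attributes, decision_info):
    # Forward the decision to your analytics provider or data warehouse here.
    print(decision_type, user_id, decision_info)

optimizely_client.notification_center.add_notification_listener(
    enums.NotificationTypes.DECISION, on_decision
)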
| https://docs.developers.optimizely.com/full-stack/docs/decision-notification-listener | 2022-06-25T13:09:41 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.developers.optimizely.com
- 23 Mar 2022
- 4 Minutes to read
- DarkLight
Feed Replication & Mirroring Overview
- Updated on 23 Mar 2022
- 4 Minutes to read
- DarkLight
ProGet's replication features lets you synchronize packages, containers, and assets across multiple ProGet instances. This can help implement a variety of scenarios and complex network architectures, including:
- Simplifying disaster recovery
- Easily distributing content for edge computing
- Multisite development & federated development teams
Configuration
A "replication" requires at least two instances of ProGet Enterprise Edition, and at least one feed of the same type across those instances. However, you can configure a replication with multiple feeds, and you can also configure multiple replications on a single instance. This allows you to design complex content distribution schemes, such as multi-hub private content delivery network
Communication Mode
At least one instance is configured with "outgoing" communication (similar to how a web browser accesses a web server), and the other with "incoming" communication (like how a web server responds to a web browser).
Prior to ProGet v6, incoming replication configurations are called replication servers and outgoing replication configurations are called replication clients.
Behind the scenes, incoming communication is handled by the Replication API endpoint that's hosted on the ProGet Web Application, just like all other API endpoints. This means that it's accessed using HTTP/S in the same manner your browser and other client tools interact with ProGet.
Outgoing communication is handled by the ProGet Service application, which communicates with the other instance's Replication API endpoint. The "outgoing" instance is where most of the work is done, and is where you'll find detailed replication logs.
Replication Mode
Regardless of the communication mode you choose, you can configure how content (packages, containers, and assets) are synchronized across ProGet instances.
Each instance must be configured in one of these way:
- Mirror Content; Push and pull packages so that the feed is in sync. The other instance should be configured to Mirror Content
- Pull Content from other Instance; Download missing content from the other instance and delete content that was deleted from that instance. The other instance should be configured to Push Content.
- Push Content to other Instance; Upload missing content to the other instance and delete content on that instance that was deleted on this instance. The other instance should be configured to Pull Content.
You can also select "Custom/Advanced", which allows for more control over what gets replicated.
Multiple Replications
Instead of selecting multiple feeds in a single replication, you can configure multiple replications on a single instance. This is common to do when distributing content for edge computing, but there are other reasons you may wish to do this.
- Use different sync tokens, so that you don’t have to share them for all feeds
- Replicate content from multiple other instances in outgoing mode
- Different replication modes on different feeds, like having some be Pull and others be Push
Whatever the reason, be cautious when configuring different replications for the same feed. For example, if you pull packages from and delete content from two different instances, you may find unexpected results.
Best-Practices
Based on usage and support tickets we have encountered in the past, we recommend the following guidelines:
- DO: ensure all ProGet installations involved in replication are running the exact same version of ProGet, or at the very least, the same minor version (e.g. 5.2.x)
- DO NOT: configure replication to another feed in the same instance of ProGet
- DO NOT: configure two instances to be both incoming and outgoing replication feeds pointing to each other
- DO NOT: use replication for availability purposes if you are already using a package store that provides its own replication, for example if you are using the AWS package store with cross-region replication
- DO NOT: configure an incoming replication feed to pull changes and then distribute the sync token to untrusted parties as that will effectively grant full access to the contents of the feed, which could include poisoned packages, malware, etc.
Troubleshooting
Perhaps the most common issue related to replication is that ProGet Enterprise (or trial) is required for each server cluster involved in the replication, otherwise an error will be generated and replication will not occur.
Replication Status & Overview
To view the list of feeds that have replication configured, visit the Replication Overview page.
Logging
Outgoing Replication Feeds
The logs for outgoing replication feeds can be found on the Executions page. To view them, visit the
Administration >
Executions page, then filter by "Feed Replication".
Incoming Replication Feeds
The logs for incoming replication feeds will be found in the Diagnostic Center. Visit
Administration >
Diagnostic Center >
View Log Messages and filter the category by "Feed Replication" to view any error messages.
Note that, by default, only errors from incoming replication are logged.
Outgoing & Incoming Options Missing
The outgoing/incoming (client/server) replication distinction was added in ProGet v5.2, and previous versions only allowed one or the other to be configured, and not both. The options "replicate from external feed" and "allow other feeds to replicate with this one" are equivalent to client and server respectively.
Docker
Docker feed replication is supported starting in ProGet 5.2.5, and has the following limitations:
- Publish date and download count are not synchronized.
- All blobs and manifests are synchronized, so retention rules need to be set up on both ends of synchronization.
- Deletion records ("tombstones") are not recorded in Docker feeds, so images can only be added by feed replication, never removed; a retention rule must be configured to delete untagged images. | https://docs.inedo.com/docs/proget-advanced-feed-replication | 2022-06-25T13:50:11 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.inedo.com |
J2.5: Understanding Categories and Articles
From Joomla! Documentation
What are categories and articles, and what are they used for in Joomla! sites?
Articles
Let's start with some definitions. In Joomla!, an Article is a piece of content, typically made up of text and images, that is displayed on the site.
Categories
A Category is a way to organize Articles into logical groups.
Why use categories?
There are two main reasons you might want to organize your Articles in categories.
List and Blog Layouts.
Organizing Articles in the Article Manager.
Other Information about Categories
Categories vs. Menu Organization
It is important to understand that the structure of Categories (for example, Organisms -> Pets -> Animals -> Dogs) has nothing to do with the structure of the menus on your site. For example, your site could have one menu level or six menu levels.
Other Types of Categories. | https://docs.joomla.org/Understanding_categories_and_articles/pt | 2022-06-25T13:48:24 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.joomla.org
Analysis
MASS™ can be readily used to perform masonry analysis rather than design. As a result, the program can be used as an exceptional learning and teaching tool.
To analyse an assemblage, users are required to follow the same steps as the ones followed to design an assemblage (refer to Section 2.7). There are a few minor differences users may wish to take note of and these are discussed in the following sections.
Assemblage Configuration (Analysis)
In the assemblage configuration design step, users are required to de-select all masonry unit sizes and masonry unit strengths except for the specific masonry unit users wish to use.
Moment and Deflection Design (Analysis)
It is necessary to complete the moment design step before going back to de-select the moment steel properties not required. Users can specify the location of any reinforcement using the ‘Minimum clearances’ box. For more information on the placement of reinforcing bars, refer to Section 3.2.3 (Beams), Section 4.2.3 (Walls) and Section 5.2.3 (Shear Walls) of this user manual.
Shear Design (Analysis)
It is necessary to complete the shear design step before going back to de-select the shear steel properties not required.
Bearing Design (Analysis)
It is necessary to complete the bearing design step before going back to alter any bearing plate properties. Note, however, that the bearing length is specified during the assemblage configuration design step, and cannot be altered during the bearing design step.
Continue Reading: Printing
| https://docs.masonryanalysisstructuralsystems.com/getting-to-know-mass/analysis/ | 2022-06-25T15:02:58 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.masonryanalysisstructuralsystems.com
Alteryx Forecasting Tool
v3.0.0 [2020-11-12] (current stable release)
- The third version of the TIM Forecasting Tool
v2.0.1 [2020-10-06]
New features
- possibility to run more forecasts in parallel by specifying the "Group By" column in the configuration pane
- maximum model complexity setting in advanced settings
- changed timestamp format in prediction
Bug fixes
- running the tool with TIM v4.5 and backtesting turned on fails because of a changed logic behind the aggregated forecasts response
v1.2.0 [2020-05-26]
New features
- upper and lower boundary of prediction interval in the "P" output
- backtesting columns in the "P" output
- new advanced settings GUI layout
- advanced settings:
- configurable backtesting length
- configurable confidence for prediction interval
- new dictionaries: Trend, Fourier, Month, Exact day of week
- quality parameter
- model
- new dictionaries: Identity and Simple Moving Average
Bug fixes
- fixed a bug that occurred when the format of the timestamp column is "yyyy" (yearly data) or "yyyy-mm" (monthly data) | https://docs.tangent.works/docs/Release-Notes/Alteryx-Forecasting-Tool | 2022-06-25T14:23:19 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.tangent.works
Register an external database
Create the external RDBMS instance and database and then register it with Cloudbreak.
Once you have the database instance running, you can:
- Register it in Cloudbreak web UI or CLI.
- Once registered, the database will now show up in the list of available databases when creating a cluster under advanced External Sources > Configure Databases. You can use it with one or more clusters.
Prerequisites
If you are planning to use an external MySQL or Oracle database, you must download the JDBC connector’s JAR file and place it in a location available to the cluster host on which Ambari is installed. The steps below require that you provide the URL to the JDBC connector’s JAR file.
Steps
- From the navigation pane, select External Sources > Database Configurations.
- Select Register Database Configuration.
- Provide the following information:
- Click Test Connection to validate and test the RDS connection information.
- Once your settings are validated and working, click REGISTER to save the configuration.
- The database will now show up on the list of available databases when creating a cluster under advanced External Sources > Configure Databases. You can select it and click Attach each time you would like to use it for a cluster: | https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.1/advanced-cluster-options/content/cb_register-an-external-database.html | 2019-07-15T21:05:41 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.hortonworks.com |
Set default Cloudbreak credential
If using multiple Cloudbreak credentials, you can select one credential and use it as default for creating clusters. This default credential will be pre-selected in the create cluster wizard.
Steps
- In the Cloudbreak web UI, select Credentials from the navigation pane.
- Click Set as default next to the credential that you would like to set as default.
- Click Yes to confirm.
Alternatively, you can perform the same steps from the Settings page. | https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.1/configure/content/cb_set-a-default-cloudbreak-credential.html | 2019-07-15T21:09:05 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.hortonworks.com |
DC/OS 1.11.2 was released on May 18, 2018.
DC/OS 1.11.2 includes the following:
- Apache Mesos 1.5.1-aedbcfd change log.
- Marathon 1.6.392 change log.
- Metronome 0.4.2 change log.
Issues Fixed in DC/OS 1.11.2Issues Fixed in DC/OS 1.11.2
- COPS-3195 - Mesos: Fixed an issue where the authentication token refresh would not be performed. Enterprise
- DCOS-14199 - Consolidated the Exhibitor bootstrapping shortcut by atomically reading and writing the ZooKeeper PID file.
- DCOS-20514 - Added licensing information to the diagnostics bundle. Enterprise
- DCOS-20568 - Fixed diagnostics bundle creation bug regarding insufficient service account permissions. Enterprise
- DCOS-21596 - If a local user account matches an LDAP username that exists within an LDAP group, the local user account is now automatically added to the LDAP group. Enterprise
- DCOS-21611 - The IP detect script and fault domain detect script can be changed with a config upgrade.
- DCOS-22128 - Fixed an issue in the Service view of the DC/OS UI when a cluster has pods in which not every container mounts a volume. Enterprise
- DCOS-22041 - Admin Router: Fixed a race condition in the permission data cache. Enterprise
- DCOS-22133 - DC/OS IAM: Fixed a rare case where the database bootstrap transaction would not insert some data. Enterprise
- DCOS_OSS-2317 - Consolidated pkgpanda’s package download method.
- DCOS_OSS-2335 - Increased the Mesos executor re-registration timeout to consolidate an agent failover scenario.
- DCOS_OSS-2360 - DC/OS Metrics: metric names are sanitized for better compatibility with Prometheus.
- DCOS_OSS-2378 - DC/OS Net: Improved stability of distribution protocol over TLS.
- DC/OS UI: Incorporated multiple fixes and improvements.
Notable Changes in DC/OS 1.11.2Notable Changes in DC/OS 1.11.2
- MARATHON-8090 - Reverted the Marathon configuration change for GPU resources which was introduced in 1.11.1 release.
- QUALITY-2006 - RHEL 7.4 with Docker EE 17.06.2 is supported.
- QUALITY-2007 - RHEL 7.4 with Docker 17.12.1-ce is supported.
- QUALITY-2057 - CentOS 7.4 with Docker EE 17.06.2 is supported.
Security Enhancements in DC/OS 1.11.2Security Enhancements in DC/OS 1.11.2
- DCOS-21465 - Updated python3-saml for CVE-2017-11427. Enterprise
- DCOS-21958 - Admin Router on master nodes no longer supports the older TLS 1.1 protocol and 3DES encryption algorithm by default. Enterprise
Note:
- New Docker versions are supported on RHEL 7. | https://docs.mesosphere.com/1.11/release-notes/1.11.2/ | 2019-07-15T20:19:30 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.mesosphere.com |
Understand views (lists)
Applies to Dynamics 365 for Customer Engagement apps version 9.x (on-premises)
With Dynamics 365 for Customer Engagement apps, use views to define how a list of records for a specific entity is displayed in the application. A view defines:
- The columns to display
- How wide each column should be
- How the list of records should be sorted by default
- What default filters should be applied to restrict which records will appear in the list
A drop-down list of views is frequently displayed in the application so that people have options for different views of entity data.
The records that are visible in individual views are displayed in a list, sometimes called a grid, which frequently provides options so that people can change the default sorting, column widths, and filters to more easily see the data that’s important to them. Views also define the data source for charts that are used in the application.
Types of Views
There are three types of views: personal, system, and public.
This topic is about how system administrators and system customizers work with system and public views. For more information about personal views, see Create, edit, or save an Advanced Find search.
Personal views
You and anyone else who has at least User level access to actions for the Saved View entity can also create personal views. As system administrator, you can modify the access level for each action in the security role to control the depth to which people can create, read, write, delete, assign, or share personal views.
Personal views are owned by individuals and, because of their default User level access, they are visible only to that person or anyone else they choose to share their personal views with. You can create personal views by saving a query that you define by using Advanced Find or by using the Save Filters as New Views and Save Filters to Current View options in the list of views. These views are typically included at the bottom in lists of system or public views that are available in the application. While you can create a new personal view based on a system or public view, you cannot create a system or public view based on a personal view.
System views
As a system administrator or system customizer, you can edit system views. System views are special views the application depends on, which exist for system entities or are automatically created when you create custom entities. These views have specific purposes and some additional capabilities.
These views are not shown in the view selector and you can’t use them in sublists in a form or as a list in a dashboard. You cannot delete or deactivate these views. More information: Remove views
System views are owned by the organization so that everyone can see them. For example, everyone has organization-level access to read records for the View (savedquery) entity. These views are associated with specific entities and are visible within the solution explorer. You can include these views in solutions because they are associated with the entity.
Public views
Public views are general purpose views that you can customize as you see fit. These views are available in the view selector and you can use them in sub-grids in a form or as a list in a dashboard. Some public views exist by default for system entities and for any custom entity. For example, when you create a new custom entity, it will have the following combination of public and system views.
You can create custom public views. You can delete any custom public views you create in an unmanaged solution. You cannot delete any system-defined public views. Custom public views added by importing a managed solution may have managed properties set that can prevent them from being deleted, except by uninstalling the managed solution.
Create or edit views
You can create or edit views in two ways:
- Using the App Designer: If you’re creating views for the first time, you may want to start with the App Designer, which provides a simple and intuitive UI with drag-and-drop capabilities. More information: Create and edit public or system views by using the app designer.
- Using the Solution Explorer: If you’re already experienced with Dynamics 365 for Customer Engagement, you may want to use the Solution Explorer. More information: Create or edit a view.
Customize views
As a system administrator and system customizer, you can customize the views through controls by making grids (lists) editable and compatible for Unified Interface. The following controls are used:
- Editable Grid: Allows users to do rich in-line editing directly from grids and sub-grids whether they’re using a web app, tablet, or phone. More information: Make grids (lists) editable using the Editable Grid custom control
- Read Only Grid: Provides users an optimal viewing and interaction experience for any screen size or orientation such as mobiles and tablets by using responsive design principles. More information: Specify properties for Unified Interface apps
See also
Create and design forms
| https://docs.microsoft.com/en-us/dynamics365/customer-engagement/customize/create-edit-views | 2019-07-15T21:10:53 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.microsoft.com
Release Overview
Bug Fixes
- We resolved an issue for phones set to a right-to-left language. The issue caused the bloom to appear in the wrong place; now, it appears in the right place!
- We resolved an issue that prevented a user from performing a search when the app goes into the background. This issue was impacting applications that utilize the SlyceLensView. | https://docs.slyce.it/hc/en-us/articles/360017702511-Android-5-1-6-Release-Notes | 2019-07-15T19:55:08 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.slyce.it
Menu Hierarchy ¶
Technically, the file hierarchy on disk does not have to affect the menu hierarchy in any way. But, as a general convention, we follow the common practice that the file hierarchy reflects the menu hierarchy.
Multi-File Solution ¶
So, the menu structure for the files described under Directories and File Names
Documentation/
 |
 |--> Index.rst
 |
 |--> Topic1/
        |
        |-> Index.rst
        |-> Subtopic1.rst
        |-> Subtopic2.rst
would look something like this:
Documentation/Index.rst:
.. toctree::
   :hidden:

   Topic1/Index
Documentation/Topic1/Index.rst
.. toctree::
   :hidden:

   Subtopic1
   Subtopic2
See Example Toctree to see how this is rendered.
Single-File Solution ¶
What you can also do, is put everything into one file, e.g. Index.rst contains:
=======
Chapter
=======

Topic 1
=======

Subtopic 1
----------

some text

Subtopic 2
----------

some text
The rendered result will look the same as the multi-file example above, meaning the menu hierarchy and the rendered headings on the page.
Tip
Whatever variant you choose, it depends what is already common practice in the manual you are working on and what is easiest to manage. | https://docs.typo3.org/m/typo3/docs-how-to-document/master/en-us/GeneralConventions/MenuHierarchy.html | 2019-07-15T21:02:57 | CC-MAIN-2019-30 | 1563195524111.50 | [array(['../_images/toctree.png', '../_images/toctree.png'], dtype=object)] | docs.typo3.org |
All the editing options are also available via the HTML Source Editor toolbar.
You can customize the HTML Editor as you want. To do so, go to the Options Menu:
Main Menu ⇒ Tools ⇒ Options… ⇒ Text Editor ⇒ HTML / HTML (Web Forms).
HTML Code Assistance
In the HTML editor you can collapse the code between opening and closing tags, which gives you the ability to selectively hide the parts of the code that you don't need right now.
| https://docs.welkinsuite.com/?id=windows:how_does_it_work:how_to_work_with_built-in_editor:html_editor:start | 2019-07-15T20:56:52 | CC-MAIN-2019-30 | 1563195524111.50 | [array(['/lib/exe/fetch.php?media=windows:how_does_it_work:how_to_work_with_built-in_editor:html_editor:html-options.png',
'Customize your HTML Editor Customize your HTML Editor'],
dtype=object)
array(['/lib/exe/fetch.php?media=windows:how_does_it_work:how_to_work_with_built-in_editor:html_editor:html-editor.png',
'HTML Editor HTML Editor'], dtype=object) ] | docs.welkinsuite.com |
Components of a bundle
Your main item during developing with Lightning technology is a bundle. Any bundle contains a component or an app and all its related resources.
The Welkin Suite supports following Lightning items as a part of a bundle:
In addition, you are able to work with Lightning Events inside the IDE:
- Component events (
.evt) — they are handled by the component itself or a component that instantiates or contains the component,
- Application events (
.evt) — they are handled by all components that are listening to the event.
You can easily define Lightning Interfaces in The Welkin Suite as well as implement them in your Lightning Components without switching to any other tools. The Lightning Component framework supports the concept of interfaces that define a component's shape by defining its attributes.
The Lightning's named Tokens (
.tokens) allow you to capture the common values once and then reuse them through the whole org's Lightning Components and Applications.
| https://docs.welkinsuite.com/?id=windows:how_does_it_work:tools_in_tws:lightning:components_of_bundle:start | 2019-07-15T20:32:23 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.welkinsuite.com
Configure IP Whitelist
Now that you have registered with Dapi and created your app, it's time to configure your IP whitelist. Dapi will only allow requests to be sent from the IPs specified in the Dashboard.
Note
Please make sure to add your system IP address to Allowed IPs. Otherwise you will not be able to use the API.
Use a service like whatismyipaddress.com to help you determine your IP address.
- Select your new application from the Dashboard
- Add any IP that will be sending requests to the Dapi API. Only IP addresses that you specify here will be authorized to interact with Dapi.
| https://docs.dapi.com/docs/configure-ip-whitelist | 2022-08-07T21:13:22 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.dapi.com
What’s New in Mule 4
Mule 4’s simplified language and reduced management complexity enables you to speed up the on ramping process and deliver applications faster.
If you are familiar with the concepts of the previous versions of the runtime, check the sections below to learn what’s changing in Mule Runtime v4.0.
Simplified Event and Message Model
Mule 4 includes a simplified Mule Event and Message model. In Mule 4, Flows are triggered by an Event. An Event has a Message and variables associated with it. A Message is composed of a payload and its attributes (metadata, such as file size). Variables hold arbitrary information, such as Messages, payload data, or attributes. This simplified message model makes it easier to work with data in a consistent way across connectors without information being overwritten.
DataWeave 2.0: The New Mule Expression Language
In Mule 3, users had to contend with learning both the Mule Expression Language (MEL) and DataWeave. MEL forced users to convert their payloads from binary data, such as XML or JSON documents, into Java objects, so they could write expressions which access that data, for example when routing to a specific location.
In Mule 4, DataWeave is now the default expression language. Combined with the built-in streaming capabilities, this simplifies many common tasks:
Events can be routed based on payload data, without first needing to convert them to Java objects.
Binary data can easily be queried from an expression anywhere in your flow, for example, when logging.
Larger than memory access to data happens transparently.
DataWeave 2.0 also features many improvements:
Language simplifications. Everything is now a function.
DataWeave scripts can now be packaged and reused, via the new imports and modules features.
Support for multi-line comments.
Support for calling static Java methods directly from DataWeave.
For details, see the links to DataWeave documentation.
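For a flavor of the syntax, here is a small illustrative sketch (the payload shape is invented for this example and is not part of these release notes):

%dw 2.0
output application/json
---
payload.items filter ($.price > 10) map { name: $.name, price: $.price }

This reads the incoming payload directly, keeps only the items above a price threshold, and reshapes them, without first converting anything to Java objects.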
Streaming Management
Mule 4 automatically handles data streams for users. This greatly simplifies working with data in the runtime because:
Data can be read multiple times or accessed randomly using the DataWeave expression language without side effects.
Data can be sent to multiple places, without the user caching that data in memory first.
Users can transparently access larger than memory data.
Users can customize whether data is stored on disk using streaming strategies.
Non-Blocking, Self-Tuning Runtime
Mule 4 includes a new execution engine that is based on a non-blocking runtime. This is a task-oriented execution model allowing you to take advantage of non-blocking IO calls and avoid performance problems due to incorrect processing strategy configurations.
As a result of this new engine, you no longer have to configure exchange patterns. Instead, flows always function synchronously. If you wish to achieve asynchronous type patterns such as fire and forget, you can use the
<async> processor.
Each Mule event processor can now inform the runtime if it is a CPU intensive, CPU light, or IO intensive operation. This helps the runtime to self-tune for different workloads dynamically, removing the need for you to manage thread pools manually. As a result, Mule 4 removes complex tuning requirements to achieve optimum performance.
Enrich Events Directly from Connectors/Modules
For any given module operation, it is now possible to define a target (or target variable), which saves the result in a variable:
<http:request target="myVar" .../>
This saves the Mule message in the
myVar variable to be accessed later. This reduces flow complexity by removing the need for an enricher.
You can also control what is stored in the variable using the targetValue attribute. For example, if you wanted to only store the response code from an HTTP request, you could do the following:
<http:request target="myVar" targetValue="#[attributes.statusCode]" .../>
Simplified Connectors and Modules Experience
Mule 4 introduces more consistency around modules and connectors, creating one unified experience for how to interact with Mule components.
Transports have been completely replaced by Mule Modules. Modules and connectors can be created and managed using the Mule SDK, which provides a single way to extend Mule.
Simplified Error Handling and New Try Scope
Mule 4 includes a simplified way to manage errors. Instead of dealing with Java exceptions directly, there is now an Error concept built directly into Mule. Furthermore, Mule Modules and Connectors declare what Errors may occur for any given operation. This makes it easy for you to discover possible errors at design time and catch them.
Exception strategies are replaced by error handlers allowing you to catch errors based on both type and arbitrary expressions.
You can configure your error handlers to catch errors so that the flow can keep processing, or they can be re-propagated.
There is also a new Try Scope, which allows you to catch errors in the middle of a flow without having to create a new flow, specifically to catch that error.
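A minimal sketch of the Try scope is shown below (the flow name, request path, and error type are placeholders chosen for illustration, not taken from a real project):

<flow name="orderFlow">
  <try>
    <http:request method="GET" path="/orders" config-
    <error-handler>
      <on-error-continue type="HTTP:CONNECTIVITY">
        <logger level="WARN" message="Order service unavailable, continuing"/>
      </on-error-continue>
    </error-handler>
  </try>
  <logger level="INFO" message="Continuing with the rest of the flow"/>
</flow>

If the request fails with a connectivity error, the on-error-continue handler logs a warning and the rest of the flow still runs.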
Batch is easier to use and is now a scope
In Mule 3, batch jobs were top-level concerns, similar to flows. We've simplified this so batch is now a scope that can live inside a flow, making it easier to understand, invoke dynamically, and interact with other Mule components. There is also no longer a special set of variables (i.e. recordVars) for batch. You can now just use flow variables directly; this reduces complexity and makes it easier to learn how to write batch jobs.
Improved Upgradeability with Classloader Isolation
Mule 4 loads each Module in its own classloader, isolating the modules from internal Mule code making runtime upgrades a lot simpler by protecting you from changes by the runtime or connectors:
Connectors are now distributed outside the runtime, making it possible to:
Get connector enhancements and fixes without having to upgrade your runtime.
Upgrade your runtime version without breaking compatibility with other modules.
There is now a well-defined Mule API, so you can be sure you’re using supported APIs.
There is classloader isolation between your application, the runtime, and connectors, so that any library changes that happen internally will not affect your app.
Improved support for configuration
Mule 4 features an easier way to configure environment specific properties, which is Spring-optional. With it, you can now define application-specific properties in a YAML file inside your application. These will be the default properties for your application and you can override them using system properties. In the future, we’ll also be using this metadata to provide an improved configuration management UI from runtime manager.
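As a brief sketch (the file name, keys, and values below are made up for illustration), a YAML file bundled with the application might contain:

http:
  host: "0.0.0.0"
  port: "8081"

and the Mule configuration registers it and references the values with the ${...} syntax:

<configuration-properties file="config.yaml"/>
<http:listener-config name="listenerConfig">
  <http:listener-connection host="${http.host}" port="${http.port}"/>
</http:listener-config>

System properties with the same keys override these defaults, as described above.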
Connectors and Modules Updates
Database Connector
The database connector has undergone minor updates:
Bulk operations have been separated so that operations do not change behavior depending on the received payload
There’s single experience for executing static and dynamic queries.
DataWeave transformations can be embedded inside the insert/update operations so that you can construct the datasets you want to send to the DB without having a side effect on the message or using enrichers
The connector will use Mule’s new streaming framework to handle large data sets.
File and FTP Connectors
The File and FTP connectors have been improved so that they are operation based and share the same set of operations. This enables many new capabilities:
The ability to read files or fully list directories’ contents on demand, unlike the old transport (which only provided a polling inbound endpoint)
Top level support for common file system operations such as copying, moving, renaming, deleting, creating directories, and more
Support for locking files on the file system level
Advanced file matching functionality
Support for local files, FTP, SFTP and FTPS
JMS Connector
The JMS connector has been updated to utilize the new, simplified connector experience. In addition to the JMS listener and sender, you can also consume messages in the middle of a flow using the JMS consume operation.
Scripting Module
The scripting module is now updated for Mule 4, enabling you to now embed your Groovy, Ruby, Python, or JavaScript scripts inside Mule flows. You can inject data from the Mule message into your code using the new parameters configuration attribute.
<script:execute engine="groovy">
  <script:code>
    return "$payload $prop1 $prop2"
  </script:code>
  <script:parameters>
    #[{prop1: "Received", prop2: "A-OK"}]
  </script:parameters>
</script:execute>
Spring module
Mule 4 decouples the Mule internals from Spring, ensuring that users don’t need to know Spring to learn Mule and enables Spring users to select which version of spring they run. To use Spring beans, now you add the Spring module to your application, and simply import your Spring bean files.
<spring:config name="springConfig" files="beans.xml"/>
VM Connector
The VM connector has been updated to utilize the new, simplified connector experience. In addition to the VM listener and sender, you can also consume messages in the middle of a flow using the VM consume operation
Mule SDK
The Mule SDK is a successor to the Anypoint Connector Devkit. It enables developers to easily extend Mule and create new Mule modules which can be shared in Exchange. Unlike Mule 3, where there were multiple ways to create extensions, the Mule 4 SDK provides a single way to extend Mule, assuring consistency and upgradeability of components. It was used to build all Mule 4 modules and connectors.
While similar to DevKit in many respects, it features many improvements:
The SDK does not generate code, which enables extensions to get new runtime features without having to be re-released
Transactions support
Request-Response message sources support
Dynamic configurations
Router support
Non Blocking operations
Classloading isolation | https://docs.mulesoft.com/mule-runtime/4.2/mule-runtime-updates | 2020-05-25T07:20:01 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.mulesoft.com |
outputcsv
Description
Saves search results to the specified CSV file on the local search-head in the
$SPLUNK_HOME/var/run/splunk/csv directory. Updates to
$SPLUNK_HOME/var/run/*.csv using outputcsv are not replicated across the cluster.
Syntax
outputcsv [append=<bool>] [create_empty=<bool>] [dispatch=<bool>] [usexml=<bool>] [singlefile=<bool>] [<filename>]
Optional arguments
- append
- Syntax: append=<bool>
- Description: If 'append' is true, attempts to append to an existing csv file if it exists or create a file if necessary. If there is an existing file that has a csv header already, only emits the fields that are referenced by that header. .gz files cannot be appended to.
- filename
- Syntax: <filename>
- Description: If no filename is specified, rewrites the contents of each result as a CSV row into the "_xml" field. Otherwise writes into a file and appends ".csv" to the filename if the filename has no existing extension.
- singlefile
- Syntax: singlefile=<bool>
- Description: If singlefile is set to true and output spans multiple files, collapses it into a single file.
- Default: true
- usexml
- Syntax: usexml=<bool>
- Description: If there is no filename, specifies whether or not to encode the CSV output into XML. This option should not be specified when invoking the outputcsv command from Splunk Web.
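Example (a simple sketch; the search itself is arbitrary): write the first 100 splunkd events to a file named mysearch.csv in $SPLUNK_HOME/var/run/splunk/csv:

index=_internal sourcetype=splunkd | head 100 | outputcsv mysearch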
| https://docs.splunk.com/Documentation/Splunk/6.1.1/SearchReference/Outputcsv | 2020-05-25T09:19:43 | CC-MAIN-2020-24 | 1590347388012.14 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com
Difference between revisions of "OP Snippets"
Revision as of 20:21, 8 March 2019
OP Snippets is a set of many examples of Operators that can be copied/pasted into your projects.
The OP Snippets examples can be launched from
- right-click on an existing OP in your network, and select Operator Snippets... if it is selectable
- right-click on an operator name in the OP Create dialog and select Operator Snippets...
- the Help menu - > Operator Snippets.
OP Snippets is a set of numerous examples of TouchDesigner operators, which you access via the Help menu. These can be copied/pasted into your projects. | https://docs.derivative.ca/index.php?title=OP_Snippets&diff=prev&oldid=14952 | 2020-05-25T08:21:49 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.derivative.ca |
Monitoring HTTP and network requests gives you insight into how your app is performing and provides data that can help you improve your app. To find HTTP and network requests and errors, you can view them in the New Relic Mobile UI or query
MobileRequest and
MobileRequestError events in Insights.
You do not need an Insights Pro subscription to query these events. However, the amount of data you have access to varies based on the data retention in your Mobile or Insights subscription. To enable network request events:
- Enable for Android
Place the feature flag before the start call in the onCreate method of the MainActivity class.
NewRelic.enableFeature(FeatureFlag.NetworkRequests); NewRelic.withApplicationToken("NEW_RELIC_TOKEN").start(this.getApplication());
- Enable for iOS
Place the feature flag before the start call, which should be the first line of the didFinishLaunchingWithOptions method.
Objective-C
[NewRelic enableFeatures:NRFeatureFlag_NetworkRequestEvents] [NewRelic startWithApplicationToken:@"NEW_RELIC_TOKEN"]
Swift
NewRelic.enableFeatures(NRMAFeatureFlags.NRFeatureFlag_NetworkRequestEvents) NewRelic.start(withApplicationToken:"NEW_RELIC_TOKEN")
Query HTTP and network requests in Insights
To create custom dashboards for HTTP and network requests in New Relic Insights, run queries using the following events and attributes:
MobileRequestError events and attributes
MobileRequest events and attributes
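For example, a query along the following lines charts request errors by domain (a sketch only; the requestDomain attribute used for faceting is an assumption and not taken from this page):

SELECT count(*) FROM MobileRequestError FACET requestDomain SINCE 1 day ago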
View HTTP and network requests in Mobile
To explore
MobileRequest and
MobileRequestError data in the UI, go to the following pages in New Relic Mobile: | https://docs.newrelic.com/docs/mobile-monitoring/mobile-monitoring-ui/network-pages/analyze-network-requests-using-mobilerequest-event-data | 2020-05-25T08:55:45 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.newrelic.com |
Partitioning with fdisk is a little different from the past, since fdisk now supports different partitioning schemes. If you run 'fdisk /dev/sda' and press 'm' for help, then you can see this at the bottom:
Create a new label
g   create a new empty GPT partition table
G   create a new empty SGI (IRIX) partition table
o   create a new empty DOS partition table
s   create a new empty Sun partition table
Slackware uses LILO which writes to the MBR, so you need to configure the disk as DOS with MBR and then create at least two partitions for swap (type 82) and linux system (type 83) and set the bootable flag on it, just like in the good old, bad old days.
So if you have a 20 GB disk for example, run 'fdisk /dev/sda', type 'o' to create a DOS partition, type 'n' to make a new partition for +18 GB and again, for 2 GB, type 't' to change the 2 GB partition to 82 (Linux Swap), type 'a' to make the 2nd partition bootable and type 'w' to write it to disk.
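As a rough sketch of that sequence (sizes are just the 20 GB example above):

fdisk /dev/sda
  o        # create a new empty DOS partition table
  n        # new primary partition 1, size +18G (the Linux system, type 83)
  n        # new primary partition 2, the remaining ~2G
  t        # change partition 2's type to 82 (Linux swap)
  a        # toggle the bootable flag on the Linux system partition
  w        # write the table to disk and exit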
Deviate from the above, and LILO won't install. – Herman 2016/06/19 | https://docs.slackware.com/talk:slackware:install?rev=1466363833&mddo=print | 2020-05-25T08:04:10 | CC-MAIN-2020-24 | 1590347388012.14 | [array(['https://docs.slackware.com/lib/plugins/bookcreator/images/add.png',
None], dtype=object)
array(['https://docs.slackware.com/lib/plugins/bookcreator/images/del.png',
None], dtype=object) ] | docs.slackware.com |
The
sort filter sorts an array:
{% for user in users|sort %} ... {% endfor %}
Note
Internally, Twig uses the PHP asort function to maintain index association. It supports Traversable objects by transforming those to arrays.
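For example, sorting an inline array (a small sketch):

{% for name in ['banana', 'apple', 'cherry']|sort %} {{ name }} {% endfor %}

outputs the names in alphabetical order: apple, banana, cherry.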
© 2009–2018 by the Twig Team
Licensed under the three clause BSD license.
The Twig logo is © 2010–2018 Symfony | https://docs.w3cub.com/twig~2/filters/sort/ | 2020-05-25T08:19:39 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.w3cub.com |
Create a TargetServer
Create a TargetServer
Create a TargetServer in the specified environment. TargetServers are used to decouple TargetEndpoint HTTPTargetConnections from concrete URLs for backend services.
To do so, an HTTPConnection can be configured to use a LoadBalancer that lists one or more 'named' TargetServers. Using TargetServers, you can create an HTTPTargetConnection that calls a different backend server based on the environment where the API proxy is deployed. See also Load balancing across backend servers.
For example, instead of the following configuration:
<TargetEndpoint name="default"> <HTTPTargetConnection> <URL></URL> </HTTPTargetConnection> </TargetEndpoint>
You can reference a TargetServer as follows:
<TargetEndpoint name="default"> <HTTPTargetConnection> <LoadBalancer> <Server name="target1"/> </LoadBalancer> </HTTPTargetConnection> </TargetEndpoint>
You can then set up a TargetServer called target1.
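For illustration, the request body for such a call might look like the following sketch (the port and isEnabled values are placeholders; the host matches the example above):

{
  "name": "target1",
  "host": "1.mybackendservice.com",
  "port": 80,
  "isEnabled": true
}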
Note: Characters you can use in the name are restricted to: A-Z0-9._\-$ %.
Resource URL /organizations/{org_name}/environments/{env_name}/targetservers?) | https://apidocs.apigee.com/docs/management/apis/post/organizations/%7Borg_name%7D/environments/%7Benv_name%7D/targetservers | 2020-05-25T08:10:01 | CC-MAIN-2020-24 | 1590347388012.14 | [] | apidocs.apigee.com |