Dataset columns: content (string, 0 to 557k chars); url (string, 16 to 1.78k chars); timestamp (timestamp[ms]); dump (string, 9 to 15 chars); segment (string, 13 to 17 chars); image_urls (string, 2 to 55.5k chars); netloc (string, 7 to 77 chars)
SOLIDWORKS 2022 is fully supported with DriveWorks 19 SP2. Windows 11 is fully supported with DriveWorks 19 SP2. Taking a screenshot on devices that use the Safari browser on iOS 15 will crash Safari when 3D is displayed. Apple recently introduced this issue; see Screenshot on iOS 15 Devices for more information. DriveWorks 19 SP0 replaced the use of SQL Compact with SQLite as its underlying database for Individual Groups and exported reports. Individual Groups created in previous versions are automatically backed up when converted. See DriveWorks Group Upgrades (V19) for extension and location information. DriveWorks v10 introduced the Data Table control as a successor to the Data Grid control. It offers the same functionality with enhanced customization of behavior and appearance. To allow us to continue to take advantage of modern technology to deliver improved functionality and the best possible user experience across all of our products, we have ended support for the Data Grid control. Please implement the Data Table control to replace any Data Grid control currently in use.
https://docs.driveworkspro.com/Topic/19SP2Information
2022-06-25T03:59:23
CC-MAIN-2022-27
1656103034170.1
[]
docs.driveworkspro.com
Setting up a DriveWorks implementation requires the DriveWorks Pro Administrator module and will always involve the following:
- Creating the User Forms that will be used to capture the requirements of each product that will be automated. (See Form Design)
- Applying rules for the display of data and the behavior of the Controls on the User Forms.
Depending on the output requirements, the setup could also involve:
- Capturing and applying rules to 3D CAD models and 2D drawings that will be used as templates for the CAD output. (See SOLIDWORKS)
- Capturing and applying rules to documents that will be used as templates for any documentation outputs. (See Documents)
- Connecting to internal company systems and databases to extract data from or write data to. (See Info: Using External Data In DriveWorks)
Depending on the visual appearance of the user forms or how the implementation is to be run, the setup could also involve:
- Using Internet Explorer to configure a standard DriveWorks Theme for publishing to the web. (See Project Image, Name and Description and DriveWorks Live Skins)
- Image creation/editing.
https://docs.driveworkspro.com/Topic/HowToImplementationGuide
2022-06-25T05:23:35
CC-MAIN-2022-27
1656103034170.1
[]
docs.driveworkspro.com
Cynet is a cybersecurity asset management platform. This topic describes how to send system logs from your Cynet platform to Logz.io. Before you begin, you'll need:
- An active Cynet license
- Cynet login credentials
- An active account with Logz.io
- Filebeat installed on a dedicated machine (acting as a syslog server)
- Root privileges on your machines
Configure Cynet to send syslog notifications to a remote syslog server running Filebeat:
- On your Cynet web interface, go to Settings > Advanced.
- Select the box beside Send Audit Records to SIEM.
- Go to Configuration > SIEM settings and enable the following configuration: UDP; IP - the public IP address of your syslog server; Port - the port that is configured on your syslog server. We use 9000 in this example, but you can change it to your preference.
- Press Add. The added IP and port will appear on the screen.
These instructions are based on UDP. If you want to use TCP, make sure your syslog server configuration is aligned with this. The relevant Filebeat input configuration looks similar to the following (a UDP input listening on port 9000 that ships events with type cynet):

```yaml
filebeat.inputs:
- type: udp
  host: "0.0.0.0:9000"
  fields:
    logzio_codec: json
    # Your Logz.io account token.
    token: <<LOG-SHIPPING-TOKEN>>
    type: cynet
```

- 9000 is the port we suggest. If you use a different port, replace the default values with your parameters.
- Search for type:cynet to see the incoming logs. If you still don't see your data, see log shipping troubleshooting.
https://docs.logz.io/shipping/security-sources/cynet.html
2022-06-25T05:38:26
CC-MAIN-2022-27
1656103034170.1
[]
docs.logz.io
Integration Services (SSIS) Variables. Applies to: SQL Server (all supported versions), SSIS Integration Runtime in Azure Data Factory. Integration Services supports system and user-defined variables. The value of a user-defined variable can be a literal or an expression. The value of a variable can't be null, and variables have default values. The ValueType property (whose value appears in the Data type column in the Variables window) specifies the data type of the variable value. For scenarios for using variables, see Use Integration Services (SSIS) Expressions and Parameters and Return Codes.
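For example, a user-defined string variable can take its value from an expression along these lines (User::FolderPath and User::FileName are hypothetical variable names used only for illustration):

```
@[User::FolderPath] + "\\" + @[User::FileName]
```

At runtime the expression is evaluated and the concatenated path becomes the variable's value.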
https://docs.microsoft.com/en-us/sql/integration-services/integration-services-ssis-variables?redirectedfrom=MSDN&view=sql-server-ver16
2022-06-25T06:20:47
CC-MAIN-2022-27
1656103034170.1
[]
docs.microsoft.com
The OpenCV Video I/O module is a set of classes and functions to read and write video or image sequences. Basically, the module provides the cv::VideoCapture and cv::VideoWriter classes as a 2-layer interface to many video I/O APIs used as backends. Some backends, such as DirectShow (DSHOW), Video for Windows (VFW), Microsoft Media Foundation (MSMF), Video4Linux (V4L), etc., are interfaces to the video I/O library provided by the operating system. Some other backends, like OpenNI2 for Kinect, Intel Perceptual Computing SDK, GStreamer, XIMEA Camera API, etc., are interfaces to proprietary drivers or to external libraries. See the list of supported backends here: cv::VideoCaptureAPIs. OpenCV automatically selects and uses the first available backend (apiPreference=cv::CAP_ANY). As advanced usage you can select the backend to use at runtime. Currently this option is available only with VideoCapture, for example to grab from the default camera or from a file using DirectShow as the backend (see the sketch below). Backends are available only if they have been built with your OpenCV binaries. Check opencv2/cvconfig.h to know which APIs are currently available (e.g. HAVE_MSMF, HAVE_VFW, HAVE_LIBV4L, etc.). To enable/disable APIs, you have to re-run CMake with the corresponding options (e.g. -DWITH_MSMF=ON -DWITH_VFW=ON ...) or check the related switch in cmake-gui. Many industrial cameras and some video I/O devices don't provide standard driver interfaces for the operating system. Thus you can't use VideoCapture or VideoWriter with these devices. To get access to their devices, manufacturers provide their own C++ API and library that you have to include and link with your OpenCV application. It is a common case that these libraries read/write images from/to a memory buffer. If so, it is possible to make a Mat header for the memory buffer (user-allocated data) and process it in-place using OpenCV functions. See cv::Mat::Mat() for more details. OpenCV can use the FFmpeg library as a backend to record, convert and stream audio and video. FFmpeg is a complete, cross-platform solution. If you enable FFmpeg while configuring OpenCV, then CMake will download and install the binaries in OPENCV_SOURCE_CODE/3rdparty/ffmpeg/. To use FFmpeg at runtime, you must deploy the FFmpeg binaries with your application. See OPENCV_SOURCE_CODE/3rdparty/ffmpeg/readme.txt for details and licensing information.
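A minimal sketch of the two cases just mentioned, assuming an OpenCV build where the DirectShow backend is available (the file name video.avi is a placeholder):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>

int main() {
    // Grab from the default camera (index 0), explicitly requesting DirectShow.
    cv::VideoCapture camera(0, cv::CAP_DSHOW);

    // Grab from a file, also through the DirectShow backend.
    cv::VideoCapture file("video.avi", cv::CAP_DSHOW);

    cv::Mat frame;
    if (camera.isOpened() && camera.read(frame)) {
        // frame now holds one image captured from the camera.
    }
    return 0;
}
```

If the requested backend is not compiled in or cannot open the device, isOpened() returns false, which is a simple way to check which backends your binaries actually support.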
https://docs.opencv.org/4.0.0-alpha/d0/da7/videoio_overview.html
2022-06-25T05:27:18
CC-MAIN-2022-27
1656103034170.1
[]
docs.opencv.org
(PHP 4 >= 4.0.1, PHP 5, PHP 7) array_merge_recursive — Merge one or more arrays recursively. Signature: array_merge_recursive(array $array1 [, array $... ]) : array. Parameters: a variable list of arrays to recursively merge. Return value: an array of values resulting from merging the arguments together. If called without any arguments, returns an empty array.
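A short usage sketch showing the recursive merge behavior (the array contents are illustrative):

```php
<?php
$ar1 = array("color" => array("favorite" => "red"), 5);
$ar2 = array(10, "color" => array("favorite" => "green", "blue"));
$result = array_merge_recursive($ar1, $ar2);
print_r($result);
// Values under the same string key are merged into an array, so
// $result["color"]["favorite"] contains both "red" and "green",
// while the numerically keyed values 5 and 10 are appended.
?>
```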
https://php-legacy-docs.zend.com/manual/php5/en/function.array-merge-recursive
2022-06-25T04:09:47
CC-MAIN-2022-27
1656103034170.1
[]
php-legacy-docs.zend.com
Managing modifications¶ Modifications can be grouped by project and scenario, and different projects and scenarios can be compared against each other in analysis mode, giving you flexibility in how to use them. Depending on your use cases, different approaches may make sense. If one user will be responsible for analyses in your region, involving a relatively small number of modifications, we recommend doing your work in one project and assessing the impact of different combinations of modifications by creating and using scenarios within that project. If multiple users will be involved in editing scenarios, or if you want to assess more than 10 different combinations of modifications, which would make the list of scenarios annoyingly long, we recommend dividing the modifications among different projects. For example, one team member could code rail scenarios in Project A, and another team member could code bus scenarios in Project B. Modifications can be imported between projects that use the same GTFS bundle; in this example, modifications from the two projects could be combined in a third Project C.
Toggling display of modifications¶ In the list of modifications on the initial view in editing mode, clicking the title of a modification will open it and allow you to edit it. To control whether each modification is displayed on the map, click the display toggle next to it. Stops and segments representing modifications are displayed on the map, using different colors to indicate their state relative to the baseline GTFS:
- Blue: Added trip pattern
- Red: Removed trip pattern
- Purple: Changed timetable (e.g. modified frequency, speed, or dwell time)
- Gray: Unchanged (alignment is unchanged but the trip pattern is affected somehow, e.g. by a Reroute)
Projects start with only a “Default” scenario (plus a locked Baseline in which no modifications can be active). You can create additional scenarios by expanding the list of scenarios, clicking the create button, and entering a name. When the Scenario list is expanded, options next to each scenario allow you to work with the modifications active in the scenario and with the scenario itself (some options are not available for the baseline or default scenario).
Exporting modifications¶ To see options for exporting scenarios, click the export button at the top of the editing panel. A panel will then be shown with multiple options to download or share the scenarios in your project:
- Raw scenario (.json): all scenario details
- New alignments (.geojson): alignments of add-trip modifications
- New stops (.geojson): new stop locations created in the scenario
- Summary Report: a summary of all modifications in a scenario, for printing or reference. Keep in mind that some browsers may not print more than 30 pages or so of a long report.
Importing modifications¶ To import modifications from another project or a shapefile, click the import button.
From another project¶ Occasionally, you may want to copy all of the modifications from one project into another. This may be useful to make a copy of a project, or to combine modifications developed by different team members into a single project (for instance, one team member working on rail changes and another on bus changes). To do so, select a project in the upload/import panel and click Import. If you choose a project associated with the same GTFS bundle, all modifications will be imported; when there are multiple scenarios, the scenarios in the project being imported will be mapped directly to the scenarios in the receiving project (i.e. modifications in the first scenario will remain in the first scenario in the new project). If you choose a project associated with a different GTFS bundle, only add-trip modifications will be imported.
From shapefiles¶ In general, it is best to create all modifications directly in Conveyal Analysis, as it allows full control over all aspects of transit network design. However, on occasion, it may be desirable to import modifications from an ESRI shapefile. If you have a shapefile containing lines, you can upload it to Conveyal Analysis and have it turned into a set of Add Trips modifications.
http://docs.analysis.conveyal.com/en/latest/edit-scenario/usage.html
2020-02-17T04:52:20
CC-MAIN-2020-10
1581875141653.66
[array(['../img/report.png', None], dtype=object) array(['../img/import-modifications-from-shapefile.png', None], dtype=object) ]
docs.analysis.conveyal.com
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here. Class: Aws::MediaPackage::Types::UntagResourceRequest - Defined in: (unknown). Overview Note: When passing UntagResourceRequest as input to an Aws::Client method, you can use a vanilla Hash:

```ruby
{
  resource_arn: "__string", # required
  tag_keys: ["__string"],   # required
}
```
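For context, this is roughly how the request shape above is passed to the corresponding client operation in SDK v2 (the region and the literal "__string" placeholders would be replaced with real values):

```ruby
require 'aws-sdk'

mediapackage = Aws::MediaPackage::Client.new(region: 'us-east-1')

# Remove the listed tag keys from the resource identified by the ARN.
mediapackage.untag_resource(
  resource_arn: "__string", # required
  tag_keys: ["__string"]    # required
)
```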
https://docs.aws.amazon.com/sdkforruby/api/Aws/MediaPackage/Types/UntagResourceRequest.html
2020-02-17T03:58:28
CC-MAIN-2020-10
1581875141653.66
[]
docs.aws.amazon.com
A role in the Sentry policy file is assigned privileges such as: admin=collections->action=*, collection=hbase_logs->action=*, config=hbase_logs_config->action=*. Sentry Configuration File: Sentry can store configuration as well as privilege policies in files. The sentry-site.xml file contains configuration options such as the privilege policy file location. The policy files define the privileges used by Sentry for a Solr collection. A user with collection-level authorization update rights can modify documents in the collection. This means that a user can delete all documents in the collection. Similarly, a user might modify all documents, adding their authorization token to each one. After such a modification, the user could access any document using that token.
https://docs.cloudera.com/documentation/enterprise/6/6.0/topics/search_sentry.html
2020-02-17T04:06:16
CC-MAIN-2020-10
1581875141653.66
[]
docs.cloudera.com
Why R2? Step-by-Step: Automated Tiered Storage with Windows Server 2012 R2. This article has been updated now that Windows Server 2012 R2 is generally available. After reading this article, be sure to catch the full series. I’ve been speaking with lots of IT Pros over the past several weeks about the new storage improvements in Windows Server 2012 R2, and one feature in particular has gained a ton of attention: Automated Storage Tiers as part of the new Storage Spaces feature set. In this article, I'll briefly discuss the benefits of Storage Tiers in Windows Server 2012 R2, and then I’ll step through the process of building an Automated Tiered Storage lab for the purpose of evaluating and demonstrating the functionality of Storage Tiers in a Windows Server 2012 R2 lab environment.
What are “Storage Tiers”? Storage Tiers in "R2" enhance Storage Spaces by combining the best performance attributes of solid-state disks (SSDs) with the best cost:capacity attributes of hard disk drives (HDDs). It allows us to create individual Storage Spaces LUNs (called “Virtual Disks”).
- Download the Windows Server 2012 R2 installation bits. Be sure to download the VHD distribution of Windows Server 2012 R2 for easiest provisioning as a virtual machine.
- In Hyper-V Manager, use the VHD downloaded in Step 1 above as the operating system disk to spin up a new VM on a Hyper-V host.
- Once the new VM is provisioned, use Hyper-V Manager to modify the VM settings and hot-add 6 new virtual SCSI hard disks, as follows:
  - Add 3 250GB Dynamic VHDs (these will be our simulated SSDs)
  - Add 3 500GB Dynamic VHDs (these will be our simulated 7.2K HDDs)
When completed, the settings of your VM should resemble the following: Hyper-V Manager: hot-adding virtual SCSI hard disks.
Let’s get started with PowerShell …
- Launch the PowerShell ISE tool (as Administrator) from the Windows Server 2012 R2 guest operating system running inside the VM provisioned above.
- Set a variable to the collection of all virtual SCSI hard disks that we’ll use for creating a new Storage Pool:
  $pooldisks = Get-PhysicalDisk | ? {$_.CanPool -eq $true }
- Create a new Storage Pool using the collection of virtual SCSI hard disks set above:
  New-StoragePool -StorageSubSystemFriendlyName *Spaces* -FriendlyName TieredPool1 -PhysicalDisks $pooldisks
- Tag the disks within the new Storage Pool with the appropriate Media Type (SSD or HDD). In the command lines below, I’ll set the appropriate tag by filtering on the size of each virtual SCSI hard disk:
  Get-PhysicalDisk | Where Size -EQ 267630149632 | Set-PhysicalDisk -MediaType SSD # 250GB VHDs
  Get-PhysicalDisk | Where Size -EQ 536065605632 | Set-PhysicalDisk -MediaType HDD # 500GB VHDs
  If you created your VHDs with different sizes than I’m using, you can determine the appropriate size values to include in the command lines above by running: (Get-PhysicalDisk).Size
- Create the SSD and HDD Storage Tiers within the new Storage Pool by using the New-StorageTier PowerShell cmdlet:
  $tier_ssd = New-StorageTier -StoragePoolFriendlyName TieredPool1 -FriendlyName SSD_TIER -MediaType SSD
  $tier_hdd = New-StorageTier -StoragePoolFriendlyName TieredPool1 -FriendlyName HDD_TIER -MediaType HDD
At this point, if you jump back into the Server Manager tool and refresh the Storage Pools page, you should see a new Storage Pool created and the Media types for each disk in the pool listed appropriately as shown below. Server Manager – New Tiered Storage Pool. How do I demonstrate this new Tiered Storage Pool?
To demonstrate creating a new Storage LUN that uses your new Tiered Storage Pool, follow these steps:
- In Server Manager, right-click on your new Tiered Storage Pool and select New Virtual Disk…
- In the New Virtual Disk wizard, click the Next button until you advance to the Specify virtual disk name page.
- On the Specify virtual disk name page, complete the following fields: Name: Tiered Virtual Disk 01; check the checkbox option for Create storage tiers on this virtual disk. Click the Next button to continue.
- On the Select the storage layout page, select a Mirror layout and click the Next button. Note: When using Storage Tiers, Parity layouts are not supported.
- On the Configure the resiliency settings page, select Two-way mirror and click the Next button.
- On the Provisioning type page, click the Next button. Note: When using Storage Tiers, only the Fixed provisioning type is supported.
- On the Specify the size of the virtual disk page, complete the following fields: Faster Tier (SSD): Maximum Size; Standard Tier (HDD): Maximum Size. Click the Next button to continue.
- On the Confirm selections page, review your selections and click the Create button to continue.
- On the View results page, ensure that the Create a volume when this wizard closes checkbox is checked and click the Close button.
- In the New Volume Wizard, select the default values on each page by clicking the Next button.
- On the Confirm Selections page, click the Create button to create a new Tiered Storage Volume. Note: When using Tiered Storage, the Volume Size should be set to the total available capacity of the Virtual Disk on which it is created. This is the default value for Volume Size when using the New Volume Wizard.
Additional resources you may also be interested in:
- Download: Windows Server 2012 R2 Evaluation Kit
- Right-size IT Budgets with Windows Server 2012 Storage Spaces
- FREE EBOOK: Get Started as an “Early Expert” on Windows Server 2012 R2
- Why Windows Server 2012 R2? The Complete Series
- Build Your Private Cloud in a Month
- Step-by-Step: Reduce Storage Costs with Data Deduplication in Windows Server 2012
- Step-by-Step: Speaking iSCSI with Windows Server 2012 and Hyper-V
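As an alternative to the Server Manager wizard walked through above, the same tiered virtual disk can be created from PowerShell; this is a sketch only, and the tier sizes shown (200GB and 400GB) are placeholder values you would adjust to the free space in your pool:

```powershell
# Create a two-way mirrored, fixed-provisioned virtual disk that spans both tiers.
# $tier_ssd and $tier_hdd are the tier objects created earlier with New-StorageTier.
New-VirtualDisk -StoragePoolFriendlyName TieredPool1 `
    -FriendlyName "Tiered Virtual Disk 01" `
    -StorageTiers $tier_ssd, $tier_hdd `
    -StorageTierSizes 200GB, 400GB `
    -ResiliencySettingName Mirror `
    -ProvisioningType Fixed
```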
https://docs.microsoft.com/en-us/archive/blogs/keithmayer/why-r2-step-by-step-automated-tiered-storage-with-windows-server-2012-r2
2020-02-17T05:11:51
CC-MAIN-2020-10
1581875141653.66
[]
docs.microsoft.com
Install an on-premises data gateway. An on-premises data gateway is software that you install in an on-premises network. The gateway facilitates access to data in that network. As we explain in the overview, you can install a gateway either in personal mode, which applies to Power BI only, or in standard mode. We recommend standard mode. In that mode, you can install a standalone gateway or add a gateway to a cluster, which we recommend for high availability. In this article, we show you how to install a standard gateway and then add another gateway to create a cluster.
Requirements. Minimum requirements:
- .NET Framework 4.6 (gateway releases August 2019 and earlier)
- .NET Framework 4.7.2 (gateway releases September 2019 and later)
- A 64-bit version of Windows 8 or a 64-bit version of Windows Server 2012 R2.
- You shouldn't install a gateway on a computer, like a laptop, that might be turned off, asleep, or disconnected from the internet. The gateway can't run under any of those circumstances.
- If a gateway uses a wireless network, its performance might suffer.
- You can install up to two gateways on a single computer: one running in personal mode and the other running in standard mode. You can't have more than one gateway running in the same mode on the same computer.
Download and install a gateway. Because the gateway runs on the computer that you install it on, be sure to install it on a computer that's always turned on. For better performance and reliability, we recommend that the computer is on a wired network rather than a wireless one.
- In the gateway installer, select Next.
- Select On-premises data gateway (recommended) > Next. Note: The On-premises data gateway (personal mode) option can be used only with Power BI. For more information on installation, management, and use of personal gateways, see Use personal gateways in Power BI.
- Select Next.
- Keep the default installation path, accept the terms of use, and then select Install.
- Enter the email address for your Office 365 organization account, and then select Sign in. Note: You need to sign in with either a work account or a school account. This account is an organization account. If you signed up for an Office 365 offering and didn't supply your work email address, your address might look like [email protected]. Your account is stored within a tenant in Azure AD. In most cases, your Azure AD account’s User Principal Name (UPN) will match the email address. The gateway is associated with your Office 365 organization account. You manage gateways from within the associated service.
- You're now signed in to your account. Select Register a new gateway on this computer, then Next. Also note that you can change the region that connects the gateway to cloud services. You should change the region to the region of your Power BI tenant or Office 365 tenant, or to the Azure region closest to you.
Add another gateway to create a cluster. Note: Offline gateway members within a cluster will negatively impact performance. These members should either be removed or disabled. To create high-availability gateway clusters, you need the November 2017 update or a later update to the gateway software.
- Download the gateway to a different computer and install it.
- After you sign in to your Office 365 organization account, register the gateway. Select Add to an existing cluster.
- In the Available gateway clusters list, select the primary gateway, which is the first gateway you installed. Enter the recovery key for that gateway.
- Select Configure.
Next steps
https://docs.microsoft.com/en-us/data-integration/gateway/service-gateway-install
2020-02-17T05:01:04
CC-MAIN-2020-10
1581875141653.66
[]
docs.microsoft.com
Android WebView¶ To run WPT on WebView on an Android device, some additional set-up is required. Currently, Android WebView support is experimental. Prerequisites¶ Please check Chrome for Android for the common instructions for Android support first. Ensure you have a userdebug or eng Android build installed on the device. Install an up-to-date version of the system webview shell: Go to chromium-browser-snapshots. Find the subdirectory with the highest number and click it; this number can be found in the “Commit Position” column of row “LAST_CHANGE” (at the bottom of the page). Download the chrome-android.zip file and unzip it. Install SystemWebViewShell.apk. On an emulator, the system webview shell may already be installed by default. Then you may need to remove the existing apk: Choose a userdebug build. Run an emulator with a writable system partition from the command line. If you have an issue with a ChromeDriver version mismatch, try one of the following: Try removing _venv/bin/chromedriver such that the wpt runner can install a matching version automatically. Failing that, please check your environment path and make sure that no other ChromeDriver is used. Download the ChromeDriver binary matching your WebView’s major version and specify it on the command line: ./wpt run --webdriver-binary <binary path> ... Configure host remap rules in the webview commandline file: adb shell "echo '_ --host-resolver-rules=\"MAP nonexistent.*.test ~NOTFOUND, MAP *.test 127.0.0.1\"' > /data/local/tmp/webview-command-line" Ensure that adb can be found on your system’s PATH.
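Putting it together, a typical invocation might look like the following; the android_webview product name is assumed here and the test selection is only a placeholder:

```
./wpt run --webdriver-binary /path/to/chromedriver android_webview <tests>
```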
http://firefox-source-docs.mozilla.org/web-platform/running-tests/android_webview.html
2020-02-17T04:35:50
CC-MAIN-2020-10
1581875141653.66
[]
firefox-source-docs.mozilla.org
Custom Data provider example. Customization of the data provider allows you to implement your own database connector. You cannot modify the data provider in the App_Code folder. The customizations must be added as part of a new assembly, which you then need to register via the CMSDataProviderAssembly web.config key. Custom data providers are not intended for accessing non-Microsoft SQL Server database engines. They only allow you to modify how queries are executed against the Microsoft SQL Server.
Adding the custom data provider assembly
- Copy the CustomDataProvider project from your Kentico installation directory (typically C:\Program Files\Kentico\<version>\CodeSamples\CustomDataProvider) to a development folder.
- Open your web project in Visual Studio (using the WebSite.sln/WebApp.sln file).
- Click File -> Add -> Existing Project and select CustomDataProvider.csproj in the folder where you copied the CustomDataProvider project.
- Unfold the References section of the CustomDataProvider project and delete all invalid references.
- Right-click the CustomDataProvider project and select Add Reference.
- Open the Browse tab of the Reference manager dialog, click Browse and navigate to the Lib folder of your Kentico web project.
- Add references to the following libraries:
  - CMS.Base.dll
  - CMS.DataEngine.dll
  - CMS.Helpers.dll
- Right-click the main Kentico website object (or the CMSApp project if your installation is a web application) and select Add Reference.
- Open the Solution -> Projects tab and add a reference to the CustomDataProvider project.
- Rebuild the CustomDataProvider project.
The CustomDataProvider project is now integrated into your application. By default, the classes in the custom data provider are identical to the ones used by default, but you can modify them according to your own requirements.
Registering the custom data provider
- Edit your application's web.config file. Add the following key to the configuration/appSettings section: <add key="CMSDataProviderAssembly" value="CMS.CustomDataProvider"/>
The system loads the DataProvider class from the namespace specified as the value of the CMSDataProviderAssembly key. Your application now uses the custom data provider for handling database operations.
https://docs.kentico.com/k10/custom-development/customizing-providers/custom-data-provider-example
2020-02-17T04:30:37
CC-MAIN-2020-10
1581875141653.66
[]
docs.kentico.com
Windows event log data sources in Azure Monitor Windows Event logs are one of the most common data sources for collecting data using Windows agents since many applications write to the Windows event log. You can collect events from standard logs such as System and Application in addition to specifying any custom logs created by applications you need to monitor. Configuring Windows Event logs Configure Windows Event logs from the Data menu in Advanced Settings. Azure Monitor only collects events from the Windows event logs that are specified in the settings. You can add an event log by typing in the name of the log and clicking +. For each log, only the events with the selected severities are collected. Check the severities for the particular log that you want to collect. You cannot provide any additional criteria to filter events. As you type the name of an event log, Azure Monitor provides suggestions of common event log names. If the log you want to add does not appear in the list, you can still add it by typing in the full name of the log. You can find the full name of the log by using event viewer. In event viewer, open the Properties page for the log and copy the string from the Full Name field. Note Critical events from the Windows event log will have a severity of "Error" in Azure Monitor Logs. Data collection Azure Monitor collects each event that matches a selected severity from a monitored event log as the event is created. The agent records its place in each event log that it collects from. If the agent goes offline for a period of time, then it collects events from where it last left off, even if those events were created while the agent was offline. There is a potential for these events to not be collected if the event log wraps with uncollected events being overwritten while the agent is offline. Note Azure Monitor does not collect audit events created by SQL Server from source MSSQLSERVER with event ID 18453 that contains keywords - Classic or Audit Success and keyword 0xa0000000000000. Windows event records properties Windows event records have a type of Event and have the properties in the following table: Log queries with Windows Events The following table provides different examples of log queries that retrieve Windows Event records. Next steps - Configure Log Analytics to collect other data sources for analysis. - Learn about log queries to analyze the data collected from data sources and solutions. - Configure collection of performance counters from your Windows agents. Feedback
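For instance, a simple query over the Event table might look like the following (a minimal sketch; adjust the log name and filters to your environment):

```
Event
| where EventLog == "System" and EventLevelName == "Error"
| summarize count() by Source
```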
https://docs.microsoft.com/en-us/azure/azure-monitor/platform/data-sources-windows-events
2020-02-17T05:12:03
CC-MAIN-2020-10
1581875141653.66
[array(['media/data-sources-windows-events/overview.png', 'Windows Events'], dtype=object) array(['media/data-sources-windows-events/configure.png', 'Configure Windows events'], dtype=object) ]
docs.microsoft.com
Because you can now separate the data from the user interface in Word and Excel solutions, it is possible to manipulate the data in a document without starting Office. The new ServerDocument class allows you to access the cached data and the application manifest in a Microsoft Office Word document or Microsoft Office Excel workbook, without ever having to launch Word or Excel. This opens up new possibilities for server-based solutions in which Office acts as a client and the data resides on a server. In this model, you do not need Word and Excel to write to the data on the server, only to view it on the client. When the end user opens the document in Office, data binding code in the solution assembly binds the customized data into the document. You do not even need Word and Excel installed on the server. Code on the server (for instance, in an ASP.NET page) can customize the data in the document and send the customized document to the end user.
https://docs.microsoft.com/en-us/previous-versions/aa718529%28v%3Dmsdn.10%29
2020-02-17T04:05:28
CC-MAIN-2020-10
1581875141653.66
[]
docs.microsoft.com
Data Context Reference¶ The add-datasource CLI command is the easiest way to add a new datasource. The datasources section declares which Datasources are available. Stores can be configured with S3 backends, along these lines:

```yaml
expectations_store:
  store_backend:
    class_name: FixedLengthTupleS3StoreBackend
    base_directory: expectations/
    bucket: ge.my_org.com
validations_store:
  class_name: ValidationsStore
  store_backend:
    class_name: FixedLengthTupleS3StoreBackend
    bucket: ge.my_org.com
    prefix: common_validations
evaluation_parameter_store:
  class_name: InMemoryEvaluationParameterStore
```

Validation Operators¶ See the Validation Operators for more information regarding configuring and using validation operators.
Managing Environment and Secrets¶ In a DataContext configuration, values that should come from the runtime environment or secrets can be injected via a separate config file or using environment variables. Use the ${var} syntax in a config file to specify a variable to be substituted.
Default Out of Box Config File¶ Should you need a clean config file, you can run great_expectations init in a new directory or use this template:

```yaml
# Welcome to Great Expectations! Always know what to expect from your data.
#
# Here you can define datasources, generators, integrations and more. This file
# is intended to be committed to your repo. For help with configuration please:
#   - Read our docs
#   - Join our slack channel
#
# NOTE: GE uses the names of configured `datasources` and `generators` to manage
# how `expectations` and other artifacts are stored in the `expectations/` and
# `datasources/` folders. If you need to rename an existing `datasource` or
# `generator`, be sure to also update the relevant directory names.

config_version: 1

# Datasources tell Great Expectations where your data lives and how to get it.
# You can use the CLI command `great_expectations add-datasource` to help you
# add a new datasource.
datasources: {}
```

A SqlAlchemy datasource entry in that section looks like:

```yaml
datasources:
  edw:
    class_name: SqlAlchemyDatasource
    credentials: ${edw}
    data_asset_type:
      class_name: SqlAlchemyDataset
    generators:
      default:
        class_name: TableGenerator
```

This config file supports variable substitution. Evaluation Parameters enable dynamic expectations. Local data docs sites use a filesystem store backend:

```yaml
  store_backend:
    class_name: FixedLengthTupleFilesystemStoreBackend
    base_directory: uncommitted/data_docs/local_site/
```
https://docs.greatexpectations.io/en/latest/reference/data_context_reference.html
2020-02-17T03:19:39
CC-MAIN-2020-10
1581875141653.66
[]
docs.greatexpectations.io
Time zone and DST changes in Windows for Morocco and the West Bank and Gaza As it has in previous years, Morocco is suspending its daylight saving time (DST) period for the observation of the Muslim month of Ramadan from 03:00 on Sunday, May 13, 2018, to 02:00 on Sunday, June 17, 2018, in the Gregorian calendar. Updates are also available for the West Bank and Gaza, as users can revert to "(UTC+02:00) Gaza, Hebron" after the latest updates are installed. You can read more about the upcoming change and the related updates here.
https://docs.microsoft.com/en-us/archive/blogs/mthree/time-zone-and-dst-changes-in-windows-for-morocco-and-the-west-bank-and-gaza
2020-02-17T05:03:35
CC-MAIN-2020-10
1581875141653.66
[]
docs.microsoft.com
Forwarding Database¶ ForwardingDatabase is a mechanism to perform additional logic before executing a query against a database. One example is to create a forwarding database that prevents returning cached query results, as in the following snippet. ForwardingDatabase noCacheDatabase = new ForwardingDatabase() { @Override protected <T> Query<T> filterQuery(Query<T> query) { return query.clone().noCache(); } }; You can then use the no-cache forwarding database by specifying it in Query#using: Query.from(User.class).using(noCacheDatabase).selectAll(); Because this query specified noCacheDatabase, Dari invokes the filterQuery method before executing the query. Both Dari and Brightspot use ForwardingDatabase for a variety of scenarios. Here are a few examples: - Dari’s Caching Database - Brightspot’s Preview Database - Brightspot’s Authentication Filter
http://docs.brightspot.com/dari/databases/forwarding-database.html
2020-02-17T03:39:45
CC-MAIN-2020-10
1581875141653.66
[]
docs.brightspot.com
Some projects require HTML code that conforms to certain standards or limitations. You might be writing code that needs to run well on a legacy browser or a mobile device, or you may need to adhere to an XHTML or other coding standard. Visual Studio 2005 lets you target your code to all the major browser versions and XHTML standards right out of the box. Your HTML will then be validated in real-time as you type in the source editor. Tooltips identify and explain HTML code that is invalid for your chosen target, and validation errors are summarized in the Task List window. If your project has special coding requirements, you can easily customize and extend the pluggable validation rules in Visual Studio 2005 to meet your particular needs. Target different browsers and standards and let Visual Studio 2005 validate your HTML code.
https://docs.microsoft.com/en-us/previous-versions/aa718439%28v%3Dmsdn.10%29
2020-02-17T05:01:37
CC-MAIN-2020-10
1581875141653.66
[]
docs.microsoft.com
Single-Page JavaScript App. This topic describes the OAuth 2.0 implicit grant type supported by Pivotal Single Sign‑On. The implicit grant type is for apps with a client secret that is not guaranteed to be confidential.
OAuth 2.0 Roles
- Resource Owner: A person or system capable of granting access to a protected resource.
- Application: A client that makes protected requests using the authorization of the resource owner.
- Authorization Server: The Single Sign‑On server that issues access tokens to client apps after successfully authenticating the resource owner.
- Resource Server: The server that hosts protected resources and accepts and responds to protected resource requests using access tokens. Apps access the server through APIs.
Implicit Flow
- Access Application: The user accesses the app and triggers authentication and authorization.
- Authentication and Request Authorization: The app prompts the user for their username and password. The first time the user goes through this flow for the app, the user sees an approval page. On this page, the user can choose permissions to authorize the app to access resources on their behalf.
- Authentication and Grant Authorization: The authorization server receives the authentication and authorization grant.
- Issue Access Token: The authorization server validates the authorization code and returns an access token with the redirect URL.
- Request Resource w/ Access Token in URL: The app attempts to access the resource from the resource server by presenting the access token in the URL.
- Return Resource: If the access token is valid, the resource server returns the resources that the user authorized the app to receive.
The resource server runs in Pivotal Platform under a given space and org. Developers set the permissions for the resource server API endpoints. To do this, they create resources that correspond to API endpoints secured by Single Sign‑On. Apps can then access these resources on behalf of users.
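As an illustration, the flow starts with the app redirecting the browser to the authorization endpoint with response_type=token; the host, client ID, and redirect URI below are placeholders, not values taken from this page:

```
# Browser is redirected to the authorization endpoint:
GET https://login.example.com/oauth/authorize?client_id=my-spa&response_type=token&scope=openid&redirect_uri=https%3A%2F%2Fapp.example.com%2Fcallback

# On approval, the server redirects back with the token in the URL fragment:
https://app.example.com/callback#access_token=eyJhbGciOi...&token_type=bearer&expires_in=43199
```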
https://docs.pivotal.io/p-identity/1-10/configure-apps/single-page-js-app.html
2020-02-17T05:05:30
CC-MAIN-2020-10
1581875141653.66
[array(['../images/oauth_implicit.png', 'Oauth implicit'], dtype=object)]
docs.pivotal.io
Single-Page JavaScript App. This topic describes the OAuth 2.0 implicit grant type supported by Pivotal Single Sign-On (SSO). The implicit grant type is for applications with a client secret that is not guaranteed to be confidential. Implicit Flow: - Issue Access Token: The authorization server validates the authorization code and returns an access token with the redirect URL. - Request Resource w/ Access Token in URL: The application attempts to access the resource from the resource server by presenting the access token in the URL.
https://docs.pivotal.io/p-identity/1-5/configure-apps/single-page-js-app.html
2020-02-17T05:04:14
CC-MAIN-2020-10
1581875141653.66
[array(['../images/oauth_implicit.png', 'Oauth implicit'], dtype=object)]
docs.pivotal.io
Would it be possible to document that "| noop" takes a "log_DEBUG=*" argument to put an individual search.log in DEBUG mode? (Dhazekamp) Please see version 7.3.0 or greater. The <log-level-expression> is documented there.
https://docs.splunk.com/Documentation/Splunk/7.1.0/SearchReference/Noop
2020-02-17T03:45:33
CC-MAIN-2020-10
1581875141653.66
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
base/frameworks/netcontrol/plugin.zeek¶ This file defines the plugin interface for NetControl. Detailed Interface¶ Types¶ NetControl::Plugin¶ Definition of a plugin. Generally a plugin needs to implement only what it can support. By returning failure, it indicates that it can’t support something and the framework will then try another plugin, if available, or report that the operation failed. If a function isn’t implemented by a plugin, that’s considered an implicit failure to support the operation. If a plugin accepts a rule operation, it must generate one of the reporting events rule_{added,removed,error} to signal if it indeed worked out; this is separate from accepting the operation because often a plugin will only know later (i.e., asynchronously) if that was an error for something it thought it could handle.
https://docs.zeek.org/en/current/scripts/base/frameworks/netcontrol/plugin.zeek.html
2020-02-17T03:27:20
CC-MAIN-2020-10
1581875141653.66
[]
docs.zeek.org
Payments. There are several instances when payments need to be settled and processed in the context of a token offering. Additionally, there are multiple ways in which these payments are processed. When a token offering is conducted – this is also called the primary issuance – at some point investors will pay for the asset that they bought. This part is covered in the next section on Payments from investors. After a token offering is completed – this is also called the post-trade part or life-cycle management part – investors typically receive payments. The triggers for such payments are for example coupon payments on bonds, dividend payments on equity instruments, repayments of bonds, share buybacks etc. This part is covered in the section on Payments to token holders.
Payments can be settled via the following currencies:
- Via cryptocurrencies (e.g. Bitcoin, Ether, Stellar Lumens etc.)
- Via stable coins (USDT, USDC etc.)
- Via fiat money (USD, EUR, GBP, CHF etc.)
There are several payment methods by which payments can be conducted:
- Via a paying agent (typically a bank) for fiat
- Via a regular business bank account of the issuer for fiat
- Via the investor wallet directly to the issuer wallet in crypto or stable coins
- Via a payment service provider for fiat which offers a variety of payment methods (e.g. credit card, PayPal, direct debit etc.) – while this option can be convenient, it is almost never used in the context of token offerings because the costs of easily 1 to 2% of the processed payments volume are usually too high from the perspective of an issuer relative to the margins that can be achieved with financial instruments
- Via a crypto payments processor that will let the issuer accept payments in crypto and convert them immediately to fiat and pay out to the issuer's bank account
From the methods above, typically the first three are most commonly used and compatible with the Bitbond Offering Manager. It always depends on the context and especially the types of investors that an offering primarily caters to, which currencies and payment methods make the most sense. To evaluate this and find the best solution is normally part of a concept and pilot phase where Bitbond supports the issuer / arranger prior to the implementation of the Offering Manager.
https://docs.bitbond.com/asset-tokenization-suite/offering-manager/payments
2022-05-16T12:38:33
CC-MAIN-2022-21
1652662510117.12
[]
docs.bitbond.com
public class DoubleDeserializer extends Object implements Deserializer<Double>
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.kafka.common.serialization.Deserializer: close, configure, deserialize
Constructor: public DoubleDeserializer()
public Double deserialize(String topic, byte[] data)
Specified by: deserialize in interface Deserializer<Double>
Parameters:
topic - topic associated with the data
data - serialized bytes; may be null; implementations are recommended to handle null by returning a value or null rather than throwing an exception.
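A minimal round-trip sketch pairing the deserializer with DoubleSerializer (the topic name and value are arbitrary):

```java
import org.apache.kafka.common.serialization.DoubleDeserializer;
import org.apache.kafka.common.serialization.DoubleSerializer;

public class DoubleRoundTrip {
    public static void main(String[] args) {
        DoubleSerializer serializer = new DoubleSerializer();
        DoubleDeserializer deserializer = new DoubleDeserializer();

        byte[] payload = serializer.serialize("metrics", 123.456);   // 8-byte big-endian encoding
        Double value = deserializer.deserialize("metrics", payload); // 123.456

        System.out.println(value);
        serializer.close();
        deserializer.close();
    }
}
```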
https://docs.confluent.io/platform/current/clients/javadocs/javadoc/org/apache/kafka/common/serialization/DoubleDeserializer.html
2022-05-16T11:13:51
CC-MAIN-2022-21
1652662510117.12
[]
docs.confluent.io
Version End of Life: 31 July 2020. Tungsten Cluster 6.0.0 is a major update to the operation and deployment of composite clusters. Within the new framework, a Composite Active/Active cluster is configured as follows: Clusters within a composite cluster are now managed in a unified fashion, including the overall replication progress across clusters. Cross-site replicators are configured as additional services within the main replicator. Cross-site replicators are managed by the manager as part of a complete composite cluster solution. A new global progress counter displays the current progress for the local and cross-site replication. Connectors are configured by default to provide affinity for the local, and then the remote cluster. The cluster package name has been changed, and upgrades from older versions to the new configuration and layout are supported. The following changes have been made to Tungsten Cluster and may affect existing scripts and integration tools. Any scripts or environments which make use of these tools should be checked and updated for the new configuration:
Installation and Deployment: A new unified cluster deployment is available, the Composite Active/Active. This is an updated version of the Multi-Site/Active-Active deployment in previous releases. It encompasses a number of significant changes and improvements: a single, cluster-based deployment using the new deployment type of composite-multi-master; unified Composite Active/Active cluster status within cctrl; a global progress counter indicating the current cluster and cross-cluster performance. Issues: CT-105, CT-313, CT-431, CT-467
The name of the cluster deployment package for Tungsten Cluster has changed. Packages are now named to match the product, for example, release-notes-1-99.tar.gz. Issues: CT-271, CT-438
Support for using Java 7 with Tungsten Cluster has been removed. Java 8 or higher must be used for all deployments. Issues: CT-450
The behavior of cctrl has changed to operate better within the new composite deployments. Without the -multi argument, cctrl will cd into the local standalone service. This matches previous releases of cctrl, but all services are still accessible without needing to use the -multi option. With the -multi argument, cctrl will not automatically cd into the local standalone service but will show all available services. Issues: CT-524
Due to the change in the nature of the services and clustering within Composite Active/Passive and Composite Active/Active configurations, the tungsten_provision_slave command has been updated to support cross-cluster provisioning. Because there would now be a conflict of service names, a cross-cluster provision should use the --force option. The --service option should still be set to the local service being reset. For example: shell> tungsten_provision_slave --source=db4 --service=east --direct --force Issues: CT-567
The following issues are known within this release but are not considered critical, nor do they impact the operation of Tungsten Cluster. They will be addressed in a subsequent patch release.
Installation and Deployment: During an upgrade installation from a v4 or v4 MSMM deployment, you may get additional, empty schemas created within your MySQL database. These schemas are harmless and can safely be removed. For example, if you have two services in your MSMM deployment, east and west, during the upgrade you will get two empty schemas, tungsten_east_from_west and tungsten_west_from_east. This will be addressed in a future release. Issues: CT-559
When performing a tpm update operation to change the configuration and the cluster is in AUTOMATIC mode, the update will complete correctly but the cluster may be left in MAINTENANCE mode instead of being placed back into AUTOMATIC mode. This will be addressed in a future release. Issues: CT-595
When performing a tpm update in a cluster with an active witness, the host with the witness will not be restarted correctly, resulting in the witness being down on that host. This will be addressed in a future release. Issues: CT-596
In a Composite Active/Active cluster deployment where there are three or more clusters, a failure in the MySQL server in one node in a cluster could fail to be identified, and ultimately cause the failover within the environment to fail, either within the cluster or across clusters. This will be addressed in a future release. Issues: CT-619
Improvements, new features and functionality. Installation and Deployment: A new utility script, tungsten_prep_upgrade, has been provided as part of the standard installation. The script is specifically designed to assist during the upgrade of a Multi-Site/Active-Active deployment from 5.3.0 and earlier to the new Composite Active/Active 6.0.0 deployment. Issues: CT-104
The cctrl command now includes a show topology command that outputs the current topology for the cluster or component being viewed. Issues: CT-429
The tpm diag command has been extended to include Composite Active/Active cluster status information, one for each configured service and cross-site service. Issues: CT-594
The mm_tpm diag command could complain that an extra replicator is configured and running, even though it would be valid as part of a Composite Active/Active deployment. Issues: CT-396
The mm_trepctl command could fail to display any status information while obtaining the core statistic information from each host. Issues: CT-437
Tungsten Clustering 6.0.0 includes the following changes made in Tungsten Replicator 6.0.0: a capability has been added which embeds row counts, both total and per schema/table, into the metadata for a THL event/transaction. Issues: CT-497
Installation and Deployment: When performing a tpm reverse, the --replication-port setting
https://docs.continuent.com/release-notes/release-notes-tc-6-0-0.html
2022-05-16T12:11:48
CC-MAIN-2022-21
1652662510117.12
[]
docs.continuent.com
The Replicator API is enabled by default but only listens for localhost connections on port 8097. The replicator REST API can be disabled with the tpm flag: replicator-rest-api=false. Exposing the API to a different network address for remote access, such as an internal network, consists of changing the "listen address" of the API server. For example, granting access to the local 192.168.1.* network would translate to the tpm flag: replicator-rest-api-listen-address=192.168.1.0. Note that exposing the API to a public network can open it to attacks such as brute-force or DDoS. The listen port can be changed with the tpm flag replicator-rest-port=8097. The replicator API has two sets of API calls, based either on the whole replicator or on a service of the replicator. A few examples follow, and the full replicator-specific developer docs can be viewed here.
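For a quick check from the host itself, you can query the listen address and port with curl; the path below is only a placeholder, since the actual endpoints are listed in the developer docs referenced above:

```
curl -s http://127.0.0.1:8097/<api-path>
```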
https://docs.continuent.com/tungsten-clustering-7.0/api-replicator.html
2022-05-16T12:48:10
CC-MAIN-2022-21
1652662510117.12
[]
docs.continuent.com
ColorScale3ConditionalFormatting Interface. Represents a three-color scale conditional formatting rule. Namespace: DevExpress.Spreadsheet. Assembly: DevExpress.Spreadsheet.v21.2.Core.dll
Declaration. Remarks: The conditional formatting rule, which is specified by the ColorScale3ConditionalFormatting object, differentiates high, medium, and low values in the range of cell values using a color scale. The Worksheet.ConditionalFormattings property returns the ConditionalFormattingCollection collection that stores all conditional formatting rules specified on a worksheet. Use the methods of the ConditionalFormattingCollection object to apply (the ConditionalFormattingCollection.AddColorScale3ConditionalFormatting method) or remove (the ConditionalFormattingCollection.Remove method) the conditional format.
Example. This example demonstrates how to apply a three-color scale conditional formatting rule.
- First of all, specify the minimum, midpoint and maximum thresholds of the range to which the rule will be applied. Threshold values are determined by the ConditionalFormattingValue object that can be accessed via the ConditionalFormattingCollection.CreateValue method. The type of the threshold value is specified by one of the ConditionalFormattingValueType enumeration values and can be a number, percent, formula, or percentile. Call the ConditionalFormattingCollection.CreateValue method with the ConditionalFormattingValueType.MinMax parameter to set the minimum and maximum thresholds to the lowest and highest values in a range of cells, respectively.
- To apply a conditional formatting rule represented by the ColorScale3ConditionalFormatting object, access the collection of conditional formats from the Worksheet.ConditionalFormattings property and call the ConditionalFormattingCollection.AddColorScale3ConditionalFormatting method with the following parameters:
  - A CellRange object that defines a range of cells to which the rule is applied.
  - A minimum threshold specified by the ConditionalFormattingValue object.
  - A color corresponding to the minimum value in a range of cells.
  - A midpoint threshold specified by the ConditionalFormattingValue object.
  - A color corresponding to the middle value in a range of cells.
  - A maximum threshold specified by the ConditionalFormattingValue object.
  - A color corresponding to the maximum value in a range of cells.
Note: Transparency is not supported in conditional formatting. To remove the ColorScale3ConditionalFormatting object, use the ConditionalFormattingCollection.Remove, ConditionalFormattingCollection.RemoveAt or ConditionalFormattingCollection.Clear methods.

```csharp
ConditionalFormattingCollection conditionalFormattings = worksheet.ConditionalFormattings;
// Set the minimum threshold to the lowest value in the range of cells using the MIN() formula.
ConditionalFormattingValue minPoint = conditionalFormattings.CreateValue(ConditionalFormattingValueType.Formula, "=MIN($C$2:$D$15)");
// Set the midpoint threshold to the 50th percentile.
ConditionalFormattingValue midPoint = conditionalFormattings.CreateValue(ConditionalFormattingValueType.Percentile, "50");
// Set the maximum threshold to the highest value in the range of cells using the MAX() formula.
ConditionalFormattingValue maxPoint = conditionalFormattings.CreateValue(ConditionalFormattingValueType.Formula, "=MAX($C$2:$D$15)");
// Create the three-color scale rule to determine how values in cells C2 through D15 vary.
// Red represents the lower values, yellow represents the medium values and sky blue represents the higher values.
ColorScale3ConditionalFormatting cfRule = conditionalFormattings.AddColorScale3ConditionalFormatting(worksheet.Range["$C$2:$D$15"], minPoint, Color.Red, midPoint, Color.Yellow, maxPoint, Color.SkyBlue);
```

Related GitHub Examples. The following code snippets (auto-collected from DevExpress Examples) contain references to the ColorScale3ConditionalFormatting interface. Note: The algorithm used to collect these code examples remains a work in progress. Accordingly, the links and snippets below may produce inaccurate results. If you encounter an issue with code examples below, please use the feedback form on this page to report the issue.
https://docs.devexpress.com/OfficeFileAPI/DevExpress.Spreadsheet.ColorScale3ConditionalFormatting
2022-05-16T11:43:36
CC-MAIN-2022-21
1652662510117.12
[]
docs.devexpress.com
Bugfixing Scripts¶ fix/* scripts fix various bugs and issues, some of them obscure. Contents
- fix/blood-del
- fix/build-location
- fix/corrupt-equipment
- fix/dead-units
- fix/diplomats
- fix/drop-webs
- fix/dry-buckets
- fix/fat-dwarves
- fix/feeding-timers
- fix/item-occupancy
- fix/loyaltycascade
- fix/merchants
- fix/population-cap
- fix/retrieve-units
- fix/stable-temp
- fix/stuck-merchants
- fix/stuckdoors
- fix/tile-occupancy
fix/corrupt-equipment¶ Fixes some corruption that can occur in equipment lists, as in Bug 11014. Note that there have been several possible types of corruption identified:
- Items that have been deleted without being removed from the equipment lists
- Items of the wrong type being stored in the equipment lists
- Items of the wrong type being assigned to squad members
This script currently only fixes the first two, as they have been linked to the majority of crashes. Note that in some cases, multiple issues may be present, and may only be present for a short window of time before DF crashes. To address this, running this script with repeat is recommended. For example, to run this script every 100 ticks: repeat -name fix-corrupt-equipment -time 100 -timeUnits ticks -command [ fix/corrupt-equipment ] To cancel it (which is likely safe if the script has not produced any output in some time, and if you have saved first): repeat -cancel fix-corrupt-equipment Running this script with repeat on all saves is not recommended, as it can have overhead (sometimes over 0.1 seconds on a large save).
fix/drop-webs¶ Turns floating webs into projectiles, causing them to fall down to a valid surface. This addresses Bug 595. Use fix/drop-webs -all to turn all webs into projectiles, causing webs to fall out of branches, etc. Use clear-webs to remove webs entirely.
https://docs.dfhack.org/en/stable/docs/_auto/fix.html?highlight=fat
2022-05-16T13:13:12
CC-MAIN-2022-21
1652662510117.12
[]
docs.dfhack.org
Access Gateway Unable to Check-in to Orchestrator
About
Description: After deploying AGW and Orchestrator, it is time to make AGW accessible from Orchestrator. After following the Magma AGW configuration guide on GitHub, it was observed that AGW is not able to check in to Orchestrator.
Environment: AGW and Orc8r deployed.
Affected components: AGW, Orchestrator
Resolution
Diagnose the AGW and Orchestrator setup with the script checkin_cli.py. If the test is not successful, the script will suggest a potential root cause for the problem. A successful run will look like the output below:
AGW$ sudo checkin_cli.py
1. -- Testing TCP connection to controller-staging.magma.etagecom.io:443 --
2. -- Testing Certificate --
3. -- Testing SSL --
4. -- Creating direct cloud checkin --
5. -- Creating proxy cloud checkin --
Success!
If the output is not successful, the script will recommend some steps to resolve the problem. If the problem has not been resolved after following those steps, continue with the checks below.
Make sure that the hostnames and ports specified in the control_proxy.yml file in AGW are properly set. Sample control_proxy.yml file:
cloud_address: controller.yourdomain.com
cloud_port: 443
bootstrap_address: bootstrapper-controller.yourdomain.com
bootstrap_port: 443
rootca_cert: /var/opt/magma/tmp/certs/rootCA.pem
Verify the certificate rootCA.pem is in the correct location defined in rootca_cert (specified in control_proxy.yml).
Make sure the certificates have not expired. Note: To obtain certificate information you can use openssl x509 -in <certificate> -noout -text
- In AGW: rootCA.pem
- In Orc8r: rootCA.pem, controller.cert
Verify the domain is consistent across AGW and Orc8r and the CN matches the domain:
- CN in rootCA.pem in AGW
- CN in Orc8r for root and controller certificates
- The domain in main.tf
Verify connectivity between AGW and Orc8r. Use the port and domain obtained from control_proxy.yml. You can use telnet, example below (a scripted alternative is sketched at the end of this article):
telnet bootstrapper-controller.yourdomain.com 443
Verify the DNS resolution of the bootstrap and controller domain.
- In AGW: You can ping or telnet to your bootstrap and controller domain from AGW to verify which AWS address is being resolved.
- In Orc8r: Verify which external-IP your cluster is assigned. You can use the command: kubectl get services
The address resolved in AGW should be the same as the one defined in Orc8r. If not, verify your DNS resolution.
Verify that there are no errors in the AGW magmad service.
AGW$ sudo tail -f /var/log/syslog | grep -i "magmad"
From Orchestrator, get all pods, verify that attempts from AGW are reaching Orc8r in nginx, and look for any bootstrapping errors in the bootstrapper. First, you can use the command below to get all pods from Orc8r:
kubectl -n orc8r get pods
For example, the bootstrapper and nginx Orc8r pods should look something like below:
orc8r-bootstrapper-775b5b8f6d-89spq 1/1 Running 0 37d
orc8r-bootstrapper-775b5b8f6d-gfmrp 1/1 Running 0 37d
orc8r-nginx-5f599dd8d5-rz4gm 1/1 Running 0 37d
orc8r-nginx-5f599dd8d5-sxpzf 1/1 Running 0 37d
Next, using the pod name, get the logs from the pod with the command below and check whether there is any problematic log entry for the related pod:
kubectl -n orc8r logs -f <nginx podname>
kubectl -n orc8r logs -f <bootstrapper podname>
For example:
kubectl -n orc8r logs orc8r-bootstrapper-775b5b8f6d-89spq
Try restarting the magmad service:
AGW$ sudo service magma@magmad restart
If the issue still persists, please file a GitHub issue or ask in our support channels
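If telnet is not available on the AGW host, the reachability and server-certificate checks can also be scripted with the Python standard library. This is not part of the Magma tooling: the host, port and CA path below are the placeholders from the control_proxy.yml sample above, and the check only validates connectivity and the server-side certificate (it does not perform the client-certificate bootstrap that checkin_cli.py tests).
import socket
import ssl
from datetime import datetime

host, port = "controller.yourdomain.com", 443        # cloud_address / cloud_port from control_proxy.yml
ca_file = "/var/opt/magma/tmp/certs/rootCA.pem"       # rootca_cert from control_proxy.yml

context = ssl.create_default_context(cafile=ca_file)
with socket.create_connection((host, port), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        # notAfter looks like "Jun  1 12:00:00 2025 GMT"
        not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        print("TLS handshake OK, server certificate expires:", not_after)
        if not_after < datetime.utcnow():
            print("WARNING: the server certificate has expired")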
https://docs.magmacore.org/docs/howtos/troubleshooting/agw_unable_to_checkin
2022-05-16T12:49:43
CC-MAIN-2022-21
1652662510117.12
[]
docs.magmacore.org
Manage Backup Manage your backups and control how it operates PREMIUM FEATURE There are several ways to control backup through the desktop tray app. RUN BACKUP JOB ON DEMAND There may come a time when you want to kick off a backup run manually before the next automatically scheduled run. You can do this using the Run backup now option shown below. All configured backup jobs will process during a backup run (backup jobs are not scheduled individually for specific folders). When an on-demand backup run processes, it resets the time until the next run The next scheduled backup run will always occur during the configured time increment (by default, 24 hours) after a backup job finishes. So if you normally expect backup to run around midnight each night but decide to kick off a manual backup run at 3:00pm, the next backup will be scheduled for 24 hours after the manual backup run completes. You can change the backup time interval from 24 hours to a specific number of minutes, if desired. Note that even with the default daily backup run setting, there will always be some amount of drift in the time of day that the backup job runs. This is because some finite time is required for the backup job to complete. If you need backup runs to kick off at an exact time (verses a time increment after the last run finishes), then you should consider using our CLI utility to kick off the backup run. Scripts can be scheduled using Windows Task Scheduler or cron jobs on Mac and Linux. CANCEL CURRENT BACKUP JOB If a backup job is currently running and you don't want it to run right now, you can cancel it through the tray menu. When you're ready to run backup again, you can simply kick off an on-demand backup job or wait until the next backup job run time for it to kick off automatically. LIST CURRENT BACKUP JOBS You can select the Backup to odrive option from the odrive tray menu to bring up a list of your current backup jobs. If you click on one of the listed backup jobs, you can get more information about it, including which source path and backup destination path this configured backup job represents. REMOVE A BACKUP JOB Option 1: Using the odrive tray menu As shown in the previous section, you can list your backup jobs from the tray menu by going to the Backup to odrive option. Selecting a particular backup job will bring up a dialog for you to confirm the source and destination folders. If this is the backup job you want to delete, select the Forget Backup option in the dialog. Option 2: Using the right-click menu Alternatively, you can also navigate to the folder within Windows Explorer or Mac Finder and right-click on it. If the folder has any backup jobs configured for it, you will see the Remove Backup option available for the folder. CHANGE THE INTERVAL BETWEEN BACKUP RUNS There is a configuration setting in the odrive_user_premium_conf.txt file which you can set to change the time between a backup job run finishing and the time the next job run is kicked off. See the Advanced client options section under the backupIntervalMinutes setting for more information. CONTROL BACKUP USING CLI SCRIPTS Coming soon. The backup command is already available from the CLI, but we will discuss it in more detail here in the near future. Updated over 1 year ago
https://docs.odrive.com/docs/manage-backup-beta
2022-05-16T12:04:44
CC-MAIN-2022-21
1652662510117.12
[]
docs.odrive.com
OpenClinica Enterprise is available either via a cloud hosted solution or a self hosted and managed system. The base supported platform for self hosted customers is Red Hat Enterprise Linux (RHEL) 6.5 or above. RHEL is a commercially supported operating system and recommended for enterprise customers. CentOS is a free rebuild for RHEL without commercial support and is available at For either, we highly recommend using the 64-bit version for best performance if your hardware is capable of supporting that architecture. RHEL 6.x installation guide is available at Using Kickstart is recommended if you prefer to automate the installation steps. However,unless you are running a large number of deployments, the interactive installer is easier to perform. If you choose to use RHEL, we recommend selecting the “minimal” option in the package selection. Refer to for more details. As noted in the installation guide, choosing this option maximizes security and performance since it reduces the number of services running and the number of updates required for maintaining the system. If you are using CentOS, the exact same options are available there as well. The net installation method allows you to download a small image and pull in packages on demand. Since an OpenClinica installation only requires a small number of packages within the operating system, this can be far more efficient than downloading the complete image. For CentOS, refer to for a detailed guide on doing a net installation. As an alternative in CentOS, you can download the minimal installation image at If you choose to download the minimal image, please note that the following core packages must be installed on the system for the OpenClinica installation and maintenance. * tar * unzip * vim-enhanced * cronie
https://docs.openclinica.com/3-1/installation/installation-operating-system-installation/
2022-05-16T11:56:14
CC-MAIN-2022-21
1652662510117.12
[]
docs.openclinica.com
TransformOutput Describes the results of a transform job.
Contents
- Accept The MIME type used to specify the output data. Amazon SageMaker uses the MIME type with each http call to transfer data from the transform job. Type: String Length Constraints: Maximum length of 256. Pattern: Required: No
- AssembleWith Defines how to assemble the results of the transform job as a single S3 object. Choose a format that is most convenient to you. To concatenate the results in binary format, specify None. To add a newline character at the end of every transformed record, specify Line. Type: String Valid Values: None | Line Required: No
- KmsKeyId The Amazon Key Management Service (Amazon KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. The KmsKeyId can be any of the following formats: Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab Alias name: alias/ExampleAlias Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide. The KMS key policy must grant permission to the IAM role that you specify in your CreateModel request. For more information, see Using Key Policies in Amazon KMS in the Amazon Key Management Service Developer Guide. Type: String Length Constraints: Maximum length of 2048. Pattern: Required: No
- S3OutputPath The Amazon S3 path where you want Amazon SageMaker to store the results of the transform job. For example, s3://bucket-name/key-name-prefix. For every S3 object used as input for the transform job, batch transform stores the transformed data with an .out suffix. Type: String Length Constraints: Maximum length of 1024. Pattern: ^( Required: Yes
See Also For more information about using this API in one of the language-specific Amazon SDKs, see the following:
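As an illustration of where these fields are used, the sketch below passes a TransformOutput structure to a batch transform job through boto3. The job name, model name, bucket names, KMS alias and instance type are placeholders, not values taken from this page.
import boto3

sagemaker = boto3.client("sagemaker")

sagemaker.create_transform_job(
    TransformJobName="example-transform-job",          # placeholder
    ModelName="example-model",                          # placeholder, must already exist
    TransformInput={
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://example-bucket/input/",  # placeholder
            }
        },
        "ContentType": "text/csv",
    },
    TransformOutput={
        "S3OutputPath": "s3://example-bucket/output/",  # placeholder
        "Accept": "text/csv",
        "AssembleWith": "Line",
        "KmsKeyId": "alias/ExampleAlias",               # placeholder
    },
    TransformResources={
        "InstanceType": "ml.m5.large",
        "InstanceCount": 1,
    },
)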
https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_TransformOutput.html
2022-05-16T11:52:04
CC-MAIN-2022-21
1652662510117.12
[]
docs.amazonaws.cn
Mesh Sequence Cache Modifier
Options
- Cache File: Data-block menu to select the Alembic or USD file.
- File Path: Path to the Alembic or USD file.
- Sequence
- Frame Offset: Subtracted from the current frame to use for looking up the data in the cache file, or to determine which file to use in a file sequence.
- Velocity Attribute: The name of the Alembic attribute used for generating motion blur data; by default, this is .velocities, which is standard for most Alembic files.
Note: The Velocity Attribute option is currently for Alembic files only.
- Velocity Unit: Defines how the velocity vectors are interpreted with regard to time.
  - Frame: The velocity unit was encoded in frames and does not need to be scaled by the scene FPS.
  - Second: The velocity unit was encoded in seconds and needs to be scaled by the scene FPS (1 / FPS).
Note: The Velocity Unit option is currently for Alembic files only.
- Object Path: The path to the Alembic or USD object inside the archive or stage.
- Read Data: Type of data to read for a mesh object, respectively: vertices, polygons, UV maps and Vertex Color layers. Vertices, Faces, UV, Color
- Velocity Scale: Multiplier used to control the magnitude of the velocity vector for time effects such as motion blur.
Note: The Velocity Scale option is currently for Alembic files only.
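The same options can also be set from a script. The sketch below is not taken from the manual: it assumes a Cache File data-block has already been loaded, the object path is a placeholder, and property names may differ slightly between Blender versions.
import bpy

obj = bpy.context.object

# Assumes an Alembic/USD archive was already opened in the UI, so at least
# one Cache File data-block exists under bpy.data.cache_files.
cache = bpy.data.cache_files[0]

mod = obj.modifiers.new(name="MeshSequenceCache", type='MESH_SEQUENCE_CACHE')
mod.cache_file = cache              # the "Cache File" option
mod.object_path = "/object/mesh"    # the "Object Path" option (placeholder path inside the archive)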
https://docs.blender.org/manual/ja/latest/modeling/modifiers/modify/mesh_sequence_cache.html
2022-05-16T11:33:31
CC-MAIN-2022-21
1652662510117.12
[]
docs.blender.org
🕷 SEO Crawling & Scraping: Strategies & Recipes Once you have mastered the basics of using the crawl function, you probably want to achieve more with better customization and control. These are some code strategies that might be useful to customize how you run your crawls. Most of these options can be set using the custom_settings parameter that the function takes. This can be set by using a dictionary, where the keys indicate the option you want to set, and the values specify how you want to set them. How to crawl a list of pages, and those pages only (list mode)? Simply provide that list as the first argument, for the url_list parameter, and make sure that follow_links=False, which is the default. This simply crawls the given pages, and stops when done. >>> import advertools as adv >>> url_list = [' ... ' ... ' ... ' >>> adv.crawl(url_list, ... output_file='example_crawl_1.jl', ... follow_links=False) How can I crawl a website including its sub-domains? The crawl function takes an optional allowed_domains parameter. If not provided, it defaults to the domains of the URLs in url_list. When the crawler goes through the pages of example.com, it follows links to discover pages. If it finds pages on help.exmaple.com it won't crawl them (it's a different domain). The solution, therefore, is to provide a list of domains to the allowed_domains parameter. Make sure you also include the original domain, in this case example.com. >>> adv.crawl(' ... 'example_crawl_1.jl', ... follow_links=True ... allowed_domains=['help.example.com', 'example.com', 'community.example.com']) How can I save a copy of the logs of my crawl for auditing them later? It's usually good to keep a copy of the logs of all your crawls to check for errors, exceptions, stats, etc. Pass a path of the file where you want the logs to be saved, in a dictionary to the cutom_settings parameter. A good practice for consistency is to give the same name to the output_file and log file (with a different extension) for easier retreival. For example: output_file: 'website_name_crawl_1.jl' LOG_FILE: 'website_name_crawl_1.log' (.txt can also work) output_file: 'website_name_crawl_2.jl' LOG_FILE: 'website_name_crawl_2.log' >>> adv.crawl(' 'example_crawl_1.jl', ... custom_settings={'LOG_FILE': 'example_crawl_1.log'}) How can I automatically stop my crawl based on a certain condition? There are a few conditions that you can use to trigger the crawl to stop, and they mostly have descriptive names: - CLOSESPIDER_ERRORCOUNT: You don't want to wait three hours for a crawl to finish, only to discover that you had errors all over the place. Set a certain number of errors to trigger the crawler to stop, so you can investigate the issue. - CLOSESPIDER_ITEMCOUNT: Anything scraped from a page is an "item", h1, title , meta_desc, etc. Set the crawler to stop after getting a certain number of items if you want that. - CLOSESPIDER_PAGECOUNT: Stop the crawler after a certain number of pages have been crawled. This is useful as an exploratory technique, especially with very large websites. It might be good to crawl a few thousand pages, get an idea on its structure, and then run a full crawl with those insights in mind. - CLOSESPIDER_TIMEOUT: Stop the crawler after a certain number of seconds. >>> adv.crawl(' 'example_crawl_1.jl', ... custom_settings={'CLOSESPIDER_PAGECOUNT': 500}) How can I (dis)obey robots.txt rules? The crawler obeys robots.txt rules by default. Sometimes you might want to check the results of crawls without doing that. 
You can set the ROBOTSTXT_OBEY setting under custom_settings: >>> adv.crawl(' ... 'example_crawl_1.jl', ... custom_settings={'ROBOTSTXT_OBEY': False}) How do I set my User-agent while crawling? Set this parameter under custom_settings dictionary under the key USER_AGENT. The default User-agent can be found by running adv.spider.user_agent >>> adv.spider.user_agent # to get the current User-agent >>> adv.crawl(' ... 'example_crawl_1.jl', ... custom_settings={'USER_AGENT': 'YOUR_USER_AGENT'}) How can I control the number of concurrent requests while crawling? Some servers are set for high sensitivity to automated and/or concurrent requests, that you can quickly be blocked/banned. You also want to be polite and not kill those servers, don't you? There are several ways to set that under the custom_settings parameter. The available keys are the following: CONCURRENT_ITEMS: default 100 CONCURRENT_REQUESTS: default 16 CONCURRENT_REQUESTS_PER_DOMAIN: default 8 CONCURRENT_REQUESTS_PER_IP: default 0 >>> adv.crawl(' ... 'example_crawl_1.jl', ... custom_settings={'CONCURRENT_REQUESTS_PER_DOMAIN': 1}) How can I slow down the crawling so I don't hit the websites' servers too hard? Use the DOWNLOAD_DELAY setting and set the interval to be waited before downloading consecutive pages from the same website (in seconds). >>> adv.crawl(' 'example_crawl_1.jl', ... custom_settings={'DOWNLOAD_DELAY': 3}) # wait 3 seconds between pages How can I set multiple settings to the same crawl job? Simply add multiple settings to the custom_settings parameter. >>> adv.crawl(' ... 'example_crawl_1.jl', ... custom_settings={'CLOSESPIDER_PAGECOUNT': 400, ... 'CONCURRENT_ITEMS': 75, ... 'LOG_FILE': 'output_file.log'}) I want to crawl a list of pages, follow links from those pages, but only to a certain specified depth Set the DEPTH_LIMIT setting in the custom_settings parameter. A setting of 1 would follow links one level after the provided URLs in url_list >>> adv.crawl(' ... 'example_crawl_1.jl', ... custom_settings={'DEPTH_LIMIT': 2}) # follow links two levels from the initial URLs, then stop How do I pause/resume crawling, while making sure I don't crawl the same page twice? There are several reasons why you might want to do this: You want to mainly crawl the updates to the site (you already crawled the site). The site is very big, and can't be crawled quickly. You are not in a hurry, and you also don't want to hit the servers hard, so you run your crawl across days for example. As an emergency measure (connection lost, battery died, etc.) you can start where you left off Handling this is extremely simple, and all you have to do is simply provide a path to a new folder. Make sure it is new and empty, and make sure to only use it for the same crawl job reruns. That's all you have to worry about. The JOBDIR setting handles this. >>> adv.crawl(' ... 'example_crawl_1.jl', ... custom_settings={'JOBDIR': '/Path/to/en/empty/folder'}) The first time you run the above code and then stop it. Stopping can happen by accident (lost connection, closed computer, etc.), manually (you hit ctrl+C) or you used a custom setting option to stop the crawl after a certain number of pages, seconds, etc. The second time you want to run this, you simply run the exact same command again. If you check the folder that was created you can see a few files that manage the process. You don't need to worry about any of it. 
But make sure that folder doesn't get changed manually, rerun the same command as many times as you need, and the crawler should handle de-duplication for you. XPath expressions for custom extraction The following are some expressions you might find useful in your crawling, whether you use advertools or not. The first column indicates whether or not the respective expression is used by default by the advertools crawler.
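As a sketch of how such expressions are plugged into a crawl: recent advertools versions accept xpath_selectors and css_selectors dictionaries whose keys become extra columns in the output file. Treat the parameter names and the expressions below as assumptions to verify against your installed version.
>>> import advertools as adv
>>> adv.crawl('https://example.com',
...           'example_crawl_1.jl',
...           xpath_selectors={'h2_headings': '//h2/text()',
...                            'canonical': '//link[@rel="canonical"]/@href'},
...           css_selectors={'nav_links': 'nav a::attr(href)'})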
https://advertools.readthedocs.io/en/master/advertools.code_recipes.spider_strategies.html
2022-05-16T11:16:18
CC-MAIN-2022-21
1652662510117.12
[]
advertools.readthedocs.io
BitSport Search… BitSport BitSport - A Peer to Peer Competitive eSports Platform - DeFi Through Gaming BitSport RoadMap 2021 GitBook BitSport - A Peer to Peer Competitive eSports Platform - DeFi Through Gaming Building the future of gaming fueled finance and capturing market value for the gamer for the first time. BitSport. gg - Competitive eSports Platform BitSport is one of the first ever Peer to Peer Blockchain driven competitive eSports platform opening up several monetization avenues for Gamers and eSports in a competitive, decentralized fashion.-blood of the gaming community are left only as consumers. Our platform offers an alternative ecosystem where instead they could be content creators that are earning in a diverse blockchain driven economy, capturing their share of the market, empowering the gaming community as a whole. Along with a diverse array of opportunities for anyone to monetize their gaming skills, BitSport also gives spectators and sponsors an opportunity to capture a piece of the action with gamers themselves becoming an asset of their own. Thus creating a next generation, all inclusive, e-sports ecosystem in the process backed by trusted blockchain technology not just bringing the industry up to speed with the times, but into the future allowing for smart contracts to intertwine with existing online economies and creating new ones outright. Community Gauntlet With BitSport - we bring you the Community Gauntlet, where gamers are able to demand the type of entertainment they want. Here BitSport will host tournaments, and high-stakes matches amongst influencers. These matches can be community driven through voting, or opened by sponsors to create their own custom tournaments to suit their specific use case. These curated gaming events will then be featured on our Community Gauntlet section on BitSport.gg Current monetization modules and avenues for The Community Gauntlet are: Polls, Quiz's & Challenges Voting Rewards Pooled Staking Sponsored Tournaments P2P Challenges Peer to Peer Challenges allow gamers to capitalize off their skills off their chosen titles. These challenges are open to anyone and are fueled by BTP. Matches are created by users with BTP, and rewards are allocated in BTP. Gamers will be able to inspect their opponents stats profile, and accept along with reject challenges. P2P Matches Peer to Peer challenges allow the community to back their choice professionals and influencers in the industry to and stake on the behalf of their picks. Creating a match pool of BTP that is shared by watchers and players alike. This BTP is then locked in a trustless smart contract, to be allocated by the outcome of the match determined by the size of each contestants pool. Locked Liquidity Matches Locked Liquidity Matches allow for BitSport to incentivize and drive engagement through low entry stakes Locked Liquidity matches where a BFI entry is required at a predetermined ratio between contestants. The opponents or teams will proceed to compete in their chosen digital realm. The winners of the match will receive their BFI tokens plus interest for winning. The losing party will have their tokens locked into a smart contract and held for 14 day intervals with increased rewards in BTP for longer lock times. This feature is to limit the flow of tokens onto exchanges and promote price stability, another facet of BitSport to promote anti-fragility. 
Locked liquidity matches can also utilize the erc-721 protocol to reward users with unique digital assets in exchange for staking their liquidity further expanding the digital eSports economy. This protocol would allow for truly unique blockchain assets such as clan icons, or in custom in-game cosmetic rewards. Seasons Quarterly seasonal tournaments will be held, this will be dispersed into 4 continental zones. What games, formats, and prizes will be determined by the BitPlay community. These tournaments aim to drive maximum engagement from the community by giving them a real voice in the industry for the first time. Blending physical and digital realms into a new standard of experience.. Tournaments powered by BitSport will utilize existing streaming services to live broadcast to the world, while ultimately streaming native on the BitSport platform. Creating a lucrative space for brands to gain recognition, and viewers to be able to once again capture a piece of the action through features such as: Live airdrops to unique viewers reversing traditional Pay Per View structures. Various prize pool staking options Unique in-game items such as seasonal exclusive cosmetics Season Qualifiers Season qualifiers will be played in the first 3 weeks of every new fiscal quarter. A pre-determined number of participants would register to participate in their picked eSport title. Open qualification matches would begin and gamers will begin to compete for their seat at the finals. The frontrunners will be flown as part of an all expense paid trip to compete in the big leagues at the Grand BitSport Season Finals. Creating a truly AIO, level playing field for eSports to span all demographics and further empower the gamer. Sponsors (BitSport.Finance) Decentralized Finance (DeFi) is moving mountains in the financial space, changing our perception of how we can use digital assets, creating a myriad of creative solutions to intercept the dependency on banks we all currently rely on in the modern economic world. BitSport is aiming to become the first to market to merge DeFi, with a decentralized gaming economy. BitSport DeFi Monetization Modules BFI Trading Pairs and DEX: Through locked liquidity match models BitSport will be able to host a Decentralized Exchange (DEX) with BFI based trading pairs, offering yield farming opportunities through automated market makers (AMM). The locked BFI through locked liquidity matches can be used to fuel liquidity pools providing earnings for everyone to be able earn even by losing. Matches can be built off trading pairs showing risk with transparency to all involved parties. Stakeholders of matches will be either rewarded instantly as winners or over time with APY yields exceeding what is offered by any traditional banking institutions in a trustless fashion where users area able to see exactly where their assets are going and make informed decisions that reflect their personal interests instead of a third party in a legacy trust based institution. Sponsor DApp: The BitSport Sponsor DApp is a lucrative way for anyone to support a favorite eSport team, established pros or climbing gamer wanting to prove their worth on the BitSport platform and earn rewards in BFI and other associated coins. Sponsoring a tournament on BitSport is an efficient and low-risk avenue to maximize your yield and earning potentials on BitSport. 
Tournaments will follow predetermined protocols to allow for ultimate transparency between all parties utilizing the latest DeFi advancements to reduce risk exposure as much as currently available. For each tournament, a smart contract fund is created where anyone can stake stable coins such as DAI, USDT, USDC etc, and farm BTP token rewards. Rewards can be claimed anytime while staked positions can be withdrawn once every 14 days. Sponsoring a player on BitSport introduces a new feature to the world of eSports, turning players into assets for the first time. BitSport spectators can stake on the success of their favorite players. This gives the gamer liquidity to access bigger prize pools and progress through the ranks. Supporting spectators will be rewarded at a transparent pre-determined rate alongside their eSport picks. BitSport Finance - BFI Incentives, Rewards, Interests (for DeFi users) would be settled in BFI - the primary Cryptocurrency which governs BitSport Finance (BFI) for Sponsors. BFI Tokenomics BFI is an ERC-20 token minted on the Ethereum Blockchain. There is a total supply of 57,000 (Fifty Seven Thousand) BFI tokens. Token Name: BitSport Token Symbol: BFI Total Supply: 57,000 Features: Fixed Supply (No Minting) (Transfer Fee) Smart-Contract Address: 0xd5b89d470a8cec0d3d2bd2fa31fd5df33b2e3f97 EtherScan CoinMarketCap BFI Pre-Sale A total of 15,000 BFI has been allocated for auction at a 5% discount. Auction (Phase 3) Date: 20th of January to 6th of March 2021 Listing on Uniswap & HotBit: 6th of March 2021 Participate in the Pre-Sale from here. BitSport RoadMap 2021 Last modified 1yr ago Copy link Contents BitSport.gg - Competitive eSports Platform Community Gauntlet P2P Challenges P2P Matches Seasons Sponsors (BitSport.Finance) BitSport DeFi Monetization Modules BitSport Finance - BFI BFI Tokenomics BFI Pre-Sale
https://docs.bitsport.gg/
2022-05-16T13:27:46
CC-MAIN-2022-21
1652662510117.12
[]
docs.bitsport.gg
Reminders Reminders can be used to remind users about anything they need to know about regarding a specific component. It is assumed the reminders are always related to a component. Currently they are only implemented in the budgets component to remind users if they did not finish their vote but added something to their vote earlier. The automatic reminder generation should run once every day and the reminder generator will contain the logic that decides who to remind about what. The generator class will queue the reminders that need to be sent. To run automatic reminder generator. bundle exec rake decidim:reminders:all Key concepts Reminders can be created either automatically or through an admin triggered action from the admin panel. For instance, we can send budgeting reminders automatically after two hours, one week and two weeks after voting has been started. If the admin wanted to remind all users at the start of final day of the voting, which would be two days after the last automatic reminder or 2.5 weeks after the voting start, they would have to trigger it manually or change the configuration of the budgeting reminder. The reminders are controlled by a generator class which generates the reminders to be sent. The generator controls how and when the reminders are generated as it can be very context specific when to send the reminders. For instance, in the budgeting component, we need to find all unfinished orders that have been started more than two hours ago and which have not been already reminded enough many times. Reminders are defined through their own manifests which defines the following parameters for the reminders: The generator class which is used to run the logic for creating the reminders The form class which is used for defining specific parameters for the view where admin users can manually trigger the reminders to be sent The command class which will queue the reminders when admin triggers the reminders manually The reminder objects Reminders consist of the following database objects: Reminder which holds the main reminder object that is attached to a user to be reminded about and the component for which the reminder is created for. The reminder can have many deliveries and many records to be reminded about. ReminderDelivery which holds a log of all deliveries sent to the user. This may be useful in cases where we need to audit the system or solve a user support request asking why they were reminded for a specific thing at a specific time. In the backend, this also lets us do conditional logic based on how many times the user has been reminded and when the last reminder was sent. ReminderRecord which holds information about the records the reminder is related to. This lets us combine reminders that are related to multiple records at a time, so that we don’t need to send mulitple emails for each record. For example, the budgeting reminders will contain information about in which budgets the user has pending votes which allows us to combine this information in a single email, instead of sending one email per pending order in each budget. ReminderRecord states The ReminderRecord object holds a "state" attribute which tells whether the record is in one of the following states: active- The reminder record is active for the reminder to be sent. Only active records should be included in the reminder. pending- The reminder record is "pending" which means that the reminder should probably be sent soon but not for sure. 
For example, in the budgeting reminders the reminder record is "pending" if voting has been started but it has been started just a moment before automatically sending the reminders. In this situation, we would not want to remind the user if they started the voting process two minutes before the automatic reminder sending was run on the server. deleted- The record has been "deleted", so it will not need any further reminders. We still keep the ReminderRecord in order to preserve the backlog about when the previous reminders were sent. For example, in the budgeting reminders, the ReminderRecord is related to a budgeting "order" (or vote) which can be deleted by the user, and therefore won’t need any further reminders. completed- The record has been "completed", so it will not need any further reminders. The reminders can be specific to the state of the remindable objects, so we change the ReminderRecord state to "completed" when the record will not need any further reminders. For example, in the budgeting reminders, we would not want to remind the user anymore if they completed their vote in a budget but they still have pending order in another budget that will still need further reminders. In this situation, we would want to include only the pending order in the further reminders, still keeping the backlog information about the previous reminders for the already completed budget order (vote). Defining a reminder Reminders can be defined through initializers by defining calling the registed method on the reminders registry object at the Decidim main module as follows: Decidim.reminders_registry.register(:orders) do |reminder_registry| reminder_registry.generator_class_name = "Decidim::YourModule::YourReminderGenerator" reminder_registry.form_class_name = "Decidim::YourModule::Admin::YourReminderForm" reminder_registry.command_class_name = "Decidim::YourModule::Admin::CreateYourReminders" # The reminder settings object lets you define configurations that can be changed by the system administrators. # For example, if you want to make the intervals configurable when the reminders will be sent, you can provide a # configuration for that. reminder_registry.settings do |settings| # For example, if your reminder should be automatically sent three times at specific intervals settings.attribute :reminder_times, type: :array, default: [2.hours, 1.week, 2.weeks] end # The messages that will be shown for the reminder user interface if the admin wants to manually trigger the # reminders. The title is shown at the top of the page and the description will be shown under it where you can # provide information e.g. on how many reminders would be sent if the admin triggered the action. reminder_registry.messages do |msg| msg.set(:title) { |count: 0| I18n.t("decidim.budgets.admin.reminders.orders.title", count: count) } msg.set(:description) { I18n.t("decidim.budgets.admin.reminders.orders.description") } end end Defining a reminder generator The generator object holds the main logic for creating the reminders. You can see one example at Decidim::Budgets::OrderReminderGenerator which generates the reminders for the pending orders. 
Another example could be for the upcoming meetings that will be happening in the next two days, which could be implemented by defining the following reminder generator:
# frozen_string_literal: true

module Decidim
  module Meetings
    # This class is the generator class which creates and updates meeting related reminders;
    # after a reminder is generated it is sent to users who are participating in upcoming meetings.
    class MeetingReminderGenerator
      attr_reader :reminder_jobs_queued

      def initialize
        @reminder_manifest = Decidim.reminders_registry.for(:meetings)
        @reminder_jobs_queued = 0
        @queued_reminders = []
      end

      # Creates reminders and updates them if they already exist.
      def generate
        Decidim::Component.where(manifest_name: "meetings").each do |component|
          send_reminders(component)
        end
      end

      # This can be called by the admin command that manually triggers the reminders.
      def generate_for(component)
        send_reminders(component)
      end

      private

      attr_reader :reminder_manifest, :queued_reminders

      def send_reminders(component)
        # before_days could be provided as a configuration option, e.g. `2.days`
        before_days = reminder_manifest.settings.attributes[:before_days]
        Decidim::Meetings::Meeting.where(component: component).where(
          "start_time >= ? AND start_time <= ?",
          DateTime.now + before_days.days,
          DateTime.now + before_days.days + 1.day
        ).each do |meeting|
          Decidim::Meetings::Registration.where(meeting: meeting).each do |registration|
            reminder = Decidim::Reminder.find_or_create_by(user: registration.user, component: component)
            record = Decidim::ReminderRecord.find_or_create_by(reminder: reminder, remindable: meeting)
            record.update(state: "active") unless record.active?
            reminder.records << record
            reminder.save!

            next if queued_reminders.include?(reminder.id)

            Decidim::Meetings::SendMeetingRemindersJob.perform_later(reminder)
            @reminder_jobs_queued += 1
            queued_reminders << reminder.id
          end
        end
      end
    end
  end
end
The Decidim::Meetings::SendMeetingRemindersJob would be responsible for delivering the emails for the upcoming meetings in the specified component. In addition, you need to create the Command and the Form objects to handle the manually triggered reminders from the admin panel in case you decide to implement these. Please use Decidim::Budgets::Admin::CreateOrderReminders and Decidim::Budgets::Admin::OrderReminderForm as examples when implementing these. Also note that providing the admin-triggered manual reminders is not mandatory, in which case you can omit creating these classes and the related view changes.
https://docs.decidim.org/en/develop/reminders/
2022-05-16T12:35:40
CC-MAIN-2022-21
1652662510117.12
[]
docs.decidim.org
Zone Expiry of Secondary Zones¶
NSD will keep track of the status of secondary zones, according to the timing values in the SOA record for the zone. When the refresh time of a zone is reached, the serial number is checked and a zone transfer is started if the zone has changed. Each primary server is tried in turn. Master zones cannot expire, so they are always served. Zones are interpreted as primary zones if they have no request-xfr: statements in the config file.
After the expire timeout (from the SOA record at the zone apex) is reached, the zone becomes expired. NSD will return SERVFAIL for expired zones, and will attempt to perform a zone transfer from any of the primaries. After a zone transfer succeeds, or if the primary indicates that the SOA serial number is still the same, the zone will be OK again.
In contrast with e.g. BIND, the inception time for a secondary zone is stored on disk (in xfrdfile: "xfrd.state"), together with timeouts. If a secondary zone acquisition time is recent enough, this means that NSD can start serving a zone immediately on loading, without querying the primary server.
If your secondary zone has expired and no primaries can be reached, but you still want NSD to serve the zone, then you can delete the xfrd.state file, but leave the zone file for the zone intact. Make sure to stop NSD before you delete the file, as NSD writes it on exit. Upon loading, NSD will treat the zone file that you as operator have provided as recent and will serve the zone. Even though NSD will start to serve the zone immediately, the zone will expire after the timeout is reached again. NSD will also attempt to confirm that you have provided the correct data by polling the primaries. So when the primary servers come back up, it will transfer the updated zone within <retry timeout from SOA> seconds.
In general it is possible to provide zone files for both primary and secondary zones manually (say from email or rsync). Reload with SIGHUP or nsd-control reload to read the new zone file contents into the name database. When this is done the new zone will be served. For primary zones, NSD will issue notifications to all configured notify: targets. For secondary zones the above happens; NSD attempts to validate the zone from the primary (checking its SOA serial number).
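A back-of-the-envelope sketch of the timing described above (not an NSD tool; the expire value and acquisition time are placeholders): a zone is served normally until acquisition time plus the SOA expire value, after which NSD answers SERVFAIL until a transfer succeeds.
from datetime import datetime, timedelta

soa_expire = 1209600                          # SOA expire field in seconds (2 weeks, a common value)
acquired_at = datetime(2022, 5, 1, 12, 0)     # last successful acquisition time, as xfrd.state would record

expires_at = acquired_at + timedelta(seconds=soa_expire)
if datetime.utcnow() < expires_at:
    print("zone is still considered fresh and is served normally")
else:
    print("zone is expired; NSD would answer SERVFAIL until a transfer succeeds")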
https://nsd.docs.nlnetlabs.nl/en/latest/running/zone-expiry.html
2022-05-16T12:37:10
CC-MAIN-2022-21
1652662510117.12
[]
nsd.docs.nlnetlabs.nl
fiskaltrust.Portal - Sprint 110
October 4, 2021
In the past weeks we have been working hard on creating the Technical Rollout, which should help our customers roll out Middleware configurations in a similar way to how they currently roll out products with the Rollout Management.
Features
Middleware Configuration
Support
Middleware Configuration
Reworked ContactProfile Data Edit Page
The ContactProfile Data edit page has been reworked and, aside from offering the same functionalities as before, client-side validation has been added and some visual and performance improvements can be observed.
Replaced images that are displayed while logging in Portal
In the past, when users were logging in, they were not sure whether they were logging into the correct market. Now, the images shown for each market are the actual logos of said market (AT, DE and FR).
Support
Technical Rollout in DE
After several iterations, we finally have the rollout in the DE market. The preview flag has been released and removed from the rollout management & plan. New navigation icons are available in the rollout management menu section, and the preview of technical rollout for templates has been launched. After selecting between a technical and a business rollout, the user now has the option to select one of the existing configuration templates, accounts and outlets. Finally, a quote is generated and everything is ready for checkout. Checking out the template will execute it, producing cashboxes, queues, SCUs and helpers, all fully automatically, finally giving users the ability to roll out a single template to multiple outlets of the same customer in a safe, fast and easy way.
New Product Detail Card Component
Now, when users are buying a product, they have the new product detail card component, which offers them information on each specific product, such as price, product number and a short description, whenever they want to navigate to the details page. Aside from that, they can specify the exact quantity of items to purchase and they can use a button to add items into the cart.
Rollout plan site is translatable
Up until now, the RolloutPlans were not being translated. In order to make things easier for POS Dealers who do not speak German, all data that we are showing are now presented originally in English and can be found in the tr
Next steps
In the next weeks we will focus on connecting the Technical Rollout to the business Rollout.
Feedback
We would love to hear what you think about these improvements and fixes. To get in touch, please reach out to [email protected].
https://docs.fiskaltrust.cloud/docs/release-notes/portal/sprint-110
2022-05-16T11:05:19
CC-MAIN-2022-21
1652662510117.12
[]
docs.fiskaltrust.cloud
GHI Electronics Documentation Here you will find GHI Electronics product documentation. For more information visit the main website. You can also visit our community forum. SITCore - the .NET C# Hardware SITCore family of .NET C# Chips and Modules for creating secure IoT devices. Learn More... TinyCLR OS - the heart of SITCore The .NET C# operating system for embedded devices, powering the SITCore product family. Develop software in Microsoft Visual Studio, and debug over USB. Learn More...
https://docs.ghielectronics.com/index.html
2022-05-16T12:17:55
CC-MAIN-2022-21
1652662510117.12
[]
docs.ghielectronics.com
The Triple Exponential Average (TEA) is a momentum indicator used to identify when a security is oversold and overbought. By exponentially smoothing out the underlying security`s moving average, the TEA filters out insignificant price movements. A positive TEA is often believed to indicate momentum is increasing and a negative TEA indicates that momentum is decreasing.
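For intuition, the indicator can be approximated offline from a series of closing prices: smooth the series with three successive EMAs and take the one-period percent change. This is a sketch of the common TRIX construction, not Intrinio's exact implementation, and the period length is an assumption.
import pandas as pd

def trix(close: pd.Series, period: int = 15) -> pd.Series:
    # Triple exponential smoothing of the closing prices.
    ema1 = close.ewm(span=period, adjust=False).mean()
    ema2 = ema1.ewm(span=period, adjust=False).mean()
    ema3 = ema2.ewm(span=period, adjust=False).mean()
    # One-period rate of change of the triple-smoothed series, in percent.
    return ema3.pct_change() * 100

prices = pd.Series([44.3, 44.1, 44.4, 44.9, 45.2, 45.6, 45.4, 45.8, 46.1, 46.0])
print(trix(prices, period=5))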
https://docs.intrinio.com/documentation/web_api/get_security_price_technicals_trix_v2
2022-05-16T11:53:23
CC-MAIN-2022-21
1652662510117.12
[]
docs.intrinio.com
The framerate parameter determines the frame rate at which timestamps should be generated. The autoTick parameter determines whether the clock should automatically advance its timestamp when the timestamp property is accessed. In most use cases, this is best left as true. interval is the amount of time between consecutive timestamps in seconds. For some recorders, this property determines the true frame rate of the video.
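To illustrate the idea, here is a plain Python analogue (not the NatML C# API): each read of the timestamp returns frame_index * interval, and with auto-ticking enabled the index advances on every read.
class FixedIntervalClockSketch:
    def __init__(self, framerate: float = 30.0, auto_tick: bool = True):
        self.interval = 1.0 / framerate   # seconds between consecutive timestamps
        self.auto_tick = auto_tick
        self._frame = 0

    @property
    def timestamp(self) -> float:
        value = self._frame * self.interval
        if self.auto_tick:
            self._frame += 1              # advance automatically on each access
        return value

    def tick(self):
        self._frame += 1                  # manual advance when auto_tick is False

clock = FixedIntervalClockSketch(framerate=30.0)
print(clock.timestamp, clock.timestamp)   # 0.0, then 1/30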
https://docs.natml.ai/natcorder/api/iclock/fixedintervalclock
2022-05-16T12:24:27
CC-MAIN-2022-21
1652662510117.12
[]
docs.natml.ai
Options
Presets
You can add or remove preset configurations from the tool here by pressing the + or - keys, or reset the tool to its default parameters.
Source Object
The following controls the parameters of how the source object is applied:
Method
Choose between Grid Mode or Shrinkwrap Mode.
Gradient Effect
This creates a vertex group which automatically weights the vertices at the bottom so the effect is less exaggerated at the top:
Gradient Type
This is the type of gradient effect to apply:
Linear: this will create a gradually decreasing effect between two values.
Constant: this will create a hard falloff from 1 to 0.
Start/End
This controls when the gradient effect of the vertices starts and ends. A value of 0.0 is at the bottom of the object, and a value of 1.0 is at the top. Lowering the value of the end below 1.0 will stop the deformation towards the bottom of the object, and higher values will extend the weight beyond the top of the object. Increasing the start value will start the weighting higher up the object.
Tip
Visualise a vertex group in Edit Mode by selecting “Vertex Group Weights” in the overlays panel:
Blend Normals
This will blend the normals of the source object with the target object, creating a smoother transition between the two object surfaces:
This effect is achieved by using a Data Transfer Modifier on the Source Object.
Start/End (Blend Normals)
As with the Start/End controls for the Gradient Effect, this controls which face normals are affected.
Blend Whole Object
This will blend all of the object’s normals regardless of the gradient effect.
Align Object to Face
This will automatically align the source object to the face of the target object it is being applied to if it is not already.
Positioning
This controls how the source object is positioned on the surface:
Lowest Point: This will take the lowest point of the source object and use that to place it on the surface.
Center: This will take the center of the source object and use that to place it on the surface.
Add Simple Subdivisions
This adds a Subdivision Surface modifier to the source object, set to ‘simple’, in case you wish to quickly subdivide the mesh when conforming the object.
Subdivisions: The number of subdivisions to use in the modifier.
Collapse Modifiers
This will collapse the existing modifiers on the source object if they are interfering with the conform effect.
Deform Modifier Position
This will change the position of the deformation modifier (either Surface Deform or Shrinkwrap) on the source object:
Start: At the start of the modifier stack.
Before: This will place the modifier just before a specified modifier. Selecting the option will allow you to specify which modifier.
End: At the end of the modifier stack.
Grid Object
This controls the nature of the deformation grid used in Grid Mode. It is a regular blender object, parented to the source object, but is configurable by the add-on:
Hide Grid
By default, the deformation grid is hidden but it can be displayed if you wish to configure it:
Grid Subdivisions
The number of vertices in the grid. If you are deforming over particularly smoothed or high resolution meshes, increasing this number can be useful.
Grid X/Y
Move the grid’s X/Y position.
Grid Scale X/Y
Scale the grid in the X/Y direction.
Grid Rotation
Rotate the grid over the surface.
Interpolation Falloff
Used on the Surface Deform Modifier for the grid.
From the documentation: “How much a vertex bound to one face of the target will be affected by the surrounding faces (this setting is unavailable after binding). This essentially controls how smooth the deformations are.”
https://conform-object-docs.readthedocs.io/en/latest/options.html
2022-05-16T11:21:39
CC-MAIN-2022-21
1652662510117.12
[]
conform-object-docs.readthedocs.io
Fingerprint
Although the platform should show the Versions for almost all of the contents, it would still be possible to manipulate the content by directly editing the database. As mitigation for this risk, the platform shows a fingerprint for some important fields, for instance, a proposal body and title. Its goal is to provide a way to give an informal "receipt" to a participant so they can detect tampering.
A fingerprint is a hashed representation of the content. It's useful to ensure the content hasn't been tampered with, as a single modification would result in a totally different value. It's calculated using a SHA256 hashing algorithm. In order to replicate it yourself, you can use a SHA256 calculator online and copy-paste the source data.
Check fingerprint
- Go to the content that you want to check the fingerprint
- Click the "Check fingerprint" link in the sidebar
- Follow the modal instructions
It's possible to check the fingerprint with other tools, such as the sha256sum command line tool.
echo -n '{"body":{"en":"<p><strong>**Is your feature request related to a problem? Please describe.**</strong></p><p>It would be useful to set a character limit on questionnaire answers to provide guidance for users regarding how long their answers should be.</p><p><br></p><p><strong>**Describe the solution you\'d like**</strong></p><p>To have a number input field next to each question in the admin, labeled \\"Character limit\\", by default set to zero (no limit), which determines the maximum characters the user answers to those questions can have.</p><p><br></p><p><strong> **Describe alternatives you\'ve considered** </strong></p><p>Another possibility could be to define this globally for the questionnaire, setting the character limit for each type of question.</p><p><br></p><p><strong>**Additional context** </strong></p><p>-</p><p><br></p><p><strong> **Could this issue impact on users private data?** </strong></p><p>No</p><p><br></p><p><strong> **Funded by**</strong></p><p>Fundació Bosch i Guimpera</p>"},"title":{"en":"Maximum characters for questionnaire text answers"}}' | sha256sum
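The same check can be scripted with Python's hashlib; the string below stands in for the full serialized title and body shown in the command above.
import hashlib

# Replace the "..." body with the full JSON string from the echo command above.
source = '{"body":{"en":"..."},"title":{"en":"Maximum characters for questionnaire text answers"}}'
print(hashlib.sha256(source.encode("utf-8")).hexdigest())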
https://docs.decidim.org/en/admin/features/fingerprint/
2022-05-16T11:34:08
CC-MAIN-2022-21
1652662510117.12
[]
docs.decidim.org
fiskaltrust.Portal - Sprint 111
October 18, 2021
The focus of this sprint was improving the overall user experience of the Shop/Products page and the German SCU selection.
Features
Middleware Configuration
- Highlight target SCU when SCU switch has been configured in Portal
- Reworked Products Page
- Improved error messages on CashBox Rebuild Failures
- Improved CashBox Metrics Page
Support
Middleware Configuration
Highlight target SCU when SCU switch has been configured in Portal
SCUs that are about to be switched are now highlighted in the portal by an icon. Also, a text referencing the SCU is added to the QueueSCU page. So now, users can get a visual confirmation that the configuration they applied was correct. These changes currently only affect the German portal.
Reworked Products Page
The Products Page has been reworked. All markets are affected by this and now - aside from all previous functionalities - the user can enjoy an improved experience while navigating. Furthermore, OutletID and SearchText are now persisted in session storage.
Improved error messages on CashBox Rebuild Failures
So far, whenever a user was attempting to perform a CashBox Rebuild which failed, they were not getting any indication of the failure. The error messages that the user receives when a rebuild goes wrong are now implemented. They contain clear indications that the rebuild failed, as well as as much information as possible on how to deal with it, depending on why the rebuild failed.
Improved CashBox Metrics Page
The CashBox metrics page has been improved. Now the Exception details view is formatted in a much better way, and thus more readable. Aside from that, the previously unresolved issues of the "Copy" button have been addressed and now the Copy function works the way it should.
Support
Plus Addressing in Portal
Until now, Portal did not allow emails to contain a "+" sign. The plus sign is generally used as a hack to create multiple accounts using only one email. To allow for easier portal testing, it is helpful to have multiple accounts tied to one email address. Now, this is supported and for each mail address a person can have multiple users by adding a + and some random text, and the emails will still make it to the mailbox. For example, with a base address such as user@example.com, all of the following mail addresses (including plus addressing) will use the same mailbox (a small illustration follows at the end of these notes):
- user@example.com
- user+test@example.com
- user+portal@example.com
Next steps
In the next weeks, we will focus on improving Bulk Import Features and on generally improving the user experience.
Feedback
We would love to hear what you think about these improvements and fixes. To get in touch, please reach out to [email protected].
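For illustration only (this is not fiskaltrust code, and the addresses are placeholders): collapsing a plus-addressed login to its underlying mailbox is a split on the local part.
def mailbox_for(address: str) -> str:
    local, domain = address.split("@", 1)
    return f"{local.split('+', 1)[0]}@{domain}"

for addr in ["user@example.com", "user+test@example.com", "user+portal@example.com"]:
    print(addr, "->", mailbox_for(addr))   # all map to user@example.com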
https://docs.fiskaltrust.cloud/docs/release-notes/portal/sprint-111
2022-05-16T12:59:54
CC-MAIN-2022-21
1652662510117.12
[]
docs.fiskaltrust.cloud
%dw 2.0
output application/json
fun myfun() = do {
    var name = "DataWeave"
    ---
    name
}
---
{ result: myfun() }
Flow Control in DataWeave
if else
An if statement evaluates a conditional expression and returns the value under the if only if the conditional expression is true. Otherwise, it will return the expression under else. Every if expression must have a matching else expression.
%dw 2.0
output application/json
---
if (payload.country == "USA") { currency: "USD" } else { currency: "EUR" }
else if
You can chain several else expressions together within an if-else construct by incorporating else if, for example:
%dw 2.0
output application/json
---
if (payload.country == "USA") { currency: "USD" } else if (payload.country == "UK") { currency: "GBP" } else { currency: "EUR" }
https://docs.mulesoft.com/dataweave/2.1/dataweave-flow-control
2022-05-16T11:37:35
CC-MAIN-2022-21
1652662510117.12
[]
docs.mulesoft.com
SQLAlchemy 1.3 Documentation SQLAlchemy 1.3 Documentation legacy version Home | Download this Documentation SQLAlchemy ORM - Object Relational Tutorial - Mapper Configuration - Relationship Configuration - Basic Relationship Patterns - Adjacency List Relationships - Linking Relationships with Backref - Configuring how Relationship Joins - Collection Configuration and Techniques - Special Relationship Persistence Patterns¶ - Relationships API - Using the Session - Events and Internals - ORM Extensions - ORM Examples Project Versions - Previous: Collection Configuration and Techniques - Next: Relationships API - Up: Home - On this page: Special Relationship Persistence Patterns¶ relationship: from sqlalchemy import Integer, ForeignKey, Column)) class Widget(Base): __tablename__ = 'widget' widget_id = Column(Integer, primary_key=True) favorite_entry_id = Column(Integer, ForeignKey('entry.entry_id', Column, the best strategy is to use the database’s ON UPDATE CASCADE functionality in order to propagate primary key changes to referenced foreign keys - the values cannot be out of sync for any moment unless the constraints are marked as “deferrable”, that is, not enforced until the transaction completes. It is highly recommended that an application which seeks to employ natural primary keys with mutable values to use the ON UPDATE CASCADE capabilities of the database. An example mapping which illustrates this is: class User(Base): __tablename__ = 'user' __table_args__ = {'mysql_engine': 'InnoDB'} username = Column(String(50), primary_key=True) fullname = Column(String(100)) addresses = relationship("Address") class Address(Base): __tablename__ = 'address' __table_args__ = {'mysql_engine': 'InnoDB'} email = Column(String(50), primary_key=True) username = Column(String(50), ForeignKey('user.username', onupdate="cascade") ) Above, we illustrate onupdate="cascade" on the ForeignKey object, and we also illustrate the mysql_engine='InnoDB' setting which, on a MySQL backend, ensures that the InnoDB engine supporting referential integrity is used. When using SQLite, referential integrity should be enabled, using the configuration described at Foreign Key Support. See also Using foreign key ON DELETE cascade with ORM relationships - supporting ON DELETE CASCADE with relationships mapper.passive_updates - similar feature on mapper() Simulating limited ON UPDATE CASCADE without foreign key support¶ In those cases when a database that does not support referential integrity is used, and natural primary keys with mutable values are in play, SQLAlchemy offers a feature in order to allow propagation of primary key values to already-referenced foreign keys to a limited extent, by emitting an UPDATE statement against foreign key columns that immediately reference a primary key column whose value has changed. The primary platforms without referential integrity features are MySQL when the MyISAM storage engine is used, and SQLite when the PRAGMA foreign_keys=ON pragma is not used. The Oracle database also has no support for ON UPDATE CASCADE, but because it still enforces referential integrity, needs constraints to be marked as deferrable so that SQLAlchemy can emit UPDATE statements. The feature is enabled by setting the relationship.passive_updates flag to False, most preferably on a one-to-many or many-to-many relationship(). 
When "updates" are no longer "passive", this indicates that SQLAlchemy will issue UPDATE statements individually for objects referenced in the collection referred to by the parent object with a changing primary key value. This also implies that collections will be fully loaded into memory if not already locally present. Our previous mapping using passive_updates=False looks like the sketch shown below. Key limitations of passive_updates=False include: it performs much more poorly than direct database ON UPDATE CASCADE, because it needs to fully pre-load affected collections using SELECT and also must emit UPDATE statements against those values, which it will attempt to run in "batches" but still runs on a per-row basis at the DBAPI level. The feature cannot "cascade" more than one level. That is, if mapping X has a foreign key which refers to the primary key of mapping Y, but then mapping Y's primary key is itself a foreign key to mapping Z, passive_updates=False cannot cascade a change in primary key value from Z to X. Configuring passive_updates=False only on the many-to-one side of a relationship will not have a full effect, as the unit of work searches only through the current identity map for objects that may be referencing the one with a mutating primary key, not throughout the database. As virtually all databases other than Oracle now support ON UPDATE CASCADE, it is highly recommended that traditional ON UPDATE CASCADE support be used in the case that natural and mutable primary key values are in use.
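As an illustration only (not the documentation's verbatim listing), a minimal sketch of such a mapping might reuse the User/Address tables from the ON UPDATE CASCADE example above and move the burden onto the ORM by setting passive_updates=False on the one-to-many side:

# Hedged sketch: the earlier User/Address mapping, adapted for a database
# without referential-integrity support by using passive_updates=False.
from sqlalchemy import Column, String, ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'

    username = Column(String(50), primary_key=True)
    fullname = Column(String(100))

    # passive_updates=False makes the unit of work emit UPDATE statements
    # against Address.username when User.username changes.
    addresses = relationship("Address", passive_updates=False)

class Address(Base):
    __tablename__ = 'address'

    email = Column(String(50), primary_key=True)
    username = Column(String(50), ForeignKey('user.username'))

As the limitations above note, this trades the database's ON UPDATE CASCADE for extra SELECT and per-row UPDATE statements issued by SQLAlchemy.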
https://docs.sqlalchemy.org/en/13/orm/relationship_persistence.html
2022-05-16T12:04:42
CC-MAIN-2022-21
1652662510117.12
[]
docs.sqlalchemy.org
Document Viewer Integration (npm or Yarn Package Managers) - 3 minutes to read Note The complete sample project How to Perform the JavaScript Document Viewer (Reporting) Integration (with npm or Yarn package managers) is available in the DevExpress Examples repository. You can use the HTML5 Document Viewer in JavaScript based on the server-side model. You should create two projects: - server (backend) project that enables Cross-Origin Resource Sharing and retrieves a report from the storage - client (frontend) part that includes all the necessary styles, scripts, and HTML-templates. Server (Backend) Part Perform the steps from one of the following topics to prepare a backend application: - Document Viewer Server-Side Application from DevExpress Template (ASP.NET MVC) - Document Viewer Server-Side Application (ASP.NET Core) Client (Frontend) Part The following steps describe how to configure and host the client part: Create a new folder to store the client-side files (ClientSide in this example). Create the text file package.json in the ClientSide folder with the following content: { "name": "web-document-viewer", "dependencies": { "devextreme": "21.2.*", "@devexpress/analytics-core": "21.2.*", "devexpress-reporting": "21.2.*", "jquery-ui-dist": "^1.12.1" } } Note Frontend and backend applications should use the same version of DevExpress controls. Ensure that you have npm or Yarn package managers installed. Package manager are required to download all the necessary client resources to the node_modules folder. Open the command prompt, navigate to the client application root folder (ClientSide) and run the command: - if you have npm: npm install if you have yarn: yarn install Create the index.html file in the root folder. It is the View file in our model. Copy the following HTML code and insert it in this file: <!DOCTYPE html> <html xmlns=" <head> <title></title> <link href="node_modules/jquery-ui-dist/jquery-ui.min.css" rel="stylesheet" /> <script src="node_modules/jquery/dist/jquery.min.js"></script> <script src="node_modules/jquery-ui-dist/jquery-ui.min.js"></script> <script src="node_modules/knockout/build/output/knockout-latest.js"></script> <!--Link DevExtreme resources--> <script src="node_modules/devextreme/dist/js/dx.all.js"></script> <link href="node_modules/devextreme/dist/css/dx.light.css" rel="stylesheet" /> <!-- Link the Reporting resources --> <script src="node_modules/@devexpress/analytics-core/dist/js/dx-analytics-core.js"></script> <script src="node_modules/devexpress-reporting/dist/js/dx-webdocumentviewer.js"></script> <link href="node_modules/@devexpress/analytics-core/dist/css/dx-analytics.common.css" rel="stylesheet" /> <link href="node_modules/@devexpress/analytics-core/dist/css/dx-analytics.light.css" rel="stylesheet" /> <link href="node_modules/devexpress-reporting/dist/css/dx-webdocumentviewer.css" rel="stylesheet" /> </head> <!-- ... --> </html> Note The requirements and resources to deploy the control on the client are described in the Report Designer Requirements and Limitations document. Create the example.js file in the root folder to provide data to the View. The JavaScript code in this file creates the designerOptions variable and activates the Knockout bindings. Copy the following code and insert it in the example.js file: const host = ' reportUrl = "Products", viewerOptions = { reportUrl: reportUrl, // The URL of a report that the Document Viewer loads when the application starts. 
requestOptions: { // Options for processing requests from the Document Viewer. host: host, // URI of your backend project. invokeAction: "/WebDocumentViewer/Invoke", // Action to enable CORS. } } ko.applyBindings({ viewerOptions }); Modify the index.html file to specify the HTML template that uses the Document Viewer’s binding with the viewerOptions parameter. Add the following code to the body section: ... <body> <div style="width:100%; height: 1000px" data-</div> <script type="text/javascript" src="example.js"></script> </body> Host the client-side part on the web server. Start the Internet Information Services (IIS) Manager, right-click the Sites item in the Connections section, and select Add Website. In the invoked dialog, specify the site name, path to the client-side application root folder, and the website’s IP address and port. Run the backend project in Visual Studio. - Open the website, created in the step 8, in the browser. In this example the address is .
https://docs.devexpress.com/XtraReports/401546/web-reporting/javascript-reporting/knockout/document-viewer/document-viewer-integration-with-npm-yarn
2022-05-16T13:29:19
CC-MAIN-2022-21
1652662510117.12
[]
docs.devexpress.com
This method allows removing members from a given channel. The user removing the members must be a member of the channel and must have permission to modify the member list. The method endpoint is flock.co/v1/channels.removeMembers and follows the method calling conventions. An object where the keys are user ids of members that were removed and the values are ChannelMemberStatus objects, indicating whether the members were successfully removed or not. { "members": { "u:<id1>": { "status": "removed" }, "u:<id2>": { "status": "failed", "error": "UserNotFound" } } }
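For illustration, the short Python sketch below walks a response of the shape documented above and reports which removals failed; the response value is assumed to be the already-parsed JSON body, and the HTTP call itself (authentication token, request parameters) is outside the scope of this excerpt.

# Hedged sketch: inspect a channels.removeMembers response of the documented shape.
# `response` is assumed to be the already-parsed JSON body.
def summarize_removals(response: dict) -> None:
    members = response.get("members", {})
    for user_id, status in members.items():
        if status.get("status") == "removed":
            print(f"{user_id}: removed")
        else:
            # e.g. {"status": "failed", "error": "UserNotFound"}
            print(f"{user_id}: failed ({status.get('error', 'unknown error')})")

summarize_removals({
    "members": {
        "u:<id1>": {"status": "removed"},
        "u:<id2>": {"status": "failed", "error": "UserNotFound"},
    }
})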
https://docs.flock.com/exportword?pageId=40239538
2022-05-16T11:48:20
CC-MAIN-2022-21
1652662510117.12
[]
docs.flock.com
TIME-to-UDT Conversion Purpose Converts TIME data to UDT data. CAST Syntax ANSI Compliance This is ANSI SQL:2011 compliant. As an extension to ANSI, CAST permits the use of data attribute phrases such as FORMAT. Usage Notes Explicit and Implicit TIME-to-UDT Conversion Teradata Database performs implicit TIME-to-UDT conversions for several operations. Performing an implicit data type conversion requires that an appropriate cast definition (see "Usage Notes") exists that specifies the AS ASSIGNMENT clause. If no TIME-to-UDT implicit cast definition exists, Teradata Database looks for a CHAR-to-UDT or VARCHAR-to-UDT implicit cast definition that can substitute for the TIME-to-UDT implicit cast definition. Substitutions are valid because Teradata Database can implicitly cast a TIME type to the character data type, and then use the implicit cast definition to cast from the character data type to the UDT. If multiple character-to-UDT implicit cast definitions exist, then Teradata Database returns an SQL error. Related Topics For details on data types and data attributes, see SQL Data Types and Literals.
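As a worked illustration of the explicit form, the sketch below issues a CAST from a TIME literal to a UDT using the Teradata SQL driver for Python; the UDT name my_time_udt, the connection details, and the presence of a suitable TIME-to-UDT cast definition in the database are all assumptions.

# Hedged sketch: explicit TIME-to-UDT CAST issued from Python.
# `my_time_udt` is a hypothetical UDT; host/user/password are placeholders.
import teradatasql

with teradatasql.connect(host="dbhost", user="dbuser", password="dbpass") as con:
    with con.cursor() as cur:
        # Requires a TIME-to-my_time_udt cast definition to exist in the database.
        cur.execute("SELECT CAST(TIME '11:30:00' AS my_time_udt)")
        print(cur.fetchone())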
https://docs.teradata.com/r/kmuOwjp1zEYg98JsB8fu_A/ahDGMqmmCNSxnLwfZ2N_8w
2022-05-16T13:23:19
CC-MAIN-2022-21
1652662510117.12
[]
docs.teradata.com
Results that are tables Tables display your answer in a format similar to an Excel spreadsheet. In the table view, your search identifies attributes and/or columns, and presents them as a table. ThoughtSpot aggregates the results based on the level of aggregation that you specify in the search. For example, if you only type revenue, you see the total sum of revenue as a single number. If you include the keyword monthly, the results are broken down by month. From the column header, you can rename the column, or sort or filter the column. You can rearrange the column order of your table by dragging and dropping the columns, either from the table itself or from the Edit table: Configure menu. You can also. You can also rearrange the column order from the Edit table: Configure menu. Click the edit table configuration icon . Drag and drop the attribute or measure that you would like to move to a new position. The order of columns in the Configure menu reflects the column order of the table.. Clip or wrap text You can clip or wrap long text in a table cell, or on a table header. You can configure clipped or wrapped text for the entire table, or for each column individually. When you clip long text, the table cells show only the beginning of the text. The rest appears if you increase the column width. When you wrap long text, the table shows all the text in its cells by increasing the number of lines in the cells. To clip or wrap text for the entire table, click the edit table configuration icon . Select Settings. Under text wrapping, choose wrap or clip. To clip or wrap text for each column individually, hover over the column name and click the more options menu icon . Select text wrapping, and choose wrap or clip. Number formatting You can format the numbers in any table column based on a measure. This functionality allows you to change the category (number, percentage, or currency), units (auto, none, thousand, million, billion, or trillion), or method of writing negative values (-1234, 1234-, or (1234)). To change the number formatting: Click the edit table configuration icon to the upper right of your table. The Edit table panel appears, on the Configure menu. Select the measure you want to format the labels of. The Edit panel for that column appears. You can also reach this panel from the more icon that appears when you hover over a column name: On the table, the column that you are editing is highlighted in a blue box. Under number formatting, you can edit the category, units, or method of writing negative values. Click the dropdown menus to select new values. Specify a category: number, percentage, or currency. If you select currency, you can select the type of currency: USD, AUD, EUR, and so on. If you do not pick a category, ThoughtSpot automatically picks the best category for your data. Specify units: Select none to see your data down to two decimal points, for example, or select millions to see labels rounded to the millions. If you do not specify units, ThoughtSpot automatically picks the best units for your data. Depending on the unit, you can also specify the number of decimal places, and remove or include the thousand separator. Specify the method for writing negative values: -1234, 1234-, or (1234). The default is -1234. Sort columns You can sort a table by column values by clicking on the column title. If you hold down the SHIFT key, you can sort on multiple column titles at a time. This is especially useful for date columns. 
For example, if you search for sales by week and by quarter, and just sort the quarterly column, the weeks are not in order: If you press SHIFT and then click on the weekly column header, the weeks are in order, by quarter: You can achieve this from the search bar, as well, by adding sort by date quarterly and sort by date weekly. Table footer Tables automatically have footers that tell you the number of rows the table has. You can enable or disable this footer from the Settings menu. Click the edit table configuration icon to the upper right of your table. The Edit table panel appears, on the Configure menu. Select Settings. Select table footer to enable or disable it. Column summaries For columns with numeric information, you can turn on column summaries that display column totals. Click the edit table configuration icon to the upper right of your table. The Edit table panel appears, on the Configure menu. Select Settings. Select column summary to enable or disable column summaries for your table. Column summaries are not available for tables with more than 1000 rows. Headlines are not available for tables with more than 15000 rows, unless the data comes from an Embrace connection. To change how a headline aggregates, click the ellipsis icon on the headline and select an aggregation option, such as table aggregate. ThoughtSpot recalculates that function for the entire table, taking the sum total profits of all ship modes and dividing it by the sum total count of all ship modes. Here, that results in a table aggregate average profit of 28.7. The average headline option, by comparison, sums the average profit for all ship modes and divides it by the number of ship modes (4), providing a less accurate average.
https://docs.thoughtspot.com/software/6.2/about-tables
2022-05-16T10:57:33
CC-MAIN-2022-21
1652662510117.12
[]
docs.thoughtspot.com
Enabling Debug Port: Running commands in the installer from MN This mode is supported with debug level set to 1 or 2. xCAT creates a service inside the installer, listening on port 3054. It executes commands sent to it from the xCAT MN and returns the response output. The command runcmdinstaller can be used to send requests to the installer: Usage: runcmdinstaller <node> "<command>" Note: Make sure all the commands are enclosed in double quotes ("") To list all the items under the /etc directory in the installer: runcmdinstaller c910f03c01p03 "ls /etc"
https://xcat-docs.readthedocs.io/en/latest/troubleshooting/os_installation/debug_port.html
2022-05-16T11:59:48
CC-MAIN-2022-21
1652662510117.12
[]
xcat-docs.readthedocs.io
Change history This document contains change notes for bugfix & new features in the 5.1.x series, please see What's new in Celery 5.1 (Sun Harmonics) for an overview of what's new in Celery 5.1. 5.1.2 - release-date 2021-06-28 16.15 P.M UTC+3:00 - release-by Omer Katz 5.1.1 - release-date 2021-06-17 16.10 P.M UTC+3:00 - release-by Omer Katz Fix --pool=threads support in command line options parsing. (#6787) Fix LoggingProxy.write() return type. (#6791) Couchdb key is now always coerced into a string. (#6781) grp is no longer imported unconditionally. (#6804) This fixes a regression in 5.1.0 when running Celery in non-unix systems. Ensure regen utility class gets marked as done when concertised. (#6789) Preserve call/errbacks of replaced tasks. (#6770) Use single-lookahead for regen consumption. (#6799) Revoked tasks are no longer incorrectly marked as retried. (#6812, #6816) 5.1.0 - release-date 2021-05-23 19.20 P.M UTC+3:00 - release-by Omer Katz celery -A app events -c camera now works as expected. (#6774) Bump minimum required Kombu version to 5.1.0. 5.1.0rc1 - release-date 2021-05-02 16.06 P.M UTC+3:00 - release-by Omer Katz Celery Mailbox accept and serializer parameters are initialized from configuration. (#6757) Error propagation and errback calling for group-like signatures now works as expected. (#6746) Fix sanitization of passwords in sentinel URIs. (#6765) Add LOG_RECEIVED to customize logging. (#6758) 5.1.0b2 - release-date 2021-05-02 16.06 P.M UTC+3:00 - release-by Omer Katz Fix the behavior of our json serialization which regressed in 5.0. (#6561) Add support for SQLAlchemy 1.4. (#6709) Safeguard against schedule entry without kwargs. (#6619) task.apply_async(ignore_result=True) now avoids persisting the results. (#6713) Update systemd tmpfiles path. (#6688) Ensure AMQPContext exposes an app attribute. (#6741) Inspect commands accept arguments again (#6710). Chord counting of group children is now accurate. (#6733) Add a setting worker_cancel_long_running_tasks_on_connection_loss to terminate tasks with late acknowledgement on connection loss. (#6654) The task-revoked event and the task_revoked signal are not duplicated when Request.on_failure is called. (#6654) Restore pickling support for Retry. (#6748) Add support in the redis result backend for authenticating with a username. (#6750) The worker_pool setting is now respected correctly. (#6711) 5.1.0b1 - release-date 2021-04-02 10.25 P.M UTC+6:00 - release-by Asif Saif Uddin Add sentinel_kwargs to Redis Sentinel docs. Depend on the maintained python-consul2 library. (#6544). Use result_chord_join_timeout instead of hardcoded default value. Upgrade AzureBlockBlob storage backend to use Azure blob storage library v12 (#6580). Improved integration tests. pass_context for handle_preload_options decorator (#6583). Makes regen less greedy (#6589). Pytest worker shutdown timeout (#6588). Exit celery with non zero exit value if failing (#6602). Raise BackendStoreError when set value is too large for Redis. Trace task optimizations are now set via Celery app instance. Make trace_task_ret and fast_trace_task public. reset_worker_optimizations and create_request_cls has now app as optional parameter. Small refactor in exception handling of on_failure (#6633). Fix for issue #5030 "Celery Result backend on Windows OS". Add store_eager_result setting so eager tasks can store result on the result backend (#6614). Allow heartbeats to be sent in tests (#6632). Fixed default visibility timeout note in sqs documentation.
Support Redis Sentinel with SSL. Simulate more exhaustive delivery info in apply(). Start chord header tasks as soon as possible (#6576). Forward shadow option for retried tasks (#6655). --quiet flag now actually makes celery avoid producing logs (#6599). Update platforms.py "superuser privileges" check (#6600). Remove unused property autoregister from the Task class (#6624). fnmatch.translate() already translates globs for us. (#6668). Upgrade some syntax to Python 3.6+. Add azureblockblob_base_path config (#6669). Fix checking expiration of X.509 certificates (#6678). Drop the lzma extra. Fix JSON decoding errors when using MongoDB as backend (#6675). Allow configuration of RedisBackend's health_check_interval (#6666). Safeguard against schedule entry without kwargs (#6619). Docs only - SQS broker - add STS support (#6693) through kombu. Drop fun_accepts_kwargs backport. Tasks can now have required kwargs at any order (#6699). Min py-amqp 5.0.6. Min billiard is now 3.6.4.0. Minimum kombu now is 5.1.0b1. Numerous docs fixes. Moved CI to github action. Updated deployment scripts. Updated docker. Initial support of python 3.9 added.
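To make a few of the entries above concrete, here is a small, hedged configuration sketch that exercises settings referenced in these notes; the broker and result-backend URLs, the credentials, and the exact namespaced form of the store_eager_result setting are assumptions to adapt to your deployment.

# Hedged sketch: a Celery 5.1 app exercising settings mentioned in the notes above.
# URLs and credentials are placeholders.
from celery import Celery

app = Celery("tasks", broker="redis://:broker-password@redis-host:6379/0")

# 5.1.0b2: terminate tasks with late acknowledgement if the broker connection is lost.
app.conf.worker_cancel_long_running_tasks_on_connection_loss = True

# 5.1.0b2: the Redis result backend can authenticate with a username (Redis 6 ACLs).
app.conf.result_backend = "redis://result-user:result-password@redis-host:6379/1"

# 5.1.0b1: let eagerly executed tasks store their results on the result backend
# (assumed namespaced name of the store_eager_result setting).
app.conf.task_store_eager_result = True


@app.task
def add(x, y):
    return x + y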
https://docs.celeryq.dev/en/stable/history/changelog-5.1.html
2022-05-16T13:04:59
CC-MAIN-2022-21
1652662510117.12
[]
docs.celeryq.dev
UE Usage Metering Magma currently supports basic usage metering. This allows for real-time monitoring of data usage specified with the following labels: IMSI session_id traffic direction This feature is currently built to enable post-pay charging. Metering information is available through our metrics REST endpoint. The metric name used is ue_traffic. Configuring Metering As a pre-requisite, ensure you have the following: - a functional orc8r - a configured LTE network - a configured LTE gateway with eNodeB - subscribers that can attach to your LTE gateway - network running in un-federated mode In un-federated mode, the policydb service on the LTE gateway acts as a lightweight PCRF, and federated support for metering is not currently supported. To enable metering for a single subscriber, the following steps need to be completed: - A rating group configured with infinite, metered credit - A policy rule configured with the above rating group - Your policy rule assigned to the subscriber to be metered If you do not have an NMS setup with integration to Magma's REST API, the details below should help. If your orc8r is functional, you should be able to manually access the Swagger API. Configuring Rating Group /networks/{network_id}/rating_groups Configure with the following JSON as an example. Modify the ID as necessary. { "id": 1, "limit_type": "INFINITE_METERED" } Configuring Policy Rule /networks/{network_id}/policies/rules Configure with the following JSON as an example. Here, the flow list is set to allow all traffic. A high priority is also set to override other rules. You may need to modify the rating_group to match the ID of the rating group you configured earlier. Here you also have a chance to directly assign the policy to the subscriber you wish to meter. { "app_name": "NO_APP_NAME", "app_service_type": "NO_SERVICE_TYPE", "assigned_subscribers": [], "flow_list": [ { "action": "PERMIT", "match": { "direction": "UPLINK", "ip_proto": "IPPROTO_IP" } }, { "action": "PERMIT", "match": { "direction": "DOWNLINK", "ip_proto": "IPPROTO_IP" } } ], "id": "metering_rule", "priority": 10, "qos": { "max_req_bw_dl": 0, "max_req_bw_ul": 0 }, "rating_group": 1, "tracking_type": "ONLY_OCS" } Assigning Policy to Subscriber /lte/{network_id}/subscribers /networks/{network_id}/policies/rules/{rule_id} Two endpoints can be used for assigning the metering policy to a subscriber. Set the assigned_subscribers field for a policy rule, or set the active_policies field for a subscriber. Verifying Metering It may take up to a minute for the updated configurations to propagate to the LTE gateway, where they should be received and stored by policydb. Check the metrics REST endpoint to verify that metering data is being recorded. Debugging Metering On subscriber attach, policydb will provide the metered policy to install for the subscriber. By tailing these logs, it is possible to verify that the configurations are being received. journalctl -fu magma@policydb pipelined is the service which is responsible for enforcement, by use of OVS. By using the CLI tool, it is possible to verify that the policy rule is being installed for the user. The policy id will be listed if installed, along with tracked usage. pipelined_cli.py debug display_flows This command may need to be run as root.
sessiond is responsible for aggregating metrics and sending metering through our metrics pipeline. To check the tracked metrics for metering, and the sessiond service, run the following: service303_cli.py metrics sessiond
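If you prefer scripting the REST calls instead of using the Swagger UI, a rough sketch of assigning the metering policy through the orc8r API could look like the following; the base URL, client-certificate paths, HTTP method, and IMSI are assumptions to adapt to your deployment, and the payload simply reuses the policy rule JSON from the guide above with assigned_subscribers filled in.

# Hedged sketch: assign the metering policy to a subscriber via the orc8r REST API.
# Base URL, certificate paths, HTTP method, and IMSI are placeholders/assumptions.
import requests

ORC8R_API = "https://orc8r.example.com/magma/v1"   # placeholder base URL
NETWORK_ID = "my_lte_network"                      # placeholder network id
CERT = ("/path/to/admin_operator.pem", "/path/to/admin_operator.key.pem")

policy_rule = {
    "app_name": "NO_APP_NAME",
    "app_service_type": "NO_SERVICE_TYPE",
    "assigned_subscribers": ["IMSI001010000000001"],  # placeholder IMSI
    "flow_list": [
        {"action": "PERMIT", "match": {"direction": "UPLINK", "ip_proto": "IPPROTO_IP"}},
        {"action": "PERMIT", "match": {"direction": "DOWNLINK", "ip_proto": "IPPROTO_IP"}},
    ],
    "id": "metering_rule",
    "priority": 10,
    "qos": {"max_req_bw_dl": 0, "max_req_bw_ul": 0},
    "rating_group": 1,
    "tracking_type": "ONLY_OCS",
}

resp = requests.put(
    f"{ORC8R_API}/networks/{NETWORK_ID}/policies/rules/{policy_rule['id']}",
    cert=CERT,
    json=policy_rule,
)
resp.raise_for_status()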
https://docs.magmacore.org/docs/howtos/ue_metering
2022-05-16T12:17:19
CC-MAIN-2022-21
1652662510117.12
[]
docs.magmacore.org
Track social media advertising campaigns How we track ads for Facebook and other social media platforms. We track social media advertising campaigns and traditional advertising sources. We use UTM codes to track data, just like any other advertising campaign. UTM codes are the additional fields that are added to a referring URL. They identify key campaign parameters. These parameters capture important information about the ad that referred a visitor to your website. We automatically parse UTM codes and provide analytics without any prior setup. To successfully use UTM codes, establish them with your advertising platform. Then, the correct data is transferred when users click an ad. When you create a new campaign, create your keywords. However, most platforms let you reconfigure UTM codes at any time. To set up UTM codes for your social media campaigns, check your specific platform's documentation, and follow each step accordingly. To learn more, go to Campaign tracking and UTM codes. When you change the parameters of an existing campaign, the data previously collected will not be changed.
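As a concrete illustration (all values are made up), a Facebook ad destination URL with UTM codes attached typically looks something like this:

https://www.example-store.com/spring-landing?utm_source=facebook&utm_medium=paid_social&utm_campaign=spring_sale&utm_content=carousel_ad_v2

When a visitor clicks the ad, these parameters arrive as part of the referring URL and are parsed automatically as described above; utm_source, utm_medium, utm_campaign, utm_content, and utm_term are the standard UTM fields most platforms support.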
https://docs.ns8.com/docs/tracking-social-media-advertising-campaigns
2022-05-16T11:28:23
CC-MAIN-2022-21
1652662510117.12
[]
docs.ns8.com
In the Built-in Render Pipeline (a series of operations that take the contents of a Scene and display them on a screen; Unity lets you choose from pre-built render pipelines, or write your own), when writing Surface Shaders (a streamlined way of writing shaders, programs that run on the GPU, for the Built-in Render Pipeline), the lighting results you get also depend on the rendering path: the technique that a render pipeline uses to render graphics. Choosing a different rendering path affects how lighting and shading are calculated, and some rendering paths are more suited to different platforms and hardware than others.
https://docs.unity3d.com/Manual/SL-SurfaceShaderLighting.html
2022-05-16T13:05:30
CC-MAIN-2022-21
1652662510117.12
[]
docs.unity3d.com
Displaying components based on the template used When a form author creates an adaptive form using a template, the form author can see and use specific components based on template policy. You can specify a template content policy that lets you choose a group of components that the form author sees at the time of form authoring. Changing the content policy of a template To change the content policy: - Open CRXDE Lite. URL: https://<server>:<port>/crx/de/index.jsp - In CRXDE, navigate to the folder in which the template is created. For example: /conf/<your-folder>/ - In CRXDE, navigate to: /conf/<your-folder>/settings/wcm/policies/fd/af/layouts/gridFluidLayout/ To select a group of components, a new content policy is required. To create a new policy, copy-paste the default policy, and rename it. Path to default content policy is: /conf/<your-folder>/settings/wcm/policies/fd/af/layouts/gridFluidLayout/default. After you add a component group, click OK to update the list, and then click Save All above the CRXDE address bar and refresh. - When you author a form you create using the template, you can see the added components in the sidebar.
https://docs.adobe.com/content/help/en/experience-manager-64/forms/customize-aem-forms/displaying-components-based-on-template.html
2020-08-03T19:28:58
CC-MAIN-2020-34
1596439735823.29
[]
docs.adobe.com
Module diesel::sql_types Types which represent a SQL data type. The structs in this module are only used as markers to represent a SQL type. They should never be used in your structs. If you'd like to know the Rust types which can be used for a given SQL type, see the documentation for that SQL type. Additional types may be provided by other crates. To see which SQL type can be used with a given Rust type, see the "Implementors" section of FromSql. Any backend specific types are re-exported through this module.
http://docs.diesel.rs/diesel/sql_types/
2020-08-03T17:22:03
CC-MAIN-2020-34
1596439735823.29
[]
docs.diesel.rs
Backing up the AEM forms data This section describes the steps that are required to complete a hot, or online, backup of the AEM forms database, the GDS, and Content Storage Root directories. After AEM forms is installed and deployed to production areas, the database administrator should perform an initial full, or cold, backup of the database. The database must be shut down for this backup. Then, differential or incremental (or hot) backups of the database should be done regularly. To ensure a successful backup and recovery, a system image backup must be available at all times. Then, if a loss occurs, you can recover your entire environment to a consistent state. Backing up the database at the same time as the GDS, AEM repository, and Content Storage Root directory backups helps keep these systems synchronized if recovery is ever required. The backup procedure described in this section requires you to enter safe backup mode before you back up the AEM forms database, AEM repository, GDS, and Content Storage Root directories. When backup is complete, you must exit safe backup mode. Safe backup mode is used to mark long-lived and persistent documents that reside in the GDS. This mode ensures that the automated file cleanup mechanism (the file collector) does not delete expired files until the safe backup mode is released. It is necessary to keep a GDS backup in synchronization with a database backup. How often the GDS location must be backed up depends on how AEM forms is used and the backup windows available. The backup window can be affected by long-lived processes because they can run for several days. If you are continually changing, adding, and removing files in this directory, you should back up the GDS location more often. If the database is running in a logging mode, as described in the previous section, the database logs also must be backed up frequently so that they can be used to restore the database in case of media failure. Files that are not referenced may persist in the GDS directory after the recovery process. This is a known limitation at this time. Back up the database, GDS, AEM repository, and Content Storage Root directories You must put AEM forms in either the safe backup (snapshot) mode or the rolling backup (continuous coverage) mode. Before you set AEM forms to enter either of the backup modes, observe the following guidelines for the backup/restore process. - Back up the GDS directory by using an available operating system or a third-party backup utility. (See GDS location.) - (Optional) Back up the Content Storage Root directory by using an available operating system or a third-party backup utility. (See Content Storage Root location (stand-alone environment) or Content Storage Root location (clustered environment).) - Back up author and publish instances (crx-repository backup). To back up the Correspondence Management Solution environment, perform the steps on the author and publish instances as described in Backup and Restore. Consider the following points when backing up the author and publish instances: - Ensure that backups for author and publish instances are synchronized to start at the same time. Although you can continue to use author and publish instances while the backup is being performed, it is recommended not to publish any asset during the backup to avoid inconsistencies. You should back up the AEM forms database, including any transaction logs. (See AEM forms database.)
For more information, see the appropriate knowledge base article for your database: These articles provide guidance to basic database features for the backup and recovery of data. They are not intended as all-inclusive technical guides of a specific vendor's database backup and recovery feature. They outline commands that are required to create a reliable database backup strategy for your AEM forms application data. The database backup must be complete before you begin backing up the GDS. If the database backup is not complete, your data will be out of sync. Entering the backup modes You can use either the administration console, the LCBackupMode command, or the API available with the AEM forms installation to enter and leave backup modes. Note that for rolling backup (continuous coverage), the administration console option is not available; you should use either the command line option or the API. In the following commands, the placeholders are defined as follows: Host is the name of the host where AEM forms is running. port is the WebServices port of the application server on which AEM forms is running. user is the user name of the AEM forms administrator. password is the password of the AEM forms administrator. label is the text label, which can be any string, for this backup. timeout is the number of seconds after which the backup mode is automatically left. It can be 0 to 10,080. If it is 0, which is the default, the backup mode never times out. For more information about the command line interface to the backup mode, see the Readme file in the BackupRestoreCommandline directory. - (Windows) LCBackupMode.cmd enter [-Host=hostname] [-port=portnumber] [-user=username] [-password=password] [-label=labelname] [-timeout=seconds] - (Linux, UNIX) LCBackupMode.sh enter [-host=hostname] [-port=portnumber] [-user=username] [-password=password] [-label=labelname] Leaving backup modes You can use either the administration console or the command line option to leave backup modes. Leave safe backup mode (snapshot mode) To use Administration Console to take AEM forms out of safe backup mode (snapshot mode), perform the following tasks. - Log in to Administration Console. - Click Settings > Core System Settings > Backup Utilities. - Deselect Operate In Safe Backup Mode and click OK. Leave all backup modes You can use the command line interface to take AEM forms out of safe backup mode (snapshot mode) or to end the current backup mode session (rolling mode). Note that you cannot use the administration console to leave rolling backup mode. While in rolling backup mode, the Backup Utilities controls on the Administration Console are disabled. You must use either the API call or the LCBackupMode command. - (Windows) LCBackupMode.cmd leaveContinuousCoverage [-Host=hostname] [-port=portnumber] [-user=username] [-password=password] - (Linux, UNIX) LCBackupMode.sh leaveContinuousCoverage [-Host=hostname] [-port=portnumber] [-user=username] [-password=password] In the previous commands, the placeholders are defined as follows: Host is the name of the host where AEM forms is running. port is the port on which AEM forms is running on the application server. user is the user name of the AEM forms administrator. password is the password of the AEM forms administrator. leaveContinuousCoverage Use this option to disable rolling backup mode completely.
https://docs.adobe.com/content/help/en/experience-manager-64/forms/administrator-help/aem-forms-backup-recovery/backing-aem-forms-data.html
2020-08-03T19:00:30
CC-MAIN-2020-34
1596439735823.29
[]
docs.adobe.com
Copy an incident or create a child incident You can copy or create child incident without manually entering the value of all the fields in the new incident. The Copy Incident functionality copies the details of an existing incident record to a new incident record. The Create Child Incident functionality copies the details of the parent incident and links the new incident to the parent incident. Before you begin Role required: itil, sn_incident_write, or admin Select the Enable copy incident feature (com.snc.incident.copy.enable) and the Enable create child incident feature (com.snc.incident.create.child.enable) incident properties. Note: An itil user can copy or create any incident whereas a user without any role can copy only the incident which the user has created. Procedure Navigate to Incident > Open. Open an existing incident that you want to copy or from which you want to create a child incident. Do one of the following: OptionAction Copy an incident Click and then click Copy Incident.Note: After the incident is copied, the Work notes field of the new incident is updated with the following message: Created from a similar incident: INCXXXXXX. Create a child incident Click and then click Create Child Incident.Note: Ensure that you have Incident -> Parent Incident related list and the Parent Incident field is added to the incident form. The incident from which you have created the child incident is the parent incident for the child incident. Fill out the other fields, as required. Click Submit. The default fields and related lists that are copied from the parent incident are: From where What are copied Fields Category Subcategory Business Service Configuration item Impact Urgency Assignment group Short Description Description Related lists Caused by Change Location Company Problem Change Request Parent incident Note: If the problem, change, or the parent incident is not active, then details of those fields are not copied. Related lists Affected CIs Impacted Services Note: Affected CIs (task_ci) and Impacted Services (task_cmdb_ci_service) are available by default. You cannot add any other table in this field but you can remove any of the default values. Note: You can enable the options as well as add or remove fields or related list using the copy incident and create child incident properties in Incident > Incident Properties.
https://docs.servicenow.com/bundle/orlando-it-service-management/page/product/incident-management/task/copy-incident-or-create-child-incident.html
2020-08-03T18:34:38
CC-MAIN-2020-34
1596439735823.29
[]
docs.servicenow.com
Prerequisites PrerequisitesPrerequisites The MindLink Suite of products requires a series of pre-requisites to be in place both on the MindLink Application Server, and on the Skype for Business 2008 R2, 2012, 2012 R2 or 2016 - Domain Joined - Microsoft .Net Framework 4.8 -' Skype For Business - Skype for Business Front End must be able to resolve DNS Name - Persistent Chat must be enabled in your Skype for Business. Prerequisite SoftwarePrerequisite Software. Client RequirementsClient Requirements MindLink Anywhere: - Internet Explorer 10-11 - Microsoft Edge - latest Firefox - Chrome, Opera - Safari MindLink Mobile: - Android OS 6.0 or above - Apple iOS 12 or above CertificatesCertificates For both MindLink Anywhere and MindLink Mobile it is essential that you provide appropriate certificates with the correct attributes in order to utilize the web authentication feature in the MindLink Anywhere Management Center, and to adhere to Apple's ATS requirements. It is also a mandatory requirement that the key length is set to 2048 bit as by default this is the lowest level of encryption supported by the authentication token mechanism. Generating a CertificateGenerating a Certificate If you are using a publicly signed Certificate, signed by a Certificate Authority such as Geotrust or Verisign then it is suggested that you use the Skype for Business Bootstrapper tool bundled as part of the Skype for Business installation executable. If you are using a locally signed certificate then you will need to ensure that the Certificates Root-CA is authorised on the end-user's device. A certificate is required in each of the following cases: - If MindLink is being served over HTTPS, a client-facing certificate is required. - The subject name must match the DNS name of the URL by which MindLink is accessed. - The issuer must be trusted by all client machines - i.e. a public CA may be required if clients are accessing via the internet. - A certificate is needed to perform MTLS with the Skype for Business frontend servers. - The subject name must match the FQDN of the server on which MindLink is hosted. - The issuer must be trusted by the Skype for Business frontend - i.e. an enterprise internal CA will be acceptable providing both Skype for Business and MindLink servers trust the same CA. Each server certificate must include: - EKU property for "Server Authentication" - A CRL distribution point - Subject name should be the FQDN of the server - Private key The same certificate may be used for both roles only if the issuing CA is trusted by all client computers and the Skype for Business frontend server. The DNS name on which MindLink will be accessed via HTTP is the same as the FQDN of the machine, or the certificate has SANs for the public DNS name and the FQDN. These instructions are aimed at customers using an Internally Signed Certificate 1 - From the MindLink Server, Launch an instance of MMC (Start > Search 'mmc') 2 - Click File > Add /Remove Snap-In... 3 - Click Certificates > Add > Computer Account > Next > Finish > OK 4 - Navigate to the Certificate folder within the Personal Store 5 - Right Click in a Blank Area of the center pain and select All Tasks > Request a New Certificate 6 - Click Next to begin the Wizard. Select Active Directory Enrolment Policy and click Next 7 - Set Computer checkbox to True and click Enrol 8 - Click Finish 9 - Right Click your newly created certificate and go to: All Tasks > Manage Private Keys. If this is not available the certificate has no Private key and will not work. 
10 - In the dialogue Box that appears, click Add and add permissions for the Service Account that will run MindLink, and click Check Names. This step is only required for Email connector or Social connector, the other products will automatically assign permission 11 - Click OK 12 - Ensure that the permissions are set to Full Control and click OK TLSTLS As of January 2017 Apple has stated that apps and their subsequent servers have to be ATS compliant, ensuring all traffic is encrypted. This means it is a pre-requisite that your Windows Server has been configured to utilise the TLS 1.2 protocol. Example for enabling TLS 1.2 on the MindLink Server Manage ATS requirements (MindLink Mobile). for iOS 10.3+ devices, the initial callback on port 7074 must be HTTPS so the service needs to be secured by an SSL certificate. - this is one way to enable TLS 1.2 , but please consult your local deployment administrators before proceeding ****** the following link will run through how to set this up using the registry edit tool: _14<< As an end User of Skype for BusinessAs an end User of Skype for Business Anyone within the organisation who may be Pchat-enabled will have this icon visible within the Skype for Business client, allowing them to participate in Chat Rooms. for Business Server Components - Enable Skype for Business auto discover for DNS/SRV records , lyncd_16<< >>IMAGE Configuring Add-in proxies.Configuring Add-in proxies.. This can be achieved by configuring the reverse-proxy with forwarding rules based on the relative-path of the incoming HTTP request. The reverse-proxy is not a component of MindLink Anywhere and must be sourced from a third-party vendor. It may also be the case that a Client Add-In's URL as loaded by Group Chat Console clients is not that which is exposed by the MindLink Anywhere reverse-proxy. In this case, the Add-In should be configured using the Group Chat MindLink Management Center as the URL that the Group Chat Console should load. MindLink Anywhere should then be configured using the add-in re-write rules configuration key, to convert the Add-Ins URL into the URL that the reverse-proxy exposes it as. The add-in re-write rules configuration setting is a set of key/value pairs. The "key" is a regular expression to test any Client Add-In URLs against. If the regex matches, the Client Add-In URL is transformed using the "value" string. The value string supports regex style group placeholders (e.g. $1) to re-use elements of the original matched URL. For instance: to re-write an internal Client Add-In URL of: - to the external address of Anywhere.MindLink.net/addins/ the regular expression would be*, and the replacement would be Anywhere.MindLink.net/addins/$1 In the MindLink Management Center, this would be typed in the add in re write rules config box as: -*, Anywhere.MindLink.net/addins/$1; Note that the literal special characters in the regular expression "key" string are escaped with a backslash. An example Client Add-In configuration is shown below. Figure 113: Example proxy and MindLink Anywhere configuration The enable an add-in, a check box can be used to disable Client Add-In support across the whole system in all chat rooms, if needed. Secure DeploymentSecure Deployment The following diagram shows the configuration necessary for a secure deployment. We make the following assumptions: The Challenge Response Service and Host Identification Service listen on the same port. 
Security on the File Transfer Service, Socket Service and MDS push communication is either globally enabled or disabled. The same certificate is used to secure the Socket Service and the File Transfer Service. Figure 18: Secure Deployment for Android Figure 19: Secure Deployment for iPhone The management center is used to configure the socket service port, the port of the file transfer web service, and the shared port of the Challenge Response Service and Host Location Service. By default, the management center configures the socket service host name as the FQDN of the server. This value is customizable in the management center if the organization has its network infrastructure setup, so that clients can make connections to a different address. If security is enabled, the certificate used to secure the file transfer service and socket service must also be configured. The subject must be the host name of the broker service, and it must be issued by an authority trusted by the device. The relative paths of each HTTP service are hardcoded constants. The Host Location Service returns the details of the socket service and Challenge Response Service to the device. File download links are sent in-band with the chat history as direct download links to the file transfer service. Hence, the client must only be configured with the load-balanced URL of the Host Identification Service. HTTP ProxyHTTP Proxy Given that the client connects to the proxy and not directly to the hostname, port or even potentially the relative path of the actual broker service when using an HTTP proxy, the actual URLs to connect to must be made configurable. Since the client connects to the URL in its own IT policy or local configuration for the Host Location Service, only the URLs of the Challenge Response Service and the File Transfer Service must be configured on the server via the management center/app config. The Challenge Response Proxy URL and File Transfer Proxy URL are configured on the server via the management center/app config. The proxy URL of the Challenge Response Service is sent to the client in the response from the Host Location Service. The File Transfer Proxy URL is used to form file download links sent to the client in messages. Note: the security protocol on the proxied URLs is not necessarily linked to whether security is enabled on the server, as the HTTP proxy may be configured to perform HTTPS communication and/or offloading between itself and the client, or itself and the Mobile Broker. The client is configured with the proxied URL of the load balanced Host Location Service. Profile PicturesProfile Pictures As of 18.6 MindLink supports user profile pictures. These will be displayed in the web client and can be configured through several sources. SourcesSources User photos in Skype for Business can be specified in three ways: - URL - Exchange - Active Directory MindLink will attempt to resolve a user's photo in the order that these types are listed, so if you have a photo set in Exchange and have also configured a user photo image URL through the native client, the URL image will be shown in MindLink. Setting User Photos in MLASetting User Photos in MLA - MindLink Client MindLink also offers the ability to set your user photo directly through the client. This feature is provided by Exchange server (version 15.1 and above) which must be configured correctly to work along-side MindLink. 
When a user uploads a new user photo from the client, the MindLink server acts on their behalf using its service account domain credentials to authorize a request against the Exchange Web Services. This single Active Directory service account is therefore responsible for accessing Exchange information for all users, and as such, requires special elevated permissions. Exchange administration is restricted by Role-Based Access Control (RBAC), a system whereby rights to certain administrative operations and features are defined by distinct "management roles" and granted to users/groups in Active Directory either directly, via a Universal Security Group or via a role group assignment. Exchange installs with a large set of pre-defined roles out-of-the-box; these typically cover all the different access scenarios administrators are likely to require. One such role is the Mail Recipients role which includes (but is not limited to) the following entry: - SetUserPhoto It is also configured with the appropriate scopes that MindLink requires to access all user accounts across the organization. For the simplest way of granting these permissions, you can assign this role directly to the service account user: - New-ManagementRoleAssignment –Role "Mail Recipients" –User "YourServiceAccountName" The preferred approach would be to create a new admin role group, assign the role, and then add the service account as a member of the group. This can be easily achieved through the Exchange Admin Center. If you already have MindLink configured with Exchange to enable private conversation history then you may have already already created a new admin role group to apply the ApplicationImpersonation role to the service account. If this is the case, then you can simply add the Mailbox Recipients role to this group too; otherwise, create a new role. The Mail Recipients role comes with a lot of other entries that aren't directly relevant to configuring user photos. If security is a consideration, then it may be desirable to restrict the service account access to only those commands that are directly relevant. This can be be done quite easily by creating a new management role that only contains the role entry above. We can do this by "cloning" the Mail Recipients role and removing all other role entries: - New-ManagementRole -Name “Set User Photos” -Parent "Mail Recipients" - Get-ManagementRoleEntry "Set User Photos\*" | Where {$_.Name -NotLike "SetUserPhoto"} | Remove-ManagementRoleEntry We now have a new management role "Set User Photos" with all the same scopes as Mail Recipients but that only contains the entry relevant to configuring user photos. This should be assigned to the service account using either of the methods described previously. Mobile AutodiscoveryMobile Autodiscovery DNS requirementsDNS requirements It is possible to configure your mobile deployment to accept users domain email addresses i.e. [email protected] as a means of initializing against a MindLink Mobile deployment. However there a few pre-requisite steps that will be discussed to make this possible. Firstly, ensure that a CNAME (alias) record is setup in your forward lookup zone. \ Once this is done you will want to choose a target host. This will be the server hosting the MindLink Mobile service.
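For illustration only, the resulting forward-lookup-zone entries might look like the sketch below; the domain, host names, and IP address are placeholders, and the exact record name depends on the discovery convention your MindLink Mobile deployment expects.

; Hedged sketch: hypothetical DNS records for MindLink Mobile autodiscovery.
mindlink.contoso.com.      IN  CNAME  mlmobile01.contoso.com.
mlmobile01.contoso.com.    IN  A      10.0.0.25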
https://docs.mindlinksoft.com/docs/Planning_And_Prerequisites/Prerequisites/Prerequisites.html
2020-08-03T18:03:15
CC-MAIN-2020-34
1596439735823.29
[array(['/docs/assets/Install_And_Configure/Prerequisites_.Net.jpg', '.NET Framework'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites_VisualC.png', 'C++ 2015'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/image011.jpg', 'mmc'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/image012.png', 'Console add/remove'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/image013.jpg', 'Snap ins'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/image014.jpg', 'Certificates'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/image015.jpg', 'Request certificate'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/image016.jpg', 'Certificate enrolment'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/image017.jpg', 'Enrol'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/image018.jpg', 'Private Keys'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/image019.png', 'Permissions'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/image020.jpg', 'Full Permissions'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/image034.png', 'Certificate details'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/image035.png', 'Server'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/pChat_Topology.png', 'pChat Topology'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/pchat_enabled1.png', 'pchat enabled'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/image009.jpg', 'Skype for Business server shell'], dtype=object) array(['/docs/assets/Install_And_Configure/Prerequisites/image010.png', 'Skype for Business control panel'], dtype=object) array(['/docs/assets/Install_And_Configure/image115.png', 'MLM'], dtype=object) array(['/docs/assets/Install_And_Configure/Configuration_SecureDeployment.Android.png', 'MLM'], dtype=object) array(['/docs/assets/Install_And_Configure/Configuration_SecureDeployment.iPhone.png', 'MLM'], dtype=object) array(['/docs/assets/Install_And_Configure/Configuration_HTTP.iPhone.png', 'MLM'], dtype=object) ]
docs.mindlinksoft.com
websocket-lite A fast, low-overhead WebSocket client. This library is optimised for receiving a high volume of messages over a long period. A key feature is that it makes no memory allocations once the connection is set up and the initial messages have been sent and received; it reuses a single pair of buffers, which are sized for the longest message seen so far. Only asynchronous access is provided at present. native_tls provides the TLS functionality for wss://... servers. This crate is fully conformant with the Autobahn test suite fuzzingserver module.
https://docs.rs/crate/websocket-lite/0.1.0
2020-08-03T17:31:23
CC-MAIN-2020-34
1596439735823.29
[]
docs.rs
Intro¶ In a human-readable language, specifications provide - code base overview (hand-drawn concept) - key concepts (generators, envs) and how are they linked - link relevant code base Core Specifications¶ Environment Class Overview¶ The Environment class contains all necessary functions for the interactions between the agents and the environment. The base Environment class is derived from rllib.env.MultiAgentEnv (). The functions are specific for each realization of Flatland (e.g. Railway, Vaccination,…) In particular, we retain the rllib interface in the use of the step() function, that accepts a dictionary of actions indexed by the agents handles (returned by get_agent_handles()) and returns dictionaries of observations, dones and infos. class Environment: """Base interface for multi-agent environments in Flatland. Agents are identified by agent ids (handles). Examples: >>> obs, info = env.reset() >>> print(obs) { "train_0": [2.4, 1.6], "train_1": [3.4, -3.2], } >>> obs, rewards, dones, infos = env.step( action_dict={ "train_0": 1, "train_1": 0}) >>> print(rewards) { "train_0": 3, "train_1": -1, } >>> print(dones) { "train_0": False, # train_0 is still running "train_1": True, # train_1 is done "__all__": False, # the env is not done } >>> print(infos) { "train_0": {}, # info for train_0 "train_1": {}, # info for train_1 } """ def __init__(self): pass def reset(self): """ Resets the env and returns observations from agents in the environment. Returns: obs : dict New observations for each agent. """ raise NotImplementedError() def step(self, action_dict): """ Performs an environment step with simultaneous execution of actions for agents in action_dict. Returns observations from agents in the environment. The returns are dicts mapping from agent_id strings to values. Parameters ------- action_dict : dict Dictionary of actions to execute, indexed by agent id. Returns ------- obs : dict New observations for each ready agent. rewards: dict Reward values for each ready agent. dones : dict Done values for each ready agent. The special key "__all__" (required) is used to indicate env termination. infos : dict Optional info values for each agent id. """ raise NotImplementedError() def render(self): """ Perform rendering of the environment. """ raise NotImplementedError() def get_agent_handles(self): """ Returns a list of agents' handles to be used as keys in the step() function. """ raise NotImplementedError() Railway Specifications¶ Overview¶ Flatland is usually a two-dimensional environment intended for multi-agent problems, in particular it should serve as a benchmark for many multi-agent reinforcement learning approaches. The environment can host a broad array of diverse problems reaching from disease spreading to train traffic management. This documentation illustrates the dynamics and possibilities of Flatland environment and introduces the details of the train traffic management implementation. Environment¶ Before describing the Flatland at hand, let us first define terms which will be used in this specification. Flatland is grid-like n-dimensional space of any size. A cell is the elementary element of the grid. The cell is defined as a location where any objects can be located at. The term agent is defined as an entity that can move within the grid and must solve tasks. An agent can move in any arbitrary direction on well-defined transitions from cells to cell. The cell where the agent is located at must have enough capacity to hold the agent on. 
Every agent reserves exactly one capacity or resource. The capacity of a cell is usually one; thus, usually only one agent can be located at a given cell at the same time. An agent's movement possibilities can be restricted by limiting the allowed transitions.

Flatland is a discrete time simulation. A discrete time simulation performs all actions with a constant time step. In Flatland, the simulation step moves the time forward in equal durations of time. At each step the agents can choose an action. For the chosen action the attached transition will be executed. While executing a transition, Flatland checks whether the requested transition is valid. If the transition is valid, it will update the agent's position. If the transition is not allowed, the agent will not move.

In general, each cell has only one cell type attached. With the help of the cell type, the allowed transitions can be defined for all agents. Flatland supports many different types of agents. Consequently, the cell type can be further refined per agent type, and the allowed transitions for an agent at a given cell are defined by both the cell type and the agent's type. For each agent type, Flatland can have a different action space.

Grid¶
A rectangular grid of integer shape (dim_x, dim_y) defines the spatial dimensions of the environment. Within this documentation we use North, East, West, South as orientation indicators, where North is up, South is down, West is left and East is right. Cells are enumerated starting from NW; the East-West axis is the second coordinate and the North-South axis is the first coordinate, as commonly used in matrix notation. Two cells $i$ and $j$ ($i \neq j$) are considered neighbors when the Euclidean distance between them is $|\vec{x_i}-\vec{x_j}| \le \sqrt{2}$. This means that the grid does not wrap around as if on a torus. (Two cells are considered neighbors when they share one edge or one node.) For each cell, the allowed transitions to all 4 neighboring cells are defined. This can be extended to include transition probabilities as well.

Tile Types¶
Railway Grid¶
Each cell within the simulation grid consists of a distinct tile type, which in turn limits the movement possibilities of the agent through the cell. For the railway-specific problem, 8 basic tile types can be defined which describe a rail network. As a general fact, in a railway network, whenever a navigation choice must be taken, at most two options are available. The following image gives an overview of the eight basic types. These can be rotated in steps of 45° and mirrored along the North-South or East-West axis. Please refer to Appendix A for a complete list of tiles. As a general consistency rule, it can be said that each connection out of a tile must be joined by a connection of a neighboring tile. In the image above, the left picture contains an inconsistency at the eastern end of cell (3,2), since there is no valid neighbor for cell (3,2). In the right picture, cell (3,2) contains a dead-end, which leaves no unconnected transitions. Case 0 represents a wall, thus no agent can occupy the tile at any time. Case 1 represents a passage through the tile. While on this tile the agent cannot make any navigation decision. The agent can only decide to either continue, i.e. pass on to the next connected tile, wait, or move backwards (moving to the tile visited before). Case 2 represents a simple switch: when coming from the top position (South in the example), a navigation choice (West or North) must be taken.
Generally the straight transition (S->N in the example) is less costly than the bent transition. Therefore, in Case 2 the two choices may be rewarded differently. Case 6 is identical to Case 2 from a topological point of view; however, there is no preferred choice when coming from South. Case 3 can be seen as a superposition of Case 1. As with any other tile, at most one agent can occupy the cell at a given time. Case 4 represents a single-slip switch. In the example, a navigation choice is possible when coming from West or South. In Case 5, a navigation choice must be taken when coming from any direction. Case 7 represents a dead-end, thus only stopping or backwards motion is possible when an agent occupies this cell.

Tile Types of Wall-Based Cell Games (Theseus and Minotaur's puzzle, Labyrinth Game)¶
The Flatland approach can also be used to describe a variety of cell-based logic games. While not going into any detail at all, it is still worthwhile noting that such games are usually visualized using a cell grid with walls describing forbidden transitions (negative formulation). Left: wall-based grid definition (negative definition), Right: lane-based grid definition (positive definition).

Train Traffic Management¶
Additionally, due to the dynamics of train traffic, each transition probability is symmetric in this environment. This means that neighboring cells will always have the same transition probability to each other. Furthermore, each cell is exclusive and can only be occupied by one agent at any given time.

Observations¶
In this early stage of the project it is very difficult to come up with the necessary observation space in order to solve all train-related problems. Given our early experiments, we therefore propose different observation methods and hope to investigate further options with the crowdsourcing challenge. Below we compare the global observation with local observations and discuss the differences in performance and flexibility.

Global Observation¶
Global observations, specifically on a grid-like environment, benefit from the vast research results on learning from pixels and the advancements in convolutional neural network algorithms. The observation can simply be generated from the environment state and not much additional computation is necessary to generate the state. It is reasonable to assume that an observation of the full environment is beneficial for good global solutions. Early experiments also showed promising results on small toy examples. However, we run into problems when scalability and flexibility become an important factor. Already on small toy examples we could show that flexibility quickly becomes an issue when the problem instances differ too much. When scaling the problem instances, the decision performance of the algorithm diminishes and re-training becomes necessary. Given the complexity of real-world railway networks (especially in Switzerland), we do not believe that a global observation is suited for this problem.

Local Observation¶
Given that scalability and speed are the main requirements for our use cases, local observations offer an interesting novel approach. Local observations require some additional computation to be extracted from the environment state but could in theory be performed in parallel for each agent. With early experiments (presentation GTC, details below) we could show that even with local observations multiple agents can find feasible, global solutions and, most importantly, scale seamlessly to larger problem instances.
Below we highlight two different forms of local observations and elaborate on their benefits.

This first form of observation is very similar to the global view approach, in that it consists of a grid-like input. In this setup each agent has its own observation that depends on its current location in the environment. Given an agent's location, the observation is simply an $n \times m$ grid around the agent. The observation grid does not need to be symmetric or square, nor does it need to be centered around the agent. Benefits of this approach again come from the vast research findings using convolutional neural networks and the comparably small computational effort to generate each observation. Drawbacks mostly come from the specific details of train traffic dynamics, most notably the limited degrees of freedom. Considering the actions and directions an agent can choose in any given cell, it becomes clear that a grid-like observation around an agent will not contain much useful information, as most of the observed cells are neither reachable nor play a significant role in the agent's decisions.

From our past experience and the nature of railway networks (they are a graph), it seems most suitable to use a local tree search as an observation for the agents. A tree search on a grid will of course be computationally very expensive compared to a simple rectangular observation. Luckily, the limited allowed transitions in the railway implementation vastly reduce the complexity of the tree search. The figure below illustrates the observed tiles when using a local tree search. The information contained in such an observation is much higher than in the proposed grid observation above. The benefit of this approach is the incorporation of allowed transitions into the observation generation and thus an improvement of the information density in the observation. From our experience this is currently the most suitable observation space for the problem. The drawback is mostly the computational cost of generating the observation tree for each agent. Depending on how we model the tree search, we will be able to perform all searches in parallel. Because the agents are not able to see the global system, the environment needs to provide some information about the global environment locally to the agent, e.g. the position of its destination. It is unclear whether or not we should rotate the tree search according to the agent, such that decisions are always made according to the direction of travel of the agent.

Figure 3: A local tree search moves along the allowed transitions, originating from the agent's position. This observation contains much more relevant information but has a higher computational cost. This figure illustrates an agent that can move east from its current position. The thick lines indicate the allowed transitions to a depth of eight.

We have gained some insights into using and aggregating the information along the tree search. This should be part of the early investigation while implementing Flatland. One possibility would also be to leave this up to the participants of the Flatland challenge. A minimal code sketch of such a depth-limited expansion is given below.
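To make the tree-search observation above concrete, here is a minimal sketch of a depth-limited expansion along allowed transitions. It is illustrative only: the helper env.allowed_transitions(cell, heading) and the node layout are assumptions, not part of the Flatland API, and a real implementation would additionally walk along non-branching cells and only create nodes at switches, dead-ends and targets, as described above.

def build_observation_tree(env, cell, heading, depth):
    # Recursively expand every allowed transition up to `depth` cells.
    # `env.allowed_transitions(cell, heading)` is a hypothetical helper that maps
    # each possible action to the (next_cell, next_heading) pair it leads to.
    node = {"cell": cell, "children": {}}
    if depth == 0:
        return node
    for action, (next_cell, next_heading) in env.allowed_transitions(cell, heading).items():
        node["children"][action] = build_observation_tree(env, next_cell, next_heading, depth - 1)
    return node

Because the number of allowed transitions per cell is at most two in the railway case, the tree stays small even for a depth of eight, which is what makes this observation computationally feasible.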
Communication¶
Given the complexity and the high interdependence within the multi-agent system, a form of communication might be necessary. This needs to be investigated under the following constraints:
- Communication must converge in a feasible time
- Communication …

Depending on the game configuration, every agent can be informed about the position of the other agents present in the respective observation range. For a local observation space the agent knows the distance to the next agent (defined with the agent type) in each direction. If no agent is present, the distance can simply be -1 or null.

Action Negotiation¶
In order to avoid illicit situations (for example agents crashing into each other), the intended actions of each agent in the observation range are known. Depending on the known movement intentions, new movement intentions must be generated by the agents. This is called a negotiation round. After a fixed number of negotiation rounds, the last intended action is executed for each agent. An illicit situation results in ending the game with a fixed low reward.

Actions¶
Transportation¶
In railways, the transportation of goods or passengers is essential. Consequently, agents can transport goods or passengers, depending on the agent's type. If the agent is a freight train, it will transport goods; if it is a passenger train, it will transport passengers only. The transportation capacity of both kinds of trains is limited: passenger trains have a maximum number of seats, and freight trains have a maximum number of tons they can carry. Passengers can take or switch trains only at stations.

Passengers are agents with traveling needs. A typical passenger wants to move from a starting location to a destination, and might do so by taking trains or by walking. Consequently, a future Flatland must also support passenger movement (walking) on the grid, and not only travel by train. The goal of a passenger is to reach its destination in an optimal manner. The quality of traveling is measured by the reward function.

Goods are transported only over the railway network. Goods are agents with transportation needs. They can start their transportation chain at any station. Each good has a station attached as its destination. The destination is the end of the transportation; it is the transportation goal. Once a good reaches its destination, it disappears. Disappearing means the good leaves Flatland. Goods cannot move independently on the grid; they can only move by using trains. They can switch trains at any station. The goal of the system is to find the right trains for the goods so that a feasible transportation chain results. The quality of the transportation chain is measured by the reward function.

Environment Rules¶
- Depending on the cell type, a cell must have a given number of neighbouring cells of a given type.
- There must not exist a state where the occupation capacity of a cell is violated.
- An agent can move by at most one cell per time step.
- Agents related to each other through transport (one carries another) must be at the same place at the same time.

Environment Configuration¶
The environment should allow for a broad class of problem instances. Thus the configuration file for each problem instance should contain:
- Cell types allowed
- Agent types allowed
- Objects allowed
- Level generator to use
- Episodic or non-episodic task
- Duration
- Reward function
- Observation types allowed
- Actions allowed
- Dimensions of Environment?

For the train traffic problem the configuration should be as follows: Cell types: Case 0 - 7; Agent types allowed: active agents with speed 1 and no goals, passive agents with goals; Objects allowed: None; Level generator to use: ?; Reward function: as described below; Observation type: Local, targets known. It should be checked, prior to solving the problem, that the goal location of each agent can be reached; a minimal sketch of such a reachability check follows.
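The reachability check mentioned above can be sketched as a simple breadth-first search over the allowed transitions. This is an illustration only; neighbours(cell) is an assumed callable that returns the cells reachable from a given cell in one step.

from collections import deque

def goal_reachable(start, goal, neighbours):
    # Breadth-first search from `start`; returns True if `goal` can be reached.
    seen = {start}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            return True
        for nxt in neighbours(cell):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

Running such a check for every agent before an episode starts ensures that the level generator only hands out solvable instances.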
Reward Function¶
Railway-specific Use-Cases¶
A first idea for a generically applicable cost function is as follows. For each agent and each goal, sum up
- the timestep when the goal has been reached, if no target time is given in the goal;
- the absolute value of the difference between the target time and the arrival time of the agent, if a target time is given.
An additional refinement, proven meaningful for situations where no target time is given, is to weight the longest arrival time higher than the sum of all arrival times.

Initialization¶
Given that we want a generalizable agent to solve the problem, training must be performed on a diverse training set. We therefore need a level generator which can create novel tasks to be solved in a reliable and fast fashion.

Level Generator¶
Each problem instance can have its own level generator. The inputs to the level generator should be:
- Spatial and temporal dimensions of the environment
- Reward type
- Overall task
- Collaboration or competition
- Number of agents
- Further level parameters
- Environment complexity
- Stochasticity and error
- Random or pre-designed environment
The output of the level generator should be:
- A feasible environment
- An observation setup for the required number of agents
- Initial rewards, positions and observations

Railway Use Cases¶
In this section we define a few simple tasks related to railway traffic that we believe would be well suited for a crowdsourcing challenge. The tasks are ordered according to their complexity. The Flatland repo must at least support all these types of use cases.

Benefits of Transition Model¶
Using a grid world with 8 transition possibilities to the neighboring cells constitutes a very flexible environment, which can model many different types of problems. Considering the recent advancements in machine learning, this approach also allows making use of convolutions in order to process the observation states of agents. For the specific case of railway simulation, the grid world unfortunately also brings a few drawbacks. Most notably, the railway network only offers action possibilities at elements where there are more than two transition possibilities. Thus, if using a less dense graph than a grid, the railway network could be represented as a simpler graph. However, we believe that moving from a grid-like example where many transitions are allowed towards the railway network with fewer transitions would be the simplest approach for the broad reinforcement learning community.

RailEnv Speeds¶
One of the main contributions to the complexity of railway network operations stems from the fact that all trains travel at different speeds while sharing a very limited railway network. Currently (as of Flatland 2.0), an agent keeps its speed over the whole episode. Because the different speeds are implemented as fractions, the agents' ability to perform actions has been updated. We do not allow actions to change within a cell. This means that each agent can only choose an action to be taken when entering a cell (i.e. when its positional fraction is 0). There are some real railway-specific considerations, such as reserved blocks, that are similar to this behavior. But more importantly, we disabled this to simplify the use of machine learning algorithms with the environment. If we allowed stop actions in the middle of cells, then the controller would need to make many more observations, and not only at cell changes. (Not set in stone and could be updated if the need arises.) A minimal sketch of this fractional-speed bookkeeping is given below.
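As a rough illustration of the rule that actions are only accepted when the positional fraction is 0, consider the following sketch. The attribute and method names are assumptions made for the example and do not reflect the actual RailEnv internals.

def advance_agent(agent, new_action):
    # Accept a new action only at cell entry (positional fraction 0).
    if agent.position_fraction == 0 and new_action is not None:
        agent.current_action = new_action
    # Advance by the agent's fractional speed, e.g. 0.25 for a slow freight train.
    agent.position_fraction += agent.speed
    if agent.position_fraction >= 1.0:
        # The cell has been fully traversed; execute the stored action
        # to transition into the next cell (if it is free) and reset the fraction.
        agent.enter_next_cell(agent.current_action)
        agent.position_fraction = 0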
The chosen action is then executed when a transition to the next cell is possible. In your controller, you can check whether an agent requires an action via the info dictionary returned by step:

obs, rew, done, info = env.step(actions)
...
action_dict = dict()
for a in range(env.get_num_agents()):
    if info['action_required'][a]:
        action_dict.update({a: ...})

Notice the following about info['action_required'][a]:
- if the agent breaks down (see stochasticity below) on entering the cell (no distance elapsed in the cell), an action is required as long as the agent is broken down; when it gets back to work, the action chosen just before will be taken and executed at the end of the cell; you may check whether the agent gets healthy again in the next step by checking info['malfunction'][a] == 1.
- when the agent has spent enough time in the cell, the next cell may not be free and the agent has to wait.

Since later versions of Flatland might have varying speeds during episodes, we return the agents' speed. In your controller, you can get the agents' speed from the info returned by step:

obs, rew, done, info = env.step(actions)
...
for a in range(env.get_num_agents()):
    speed = info['speed'][a]

Notice that we do not guarantee that the speed will be computed at each step, but if it is not costly we will return it at each step.

RailEnv Malfunctioning / Stochasticity¶
Stochastic events may happen during the episodes. This is very common for railway networks, where the initial plan usually needs to be rescheduled during operations, as minor events such as delayed departures from train stations, malfunctions of trains or infrastructure, or just the weather lead to delayed trains. We implemented a Poisson process to simulate delays by stopping agents at random times for random durations. The parameters necessary for the stochastic events can be provided when creating the environment.

## Use the malfunction generator to break agents from time to time
stochastic_data = {
    'prop_malfunction': 0.5,  # Percentage of defective agents
    'malfunction_rate': 30,   # Rate of malfunction occurrence
    'min_duration': 3,        # Minimal duration of malfunction
    'max_duration': 10        # Max duration of malfunction
}

The parameters are as follows: prop_malfunction is the proportion of agents that can malfunction; 1.0 means that each agent can break. malfunction_rate is the mean rate of the Poisson process in number of environment steps. min_duration and max_duration set the range of malfunction durations; they are sampled uniformly.

You can introduce stochasticity by simply creating the env as follows:

env = RailEnv(
    ...
    stochastic_data=stochastic_data,  # Malfunction data generator
    ...
)
env.reset()

In your controller, you can check whether an agent is malfunctioning:

obs, rew, done, info = env.step(actions)
...
action_dict = dict()
for a in range(env.get_num_agents()):
    if info['malfunction'][a] == 0:
        action_dict.update({a: ...})

## Custom observation builder
tree_observation = TreeObsForRailEnv(max_depth=2, predictor=ShortestPathPredictorForRailEnv())

## Different agent types (trains) with different speeds.
speed_ration_map = {1.: 0.25,       # Fast passenger train
                    1. / 2.: 0.25,  # Fast freight train
                    1. / 3.: 0.25,  # Slow commuter train
                    1. / 4.: 0.25}  # Slow freight train
env = RailEnv(width=50,
              height=50,
              rail_generator=sparse_rail_generator(num_cities=20,         # Number of cities in map (where train stations are)
                                                   num_intersections=5,   # Number of intersections (no start / target)
                                                   num_trainstations=15,  # Number of possible start/targets on map
                                                   min_node_dist=3,       # Minimal distance of nodes
                                                   node_radius=2,         # Proximity of stations to city center
                                                   num_neighb=4,          # Number of connections to other cities/intersections
                                                   seed=15,               # Random seed
                                                   grid_mode=True,
                                                   enhance_intersection=True
                                                   ),
              schedule_generator=sparse_schedule_generator(speed_ration_map),
              number_of_agents=10,
              stochastic_data=stochastic_data,  # Malfunction data generator
              obs_builder_object=tree_observation)
env.reset()

Observation Builders¶
Every RailEnv has an obs_builder. The obs_builder has full access to the RailEnv. The obs_builder is called in the step() function to produce the observations.

env = RailEnv(
    ...
    obs_builder_object=TreeObsForRailEnv(
        max_depth=2,
        predictor=ShortestPathPredictorForRailEnv(max_depth=10)
    ),
    ...
)
env.reset()

The two principal observation builders provided are global and tree.

Global Observation Builder¶
GlobalObsForRailEnv gives a global observation of the entire rail environment. It includes a 3D array over the grid with 4 channels:
- the first channel containing the agent's position and direction
- the second channel containing the other agents' positions and directions
- the third channel containing agent malfunctions
- the fourth channel containing agent fractional speeds

Tree Observation Builder¶
TreeObsForRailEnv computes the current observation for each agent. The observation vector is composed of 4 sequential parts, corresponding to data from the up to 4 possible movements in a RailEnv ("up to" because only a subset of possible transitions are allowed in RailEnv). The possible movements are sorted relative to the current orientation of the agent, rather than NESW as for the transitions. The order is:
[data from 'left'] + [data from 'forward'] + [data from 'right'] + [data from 'back']
Each branch's data is organized as:
[root node information] + [recursive branch data from 'left'] + [... from 'forward'] + [... from 'right'] + [... from 'back']
Each node's information is composed of 9 features:
- if the agent's own target lies on the explored branch, the current distance from the agent in number of cells is stored.
- if another agent's target is detected, the distance in number of cells from the agent's current location is stored.
- if another agent is detected, the distance in number of cells from the current agent position is stored.
- possible conflict detected: tot_dist = another agent predicts to pass along this cell at the same time as the agent, and we store the distance in number of cells from the current agent position; 0 = no other agent reserves the same cell at a similar time.
- if an unusable switch (for the agent) is detected, we store the distance.
- this feature stores the distance in number of cells to the next branching (current node).
- minimum distance from the node to the agent's target, given the direction of the agent, if this path is chosen.
- agent in the same direction: n = number of agents present in the same direction (possible future use: number of other agents in the same direction in this branch); 0 = no agent present in the same direction.
- agent in the opposite direction: n = number of agents present in the other direction than myself (so a conflict) (possible future use: number of other agents in the other direction in this branch, i.e. number of conflicts); 0 = no agent present in the other direction than myself.
- malfunctioning/blocking agents: n = number of time steps the observed agent remains blocked.
- slowest observed speed of an agent in the same direction: 1 if no agent is observed, the minimum fractional speed otherwise.

Missing/padding nodes are filled in with -inf (truncated). Missing values in the present node are filled in with +inf (truncated). In the case of the root node, the values are [0, 0, 0, 0, distance from agent to target, own malfunction, own speed]. In case the target node is reached, the values are [0, 0, 0, 0, 0].

Predictors¶
Predictors make predictions on future agents' moves based on the current state of the environment. They are decoupled from observation builders in order to encapsulate the functionality and to make it re-usable. For instance, TreeObsForRailEnv optionally uses the predicted trajectories while exploring the branches of an agent's future moves to detect future conflicts. The general call structure is as follows:

RailEnv.step()
    -> ObservationBuilder.get_many()
        -> self.predictor.get()
           self.get()
           self.get()
           ...

Maximum number of allowed time steps in an episode¶
Whenever the schedule within RailEnv is generated, the maximum number of allowed time steps in an episode is calculated according to the following formula:

RailEnv._max_episode_steps = timedelay_factor * alpha * (env.width + env.height + ratio_nr_agents_to_nr_cities)

where the following default values are used: timedelay_factor=4, alpha=2 and ratio_nr_agents_to_nr_cities=20. If participants want to use their own formula, they have to overwrite the method compute_max_episode_steps() of the class RailEnv.

Observation and Action Spaces¶
This is an introduction to the three standard observations and the action space of Flatland.

Action Space¶
Flatland is a railway simulation. Thus the actions of an agent are strongly limited to the railway network. This means that in many cases not all actions are valid. The possible actions of an agent are:
- 0 Do Nothing: If the agent is moving, it continues moving; if it is stopped, it stays stopped.
- 1 Deviate Left: If the agent is at a switch with a transition to its left, the agent will choose the left path. Otherwise the action has no effect. If the agent is stopped, this action will start agent movement again if allowed by the transitions.
- 2 Go Forward: This action will start the agent when stopped. It will move the agent forward and choose the go-straight direction at switches.
- 3 Deviate Right: Exactly the same as Deviate Left, but for right turns.
- 4 Stop: This action causes the agent to stop.

Observation Spaces¶
In the Flatland environment we have included three basic observations to get started. The figure below illustrates the observation range of the different basic observations: Global, Local Grid and Local Tree.

Global Observation¶
Gives a global observation of the entire rail environment. The observation is composed of the following elements: an array over the grid with 8 channels, where the first 4 channels contain the one-hot encoding of the direction of the given agent and the second 4 channels contain the positions of the other agents at their position coordinates. We encourage you to enhance this observation with any layer you think might help solve the problem. It would also be possible to construct a global observation for a super agent that controls all agents at once.

Local Grid Observation¶
Gives a local observation of the rail environment around the agent.
The observation is composed of the following elements: - transition map array of the local environment around the given agent, with dimensions ( 2*view_radius + 1, 2*view_radius + 1, 16), assuming 16 bits encoding of transitions. - Two 2D arrays ( 2*view_radius + 1, 2*view_radius + 1, 2) containing respectively, if they are in the agent’s vision range, its target position, the positions of the other targets. - A 3D array ( 2*view_radius + 1, 2*view_radius + 1, 4) containing the one hot encoding of directions of the other agents at their position coordinates, if they are in the agent’s vision range. - A 4 elements array with one hot encoding of the direction. Be aware that this observation does not contain any clues about target location if target is out of range. Thus navigation on maps where the radius of the observation does not guarantee a visible target at all times will become very difficult. We encourage you to come up with creative ways to overcome this problem. In the tree observation below we introduce the concept of distance maps. Tree Observation¶ The tree observation is built by exploiting the graph structure of the railway network.. The figure below illustrates how the tree observation is built: From Agent location probe all 4 directions ( L:Blue, F:Green, R:Purple, B:Red) starting with left and start branches when transition is allowed. - For each branch walk along the allowed transition until you reach a dead-end, switch or the target destination. - Create a node and fill in the node information as stated below. - If max depth of tree is not reached and there are possible transitions, start new branches and repeat the steps above. Fill up all non existing branches with -infinity such that tree size is invariant to the number of possible transitions at branching points. Note that we always start with the left branch according to the agent orientation. Thus the tree observation is independent of the NESW orientation of cells, and only considers the transitions relative to the agent’s orientation. The colors in the figure bellow illustrate what branch the cell belongs to. If there are multiple colors in a cell, this cell is visited by different branches of the tree observation. The right side of the figure shows the resulting tree of the railway network on the left. Cross means no branch was built. If a node has no children it was either a terminal node (dead-end, max depth reached or no transition possible). A circle indicates a node filled with the corresponding information stated below in Node Information. Node Information¶ Each node is filled with information gathered along the path to the node. Currently each node contains 9 features: 1: if own target lies on the explored branch the current distance from the agent in number of cells is stored. 2: if another agent’s target is detected, the distance in number of cells from the current agent position is stored. 3: if another agent is detected, the distance in number of cells from the current agent position is stored. 4: possible conflict detected (This only works when we use a predictor and will not be important in this tutorial) 5: if an unusable switch (for the agent) is detected we store the distance. An unusable switch is a switch where the agent does not have any choice of path, but other agents coming from different directions might. 6: This feature stores the distance (in number of cells) to the next node (e.g. 
switch or target or dead-end). 7: minimum remaining travel distance from this node to the agent's target, given the direction of the agent, if this path is chosen. 8: agent in the same direction found on the path to the node: n = number of agents present in the same direction (possible future use: number of other agents in the same direction in this branch); 0 = no agent present in the same direction. 9: agent in the opposite direction on the path to the node: n = number of agents present in the opposite direction to the observing agent; 0 = no agent present in the other direction to the observing agent.

Rendering Specifications¶
Scope¶
This doc specifies the software to meet the requirements in the Visualization requirements doc.
References¶
Interfaces¶
Data Structure¶
A definition of the data structure is to be provided in the Core requirements or Interfaces doc.
Existing Tools / Libraries¶
- Pygame
- PyQt
- Define draw functions/classes for each primitive
- Primitives: Agents (Trains), Railroad, Grass, Houses etc.
- Background. Initialize the background before starting the episode.
- Static objects in the scenes: directly draw those primitives once and cache.
To-be-filled

Visualization¶
Introduction & Scope¶
Broad requirements for a human-viewable display of a single Flatland Environment.
Context¶
Shows this software component in relation to some of the other components. We name the component the "Renderer". Multiple agents interact with a single Environment. A renderer interacts with the environment and displays on screen, and/or into movie or image files.

Requirements¶
Primary Requirements¶
- Visualize or render the state of the environment
- Read an Environment + Agent Snapshot provided by the Environment component
- Display onto a local screen in real-time (or near real-time)
- Include all the agents
- Illustrate the agent observations (typically subsets of the grid / world)
- 2D rendering only
- Output visualisation into movie / image files for use in later animation
- Should not impose control-flow constraints on the Environment
- Should not force the env to respond to events
- Should not drive the "main loop" of inference or training
Secondary / Optional Requirements¶
- During training (possibly across multiple processes or machines / OS instances), display a single training environment without holding up the other environments in the training.
- Some training environments may be remote to the display machine (e.g. using GCP / AWS)
- Attach to / detach from a running environment / training cluster without restarting training.
- Provide a switch to make use of graphics / artwork provided by graphic artist - Fast / compact mode for general use - Beauty mode for publicity / demonstrations - Provide a switch between smooth / continuous animation of an agent (slower) vs jumping from cell to cell (faster) - Smooth / continuous translation between cells - Smooth / continuous rotation - Speed - ideally capable of 60fps (see performance metrics) - Window view - only render part of the environment, or a single agent and agents nearby. - May not be feasible to render very large environments - Possibly more than one window, ie one for each selected agent - Window(s) can be tied to agents, ie they move around with the agent, and optionally rotate with the agent. - Interactive scaling - eg wide view, narrow / enlarged view - eg with mouse scrolling & zooming - Minimize necessary skill-set for participants - Python API to gui toolkit, no need for C/C++ - View on various media: - Linux & Windows local display - Browser Reference Documents¶ Link to this doc: Core Specification¶ This specifies the system containing the environment and agents - this will be able to run independently of the renderer. The data structure which the renderer needs to read initially resides here. Visualization Specification¶ This will specify the software which will meet the requirements documented here. Non-requirements - to be deleted below here.¶ The below has been copied into the spec doc. Comments may be lost. I’m only preserving it to save the comments for a few days - they don’t cut & paste into the other doc! Environment Snapshot¶ Data Structure A definitions of the data structure is to be defined in Core requirements. It is a requirement of the Renderer component that it can read this data structure. Example only Investigation into Existing Tools / Libraries¶ - Pygame - Very easy to use. Like dead simple to add sprites etc. () - No inbuilt support for threads/processes. Does get faster if using pypy/pysco. - PyQt - Somewhat simple, a little more verbose to use the different modules. - Multi-threaded via QThread! Yay! (Doesn’t block main thread that does the real work), () How to structure the code - Define draw functions/classes for each primitive - Primitives: Agents (Trains), Railroad, Grass, Houses etc. - Background. Initialize the background before starting the episode. - Static objects in the scenes, directly draw those primitives once and cache. Proposed Interfaces To-be-filled
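The "draw the static primitives once and cache" idea that appears in the tool notes above can be sketched as follows. This uses pygame, one of the candidate libraries listed; the draw_tile and draw_agent callables are placeholders for the primitive draw functions, not a proposed interface.

import pygame

def make_background(grid_width, grid_height, cell_px, draw_tile):
    # Render all static tiles (rail, grass, houses, ...) once onto an off-screen surface.
    background = pygame.Surface((grid_width * cell_px, grid_height * cell_px))
    for row in range(grid_height):
        for col in range(grid_width):
            draw_tile(background, row, col)
    return background

def render_frame(screen, background, agents, draw_agent):
    # Reuse the cached static scene; only dynamic objects are redrawn each frame.
    screen.blit(background, (0, 0))
    for agent in agents:
        draw_agent(screen, agent)
    pygame.display.flip()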
https://flatlandrl-docs.aicrowd.com/04_specifications.html
2020-08-03T18:21:31
CC-MAIN-2020-34
1596439735823.29
[array(['img/UML_flatland.png', 'Overview'], dtype=object) array(['https://i.imgur.com/oo8EIYv.png', 'https://i.imgur.com/oo8EIYv.png'], dtype=object) array(['https://i.imgur.com/sGBBhzJ.png', 'https://i.imgur.com/sGBBhzJ.png'], dtype=object)]
flatlandrl-docs.aicrowd.com
Linking¶ A Resource Container (RC) link allows one RC to reference content from another RC. All RC links follow a very simple structure in two different flavors. - Anonymous links - have no title and are declared by enclosing the link in double brackets - Titled links - have a title and are indicated by enclosing the link title in single brackets and the link in parentheses. For example: [[language/resource/project/type]] [Link Title](language/resource/project/type) Structure¶ The minimum form of a link is language/resource/project/type. We interpret this as the project content directory inside the RC. This is illustrated below: # link en/ulb/gen/book # file system en_ulb_gen_book/ |-LICENSE.md |-manifest.yaml |-content/ <-- link points here From this point we can lengthen the link to include a chapter Slug which resolves to the chapter directory. # link en/ulb/gen/book/01 # file system en_ulb_gen_book/ |-LICENSE.md |-manifest.yaml |-content/ |-01/ <-- link points here Going a step further we can link to a specific chunk # link en/ulb/gen/book/01/01 # file system en_ulb_gen_book/ |-LICENSE.md |-manifest.yaml |-content/ |-01/ |-01.usfm <-- link points here In some of the examples above the link was not pointing directly at a file. In those cases the link should resolve to the first available file in order of the sorting priority described in Content Sort Order. External URLS¶ You may link to online media by simply using a url instead of an RC identifier. [[]] [Google]() Links where the path begins with http:// or https:// are treated as external urls. Examples¶ [[en/tq/gen/help/01/02]]- links to translationQuestions for Genesis 1:2 [[en/tn/gen/help/01/02]]- links to translationNotes for Genesis 1:2 bundle¶ [Genesis](en/ulb/gen/bundle/01/01) Note Linking to a bundle will only resolve down to the project level. e.g. the 01/01 will be ignored and the entire project returned. If you must link to a section within the project you will have to parse the content and manually resolve the rest of the link if the format supports references. Formats that support references are: - usfm - osis Abbreviations¶ In certain cases it is appropriate to abbreviate a link. Below are a list of cases where you are allowed to use an abbreviation. Links within the same RC¶ When linking to a different section within the same RC you may just provide the chapter/chunk Slug s. Manual example: [Translate Unknowns](translate-unknowns) Dictionary example: [Canaan](canaan) Book example: [Genesis 1:2](01/02) Links to any language¶ At times you may not wish to restrict the link to a particular language of the RC. In that case you may exclude the language code from the beginning of the path and place an extra slash / in it’s place. Example: [[//ta-vol1/translate/man/translate-unknowns]] [Translate Unknowns](//ta-vol1/translate/man/translate-unknowns) Short Links¶ A short link is used to reference a resource but not a project. There is nothing fundamentally different from regular links. Short links are simply composed of just the language and resource. en/tn Short links are most often used within the Manifest File when referring to related resources. Automatically Linking Bible References¶ Bible references in any RC should be automatically converted into resolvable links according to the linking rules for book resource types. Of course, if the reference is already a link nothing needs to be done. Conversion of biblical references are limited to those resources that have been indexed on the users’ device. 
Conversion should be performed based on any one of the following:
- a case-insensitive match of the entire project title.
- a start case (first letter is uppercase) match of the project Slug.

Example¶
Given the French reference below:
Genèse 1:1
If the user has only downloaded the English resource, the link will not resolve because the title Genesis or genesis does not match Genèse or genèse. Neither does the camel case Slug Gen match, since it does not match the entire word. If the user now downloads the French resource, the link will resolve because Genèse or genèse does indeed match Genèse or genèse. The result will be:
[Genèse 1:1](fr/ulb/gen/book/01/01)

Multiple Matches¶
When a match occurs there may be several different resources that could be used in the link, such as ulb or udb. When more than one resource Slug matches, a choice between them is necessary.
For both chapter and verse numbers perform the following. Given a chapter or verse number key and an equivalent sorted list of chapters or verses in the matched resource:
- incrementally compare the key against items in the list.
- if the integer value of the current list item is less than the key: continue.
- if the integer value of the current list item is greater than the key: use the previous list item.
- if the end of the list is reached: use the previous list item.
For example, chunk 01 may contain verses 1-3 whereas chunk 02 contains verses 4-6. Therefore, verse 2 would resolve to chunk 01. If no chapter or chunk can be found to satisfy the reference, it should not be converted to a link.
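As an illustration of how a client might turn an RC link into a path inside the content directory described under Structure, here is a minimal sketch. It is not part of the specification: the directory-naming helper and the handling of chapter/chunk segments are simplified, and resolving a chunk slug to a concrete file (and the sort-order fallback) is omitted.

import os

def resolve_rc_link(link, rc_root):
    # Split 'language/resource/project/type[/chapter[/chunk]]' into its parts
    # and map it onto an RC directory named like 'en_ulb_gen_book'.
    parts = link.strip("/").split("/")
    language, resource, project, rc_type = parts[:4]
    rc_dir = os.path.join(rc_root, "_".join([language, resource, project, rc_type]))
    return os.path.join(rc_dir, "content", *parts[4:])

print(resolve_rc_link("en/ulb/gen/book/01/01", "/path/to/rcs"))
# -> /path/to/rcs/en_ulb_gen_book/content/01/01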
https://resource-container.readthedocs.io/en/v0.2/linking.html
2020-08-03T17:12:01
CC-MAIN-2020-34
1596439735823.29
[]
resource-container.readthedocs.io
The paramsToInherit map should hold the needed values on one of the following keys, depending on the desired outcome:
- paramsToCopy - this is used to pick only a subset of parameters to be inherited from the parent process; it holds the list of key names that will be inherited from the parent parameters
- withoutParams - this is used in case we need to remove some parameter values from the parent process before inheriting them; it holds the list of key names that will be removed from the parent parameters
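To illustrate the intended semantics of the two keys, here is a hedged sketch in Python. It is not FlowX code; it only mimics how a subset of parent parameters could be selected or excluded before being passed on.

def inherit_params(parent_params, params_to_inherit):
    # paramsToCopy: keep only the listed keys from the parent parameters.
    if "paramsToCopy" in params_to_inherit:
        keep = set(params_to_inherit["paramsToCopy"])
        return {k: v for k, v in parent_params.items() if k in keep}
    # withoutParams: drop the listed keys before inheriting everything else.
    if "withoutParams" in params_to_inherit:
        drop = set(params_to_inherit["withoutParams"])
        return {k: v for k, v in parent_params.items() if k not in drop}
    return dict(parent_params)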
https://docs.flowx.ai/flowx-engine/orchestration
2021-11-27T15:04:49
CC-MAIN-2021-49
1637964358189.36
[]
docs.flowx.ai
Date: Mon, 6 Mar 2000 00:03:34 -0800 From: "David Schwartz" <[email protected]> To: "Brett Glass" <[email protected]>, "Jamie A. Lawrence" <[email protected]> Cc: <[email protected]> Subject: RE: Great American Gas Out Message-ID: <[email protected]> In-Reply-To: <4.2.2.20000306003715.041255e0@localhost> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help > >If everyone payed a private entity to get to work, we'd see a saner > >commute schedule. > > You'd be stopping every mile to pay a troll. Yes, experience shows that the companies that inconvenience their customers the most make the most money. DS To Unsubscribe: send mail to [email protected] with "unsubscribe freebsd-chat" in the body of the message Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=48711+0+/usr/local/www/mailindex/archive/2000/freebsd-chat/20000312.freebsd-chat
2021-11-27T15:16:03
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
Tutorial: Load data and run queries on an Apache Spark cluster in Azure HDInsight In this tutorial, you learn how to create a dataframe from a csv file, and how to run interactive Spark SQL queries against an Apache Spark cluster in Azure HDInsight. In Spark, a dataframe is a distributed collection of data organized into named columns. Dataframe is conceptually equivalent to a table in a relational database or a data frame in R/Python. In this tutorial, you learn how to: - Create a dataframe from a csv file - Run queries on the dataframe Prerequisites An Apache Spark cluster on HDInsight. See Create an Apache Spark cluster. Create a Jupyter Notebook Jupyter Notebook is an interactive notebook environment that supports various programming languages. The notebook allows you to interact with your data, combine code with markdown text and perform simple visualizations. Edit the URL replacing SPARKCLUSTERwith the name of your Spark cluster. Then enter the edited URL in a web browser. If prompted, enter the cluster login credentials for the cluster. From the Jupyter web page, Select New > PySpark to create a notebook. A new notebook is created and opened with the name Untitled( Untitled.ipynb). Note By using the PySpark kernel to create a notebook, the sparksession is automatically created for you when you run the first code cell. You do not need to explicitly create the session. Create a dataframe from a csv file Applications can create dataframes directly from files or folders on the remote storage such as Azure Storage or Azure Data Lake Storage; from a Hive table; or from other data sources supported by Spark, such as Cosmos DB, Azure SQL DB, DW, and so on. The following screenshot shows a snapshot of the HVAC.csv file used in this tutorial. The csv file comes with all HDInsight Spark clusters. The data captures the temperature variations of some buildings. Paste the following code in an empty cell of the Jupyter Notebook, and then press SHIFT + ENTER to run the code. The code imports the types required for this scenario: from pyspark.sql import * from pyspark.sql.types import * When running an interactive query in Jupyter, the web browser window or tab caption shows a (Busy) status along with the notebook title. You also see a solid circle next to the PySpark text in the top-right corner. After the job is completed, it changes to a hollow circle. Note the session id returned. From the picture above, the session id is 0. If desired, you can retrieve the session details by navigating to CLUSTERNAME is the name of your Spark cluster and ID is your session id number. Run the following code to create a dataframe and a temporary table (hvac) by running the following code. # Create a dataframe and table from sample data csvFile = spark.read.csv('/HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv', header=True, inferSchema=True) csvFile.write.saveAsTable("hvac") Run queries on the dataframe Once the table is created, you can run an interactive query on the data. Run the following code in an empty cell of the notebook: %%sql SELECT buildingID, (targettemp - actualtemp) AS temp_diff, date FROM hvac WHERE date = \"6/1/13\" The following tabular output is displayed. You can also see the results in other visualizations as well. To see an area graph for the same output, select Area then set other values as shown. From the notebook menu bar, navigate to File > Save and Checkpoint. If you're starting the next tutorial now, leave the notebook open. 
If not, shut down the notebook to release the cluster resources: from the notebook menu bar, navigate to File > Close and Halt. Clean up resources With HDInsight, your data and Jupyter Notebooks are stored in Azure Storage or Azure Data Lake Storage, so you can safely delete a cluster when it isn't in use. You're also charged for an HDInsight cluster, even when it's not in use. Since the charges for the cluster are many times more than the charges for storage, it makes economic sense to delete clusters when they aren't in use. If you plan to work on the next tutorial immediately, you might want to keep the cluster. Open the cluster in the Azure portal, and select Delete. You can also select the resource group name to open the resource group page, and then select Delete resource group. By deleting the resource group, you delete both the HDInsight Spark cluster, and the default storage account. Next steps In this tutorial, you learned how to create a dataframe from a csv file, and how to run interactive Spark SQL queries against an Apache Spark cluster in Azure HDInsight. Advance to the next article to see how the data you registered in Apache Spark can be pulled into a BI analytics tool such as Power BI.
https://docs.microsoft.com/en-in/azure/hdinsight/spark/apache-spark-load-data-run-query
2021-11-27T16:16:43
CC-MAIN-2021-49
1637964358189.36
[array(['media/apache-spark-load-data-run-query/hdinsight-spark-sample-data-interactive-spark-sql-query.png', 'Snapshot of data for interactive Spark SQL query'], dtype=object) array(['media/apache-spark-load-data-run-query/hdinsight-azure-portal-delete-cluster.png', 'Delete HDInsight cluster'], dtype=object) ]
docs.microsoft.com
What is Nebula Explorer¶ Nebula Explorer (Explorer in short) is a browser-based visualization tool. It is used with the Nebula Graph core to visualize interaction with graph data. Even if there is no experience in graph database, you can quickly become a graph exploration expert. Enterpriseonly Explorer is only available in the enterprise version. Scenarios¶ You can use Explorer in one of these scenarios: - You need to quickly find neighbor relationships from complex relationships, analyze suspicious targets, and display graph data in a visual manner. - For large-scale data sets, the data needs to be filtered, analyzed, and explored in a visual manner. Features¶ Explorer has these features: - Easy to use and user-friendly: Explorer can be deployed in simple steps. And use simple visual interaction, no need to conceive nGQL sentences, easy to realize graph exploration. - Flexible: Explorer supports querying data through VID, Tag, Subgraph. - Multiple operations: Explorer supports operations such as expanding operations on multiple vertexes, querying the common neighbors of multiple vertexes, and querying the path between the start vertex and the end vertex. - Various display: Explorer supports modifying the color and icon of the vertex in the canvas to highlight key nodes. You can also freely choose the data display mode in dagre, force, and circular. Authentication¶ Authentication is not enabled in Nebula Graph by default. Users can log into Studio with the root account and any password. When Nebula Graph enables authentication, users can only sign into Studio with the specified account. For more information, see Authentication. Last update: November 10, 2021
https://docs.nebula-graph.io/2.6.1/nebula-explorer/about-explorer/ex-ug-what-is-explorer/
2021-11-27T14:36:41
CC-MAIN-2021-49
1637964358189.36
[array(['../../figs/explorer-en.png', 'explorer'], dtype=object)]
docs.nebula-graph.io
We usually only post feature updates here, but this is even more exciting to us than just a feature! We're excited to announce that Tadabase is now SOC2 compliant. SOC2 is the most respected accreditation standard on the market and getting certified is just one more step we're making towards our obsession with security. Read more about what SOC2 is and why we chose to do this here:
https://docs.tadabase.io:443/categories/updates
2021-11-27T14:27:38
CC-MAIN-2021-49
1637964358189.36
[]
docs.tadabase.io:443
size of the obstacle, measured in the object's local space. The size will be scaled by the transform's scale. // Fit a box shaped obstacle to the attached mesh function Start () { var obstacle : UnityEngine.AI.NavMeshObstacle = GetComponent.<UnityEngine.AI.NavMeshObstacle>(); var mesh : Mesh = GetComponent.<MeshFilter>().mesh; obstacle.shape = UnityEngine.AI.NavMeshObstacleShape.Box; obstacle.size = mesh.bounds.size; } using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { void Start() { AI.NavMeshObstacle obstacle = GetComponent<AI.NavMeshObstacle>(); Mesh mesh = GetComponent<MeshFilter>().mesh; obstacle.shape = UnityEngine.AI.NavMeshObstacleShape.Box; obstacle.size = mesh.bounds.size; } }
https://docs.unity3d.com/2017.1/Documentation/ScriptReference/AI.NavMeshObstacle-size.html
2021-11-27T14:39:26
CC-MAIN-2021-49
1637964358189.36
[]
docs.unity3d.com
A keystore is a repository that stores cryptographic keys and certificates. You use these artifacts for security purposes such as encrypting sensitive information and establishing trust between your server and the outside parties that connect to it. The usage of keys and certificates contained in a keystore is explained below. See the following topics for details on how keystores are used in WSO2 products and the default keystore settings with which all products are shipped: Setting up keystores for WSO2 products (for information on how to create new keystore files, see Creating New Keystores, and for information on how to update configuration files in your product with keystore information, see Configuring Keystores in WSO2 Products) and Default keystore settings in WSO2 products.
https://docs.wso2.com/pages/diffpagesbyversion.action?pageId=68684533&selectedPageVersions=5&selectedPageVersions=6
2021-11-27T14:07:15
CC-MAIN-2021-49
1637964358189.36
[]
docs.wso2.com
Autofit Rows and Columns
Auto Fitting
The Worksheet class provides a wide range of properties and methods for managing a worksheet. This article looks at using the Worksheet class to autofit rows or columns.
AutoFit Row - Simple
The most straightforward approach to auto-sizing the width and height of a row is to call the Worksheet class' autoFitRow method. The autoFitRow method takes a row index (of the row to be resized) as a parameter.
AutoFit Row in a Range of Cells
A row is composed of many columns. Aspose.Cells allows developers to auto-fit a row based on the content in a range of cells within the row by calling an overloaded version of the autoFitRow method. It takes the following parameters:
- Row index, the index of the row about to be auto-fitted.
- First column index, the index of the row's first column.
- Last column index, the index of the row's last column.
The autoFitRow method checks the contents of all the columns in the row and then auto-fits the row.
AutoFit Column - Simple
The easiest way to auto-size the width and height of a column is to call the Worksheet class' autoFitColumn method. The autoFitColumn method takes the column index (of the column about to be resized) as a parameter.
AutoFit Column in a Range of Cells
A column is composed of many rows. It is possible to auto-fit a column based on the content in a range of cells in the column by calling an overloaded version of the autoFitColumn method that takes the following parameters:
- Column index, represents the index of the column whose contents need to auto-fit
- First row index, represents the index of the first row of the column
- Last row index, represents the index of the last row of the column
The autoFitColumn method checks the contents of all rows in the column and then auto-fits the column.
AutoFit Rows for Merged Cells
With Aspose.Cells it is possible to autofit rows even for merged cells, using the AutoFitterOptions API. The AutoFitterOptions class provides the AutoFitMergedCellsType property, which can be used to autofit rows for merged cells. AutoFitMergedCellsType accepts the AutoFitMergedCellsType enumeration, which has the following members:
- NONE: Ignore merged cells.
- FIRST_LINE: Only expands the height of the first row.
- LAST_LINE: Only expands the height of the last row.
- EACH_LINE: Only expands the height of each row.
You may also use the overloaded versions of the autoFitRows and autoFitColumns methods that accept a range of rows/columns and an instance of AutoFitterOptions to auto-fit the selected rows/columns with the desired AutoFitterOptions accordingly. The signatures of the aforesaid methods are as follows:
- autoFitRows(int startRow, int endRow, AutoFitterOptions options)
- autoFitColumns(int firstColumn, int lastColumn, AutoFitterOptions options)
https://docs.aspose.com/cells/java/autofit-rows-and-columns/
2021-11-27T14:16:49
CC-MAIN-2021-49
1637964358189.36
[]
docs.aspose.com
empyscripts¶ Version: 0.3.2 ~ Date: 22 May 2018 The empyscripts are add-ons for the electromagnetic modeller empymod. These add-ons provide some very specific, additional functionalities: tmtemod: Return up- and down-going TM/TE-mode contributions for x-directed electric sources and receivers, which are located in the same layer. fdesign: Design digital linear filters for the Hankel and Fourier transforms. There is also empyscripts.versions(), which can be used to show date, time, and package version information at the end of a notebook or script: versions('HTML')for Jupyter Notebooks, and versions()for IPython, QT, and Python consoles. See for a complete list of features of empymod. More information¶ For more information regarding installation, usage, add-ons, contributing, roadmap, bug reports, and much more, see - Website:, - Documentation empymod:, - Documentation add-ons:, - Source Code:, - Examples:.
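For example, at the end of a script or console session (a minimal usage sketch based on the calls listed above):

import empyscripts

empyscripts.versions()          # IPython, QT, and Python consoles
# empyscripts.versions('HTML')  # Jupyter notebooks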
https://empyscripts.readthedocs.io/en/stable/
2021-11-27T14:24:50
CC-MAIN-2021-49
1637964358189.36
[]
empyscripts.readthedocs.io
fMRI. FSL has many atlases already installed, which you can access through the FSL viewer. If you click on Settings -> Ortho View 1 -> Atlas Panel, it will open a new window called Atlases. By default, the Harvard-Oxford Cortical and Subcortical Atlases will be loaded. You can see how the atlas partitions the brain by clicking on the Show/Hide link next to the atlas name. The voxel at the center of the crosshairs in the viewing window will be assigned a probability of belonging to a brain structure. The Harvard-Oxford Cortical atlas, displayed on an MNI template brain. The Atlas window shows the probability that the voxel is located at a certain anatomical region. To save one of these regions as a file to extract data from, also known as a mask, click on the Show/Hide link next to the region you want to use as a mask - in our example, let’s say that we want to use the Paracingulate Gyrus as a mask. Clicking on the link will show that region overlaid on the brain, as well as load it as an overlay in the Overlay List window. Click on the disk icon next to the image to save it as a mask. Save it to the Flanker directory and call it PCG.nii. Warning Your results will have the same resolution as the template you used for normalization. The default in FSL is the MNI_152_T1_2mm_brain, which has a resolution of 2x2x2mm. When you create a mask, it will have the same resolution as the template that it is overlaid on. When we extract data from the mask, the data and the mask need to have the same resolution. To avoid any errors due to different image resolutions, use the same template to create the mask that you used to normalize your data. Extracting Data from an Anatomical Mask¶ Once you’ve created the mask, you can then extract each subject’s contrast estimates from it. Although you may think that we would extract the results from the 3rd-level analysis, we actually want the ones from the 2nd-level analysis; the 3rd-level analysis is a single image with a single number at each voxel, whereas in an ROI analysis our goal is to extract the contrast estimate for each subject individually. For the Incongruent-Congruent contrast estimate, for example, you can find each subjects’ data maps in the directory Flanker_2ndLevel.gfeat/cope3.feat/stats. The data maps have been calculated several different ways, including t-statistic maps, cope images, and variance images. My preference is to extract data from the z-statistic maps, since these data have been converted into a form that is normally distributed and, in my opinion, is easier to plot and to interpret. To make our ROI analysis easier, we will merge all of the z-statistic maps into a single dataset. To do this, we will use a combination of FSL commands and Unix commands. Navigate into the Flanker_2ndLevel.gfeat/cope3.feat/stats directory, and then type the following: fslmerge -t allZstats.nii.gz `ls zstat* | sort -V` This will merge all of the z-statistic images into a single dataset along the time dimension (specified with the -t option); this simply means to daisy-chain the volumes together into a single larger dataset. The first argument is what the output dataset will be called ( allZstats.nii.gz), and the code in backticks uses an asterisk wildcard to list each file beginning with “zstat”, and then sorts them numerically from smallest to largest with the -V option. Move the allZstats.nii.gz file up three levels so that it is in the main Flanker directory (i.e., type mv allZstats.nii.gz ../../..). 
Then use the fslmeants command to extract the data from the PCG mask: fslmeants -i allZstats.nii.gz -m PCG.nii.gz This will print 26 numbers, one per subject. Each number is the contrast estimate for that subject averaged across all of the voxels in the mask. Each number output from this command corresponds to the contrast estimate that went into the analysis. For example, the first number corresponds to the average contrast estimate for Incongruent-Congruent for sub-01, the second number is the average contrast estimate for sub-02, and so on. These numbers can be copied and pasted into a statistical software package of your choice (such as R), and then you can run a t-test on them. Extracting Data from a Sphere¶ You may have noticed that the results from the ROI analysis using the anatomical mask were not significant. This may be because the PCG mask covers a very large region; although the PCG is labeled as a single anatomical region, we may be extracting data from several distinct functional regions. [Figure: spherical ROI centered at MNI coordinates 0, 20, 44, taken from the Jahn study.] The next few steps are complicated, so pay close attention to each one: - Open fsleyes, and load an MNI template. In the fields under the label “Coordinates: MNI152” in the Location window, type 0 20 44. Just to the right of those fields, note the corresponding change in the numbers in the fields under Voxel location. In this case, they are 45 73 58. Write down these numbers. - In the terminal, navigate to the Flanker directory and type the following: fslmaths $FSLDIR/data/standard/MNI152_T1_2mm.nii.gz -mul 0 -add 1 -roi 45 1 73 1 58 1 0 1 Jahn_ROI_dmPFC_0_20_44.nii.gz -odt float This is a long, dense command, but for now just note where we have inserted the numbers 45, 73, and 58. When you create another spherical ROI based on different coordinates, these are the only numbers you will change. (When you create a new ROI you should change the label of the output file as well.) The output of this command is a single voxel marking the center of the coordinates specified above. - Next, type: fslmaths Jahn_ROI_dmPFC_0_20_44.nii.gz -kernel sphere 5 -fmean Jahn_Sphere_dmPFC_0_20_44.nii.gz -odt float This expands the single voxel into a sphere with a radius of 5mm, and calls the output “Jahn_Sphere_dmPFC_0_20_44.nii.gz”. If you wanted to change the size of the sphere to 10mm, for example, you would change this section of code to -kernel sphere 10. - Now, type: fslmaths Jahn_Sphere_dmPFC_0_20_44.nii.gz -bin Jahn_Sphere_bin_dmPFC_0_20_44.nii.gz This will binarize the sphere, so that it can be read by the FSL commands. Note In the steps that were just listed, notice how the output from each command is used as input to the next command. You will change this for your own ROI, if you decide to create one. - Lastly, we will extract data from this ROI by typing: fslmeants -i allZstats.nii.gz -m Jahn_Sphere_bin_dmPFC_0_20_44.nii.gz The numbers you get from this analysis should look much different from the ones you created using the anatomical mask. Copy and paste these numbers into the statistical software package of your choice, and run a one-sample t-test on them. Are they significant? How would you describe them if you had to write up these results in a manuscript? Exercises¶ - The mask used with fslmeants is binarized, meaning that any voxel containing a numerical value greater than zero will be converted to a “1”, and then data will be extracted only from those voxels labeled with a “1”. You will recall that the mask created with fsleyes is probabilistic.
If you want to weight the extracted contrast estimates by the probability weight, you can do this by using the -woption with fslmeants. Try typing: fslmeants -i allZstats.nii.gz -m PCG.nii.gz -w And observe how the numbers are different from the previous method that used a binarized mask. Is the difference small? Large? Is it what you would expect? - Use the code given in the section on spherical ROI analysis to create a sphere with a 7mm radius located at MNI coordinates 36, -2, 48. - Use the Harvard-Oxford subcortical atlas to create an anatomical mask of the right amygdala. Label it whatever you want. Then, extract the z-statistics from cope1 (i.e., the contrast estimates for Incongruent compared to baseline).
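If you would rather run the one-sample t-tests mentioned above in Python instead of R, the following sketch works on the values printed by fslmeants. It assumes you have saved the 26 numbers to a plain text file, one value per line; the file name is just a placeholder.
import numpy as np
from scipy import stats

# values printed by: fslmeants -i allZstats.nii.gz -m PCG.nii.gz
# (saved beforehand to a text file, one number per subject)
zvals = np.loadtxt("pcg_zstats.txt")

# one-sample t-test against zero: is Incongruent-Congruent reliably non-zero in this ROI?
t_stat, p_val = stats.ttest_1samp(zvals, popmean=0.0)
print(f"t({zvals.size - 1}) = {t_stat:.3f}, p = {p_val:.4f}")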
https://andysbrainbook.readthedocs.io/en/latest/fMRI_Short_Course/fMRI_09_ROIAnalysis.html
2021-11-27T13:55:16
CC-MAIN-2021-49
1637964358189.36
[array(['../_images/ROI_Analysis_Atlas_Example.png', '../_images/ROI_Analysis_Atlas_Example.png'], dtype=object) array(['../_images/ROI_Analysis_FSLmeants_output.png', '../_images/ROI_Analysis_FSLmeants_output.png'], dtype=object) array(['../_images/ROI_Analysis_Anatomical_Spherical.gif', '../_images/ROI_Analysis_Anatomical_Spherical.gif'], dtype=object) array(['../_images/ROI_Analysis_Jahn_Study.png', '../_images/ROI_Analysis_Jahn_Study.png'], dtype=object)]
andysbrainbook.readthedocs.io
“License Type” Rule Use this rule to perform a mass change of license types. Usage Scenario: You could use this rule if you recently updated your SAP contract and need to replace a specific license type with another license type. - License Types—Enter the license types that are currently assigned to users and that should be changed to a new specific license type. Separate multiple license type values
https://docs.flexera.com/FlexNetManagerSuite2020R2/EN/WebHelp/concepts/SAP-LicenseTypeRule.html
2021-11-27T14:23:14
CC-MAIN-2021-49
1637964358189.36
[]
docs.flexera.com
Date: Wed, 27 May 2020 14:03:26 -0500 From: Valeri Galtsev <[email protected]> To: [email protected] Subject: Re: FreeBSD Cert Message-ID: <[email protected]> In-Reply-To: <20200527203627.2c9faae5@archlinux>> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help: Install the system. Solve all trouble on that way by searching for solutions, not by asking ready recipes on mail lists. Become knowledgeable USER of that system. Learn programming. Create small programs of your own for your own needs. This was you indeed will acquire invaluable knowledge. By doing. This though sounds terse will bring you to the goal you stated much faster, believe me. Good luck. Valeri > > You are interested in networking? Search > for the > term "network". > > Learn how to read man(ual) pages, such as > > > > or > > > > man pages are the build in manual, but for a newbie the man pages are > not easy to understand. > > Apropos shells: > > > > Learning by doing. Start a simple project. Kind of an advanced "Hello, > World!" script that has something to do with your interests, maybe > networking, instead of a program, > . > > _______________________________________________ >: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=556262+0+/usr/local/www/mailindex/archive/2020/freebsd-questions/20200531.freebsd-questions
2021-11-27T15:28:49
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
Date: Thu, 26 Nov 2020 09:18:50 +0000 From: [email protected] To: [email protected] Subject: New Order from Printek Business Services Inc. Message-ID: <[email protected]> Next in thread | Raw E-Mail | Index | Archive | Help I've invited you to fill out the following form: Untitled form To fill it out, visit: Good day. Greetings from business service, I am miss, Ashley. agent Sales Manager. Please kindly send your company prices list that is available. reply urgently to the email ( [email protected] )because we can only reply to you on the email below, and forward to customers in need of the products. B/R ASHLEY Senior reseller agent manager Printek Business Services INC. Tel: 7574537247 Email: [email protected] 2 Baldwin PlaceP.O. Box 1000Chester, PA 19016 Google Forms: Create and analyze surveys. Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=72573+0+/usr/local/www/mailindex/archive/2020/freebsd-ruby/20201129.freebsd-ruby
2021-11-27T15:39:44
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
: - A supported Python version with development headers - HDF5 1.8.4 or newer with development headers - A C compiler On Unix platforms, you also need pkg-config unless you explicitly specify a path for HDF5 as described in Custom installation. There are notes below on installing HDF5, Python and a C compiler on different platforms. Building h5py also requires several Python packages, but in most cases pip will automatically install these in a build environment for you, so you don’t need to deal with them manually. See Development installation for a list. The actual installation of h5py should be done via: $ pip install --no-binary=h5py h5py or, from a tarball or git checkout: $ pip install -v . Development installation¶ When modifying h5py, you often want to reinstall it quickly to test your changes. To benefit from caching and use NumPy & Cython from your existing Python environment, run: $ H5PY_SETUP_REQUIRES=0 python3 setup.py build $ python3 -m pip install . --no-build-isolation For convenience, these commands are also in a script dev-install.sh in the h5py git repository. This skips setting up a build environment, so you should have already installed Cython, NumPy, pkgconfig (a Python interface to pkg-config) and mpi4py (if you want MPI integration - see Building against Parallel HDF5). See setup.py for minimum versions. This will normally rebuild Cython files automatically when they change, but sometimes it may be necessary to force a full rebuild. The easiest way to achieve this is to discard everything but the code committed to git. In the root of your git checkout, run: $ git clean -xfd Then build h5py again as above.. Downstream packagers¶ If you are building h5py for another packaging system - e.g. Linux distros or packaging aimed at HPC users - you probably want to satisfy build dependencies from your packaging system. To build without automatically fetching dependencies, use a command like: H5PY_SETUP_REQUIRES=0 pip install . --no-deps --no-build-isolation Depending on your packaging system, you may need to use the --prefix or --root options to control where files get installed. h5py’s Python packaging has build dependencies on the oldest compatible versions of NumPy and mpi4py. You can build with newer versions of these, but the resulting h5py binaries will only work with the NumPy & mpi4py versions they were built with (or newer). Mpi4py is an optional dependency, only required for Parallel HDF5 features. You should also look at the build options under Custom installation. as environment variables when you build it from source: $ The supported build options are: - To specify where to find HDF5, use one of these options: HDF5_LIBDIRand HDF5_INCLUDEDIR: the directory containing the compiled HDF5 libraries and the directory containing the C header files, respectively. HDF5_DIR: a shortcut for common installations, a directory with liband includesubdirectories containing compiled libraries and C headers. HDF5_PKGCONFIG_NAME: A name to query pkg-configfor. If none of these options are specified, h5py will query pkg-configby default for hdf5, or hdf5-openmpiif building with MPI support. HDF5_MPI=ONto build with MPI integration - see Building against Parallel HDF5. HDF5_VERSIONto force a specified HDF5 version. In most cases, you don’t need to set this; the version number will be detected from the HDF5 library. H5PY_SYSTEM_LZF=1to build the bundled LZF compression filter (see Filter pipeline) against an external LZF library, rather than using the bundled LZF C code. 
Building h5py in MPI mode (see Building against Parallel HDF5) can be done by setting the HDF5_MPI environment variable: $ export CC=mpicc $ export HDF5_MPI="ON" $ pip install --no-binary=h5py h5py You will need a shared-library build of Parallel HDF5 as well, i.e. built with ./configure --enable-shared --enable-parallel.
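After installing, a quick way to confirm what you actually built, for example whether the MPI features described above made it in, is to query the library from Python. This is a small sanity check; the parallel file open at the end assumes mpi4py and an MPI-enabled build, so treat it as optional.
import h5py

print(h5py.version.info)        # summarizes the h5py, HDF5, and NumPy versions in this build
print(h5py.get_config().mpi)    # True only if the build used HDF5_MPI="ON"

# Optional: only meaningful for an MPI-enabled build, run under mpiexec
# from mpi4py import MPI
# with h5py.File("parallel_test.h5", "w", driver="mpio", comm=MPI.COMM_WORLD) as f:
#     f.create_dataset("x", (4,), dtype="i4")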
https://docs.h5py.org/en/stable/build.html
2021-11-27T15:17:41
CC-MAIN-2021-49
1637964358189.36
[]
docs.h5py.org
Themes This help article demonstrates, step by step, how to customize the Metro theme for RadSparkLine. - Open Visual Style Builder. - Export the built-in themes to a specific folder by selecting File >> Export Built-in Themes. - Load the desired theme from the just-exported files by selecting File >> Open Package. Expand RadSparkLine and select the SparkLineSeries. In the Elements window, navigate to the HighPointBackColor property and change its value. Change the HighPointBorderColor as well. The image below shows the result. The following article shows how you can use the new theme: Using Custom Themes.
https://docs.telerik.com/devtools/winforms/controls/sparkline/customizing-appearance/themes
2021-11-27T14:03:48
CC-MAIN-2021-49
1637964358189.36
[]
docs.telerik.com
actionlib_msgs /GoalStatus Message File: actionlib_msgs/GoalStatus.msg Raw Message Definition GoalID goal_id uint8 status uint8 PENDING = 0 # The goal has yet to be processed by the action server uint8 ACTIVE = 1 # The goal is currently being processed by the action server uint8 PREEMPTED = 2 # The goal received a cancel request after it started executing # and has since completed its execution (Terminal State) uint8 SUCCEEDED = 3 # The goal was achieved successfully by the action server (Terminal State) uint8 ABORTED = 4 # The goal was aborted during execution by the action server due # to some failure (Terminal State) uint8 REJECTED = 5 # The goal was rejected by the action server without being processed, # because the goal was unattainable or invalid (Terminal State) uint8 PREEMPTING = 6 # The goal received a cancel request after it started executing # and has not yet completed execution uint8 RECALLING = 7 # The goal received a cancel request before it started executing, # but the action server has not yet confirmed that the goal is canceled uint8 RECALLED = 8 # The goal received a cancel request before it started executing # and was successfully cancelled (Terminal State) uint8 LOST = 9 # An action client can determine that a goal is LOST. This should not be # sent over the wire by an action server #Allow for the user to associate a string with GoalStatus for debugging string text Compact Message Definition uint8 PENDING=0 uint8 ACTIVE=1 uint8 PREEMPTED=2 uint8 SUCCEEDED=3 uint8 ABORTED=4 uint8 REJECTED=5 uint8 PREEMPTING=6 uint8 RECALLING=7 uint8 RECALLED=8 uint8 LOST=9 actionlib_msgs/GoalID goal_id uint8 status string text autogenerated on Sun, 05 Oct 2014 23:11:20
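In Python (rospy), these constants are the usual way to interpret what an action client reports back. The sketch below assumes a ROS 1 workspace with actionlib and the actionlib_tutorials Fibonacci server available, so adjust the action name and type to your own setup.
import rospy
import actionlib
from actionlib_msgs.msg import GoalStatus
from actionlib_tutorials.msg import FibonacciAction, FibonacciGoal  # example action type

rospy.init_node("goal_status_demo")
client = actionlib.SimpleActionClient("fibonacci", FibonacciAction)
client.wait_for_server()

client.send_goal(FibonacciGoal(order=5))
client.wait_for_result()

state = client.get_state()                       # returns one of the GoalStatus constants
if state == GoalStatus.SUCCEEDED:                # 3: terminal state, achieved successfully
    rospy.loginfo("goal succeeded")
elif state in (GoalStatus.PREEMPTED, GoalStatus.ABORTED, GoalStatus.REJECTED):
    rospy.logwarn("goal ended in terminal state %d", state)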
http://docs.ros.org/en/groovy/api/actionlib_msgs/html/msg/GoalStatus.html
2021-11-27T13:59:14
CC-MAIN-2021-49
1637964358189.36
[]
docs.ros.org
Date: Mon, 07 Dec 2020 19:34:52 +0000 From: [email protected] To: [email protected] Subject: You get my email Message-ID: <[email protected]> Next in thread | Raw E-Mail | Index | Archive | Help I've invited you to fill in the following form: Untitled form To fill it out, visit: Google Forms: Create and analyse surveys. Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=35539+0+/usr/local/www/mailindex/archive/2020/freebsd-elastic/20201213.freebsd-elastic
2021-11-27T13:49:18
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
Help screens 3.5 (幫助畫面 3.5) From Joomla! Documentation Joomla! 3.5 This is a category page for topics related to the Help screens for the Joomla! 3.5.x series. The Documentation Working Group maintains a control page for these help screens. To appear on this page, each topic page should have the following code inserted at the end: [[Category:Help screen 3.5]] Pages in category "Help screen 3.5/zh-tw" The following 5 pages are in this category, out of 5 total.
https://docs.joomla.org/Category:Help_screen_3.5/zh-tw
2021-11-27T15:31:46
CC-MAIN-2021-49
1637964358189.36
[]
docs.joomla.org
OXID eShop Component¶ OXID eShop component is a simple way for a project to add reusable code to the application via composer packages. You can write classes that have new or extended functionality and you may wire these classes together in your composer package by using the Service Container. In contrast to modules, components do not need to be activated but just installed by composer. How it works¶ On installation the OXID composer plugin will include your components services.yaml file in a file named generated_services.yaml that is read when the DI container is assembled. You will find this file in var/generated but you should not alter it manually.
https://docs.oxid-esales.com/developer/en/latest/development/modules_components_themes/component.html
2021-11-27T14:51:26
CC-MAIN-2021-49
1637964358189.36
[]
docs.oxid-esales.com
Run on YARN¶ Mars can be deployed on YARN clusters. You can use mars.deploy.yarn to start Mars clusters in Hadoop environments. Basic steps¶ Mars uses Skein to deploy itself into YARN clusters. This library bridges Java interfaces of YARN applications and Python interfaces. Before starting Mars in YARN, you need to check your environments first. As Skein supports Linux only, you need to work on a Linux client, otherwise you need to fix and compile a number of packages yourself. Skein library is also needed on client side. You may install Skein with conda conda install -c conda-forge skein or install with pip pip install skein Then you need to check Python environment inside your cluster. If you have a Python environment installed within your YARN nodes with every required packages installed, it will save a lot of time for you to start your cluster. Otherwise you need to pack your local environment and specify it to Mars. You may use conda-pack to pack your environment when you are using Conda: conda activate local-env conda install -c conda-forge conda-pack conda-pack or use venv-pack to pack your environment when you are using virtual environments: source local-env/bin/activate pip install venv-pack venv-pack Both commands will create a tar.gz archive, and you can use it when deploying your Mars cluster. Then it is time to start your Mars cluster. Select different lines when you are starting from existing a conda environment, virtual environment, Python executable or pre-packed environment archive: import os from mars.deploy.yarn import new_cluster # specify location of Hadoop and JDK on client side os.environ['JAVA_HOME'] = '/usr/lib/jvm/java-1.8.0-openjdk' os.environ['HADOOP_HOME'] = '/usr/local/hadoop' os.environ['PATH'] = '/usr/local/hadoop:' + os.environ['PATH'] # use a conda environment at /path/to/remote/conda/env cluster = new_cluster(environment='conda:///path/to/remote/conda/env') # use a virtual environment at /path/to/remote/virtual/env cluster = new_cluster(environment='venv:///path/to/remote/virtual/env') # use a remote python executable cluster = new_cluster(environment='python:///path/to/remote/python') # use a local packed environment archive cluster = new_cluster(environment='path/to/local/env/pack.tar.gz') # get web endpoint, may be used elsewhere print(cluster.session.endpoint) # new cluster will start a session and set it as default one # execute will then run in the local cluster a = mt.random.rand(10, 10) a.dot(a.T).execute() # after all jobs executed, you can turn off the cluster cluster.stop() Customizing cluster¶ new_cluster function provides several keyword arguments for users to define the cluster. You may use the argument app_name to customize the name of the Yarn application, or use the argument timeout to specify timeout of cluster creation. Arguments for scaling up and out of the cluster are also available. Arguments for supervisors: Arguments for workers: For instance, if you want to create a Mars cluster with 1 supervisor and 100 workers, each worker has 4 cores and 16GB memory, and stop waiting when 95 workers are ready, you can use the code below: import os from mars.deploy.yarn import new_cluster os.environ['JAVA_HOME'] = '/usr/lib/jvm/java-1.8.0-openjdk' os.environ['HADOOP_HOME'] = '/usr/local/hadoop' cluster = new_cluster('path/to/env/pack.tar.gz', supervisor_num=1, web_num=1, worker_num=100, worker_cpu=4, worker_mem='16g', min_worker_num=95)
https://docs.pymars.org/en/latest/installation/yarn.html
2021-11-27T15:02:59
CC-MAIN-2021-49
1637964358189.36
[]
docs.pymars.org
Asynchronous Image Load in WinExplorer and Tile Views - 8 minutes to read Displaying images may take quite a lot of time, especially if there are many images and they are large. The Asynchronous Image Load feature allows you to: - improve the Data Grid’s performance when showing large images on the initial Data Grid load and on scrolling through records. - automatically create and cache thumbnails (small versions of the original images) - manually create thumbnails, using a dedicated event, even if there are no images in the source. When you enable the async image load, the View displays textual data for the currently visible records immediately. Images are displayed for these records one by one, asynchronously, in a background thread. Important The async image load is performed in a non-UI thread. Important The async image load feature is not supported when the Data Grid is bound to a BindingSource component. This component may fail when one tries to obtain its data from a different thread. Source Images and Their Thumbnails Every time a source image needs to be displayed within a certain viewport, this image is scaled first to create a thumbnail. If the source image is large, scaling this image down requires some time. Thus, it is best to create thumbnails once and then re-use these thumbnails on re-displaying the records. Generating and caching thumbnails is only supported by the Data Grid when the async image load is enabled. When this feature is disabled, a thumbnail is generated for a source image every time a record needs to be displayed and re-displayed. No thumbnail caching is available in this mode. Tip Besides images provided by a data source, you can provide images for the Data Grid Views using unbound columns. View Properties to Provide Source Images for Asynchronous Load In the WinExplorerView, async image load is supported for all images displayed in View records. The following properties specify display images in this View: - WinExplorerViewColumns.SmallImageColumn (accessible from the WinExplorerView.ColumnSet object) - WinExplorerViewColumns.MediumImageColumn - WinExplorerViewColumns.LargeImageColumn - WinExplorerViewColumns.ExtraLargeImageColumn In the TileView, the async image load feature is supported for tile background images. Use the following property to provide these background images. - TileViewColumns.BackgroundImageColumn (accessible from the TileView.ColumnSet object) Refer to the following sections to learn about typical async image load scenarios. Scenario 1: You need to generate thumbnails automatically for existing source images It is assumed that the View is aware of an image column that contains source images. You assigned this image column(s) using the property(s) listed in the View Properties to Provide Source Images for Asynchronous Load section above. Do the following: - Enable async image load with the OptionsImageLoad.AsyncLoad setting. - Set the View’s OptionsImageLoad.LoadThumbnailImagesFromDataSource property to false. - Optionally, set the required thumbnail size using the OptionsImageLoad.DesiredThumbnailSize property. When the LoadThumbnailImagesFromDataSource property is set to false, the Data Grid assumes that the source image column contains large images (not small thumbnails). The Data Grid then automatically generates thumbnails of the size specified by the DesiredThumbnailSize setting, and caches them, provided that caching is enabled (the OptionsImageLoad.CacheThumbnails setting). 
The next time a record is re-displayed, the View shows the previously generated thumbnail. winExplorerView1.OptionsImageLoad.AsyncLoad = true; winExplorerView1.OptionsImageLoad.LoadThumbnailImagesFromDataSource = false; winExplorerView1.OptionsImageLoad.DesiredThumbnailSize = new Size(48, 48); WinExplorer View API: - WinExplorerViewOptionsImageLoad.AsyncLoad (accessible from WinExplorerView.OptionsImageLoad) - WinExplorerViewOptionsImageLoad.LoadThumbnailImagesFromDataSource - WinExplorerViewOptionsImageLoad.DesiredThumbnailSize - WinExplorerViewOptionsImageLoad.CacheThumbnails Tile View API: - TileViewOptionsImageLoad.AsyncLoad (accessible from TileView.OptionsImageLoad) - TileViewOptionsImageLoad.LoadThumbnailImagesFromDataSource - TileViewOptionsImageLoad.DesiredThumbnailSize - TileViewOptionsImageLoad.CacheThumbnails Scenario 2: You need to generate thumbnails manually (a source image column is specified). It is assumed that the View is aware of an image column that can contain source images. You assigned this image column(s) to the View using the property(s) listed in the View Properties to Provide Source Images for Asynchronous Load section above. The specified image column can contain images or links to images in all, none or only several records. Do the following: - Enable async image load with the OptionsImageLoad.AsyncLoad setting. - Set the View’s OptionsImageLoad.LoadThumbnailImagesFromDataSource property to false. Handle the View’s GetThumbnailImage event to supply custom thumbnail images. The GetThumbnailImage event will fire for each record, regardless of whether it contains an image in the source image column or not.OptionsImageLoad.LoadThumbnailImagesFromDataSource - WinExplorerView.GetThumbnailImage - WinExplorerViewOptionsImageLoad.CacheThumbnails Tile View API: - TileViewOptionsImageLoad.AsyncLoad - TileViewOptionsImageLoad.LoadThumbnailImagesFromDataSource - TileView.GetThumbnailImage - TileViewOptionsImageLoad.CacheThumbnails TileView Example This example shows how to manually generate custom tile background images (thumbnails) in Tile View and display them asynchronously. The TileView is bound to a list that contains texture names. We need to create custom background thumbnails for all tiles based on corresponding texture names, and display these images asynchronously.Thumbnails are generated using the GetThumbnailImage event. The async image load is enabled with the AsyncLoad setting. using DevExpress.XtraGrid.Views.Tile; using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Drawing.Drawing2D; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Windows.Forms; namespace TileView_ManualThumbs { public partial class Form1 : Form { public Form1() { InitializeComponent(); } List<Texture> textures; private void Form1_Load(object sender, EventArgs e) { InitData(); gridControl1.DataSource = textures; tileView1.OptionsTiles.ItemSize = new Size(90, 40); tileView1.GetThumbnailImage += TileView1_GetThumbnailImage; // Specify a column that provides information on images to render. tileView1.ColumnSet.BackgroundImageColumn = colName; tileView1.OptionsImageLoad.RandomShow = true; tileView1.OptionsImageLoad.LoadThumbnailImagesFromDataSource = false; // Enable async image load. 
tileView1.OptionsImageLoad.AsyncLoad = true; } private void TileView1_GetThumbnailImage(object sender, DevExpress.Utils.ThumbnailImageEventArgs e) { string colorName = textures[e.DataSourceIndex].Name; //Generate a thumbnail for the current record. Bitmap image = new Bitmap(e.DesiredThumbnailSize.Width, e.DesiredThumbnailSize.Height); Graphics graphics = Graphics.FromImage(image); Color tileColor = Color.FromName(colorName); GraphicsUnit grUnit = GraphicsUnit.Pixel; RectangleF imageRect = image.GetBounds(ref grUnit); LinearGradientBrush brush = new LinearGradientBrush(imageRect, Color.White, Color.White, 45, false); ColorBlend cblend = new ColorBlend(4); cblend.Colors = new Color[4] { Color.White, tileColor, tileColor, Color.White}; cblend.Positions = new float[4] { 0f, 0.5f, 0.7f, 1f }; brush.InterpolationColors = cblend; graphics.FillRectangle(brush, imageRect); e.ThumbnailImage = image; brush.Dispose(); } private void InitData() { textures = new List<Texture>(); System.Array colorsArray = Enum.GetNames(typeof(KnownColor)); foreach(var colorName in colorsArray ) { textures.Add(new Texture(colorName.ToString())); } } } public class Texture { public Texture(string name) { this.Name = name; } public string Name { get; set; } } } Scenario 3: You need to generate thumbnails manually (a source image column is not specified) Important This use case is only supported by the WinExplorer View. It is assumed that the View is not aware of any image column that may contain source images: no image column is specified using the property(s) listed in the View Properties to Provide Source Images for Asynchronous Load section above. Do the following: - Enable async image load with the OptionsImageLoad.AsyncLoad setting. Handle the GetThumbnailImage event to create thumbnails manually and supply them to the View. This event will fire on demand, so you can create and supply a thumbnail for each record..GetThumbnailImage - WinExplorerViewOptionsImageLoad.CacheThumbnails Additional Settings and Events AnimationType - Specifies the animation effect for displaying thumbs. By default, it equals AnimationType.None, which means thumbs will instantly appear when ready. You can select other animation types, such as segmented fade (in the animation below) or push. - RandomShow - Specifies whether or not grid records load their thumbs in random order. GetLoadingImage event - Allows you to set a custom loading indicator, displayed while the thumbnail image is being created. The default loading indicator depends on the currently applied application skin. The following image demonstrates a custom loading indicator.
https://docs.devexpress.com/WindowsForms/17542/controls-and-libraries/data-grid/asynchronous-image-load/asynchronous-image-load-in-winexplorer-and-tile-views
2021-11-27T15:18:07
CC-MAIN-2021-49
1637964358189.36
[array(['/WindowsForms/images/winexplorerview-async-image-loading24536.png', 'WinExplorerView - Async Image Loading'], dtype=object) array(['/WindowsForms/images/tileview-asyncbackgroundimages.png129132.png', 'TileView-asyncbackgroundimages.png'], dtype=object) ]
docs.devexpress.com
Date: Wed, 23 Jun 1999 10:50:40 -0400 From: "Christopher J. Michaels" <[email protected]> To: <[email protected]>, <[email protected]> Subject: RE: /dev/bpf0, modload ? Message-ID: <[email protected]> In-Reply-To: <[email protected]> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help -----Original Message----- From: [email protected] [mailto:[email protected]]On Behalf Of [email protected] Sent: Wednesday, June 23, 1999 7:06 AM To: [email protected] Cc: [email protected] Subject: /dev/bpf0, modload ? Trying to build a shadow intrusion detector on FreeBSD 2.2.8. It relies upon several pieces ( ) which are libpcap, a BPF interface, and tcpdump .. WHICH someone here prolly knows is dependent upon /dev/bpfN .. That is good for the experienced kernel savvy folk .. but I have to plead ignorance . . I remember that my F.BSD 2.0.5 did NOT as I got it support BPF .. so I will guess when I ls -l /dev/bpf0 and find a device present but try to run tcpdump (as root ) and get a tcpdump: /dev/bpf0: Device not configured message .. I will guess I need to find some knowledgebase docs on how to rebuild the kernel to include the /dev/bpfN .. NOT too obvious from /sys/...conf/GENERIC and friends ... True but if you look in /src/src/sys/i386/conf/LINT, it IS in there. pseudo-device bpfilter 4 #Berkeley packet filter ^^- That's all you need to add to the kernel config. DONT suppose I can modload what I need ? Nope... SO .. PLEASE send me to the right hacks list .. thanks /Everett/ To Unsubscribe: send mail to [email protected] with "unsubscribe freebsd-questions" in the body of the message Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=1051968+0+/usr/local/www/mailindex/archive/1999/freebsd-questions/19990627.freebsd-questions
2021-11-27T15:39:35
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
Date: Sun, 18 Oct 1998 00:15:30 -0400 From: "Management" <[email protected]> To: <[email protected]> Subject: Can't Connect to ISP! Message-ID: <000001bdfa4d$f1e849c0$2f1dd6d1@moon> Next in thread | Raw E-Mail | Index | Archive | Help I try to connect to my ISP but don't think that it connects. I mean it dials, it picks up and asks me for my password etc., but when I go to, say, your FTP site to download Netscape, it says something like "error something" and I can't do anything online. Is there something that I missed while I installed it? I mean I really don't know what the @!#%$! is going on. So do you think that you can help me with solving this problem? I would really appreciate it if you would help... thanks, Anthony e-mail= [email protected] don't send to [email protected] To Unsubscribe: send mail to [email protected] with "unsubscribe freebsd-questions" in the body of the message Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=2554237+0+/usr/local/www/mailindex/archive/1998/freebsd-questions/19981011.freebsd-questions
2021-11-27T15:13:32
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
Date: Tue, 24 Apr 2001 09:16:56 +0400 From: "Chernomordin Roman" <[email protected]> To: =?koi8-r?B?U3Vic2NyaXB0aW9uICj8zC4g0M/e1MEp?= <[email protected]> Subject: Multihome system Message-ID: <[email protected]> Next in thread | Raw E-Mail | Index | Archive | Help I am a beginner in FreeBSD. Please help me. Can FreeBSD have several IP addresses on one network card, and if it is possible, how can I do it or where can I find information about it? Thank you in advance To Unsubscribe: send mail to [email protected] with "unsubscribe freebsd-questions" in the body of the message Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=952543+0+/usr/local/www/mailindex/archive/2001/freebsd-questions/20010429.freebsd-questions
2021-11-27T15:41:40
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
https://docs.genesys.com/Documentation/RN/8.5.x/gvp-ctic85rn/gvp-ctic851
2021-11-27T13:43:04
CC-MAIN-2021-49
1637964358189.36
[]
docs.genesys.com
Crate solana_libra_types See all solana_libra_types's items pub use account_address::AccountAddress as PeerId; Suppose we have the following data structure in a smart contract: This module defines a data structure made to contain a cryptographic signature, in the sense of an implementation of solana_libra_crypto::traits::Signature. The container is an opaque NewType that intentionally does not allow access to the inner impl. An identifier is the name of an entity (module, resource, function, etc) in Move. For each transaction the VM executes, the VM will output a WriteSet that contains each access path it updates. For each access path, the VM can either give its new value or delete it. WriteSet
https://docs.rs/solana_libra_types/0.0.1-sol5/solana_libra_types/
2021-11-27T15:29:46
CC-MAIN-2021-49
1637964358189.36
[]
docs.rs
RStudio Professional Drivers RStudio makes it easy to connect to your data. RStudio Professional Drivers are ODBC data connectors for some of the most popular databases, including Athena, BigQuery, Cassandra, Hive, Impala, MongoDB, MySQL, Netezza, Oracle, PostgreSQL, Redshift, Teradata, Salesforce, Snowflake, and SQL Server. RStudio Professional Drivers help you: - Explore your databases using RStudio Workbench - Develop and deploy applications and reports that depend on databases to RStudio Connect - Use R with databases in the cloud or your production environment RStudio offers ODBC drivers for many common data sources at no additional cost to current paying customers. These drivers are commercially licensed from Magnitude Simba and are covered by the RStudio Support Program. RStudio professional drivers may only be used with other RStudio professional software and may not be used on a standalone basis or with other software. Use RStudio professional drivers with the following products: - RStudio Team bundle - RStudio Desktop Pro - RStudio Workbench - RStudio Connect - Shiny Server Pro We do not sell or offer RStudio Professional Drivers for use with our free and open-source server or desktop software, and our license doesn't permit the drivers to be used with these products. Additionally, the RStudio Professional Drivers are tested against the R ODBC connection toolchain, namely the DBI and odbc R packages. Currently, RStudio Professional Drivers are tested against pyodbc in Python.
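On the Python side mentioned above, connecting through one of these drivers typically goes through pyodbc against a DSN defined in odbc.ini. A minimal sketch follows; the DSN name, credentials, and query are placeholders, not part of the official documentation.
import pyodbc

# "PostgresProd" is a hypothetical DSN that points at one of the professional drivers
conn = pyodbc.connect("DSN=PostgresProd;UID=analyst;PWD=secret", autocommit=True)
cursor = conn.cursor()

cursor.execute("SELECT 1 AS ok")   # trivial round-trip; adjust the query for your dialect
print(cursor.fetchone()[0])

conn.close()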
https://docs.rstudio.com/pro-drivers/
2021-11-27T14:45:05
CC-MAIN-2021-49
1637964358189.36
[array(['/images/driver-logos/athena.png', 'Athena'], dtype=object) array(['/images/driver-logos/bigquery.png', 'Big Query'], dtype=object) array(['/images/driver-logos/cassandra.png', 'Cassandra'], dtype=object) array(['/images/driver-logos/hive.png', 'Hive'], dtype=object) array(['/images/driver-logos/impala.png', 'Impala'], dtype=object) array(['/images/driver-logos/mongo.png', 'Mongo DB'], dtype=object) array(['/images/driver-logos/mysql.png', 'MySQL'], dtype=object) array(['/images/driver-logos/netezza.jpg', 'Netezza'], dtype=object) array(['/images/driver-logos/oracle.png', 'Oracle'], dtype=object) array(['/images/driver-logos/postgresql.jpg', 'PostgresSQL'], dtype=object) array(['/images/driver-logos/redshift.png', 'Redshift'], dtype=object) array(['/images/driver-logos/teradata.png', 'Teradata'], dtype=object) array(['/images/driver-logos/salesforce.png', 'Salesforce'], dtype=object) array(['/images/driver-logos/snowflake.png', 'Snowflake'], dtype=object) array(['/images/driver-logos/sqlserver.png', 'SQL Server'], dtype=object)]
docs.rstudio.com
Install Devo Relay on an Ubuntu box This article takes you step by step through the installation of Devo Relay on a machine running Ubuntu. We will install the most recent version of Devo Relay (v2.1.0) using a .deb package that resides in Devo public repository. Relay migration Before you begin Make sure you can provide a machine with the requirements specified in this article. Installing Devo Relay Follow these instructions to install the .deb package that contains the relay. Note that the .deb package is certified for Ubuntu 18 (Bionic) and Ubuntu 20 (Focal Fossa). Important: Java versions Please note that the devo-ng-relay and CLI require Java 17 to work. Other Java versions are not supported. The devo-ng-relay package includes this required Java version (Java 17) so please consider uninstalling any other version you may have, or use the Linux command update-alternatives to make Java 17 the default Java. Import the Devo repository public key: wget -qO - | sudo apt-key add - echo "deb bionic devo" | sudo tee /etc/apt/sources.list.d/devo.list Update the resources list. sudo apt-get update Install the relay package and the relay command-line interface (CLI) using this command: sudo apt-get install devo-ng-relay Optionally, you can install the devo-monitor package that installs scripts that monitor machine status (CPU, memory, IO traffic) so their values can be sent to the Devo endpoint. The events collected by this package will be available in the box.stat.unix.*tables of your Devo domain. You can install this package using the following command: sudo apt install devo-monitor It is highly recommended that you install the devo-monitor package on your machine since it will help in case you need to troubleshoot your relay. Finally, you must configure your relay and then activate it on the Devo platform. See the Devo Relay setup process in Set up your relay. You can relaunch the setup process at any time after the installation if needed. Related articles Labels - latest
https://docs.devo.com/confluence/ndt/latest/sending-data-to-devo/devo-relay/installing-devo-relay/install-devo-relay-on-an-ubuntu-box
2022-05-16T08:36:30
CC-MAIN-2022-21
1652662510097.3
[]
docs.devo.com
The j5 Work Instructions application consists of two modules, namely j5 Work Instructions and j5 Work Planning. For the j5 Work Instructions module, the user group permissions are as follows: Any user in the same area as a work instruction can edit the work instruction, regardless of their user rights group or the user rights group assigned to the work instruction. For the above table: Team - Users have this permission for all entries in their assigned area that were assigned to their user rights group. For the j5 Work Planning module, the user group permissions are as follows: In addition to the above, Power Users have access to the Work Categories configuration module.
https://docs.hexagonppm.com/r/en-US/j5-Shift-Operations-Management-Help/Version-28.0/1046089
2022-05-16T08:42:00
CC-MAIN-2022-21
1652662510097.3
[]
docs.hexagonppm.com
Job Manager enables users to customize the home pages in Visual Composer. You can easily customize and create new pages using the existing widgets and shortcodes by following the few steps discussed below. Step 01 Go to the admin dashboard. Step 02 Go to Pages from the left menu. Step 03 Add a new page. Here you have a page title field; add a page title. Below it, Visual Composer offers several options, such as adding elements, adding templates, and adding a text block. To add a new element, click Add Element. A pop-up will appear when you click Add Element; it shows all the existing elements that can be added to your page. There is a menu at the top, organized by category. Select Job Manager from this menu. Step 04 Click Job Manager in the menu. The 23 widgets discussed below can then be added to your page. You can select one element at a time. Another pop-up appears when you select an element; fill in the requested fields and then save your changes. The element is now added to the page. You can add as many elements as you like, and each one can be edited, deleted, or replaced with a new one. After all this, simply publish the page by clicking the button in the right-side menu. Final Step After publishing the page, a View Page option will appear on the screen. Click it to view your page.
https://docs.joomsky.com/jsjobmanager/basics/customhomepageandwidgets
2022-05-16T08:13:37
CC-MAIN-2022-21
1652662510097.3
[]
docs.joomsky.com
You can add an existing Kubernetes cluster and then manage it using KKP. From the Clusters page, click External Clusters. Click the Add External Cluster button and pick the Elastic Kubernetes Engine provider. Select a preset with valid credentials or enter the EKS Access Key ID, Secret Access Key, and Region to connect to the provider. You should see the list of all available clusters in the region specified. Click on Machine Deployments to get the details. To upgrade, click on the little dropdown arrow beside the Control Plane Version on the cluster’s page and specify the version. For more details about the Kubernetes versions available on EKS, see Amazon EKS Kubernetes versions. If the upgrade version provided is not supported by EKS, the upgrade cannot proceed. The KKP platform allows you to get a kubeconfig file for the EKS cluster. The end-user must be aware that the kubeconfig expires after a short period of time. It’s recommended to create your kubeconfig file with the AWS CLI. The AWS CLI uses credentials and configuration settings located in multiple places, such as the system or user environment variables, local AWS configuration files, or explicitly declared on the command line as a parameter. The AWS CLI stores sensitive credential information that you specify with aws configure in a local file named credentials, in a folder named .aws in your home directory. The less sensitive configuration options that you specify with aws configure are stored in a local file named config, also stored in the .aws folder in your home directory. Example: ~/.aws/credentials [default] aws_access_key_id=AKIAIOSFODNN7EXAMPLE aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY Now you can create your kubeconfig file automatically using the following command: aws eks update-kubeconfig --region region-code --name cluster-name By default, the resulting configuration file is created at the default kubeconfig path (.kube/config) in your home directory or merged with an existing kubeconfig file at that location. You can specify another path with the --kubeconfig option.
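Before pasting the Access Key ID, Secret Access Key, and Region into the Add External Cluster dialog described above, it can help to confirm that those credentials actually see your EKS clusters. A small boto3 sketch follows; the key values and region are placeholders.
import boto3

session = boto3.Session(
    aws_access_key_id="AKIA...",            # EKS Access Key ID (placeholder)
    aws_secret_access_key="...",            # Secret Access Key (placeholder)
    region_name="eu-central-1",             # the Region you will enter in KKP
)
eks = session.client("eks")

clusters = eks.list_clusters()["clusters"]  # names that KKP should also be able to list
print(clusters)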
https://docs.kubermatic.com/kubermatic/v2.20/tutorials_howtos/external_clusters/eks/
2022-05-16T09:02:19
CC-MAIN-2022-21
1652662510097.3
[]
docs.kubermatic.com
Notes This release of the Python agent reports error events to Insights and captures enhanced error data to support the new Advanced Error Analytics feature in APM. The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site. For a list of known issues with the Python agent, see Status of the Python agent. New Feature - Error Events The Python agent now sends TransactionError events for Advanced Error Analytics, which power the new APM Errors functionality (currently in Beta). This allows users to create charts that facet and filter their error data by attributes, as well as explore their error events in Insights. For details, see the APM Errors documentation. Changed Feature - Additional Attributes collected The agent now collects additional attributes for web transactions: - HTTP request headers: Hostand Accept - HTTP response header : Content-Length Bug Fix - Improved unicode support for exception messages Unicode exception messages will still be preserved, even if sys.setdefaultencoding() has been called to change the default encoding.
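As a rough illustration of how a handled exception ends up as one of these error events, here is a sketch using the agent's Python API. The config path and task name are placeholders, and since this API has evolved over time (newer agents use notice_error), double-check the call name against the docs for your agent version.
import newrelic.agent

newrelic.agent.initialize("newrelic.ini")      # placeholder path to your agent config

@newrelic.agent.background_task(name="error-event-demo")
def demo():
    try:
        1 / 0
    except ZeroDivisionError:
        # records the currently handled exception so it is reported as an error event
        newrelic.agent.record_exception()

demo()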
https://docs.newrelic.com/docs/release-notes/agent-release-notes/python-release-notes/python-agent-258043
2022-05-16T09:43:16
CC-MAIN-2022-21
1652662510097.3
[]
docs.newrelic.com
Notes This release of the Python agent enables Distributed Tracing by default and deprecates Cross Application Tracing. Install the agent using easy_install/pip/distribute via the Python Package Index or download it directly from the New Relic download site. New Features Reservoir sizes now configurable using settings and environment variables Reservoir sizes for span events, transaction events, error events, and custom events are now configurable via environment variables. These reservoirs limit both the maximum number of events that can be sent as well as local memory usage. The agent reservoir can be expanded to accommodate more spans in case more traces are needed or there are dropped spans. The default setting for event_harvest_config.harvest_limits.span_event_data has been increased from 1000 to 2000 for better performance. This variable can be increased up to a size of 10,000. These settings previously existed via the config file but were undocumented. For details, see the new documentation. Deprecations Cross Application Tracing is now deprecated, and disabled by default Distributed Tracing is replacing Cross Application Tracing as the default means of tracing between services. Cross Application Tracing will soon be removed entirely in a future release. The default setting for cross_application_tracer.enabled is now False, disabling Cross Application Tracing. To continue using it temporarily while transitioning to Distributed Tracing, enable it with cross_application_tracer.enabled = True and distributed_tracing.enabled = False. Changes Distributed Tracing is enabled by default The default setting for distributed_tracing.enabled is now True, enabling Distributed Tracing by default. To disable Distributed Tracing, please set the distributed_tracing.enabled setting to False.
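If you want to confirm which way these defaults landed in your own deployment, the agent exposes its merged settings at runtime. A small sketch follows; the ini path is a placeholder, and the attribute paths are assumed to mirror the dotted setting names above, so verify that mapping against the agent documentation.
import newrelic.agent

newrelic.agent.initialize("newrelic.ini")             # placeholder path
settings = newrelic.agent.global_settings()

print(settings.distributed_tracing.enabled)           # True by default from this release on
print(settings.cross_application_tracer.enabled)      # False unless explicitly re-enabled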
https://docs.newrelic.com/docs/release-notes/agent-release-notes/python-release-notes/python-agent-70000166
2022-05-16T09:30:13
CC-MAIN-2022-21
1652662510097.3
[]
docs.newrelic.com
SR-D80589 · Issue 544840 Check added before clearing Report definition custom filter section page Resolved in Pega Version 8.3.3 When using a Custom Section in the Report Viewer, the Page referred to at the Prefix was getting reset while running the report. Investigation showed the page was being reinitialized in pzCreateCustomFilterPage step 2, and this has been resolved by adding a 'when' rule for clearing the custom filter page. SR-D91038 · Issue 553163 Corrected report with combo chart in Case Manager portal Resolved in Pega Version 8.3.3 After adding the required columns a report in the report viewer and then adding a combo chart and dropping the summarized column on the y-axes and group by column on X-axis, clicking on "done editing" generated the error "pyUI.pyChart: You must have at least two Aggregate Columns in the chart series .pyUI.pyChart.pyDataAxis(1).pyChartOutputType: A Combo chart requires at least 1 Chart Type be a Column". Investigation showed that the second DataAxis page was getting deleted in the pzCleanChartDataAxis activity, causing the validation to fail. This has been resolved by adding a 'when' rule to "pzChartIsSingleY" that checks for "SingleYAxisClustered" chart and refers the same in pzCleanChartDataAxis to skip the data axis deletion. SR-D53176 · Issue 541792 Error when adding function filter will persist Resolved in Pega Version 8.3.3. SR-D75097 · Issue 539515 Improved handling against formula injection attacks in Export to ExcelJJ Resolved in Pega Version 8.3.383060 · Issue 547918 Repaired History class report column sorting Resolved in Pega Version 8.3.3 Attempting to sort any of the columns in a report using the History class did not render the results and the error "Cannot render the section" appeared. Tracer showed a Fail status for some out-of-the-box activities with the message "java.lang.StringIndexOutOfBoundsException". Investigation showed the logic in pzMergeAutoGenForProp activity was failing because the pyIsFunction property was not set on the UIField pages for function columns. To resolve this, the logic for pzMergeAutoGenForProp has been modified to get pyIsFunction from the field name. SR-D83373 · Issue 545750 Stage Label name displayed in chart Resolved in Pega Version 8.3.3 When pyCaseStatusControl was used, the cases label was displayed as $label instead of the Case Name. This was related to the version of Fusion Charts included, and has been resolved for this release by modifying library code in fusioncharts.js to fix the issue in datasetrollover listener code. Fusion Charts will be upgraded in v8.5 for a more complete solution to this issue. SR-D79796 · Issue 544947 Updates made for deprecated Fusion chart styles Resolved in Pega Version 8.3.3 Trying to change the background colors or font sizes for the values on the x-axis and y-axis in a report was not working. This was traced to Fusion deprecating the use of `<styles>` definitions with the introduction of JavaScript charts, and has been resolved by updating the code to compensate for this change. SR-D86864 · Issue 548092 Very long auto-generated index trimmed for use in Report Browser Resolved in Pega Version 8.3.3 The creation of a new report via the user report browser failed if there was an index with a long name (over 30 characters). The out-of-the-box method automatically generated the prefix, but the Report editor could not handle the very long declare index name and as a result did not consider properties from the embedded pages. 
To resolve this, pzUpdateAssociation and pzInsertNewReportColumn have been updated to trim the prefix for the declare index to 30 characters and allow for adding a new column to the report. This work does not cover adding a new filter to the report, as that fix would require substantial changes to reporting logic.
https://docs.pega.com/platform/resolved-issues?f%5B0%5D=%3A29991&f%5B1%5D=resolved_capability%3A9031&f%5B2%5D=resolved_capability%3A9041&f%5B3%5D=resolved_version%3A7091&f%5B4%5D=resolved_version%3A7106&f%5B5%5D=resolved_version%3A32621&f%5B6%5D=resolved_version%3A32691
2022-05-16T09:46:34
CC-MAIN-2022-21
1652662510097.3
[]
docs.pega.com
mars.tensor.logical_or - mars.tensor.logical_or(x1, x2, out=None, where=None, **kwargs)[source] Compute the truth value of x1 OR x2 element-wise. - Parameters x1 (array_like) – Logical OR is applied to the elements of x1 and x2. They have to be of the same shape. x2 (array_like) – Logical OR is applied to the elements of x1 and x2. They have to be of the same shape. - Returns y – Boolean result with the same shape as x1 and x2 of the logical OR operation on elements of x1 and x2. - Return type Tensor - See also logical_and, logical_not, logical_xor, bitwise_or Examples >>> import mars.tensor as mt >>> mt.logical_or(True, False).execute() True >>> mt.logical_or([True, False], [False, False]).execute() array([ True, False]) >>> x = mt.arange(5) >>> mt.logical_or(x < 1, x > 3).execute() array([ True, False, False, False, True])
https://docs.pymars.org/en/latest/reference/tensor/generated/mars.tensor.logical_or.html
2022-05-16T07:53:41
CC-MAIN-2022-21
1652662510097.3
[]
docs.pymars.org
Authentication Dothttp supports basic, digest and certificate auth natively. For certificate auth docs, visit this page. Redefining authentication for each request is a burden; with dothttp you can extend auth information from a base request. Check out more information on this here. #Basic Authentication Basic authentication is nothing but setting the header Authorization: <base64-encoded username:password>. dothttp provides a simple way to set basic authentication. Syntax: basicauth(<username>, <password>) #Example: #Digest Authentication Digest authentication is one of the most widely used authentication mechanisms. Syntax: digestauth(<username>, <password>) #AWS Signature v4 Authentication AWS Signature v4 authentication is used for interacting with Amazon AWS APIs. Syntax: awsauth(<accessId>, <secretKey>, <service>, <region>) #NTLM Authentication Windows NT LAN Manager (NTLM) is a challenge-response authentication protocol used to authenticate a client to a resource on an Active Directory domain. Syntax: ntlmauth(<username>, <password>)
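For readers more familiar with Python, the basic and digest flows above map directly onto the requests library. The snippet below is not dothttp syntax, just an equivalent illustration against the public httpbin test endpoints.
import requests
from requests.auth import HTTPBasicAuth, HTTPDigestAuth

# Basic auth: requests builds the base64-encoded Authorization header for you
r = requests.get("https://httpbin.org/basic-auth/user/passwd",
                 auth=HTTPBasicAuth("user", "passwd"))
print(r.status_code)   # 200 when the credentials are accepted

# Digest auth: the challenge/response handshake is handled transparently
r = requests.get("https://httpbin.org/digest-auth/auth/user/passwd",
                 auth=HTTPDigestAuth("user", "passwd"))
print(r.status_code)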
https://docs.dothttp.dev/docs/auth/
2022-05-16T08:40:28
CC-MAIN-2022-21
1652662510097.3
[]
docs.dothttp.dev