Progress® Telerik® Reporting R3 2021
AssemblyRefCollection.AddFromAssemblyPath Method
Adds the assembly referenced by assemblyPath, along with the rest of the .dll files located in the same directory.
Namespace: Telerik.Reporting
Assembly: Telerik.Reporting (in Telerik.Reporting.dll)
Parameters
- assemblyPath - Type: System.String. The path of the leading assembly.
Version Information
Supported in: 1.0.1
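A minimal usage sketch in C#, assuming a standalone collection and a hypothetical assembly path (in practice the collection usually comes from Telerik Reporting configuration rather than being constructed directly):

  // Hypothetical path; AddFromAssemblyPath also picks up every other
  // .dll that sits next to MyExpressions.dll in C:\MyReportLibs.
  var assemblies = new Telerik.Reporting.AssemblyRefCollection();
  assemblies.AddFromAssemblyPath(@"C:\MyReportLibs\MyExpressions.dll");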
https://docs.telerik.com/reporting/m-telerik-reporting-assemblyrefcollection-addfromassemblypath
New Features and Changes in Cloudera Data Science Workbench 1.7.1
Major features and updates for Cloudera Data Science Workbench.
- Supported upgrade paths to CDSW 1.7.1 - Cloudera Data Science Workbench only supports upgrades to version 1.7.1 from versions 1.5.x and 1.6.x. If you are using an earlier version of CDSW, you must first upgrade to version 1.5.x or 1.6.x, and then upgrade to version 1.7.1.
- Analytical Applications - Cloudera Data Science Workbench now gives data scientists a way to create long-running standalone ML web applications/dashboards that can easily be shared with other business stakeholders. Applications can range from single visualizations embedded in reports to rich dashboard solutions such as Tableau. Applications stand alongside other existing forms of workloads in CDSW (sessions, jobs, experiments, models). For details, see Analytical Applications.
- Monitoring CDSW with Grafana - CDSW now leverages Prometheus and Grafana to provide a dashboard that allows you to monitor how CPU, memory, storage, and other resources are being consumed by a CDSW deployment. For details, see Cluster Monitoring with Grafana.
- Feature flag overrides - This is a new property available in the CDSW service in Cloudera Manager. It can be used to enable or disable experimental features (such as quotas) and to disable metric collection in diagnostic bundles.
- Quotas - CDSW site administrators can now enable CPU, GPU, and memory usage quotas per user. You can set default quotas for each user on the deployment, as well as override them with custom quotas for specific users. For details, see Configuring Quotas.
- Usage Metrics Collection - By default, CDSW 1.7.1 now gathers highly redacted information on which features are being used on your deployment. When you create a diagnostic bundle, this information is packaged alongside the diagnostic information. You can use the Feature flag overrides property in Cloudera Manager to disable collection of usage metrics.
https://docs.cloudera.com/cdsw/1.10.0/release-notes/topics/cdsw-new-features-and-changes-in-cloudera-data-science-workbench-1-7-1.html
Requirements
macOS and Windows
There are no additional requirements for macOS and Windows users.
Linux systems
Leapp uses libsecret and gnome-keyring as dependencies to store all sensitive data in the keyring. Depending on your distribution, you may need to install them before running Leapp:

  sudo pacman -S gnome-keyring
  sudo pacman -S libsecret

  sudo apt-get install gnome-keyring
  sudo apt-get install libsecret-1-dev

  sudo yum install gnome-keyring
  sudo yum install libsecret-devel

Logging into EC2 Instances via AWS SSM with Leapp
To use AWS SSM on your system through Leapp, you must be able to execute the following command on your own at least once while suitable credentials are active:

  aws ssm start-session --region <region> --target <instanceId>

If for any reason this command fails, verify that Python 3.x is installed. Also verify that the AWS SSM Agent is installed correctly by following the official AWS guide.
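A quick way to sanity-check those prerequisites from a terminal (standard commands, none of them Leapp-specific):

  python3 --version        # verify Python 3.x is installed
  aws --version            # verify the AWS CLI itself is present
  session-manager-plugin   # verify the AWS Session Manager plugin is installed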
https://docs.leapp.cloud/installation/requirements/
MidPoint 4.3.1 "Faraday" Update 1 Release 4.3.1 is a thirty-sixth midPoint release. It is a first maintenance update for 4.3.x version family code-named Faraday. The 4.3.1 release brings miscellaneous bugfixes. Changes With Respect To Version 4.3 Miscellaneous bugfixes Changes With Respect To Version 4.2 New Features and Improvements Major features Preliminary results from MidScale project, bringing some performance and scalability improvements. Human-friendly query language (experimental) Significant improvement to password management page Diagnostic and visibility improvements Asynchronous (messaging) outbound resources (experimental) User interface improvements Flat list display for organizational units Improved display of disabled roles Sorting of values on preview page Improved customization of basic/advanced search Search through the entire role catalog (role request page) Support for sorting of custom columns Various minor usability and UX improvements Miscellaneous improvements Minor function library improvements Support for expressions in certification definition search filters Ability to filter/skip audit records by using custom expression Diagnostics, Visibility, Robustness Storing more operation errors in the objects Retries of failed synchronization operations (experimental) Improved handling of multi-node task status Minor visibility improvements to triggers, expressions, task progress, … Internals and Development Early prototype of scalable repository implementation based on PostgreSQL (experimental) Code cleanups and dependency upgrades Support for global namespace prefixes Numerous smaller performance improvements in various places in the system. Localization service passed to script evaluators Significant improvements to Schrodinger testing framework Support for storing objects in JSON in repository (experimental) Deprecation, Feature Removal And Incompatible Changes Use of HQL query language for audit log queries and dashboard widgets is deprecated since midPoint 4.2. Support of HQL for audit log queries and dashboard widgets is planned for removal in midPoint 4.4. Please use midPoint query language instead. Use of Jasper-based reports in midPoint is deprecated since midPoint 4.2 in favor of the new "native" reports. The support for Jasper-based report is planned for removal in midPoint 4.4, therefore it is recommended to migrate the reports as soon as possible. Support for specification of custom resource namespace ( namespaceitem in ResourceType) is deprecated. The support will be dropped in midPoint 4.4. Support for production deployments of midPoint in Microsoft Windows environment is deprecated. Microsoft Windows will still be supported for evaluation, demo, development and similar non-production purposes. Support for windows-based production deployments will be removed in midPoint 4.4. Microsoft Internet Explorer is no longer supported. Support for Microsoft Internet Explorer was deprecated since midPoint 4.2. MySQL and MariaDB are no longer supported. Support for MySQL and MariaDB was deprecated since midPoint 4.1. Please see Repository Database Support for details. It is strongly recommended to use PostgreSQL database instead. JMX-based node-to-node communication in midPoint cluster is deprecated. Please use the default REST communication method instead. Explicit deployment to an external web container is deprecated since midPoint 4.1. MidPoint plug-in for Eclipse IDE was never officially supported and it will not be developed any more. 
This plugin is abandoned in favor of the IntelliJ IDEA environment (MidPoint Studio).
The namespace URI for configuration elements of the experimental built-in asynchronous update connector was changed (the .update package name was added).
Releases Of Other Components
Docker images were released on Docker Hub: 4.3.1 and 4.3.1-alpine.
Purpose and Quality
Release 4.3.1 (Faraday). The following list provides a summary of the limitations of this midPoint release:
Functionality that is marked as experimental is not supported for general use (yet). Such features are not covered by midPoint support. They are supported only for those subscribers that funded the development of the feature by means of subscriptions and sponsoring.
Java client library, various samples, scripts, connectors and other non-bundled items: support for these non-bundled items is limited. Generally speaking, non-bundled items are supported only for platform subscribers and those that explicitly negotiated the support in their contract.
This list is just an overview and may not be complete. Please see the documentation regarding detailed limitations of individual features.
Operating System
MidPoint is likely to work on any operating system that supports the Java platform. However, for production deployment, only some operating systems are supported:
Linux (x86_64)
Microsoft Windows Server (DEPRECATED, planned for removal in 4.4)
We are positive that midPoint can be successfully installed on other operating systems, especially macOS and Microsoft Windows desktop. Such installations can be used for evaluation, demonstration or development purposes. However, we do not support these operating systems for production environments: the tooling for production use, such as various run control (start/stop) scripts, low-administration and migration tools, backup and recovery support and so on, is not maintained there.
Java
OpenJDK 11 (11.0.10) is the recommended Java platform to run midPoint. Support for Oracle builds of JDK is provided only for the period in which Oracle provides public support (free updates) for their builds. As far as we are aware, free updates for Oracle JDK 11 are no longer available, which means that Oracle JDK 11 is not supported for midPoint any more.
Deployment
Stand-alone deployment is the default and recommended deployment option. See Stand-Alone Deployment for more details. Explicit deployment of a WAR file to a web container is deprecated. The following Apache Tomcat versions are supported: Apache Tomcat 9.0 (9.0.37). Apache Tomcat 8.0.x and 8.5.x are no longer supported. Support for explicit deployment to newer Tomcat versions is not planned. Please migrate to the default stand-alone deployment model as soon as possible.
Databases
PostgreSQL 13, 12, 11 and 10. PostgreSQL 13 or 12 is the strongly recommended option.
Oracle 12c
Microsoft SQL Server 2019, 2016 SP1
Our strategy is to officially support the latest stable version of the PostgreSQL database (to the practically possible extent). PostgreSQL is the only database with a clear long-term support plan in midPoint. We make no commitments for future support of any other database engines. See the Repository Database Support page for the details. Only a direct connection from midPoint to the database engine is supported. Database and/or SQL proxies, database load balancers or any other devices (e.g. firewalls) that alter the communication are not supported.
Supported Browsers
Firefox, Safari, Chrome, Edge, Opera. Any recent version of these browsers is supported.
Upgrade From MidPoint 4.3
MidPoint 4.3.1 data model (schema) and database schema are compatible with midPoint 4.3. No special migration steps are needed to migrate the data. Upgrade of software packages is enough to upgrade from midPoint 4.3 to midPoint 4.3.1.
Upgrade From MidPoint 4.2.x
MidPoint 4.3.1 data model is not completely backwards compatible with previous midPoint versions. However, the vast majority of data items is compatible. Therefore the usual upgrade mechanism can be used. There are some important changes to keep in mind:
The database schema needs to be upgraded using the usual mechanism. Please see the MidPoint Upgrade Guide for details.
Version numbers of some bundled connectors have changed. Therefore, connector references in resource definitions that use the bundled connectors need to be updated.
The namespace URI for configuration elements of the experimental built-in asynchronous update connector was changed. Therefore, resources that use this connector need to be updated to use the new namespace URI.
Upgrade From MidPoint 4.1.x Or Older
Upgrade from midPoint 4.1.x or older is not supported directly. Please upgrade to midPoint 4.2.x first.
Changes In Initial Objects Since 4.2
MidPoint is conservative and avoids overwriting customized configuration objects. Therefore midPoint does not overwrite existing objects when they are already in the database. This may result in upgrade problems if an existing object contains configuration that is no longer supported in a new version. Please review the following changes and assess the impact on a case-by-case basis:
000-system-configuration.xml: added schedulingState to TaskType object details GUI configuration, added admin-dashboard configuration.
021-archetype-system-role.xml, 022-archetype-business-role.xml, 521-archetype-task-approval.xml: updated icons.
040-role-enduser.xml: added lookup table get authorization.
Please review source code history for a detailed list of changes.
Bundled Connector Changes Since 4.2
LDAP and AD connectors were upgraded to the latest available version 3.2.
DatabaseTable connector was upgraded to the latest available version 1.4.6.0.
Behavior Changes Since 4.2
Task OID in audit records now points to the root of the task tree, if applicable. Note that the task identifier remains the identifier of the actual task that executed the request.
Dead shadows remain linked to the focus (user). Handling of links to dead shadows was inconsistent in previous midPoint versions; this was aligned in midPoint 4.3. Links to dead shadows are marked by a relation of type "related".
Custom dashboards are not displayed automatically in the menu. Dashboards that are to be included in the menu have to be explicitly enabled in the system configuration.
Requester information in notification handlers was corrected (MID-6754), which may be a minor compatibility issue.
Public Interface Changes Since 4.2
Prism API was changed in several places. However, this is not yet a stable public interface, therefore the changes are not tracked in detail.
There were changes to the IDM Model Interface (Java). Please see source code history for details.
Important Internal Changes Since 4.2
Known Issues and Limitations
As all real-world software, midPoint 4.3.1 has some known issues. The full list of issues is maintained in the bug tracking system. As far as we know at the time of the release, there was no known critical or security issue. There is currently no plan to fix the known issues of midPoint 4.3.
We have seen issues upgrading H2 instances to a new version.
Generally speaking, H2 is not supported for any particular use. We try to make H2 work and we try to make it survive an upgrade, but there are occasional issues with H2 use and upgrade. Make sure that you back up your data in a generic format (XML/JSON/YAML) at regular intervals to avoid losing it. It is particularly important to back up your data before upgrades and when working with a development version of midPoint.
Credits
The majority of the work on the Faraday releases was done by the Evolveum team.
https://docs.evolveum.com/midpoint/release/4.3.1/
History
The BrowserCompat API is still in beta, and true versioning won't start until it is shipped to the general public. The issues are tracked on Bugzilla. When code is merged to master (once or twice a week), it is deployed to Heroku. Here are some development milestones:
- 2015-12-01 - MDN beta users can see API-backed tables on select pages
- 2015-09-11 - 3rd re-write of MDN importer ships, 82% of MDN can be imported
- 2015-02 - Added MDN importer
- 2014-12 - Added rest of resources, sample displays. Dropped versioning pre-release.
- 0.2.0 - 2014-10-13 - Add features, supports, pagination
- 0.1.0d - 2014-09-29 - Add resource-level caching
- 0.1.0c - 2014-09-16 - Add sample feature view, simplify draft API
- 0.1.0b - 2014-09-05 - Add filtering, more JSON API tuning
- 0.1.0a - 2014-09-02 - First Heroku deployment. Browser and Version data.
https://browsercompat.readthedocs.io/en/spike5_v2_api_1159406/history.html
Change Angular Speed
Description
Changes the Character's angular speed over time.
Parameters
- Angular Speed: The target angular speed value for the Character, measured in degrees per second
- Duration: How long it takes to perform the transition
- Easing: The change rate of the parameter over time
- Wait to Complete: Whether to wait until the transition is finished
- Character: The game object with the Character target
Keywords: Rotation, Euler, Direction, Face, Look
https://docs.gamecreator.io/gamecreator/visual-scripting/actions/instructions/characters/properties/change-angular-speed/
Upload
Uploading is a common pattern in enterprise applications. Well-designed upload flows allow the user control while assisting them in their task.
Starting an upload
If there is no data yet, an upload prompt underneath the table allows the user to select files from their computer or drag a file into the application to begin the upload process. With pre-populated data, uploading additional data is done from an action in the top right of the table. Once the file has been selected, a preview of the data to be uploaded is shown.
Upload messages
Confirmation
Once the upload is complete, a dismissible confirmation notification displays to indicate the successful upload.
Errors
If there is an error in processing the selected file(s), use a non-dismissible error notification at the top of the modal and a prompt to upload a new file. If there are errors in the data (such as duplicated records), an error notification should display on the table to note this.
https://docs.pega.com/ui-kit-design-documentation/upload
TiDB Operator 1.0.6 Release Notes
Release date: December 27, 2019
TiDB Operator version: 1.0.6
What's New
Action required: Users should migrate the configs in values.yaml of previous chart releases to the new values.yaml of the new chart. Otherwise, the monitor pods might fail when you upgrade the monitor with the new chart.
For example, configs in the old values.yaml file:

  monitor:
    ...
    initializer:
      image: pingcap/tidb-monitor-initializer:v3.0.5
      imagePullPolicy: IfNotPresent
    ...

After migration, configs in the new values.yaml file should be as follows:

  monitor:
    ...
    initializer:
      image: pingcap/tidb-monitor-initializer:v3.0.5
      imagePullPolicy: Always
      config:
        K8S_PROMETHEUS_URL: ...

Monitor
TiDB Scheduler
Compatibility
- Fix the compatibility issue in Kubernetes v1.17 (#1241)
- Bind the system:kube-scheduler ClusterRole to the tidb-scheduler service account (#1355)
TiKV Importer
E2E
CI
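For Helm-based installs, the migration amounts to porting the monitor.initializer block above into your new values file before upgrading. A sketch, assuming the usual pingcap chart repository and a release named tidb-cluster (both names are assumptions, not from these notes):

  # Port the monitor.initializer changes into values.yaml,
  # then upgrade the release to the new chart version:
  helm upgrade tidb-cluster pingcap/tidb-cluster --version=v1.0.6 -f values.yaml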
https://docs.pingcap.com/tidb-in-kubernetes/v1.1/release-1.0.6/
Supported IPsec Parameters
Cisco Umbrella uses the IPsec protocol for tunneling traffic. IPsec has multiple components; one of the key components is IKE, which manages negotiation with the peers, authentication, and certificate exchanges, and maintains the session by using a keepalive mechanism. Umbrella only supports IKEv2, which is faster and more secure than IKEv1.
Device Compatibility with Tunnels
Even if your device can establish a tunnel, it is not guaranteed to be compatible. For example, if Perfect Forward Secrecy (PFS) is enabled, you can establish a tunnel, but on reconnection it will fail to rekey and there will be a loss of service. Additionally, vendors have differences in IPsec implementation that may not be covered in these parameters. Thorough testing is recommended before putting any tunnel into production.
The following device(s) have known issues:
- AWS Site-to-Site VPN: Incompatible because PFS cannot be disabled
Supported Devices
Umbrella is intended to be compatible with many different types of network devices. If you have a device that isn't listed here, feel free to try it, but we may not be able to provide thorough assistance. Umbrella recommends setting your MTU size to 1350 to optimize performance and compatibility.
https://docs.umbrella.com/umbrella-user-guide/docs/supported-ipsec-parameters
I present a collection of 14 high-quality 3D models of stones and sand. The models were created by scanning! They will fit and decorate any project. Each model has 8K textures and PBR materials, and all models share the same style. The demo scene shown in the pictures is not included in this product; the images are presented to illustrate the capabilities of this set. Have a good use!
Texture maps
Number of Unique Meshes: 14
Collision: Yes
Vertex Count: 256 - 30k
LODs: No
Number of Materials and Material Instances: 14 and 14 instances
Number of Textures: 84
Texture Resolutions: 4K-8K
Supported Development Platforms: Windows: Yes, Mac: Yes
https://docs.unrealengine.com/marketplace/ja/product/rocky-beach-set-1
LifeKeeper Configuration Database
The LifeKeeper Configuration Database (LCD) maintains the object-oriented resource hierarchy information and stores recovery direction information for all resource types known to LifeKeeper. The data is cached within system shared memory and stored in files so that configuration data is retained over system restarts. The LCD also contains state information and specific details about resource instances required for recovery.
See the following related topics for information on the LCD directory structure, types of data stored, resource types available, and use of application scripts.
Related Topics
Structure of LCD Directory in /opt/LifeKeeper
https://docs.us.sios.com/spslinux/9.5.1/en/topic/lcd
Lightfield Photo App (last updated April 2019, v0.8.0-beta)
With the right data, the Looking Glass can present lightfield photos, or static 3D images. The Lightfield Photo App helps you view, edit, and manage these. You can find the latest version of the Lightfield Photo App on the Library. This app is still in beta, so if you discover any issues please help us by reporting them on our GitHub issues page.
The Lightfield Photo App works with two lightfield formats: photosets and quilts.
Photosets
A photoset is a series of photos that can be stitched together to form a 3D scene inside your Looking Glass. The app comes loaded with two sample photosets. Click on one of those examples to see it on your Looking Glass. In addition to displaying the photoset on your Looking Glass, the app also presents controls on your 2D monitor. These controls allow you to perform actions on the photoset you're viewing.
Choose Photos
Selects the photos that belong in the photoset. This should only be used for new photosets, as applying it to existing photosets will cause issues. For photosets to work, each photo in the set should be:
- of the same scene
- pointing in the same direction
- horizontally offset a regular amount between each photo
- numbered sequentially, left to right
For the best results, select anywhere between 16 and 60 photos. Fewer than 16 will produce poor image quality, and anything above 60 increases file size with diminishing returns. If you don't have a photoset, you can download this one as an example to try it out.
Set Cropping
Opens a window that helps you frame and crop the photoset. This window has the following controls:
- Center View changes what is in view. Clicking and dragging the corners resizes the image, and dragging the rectangle around recenters the entire frame.
- Focus changes the depth that the lightfield is focused on.
- Reverse Image Order fixes the image if the images are loaded right-to-left (by default the app expects images to be left-to-right).
Rename
Changes the name of the photoset. This name is displayed in the photoset collection page.
Save Quilt
Opens a file explorer to save a .jpg quilt. This saved quilt will adopt the current crop settings and will be ready to be shared with others to view in the Lightfield Photo App.
Delete
Removes the photoset from your collection.
Quilts
A quilt is a single image file that combines many different perspectives into one Looking Glass experience. Compared to photosets, quilts are smaller in file size and easier to manage, but have an unchangeable crop and focus. For a technical definition of the quilt standard, click here.
The Lightfield Photo App comes loaded with three sample quilts. Click on any of these examples to view it. While viewing a quilt in your Looking Glass, your main monitor will display a handful of actions you can perform on the quilt.
Choose Photo
Selects the photo to import as a quilt. This should only be used for new quilts, as applying it to an existing quilt will cause issues.
Quilt Settings
Quilt settings are saved to metadata and loaded automatically, with the exception of view inversion, which is considered non-standard.
- Number of Views identifies the full count of images inside the quilt.
- Columns identifies how many columns of images are in the quilt.
- Rows identifies how many rows of images are in the quilt.
- Applying Invert View will assume that the leftmost view is the top-right corner instead of the default bottom-left corner.
Rename
Changes the name of the quilt. This name is displayed in the quilt collection page.
Delete
Removes the quilt from your collection.
Sharing your Quilt
If you'd like to share a quilt that you've made with the wider Looking Glass community, please save it out and submit it to our community site.
https://docs.lookingglassfactory.com/LightfieldPhoto/
CreateFile (Windows CE 5.0)
This function creates, opens, or truncates a file, COM port, device, service, or console. It returns a handle that you can use to access the object. A RAPI version of this function exists, and it is named CeCreateFile (RAPI).

  HANDLE CreateFile(
    LPCTSTR lpFileName,
    DWORD dwDesiredAccess,
    DWORD dwShareMode,
    LPSECURITY_ATTRIBUTES lpSecurityAttributes,
    DWORD dwCreationDisposition,
    DWORD dwFlagsAndAttributes,
    HANDLE hTemplateFile
  );

Parameters
lpFileName
[in] Pointer to a null-terminated string that specifies the name of the object, such as a file, COM port, disk device, or console, to create or open. If *lpFileName is a path, there is a default string size limit of MAX_PATH characters. This limit is related to how the CreateFile function parses paths. When lpFileName points to a COM port to open, you must include a colon after the name. For example, specify COM1: to open that port. When using IrCOMM, specify COM3:.
dwDesiredAccess
[in] Type of access to the object. An application can obtain read-only access, write-only access, read/write access, or device query access.
dwShareMode
[in] Share mode for the object. If dwShareMode is zero, the object cannot be shared; subsequent open operations on the object will fail until the handle is closed. This parameter can be set to one or more values.
lpSecurityAttributes
[in] Ignored; set to NULL.
dwCreationDisposition
[in] Action to take on files that exist, and which action to take when files do not exist. For more information on this parameter, see Remarks.
dwFlagsAndAttributes
[in] File attributes and flags for the file. Any combination of permitted attributes and flags is acceptable for this parameter. All other file attributes override FILE_ATTRIBUTE_NORMAL.
hTemplateFile
[in] Ignored; as a result, CreateFile does not copy the extended attributes to the new file.
Return Values
An open handle to the specified file indicates success; INVALID_HANDLE_VALUE indicates failure. In Windows CE 2.01 and earlier, an application cannot use GetLastError to determine whether a file existed before the call to CreateFile.
Remarks
Use the CloseHandle function to close an object handle returned by CreateFile. As noted previously, specifying zero for dwDesiredAccess allows an application to query device attributes without actually accessing the device. This type of querying is useful, for example, if an application wants to determine the size of a disk drive and the formats it supports without having a disk in the drive.
The following list shows how CreateFile operates on files, communication resources, devices, services, and consoles:
Files
When creating a new file or truncating an existing file, the CreateFile function performs the following actions:
- Combines the file attributes and flags specified by dwFlagsAndAttributes with FILE_ATTRIBUTE_ARCHIVE.
- Sets the file length to zero.
CreateFile cannot be used to access files in the MODULES section of ROM. Modules are stored in a different format that applications cannot access. The only ROM files that can be accessed using CreateFile are those in the FILES section.
When opening an existing file, CreateFile ignores the file attributes specified by dwFlagsAndAttributes and sets the file length according to the value of dwCreationDisposition.
To store the maximum number of files on PC Card storage devices, limit file names to eight uppercase characters and file extensions to three uppercase characters. Also, do not allow non-OEM characters in file names. File names that do not conform to these limits require more than one physical directory entry on a PC Card.
Using the FILE_FLAG_RANDOM_ACCESS flag in the RAM file system, which places files in the object store, will prevent a file from being compressed. If performance is an issue, this could be the correct solution: read and write operations to a compressed file are slower than read and write operations to an uncompressed file.
The OS does not support the concept of a current directory. If a path to a file is not supplied along with the file name, the OS will look for the file in the \Windows directory as well as in the root of the file system. To access a file in any other path, the application must supply the absolute path to the file. In some cases, the GetModuleFileName function can supply the working directory of the currently running executable file.
COM ports
The CreateFile function can create a handle to a COM port. By setting the dwCreationDisposition parameter to OPEN_EXISTING, read-only, write-only, or read/write access can be specified.
Devices
Volume handles may be opened as noncached at the discretion of the file system, even when the noncached option is not specified. Microsoft recommends that all file systems open volume handles as noncached and follow the noncached I/O restrictions. CreateFile should only be used on a trusted $bus handle. For more information, see Device Manager Security.
You can use the CreateFile function to open a disk drive or a partition on a disk drive. The function returns a handle to the disk device, which can be used with the DeviceIoControl function. The following list shows the requirements that must be met for such a call to succeed:
- The caller must have administrative privileges for the operation to succeed on a hard disk drive.
- The lpFileName string should be of the form DSKx: to open hard disk x. Hard disk numbers start at one. For example, DSK2: obtains a handle to the second physical drive on the user's computer.
- The dwCreationDisposition parameter must have the OPEN_EXISTING value.
- When opening a disk or a partition on a hard disk, you must set the FILE_SHARE_WRITE flag in the dwShareMode parameter.
Services
For information on using CreateFile with services, see Services.exe.
Consoles
If Console.dll is present in the OS image, an application can use the direct console name, CONn:, to open the console with CreateFile, if it has been previously registered. n is a number between zero and 9.
Directories
An application cannot create a directory with CreateFile; it must call CreateDirectory.
Windows Mobile Remarks
You should not access files with the attribute FILE_ATTRIBUTE_ROMMODULE as normal files. Although the CreateFile function will succeed on these files, reading the contents or accessing the contents of the file through file mapping will return unexpected data.
Requirements
OS Versions: Windows CE 1.0 and later.
Header: Winbase.h.
Link Library: Coredll.lib.
See Also
Services.exe | CeCreateFile (RAPI) | CloseHandle | CreateDirectory | ReadFile
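A minimal sketch of a typical call on Windows CE; the \Temp\log.txt path is illustrative only:

  #include <windows.h>

  // CE has no current directory, so the path must be absolute.
  HANDLE h = CreateFile(TEXT("\\Temp\\log.txt"),
                        GENERIC_WRITE,          // dwDesiredAccess
                        0,                      // dwShareMode: no sharing
                        NULL,                   // lpSecurityAttributes: ignored
                        OPEN_ALWAYS,            // open if present, else create
                        FILE_ATTRIBUTE_NORMAL,  // dwFlagsAndAttributes
                        NULL);                  // hTemplateFile: ignored
  if (h == INVALID_HANDLE_VALUE) {
      // Creation or open failed; call GetLastError for details.
  } else {
      // ... WriteFile(h, ...) as needed ...
      CloseHandle(h);  // always close handles returned by CreateFile
  }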
https://docs.microsoft.com/en-us/previous-versions/windows/embedded/aa517318(v=msdn.10)
Quilts Overview
Quilts are an image standard that Looking Glass uses to produce 3D experiences. This standard is used to describe both still and moving images (pictures and videos).
Quilts serve a few purposes:
- to save and retrieve images displayed in a Looking Glass (similar to image & video screenshots for 2D monitors)
- as a compositing step in Looking Glass render pipelines found in our developer tools
- as a format to work against to manually produce lightfield photos/videos
Format
Each tile in the quilt is a conventional 2D image of a scene. The bottom-left tile of the quilt is the leftmost view of the scene, and the top-right tile is the rightmost. Standard formats for Looking Glass quilts are:
- 45 views: 9 rows and 5 columns
- 32 views: 8 rows and 4 columns
Non-standard formats are valid, but may not be as widely supported. This standard may be applied to any conventional image or video filetype. The most common are jpg, png, gif, mp4, and mov.
Metadata
Looking Glass apps read from and write to metadata for files. The image standard is as follows:
- LKGNumViews: The number of views in the quilt.
- LKGRows: The number of rows.
- LKGColumns: The number of columns.
- LKGAspect: The aspect ratio of the quilt expressed as a fraction. This often encodes as a full floating-point number, so it is best to round to the second decimal point.
- LKGType: The type of file we're currently looking at. This can typically be inferred from the file format, but we have included "Image" and "Video" as a precaution.
These parameters are written in JSON format to the UserComment section of the exif data. When read properly, the data will take the following form:

  {"LKGNumViews":45,"LKGRows":9,"LKGColumns":5,"LKGAspect":1.600000023841858,"LKGType":"Image"}
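The row/column ordering above implies a simple index calculation. A minimal sketch in Python (not from the Looking Glass docs; the 4096x4096 quilt size is an illustrative assumption):

  # Map a view index (0 = leftmost view) to its tile position in a quilt.
  # Views run left-to-right within a row, rows stack bottom-to-top, and
  # pixel coordinates assume (0, 0) at the image's top-left corner.
  def tile_origin(view, rows, cols, tile_w, tile_h):
      row_from_bottom = view // cols
      col = view % cols
      x = col * tile_w
      y = (rows - 1 - row_from_bottom) * tile_h
      return x, y

  # Standard 45-view quilt (9 rows, 5 columns) at an assumed 4096x4096:
  print(tile_origin(0, 9, 5, 4096 // 5, 4096 // 9))   # bottom-left tile
  print(tile_origin(44, 9, 5, 4096 // 5, 4096 // 9))  # top-right tile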
http://docs.lookingglassfactory.com/Appendix/quilts/
If one is available, Ghost automatically uses the Gravatar picture tied to your email address as the author profile image. This is used across your publication's theme by default. To use a different image, go to your user profile from the "Team" settings page of Ghost Admin and click on the current profile picture to upload a new image.
https://docs.ghost.org/faq/change-author-profile-picture/
This page provides directions for installing, starting, and configuring InfluxDB.
Requirements
Installation of the pre-built InfluxDB package requires root privileges on the host machine.
Networking
By default InfluxDB uses TCP ports 8083 and 8086, so these ports should be available on your system. Once installation is complete you can change those ports and other options in the configuration file, which is located by default in /etc/influxdb.
Installation
Ubuntu & Debian
Debian and Ubuntu users can install the latest stable version of InfluxDB using the apt-get package manager. Ubuntu users can add the InfluxData repository configuration by using the following commands:

  curl -sL | sudo apt-key add -
  source /etc/lsb-release
  echo "deb{DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list

Debian users can add the InfluxData repository configuration by using similar commands. Then, to install and start the InfluxDB service:

  sudo apt-get update && sudo apt-get install influxdb
  sudo service influxdb start

RedHat & CentOS
RedHat and CentOS users can install the latest stable version of InfluxDB using the yum package manager:

  cat <<EOF | sudo tee /etc/yum.repos.d/influxdb.repo
  [influxdb]
  name = InfluxDB Repository - RHEL \$releasever
  baseurl =\$releasever/\$basearch/stable
  enabled = 1
  gpgcheck = 1
  gpgkey =
  EOF

Once the repository is added to the yum configuration, install and start the service with sudo yum install influxdb and sudo service influxdb start.
FreeBSD
InfluxDB is part of the FreeBSD package system. It can be installed by running:

  sudo pkg install influxdb

The configuration file is /usr/local/etc/influxd.conf, with examples in /usr/local/etc/influxd.conf.sample. Start the backend by executing sudo service influxd onestart and/or adding influxd_enable="YES" to /etc/rc.conf to launch influxd during system boot.
OS X
Users of OS X 10.8 and higher can install using the Homebrew package manager:

  brew update
  brew install influxdb

To have launchd start influxdb at login:

  ln -sfv /usr/local/opt/influxdb/*.plist ~/Library/LaunchAgents

Then to load influxdb now:

  launchctl load ~/Library/LaunchAgents/homebrew.mxcl.influxdb.plist

Or, if you don't want/need launchctl, in a separate terminal window you can just run:

  influxd -config /usr/local/etc/influxdb.conf

Hosted
For users who don't want to install any software and are ready to use InfluxDB, you may want to check out our managed hosted InfluxDB offering.
Generate a configuration file
Configuration files from prior versions of InfluxDB 0.9 should work with future releases, but the old files may lack configuration options for new features. It is a best practice to generate a new config file for each upgrade. Any changes made in the old file will need to be manually ported to the newly generated file; the newly generated configuration file has no knowledge of any local customization to the settings.
To generate a new config file, run influxd config and redirect the output to a file. For example:

  influxd config > /etc/influxdb/influxdb.generated.conf

Edit the influxdb.generated.conf file to have the desired configuration settings. When launching InfluxDB, point the process to the correct configuration file using the -config option:

  influxd -config /etc/influxdb/influxdb.generated.conf

In addition, a valid configuration file can be displayed at any time using the command influxd config. Redirect the output to a file to save a clean generated configuration file.
If no -config option is supplied, InfluxDB will use an internal default configuration equivalent to the output of influxd config.
Note: The influxd command has two similarly named flags. The config flag prints a generated default configuration file to STDOUT but does not launch the influxd process. The -config flag takes a single argument, which is the path to the InfluxDB configuration file to use when launching the process. The config and -config flags can be combined to output the union of the internal default configuration and the configuration file passed to -config; the options specified in the configuration file overwrite any internally generated configuration:

  influxd config -config /etc/influxdb/influxdb.partial.conf

The output will show every option configured in the influxdb.partial.conf file and will substitute internal defaults for any configuration options not specified in that file. The example configuration file shipped with the installer is for information only: it is identical to the internally generated configuration except that the example file has comments.
Hosting on AWS
Hardware
We recommend using two SSD volumes: one for influxdb/wal and one for influxdb/data. Depending on your load, each volume should have around 1k-3k provisioned IOPS. The influxdb/data volume should have more disk space with lower IOPS, and the influxdb/wal volume should have less disk space with higher IOPS. Each machine should have a minimum of 8G RAM. We've seen the best performance with the C3 class of machines.
Configuring the Instance
This example assumes that you are using two SSD volumes and that you have mounted them appropriately at /mnt/influx and /mnt/db. For more information on how to do that, see the Amazon documentation on how to Add a Volume to Your Instance.
Config File
You'll have to update the config file appropriately for each InfluxDB instance you have:

  ...
  [meta]
    dir = "/mnt/db/meta"
  ...
  [data]
    dir = "/mnt/db/data"
    ...
    wal-dir = "/mnt/influx/wal"
  ...
  [hinted-handoff]
    ...
    dir = "/mnt/db/hh"
  ...

Permissions
When using non-standard directories for InfluxDB data and configurations, also be sure to set filesystem permissions correctly:

  chown influxdb:influxdb /mnt/influx
  chown influxdb:influxdb /mnt/db

Other Considerations
If you're planning on using a cluster, you may also want to set hostname and join flags for the INFLUXD_OPTS variable in /etc/default/influxdb. For example:

  INFLUXD_OPTS='-hostname host[:port] [-join hostname_1:port_1[,hostname_2:port_2]]'

For more detailed instructions on how to set up a cluster, see the documentation on clustering.
Development Versions
Nightly packages are available for Linux through the InfluxData package repository by using the nightly channel. Other package options can be found on the downloads page.
https://docs.influxdata.com/influxdb/v0.9/introduction/installation
Test the Application Compatibility Database
Applies To: Windows 7, Windows Server 2008 R2
Testing your application compatibility database is an iterative process in which unsuccessful fixes are revised and retested. The testing process includes a series of tests in the test environment and one or more pilot deployments in the production environment. Perform the following procedure for each of the applications for which you created an application compatibility fix.
To test an application compatibility database in a test environment
1. Install the application and the application compatibility database on a computer with a new installation of Windows 7 in your test environment.
Important: You should use a new installation of Windows 7 to ensure that any changes that you made to your test computer while determining the appropriate fix for the application do not produce inaccurate test results.
2. Log on to the test computer as a standard user, and then perform the full suite of tests that you performed to identify application compatibility issues for the application.
3. If the fix is not successful, revise the fix by uninstalling the application compatibility database and installing a different application compatibility database as described in Install or Uninstall an Application Compatibility Database, and then perform step 1 again.
Perform the following procedure on the pilot computers for each of the applications for which you created an application compatibility fix.
To test the application compatibility fixes in a pilot deployment
1. Include the application and application compatibility fix in your pilot image so that it can be tested by pilot users.
2. If the fix is not successful, revise the fix, repeat the steps in the previous procedure, and then perform step 1 in this procedure again.
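Installing and uninstalling a custom compatibility database is typically done with the sdbinst tool from a command prompt; a sketch, where myapp_fixes.sdb is an illustrative file name:

  REM Install the custom application compatibility database
  sdbinst myapp_fixes.sdb

  REM Uninstall it again before installing a revised database
  sdbinst -u myapp_fixes.sdb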
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-7/ee732426(v=ws.10)
Test Summary Notifications
Here's a method of sending the summary notification without waiting. Use this for testing purposes, knowing it will send to all of your clients (with outstanding records).
Temporarily place the snippet of code in your theme's functions.php file.
Log into your admin and add "?send_summary_now=1" to the URL. After you load the URL (and if the snippet is in your theme's functions.php file), a message will show that the notifications were sent and not to refresh the page.
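The snippet itself did not survive extraction here. A hypothetical reconstruction of its general shape in PHP, where sa_send_summary_notifications() is a stand-in name, not a documented Sprout Apps function:

  <?php
  // Hypothetical sketch: fire the summary notifications immediately
  // when an admin loads any URL with ?send_summary_now=1 appended.
  add_action( 'init', function () {
      if ( isset( $_GET['send_summary_now'] ) && current_user_can( 'manage_options' ) ) {
          sa_send_summary_notifications(); // stand-in for the plugin's send routine
          wp_die( 'Summary notifications sent. Do not refresh this page.' );
      }
  } );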
https://docs.sproutapps.co/article/190-test-summary-notifications
In vSphere 5.5 and later, both the physical function (PF) and virtual functions (VFs) of an SR-IOV capable physical adapter can be configured to handle virtual machine traffic.
The PF of an SR-IOV physical adapter controls the VFs that virtual machines use, and can carry the traffic flowing through the standard or distributed switch that handles the networking of these SR-IOV enabled virtual machines. The SR-IOV physical adapter works in different modes depending on whether it backs the traffic of the switch.
Mixed Mode
The physical adapter provides virtual functions to virtual machines attached to the switch and directly handles traffic from non SR-IOV virtual machines on the switch. You can check whether an SR-IOV physical adapter is in mixed mode in the topology diagram of the switch. An SR-IOV physical adapter in mixed mode appears with a dedicated icon in the list of physical adapters for a standard switch, or in the list of uplink group adapters for a distributed switch.
SR-IOV Only Mode
The physical adapter provides virtual functions to virtual machines connected to a virtual switch, but does not back traffic from non SR-IOV virtual machines on the switch. To verify whether the physical adapter is in SR-IOV only mode, examine the topology diagram of the switch. In this mode, the physical adapter is in a separate list called External SR-IOV Adapters and appears with its own icon.
Non SR-IOV Mode
The physical adapter is not used for traffic related to VF-aware virtual machines. It handles traffic from non SR-IOV virtual machines only.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-6B2EBF69-E1CA-4D31-A035-2C7BF6B96397.html
Repadmin /showutdvec
Updated: April 17, 2012
Applies To: Windows Server 2003, Windows Server 2008, Windows Server 2003 with SP2, Windows Server 2003 R2, Windows Server 2008 R2, Windows Server 2012, Windows Server 2003 with SP1, Windows 8
Syntax

  repadmin /showutdvec <DSA_LIST> <Naming Context> [/nocache] [/latency]

Examples
The following example shows the highest committed USN on a domain controller named dc1 for the contoso.com directory partition:

  repadmin /showutdvec dc1 dc=contoso,dc=com

The following example shows the highest USN on the local domain controller for the mayberry.contoso.com directory partition, and orders the entries from least current to most current:

  repadmin /showutdvec localhost dc=mayberry,dc=contoso,dc=com /latency
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/cc742023(v=ws.11)
Getting started with the carouFredSel jQuery Plugin
Install files
Installing the carouFredSel jQuery plugin is actually pretty simple. You just need to add the following code to the <head> of your web page:

  <script src="" type="text/javascript"></script>
  <script src="jquery.carouFredSel.js" type="text/javascript"></script>

This will make sure all the required files are loaded properly. If you have the carouFredSel CSS and Javascript files in a subfolder, you will need to add it to the path. Note that the carouFredSel jQuery plugin requires jQuery v1.7+ to work.
Add markup
The HTML markup for carouFredSel is also very simple. You simply need to create a div with an id (#carousel in this case) and add some content to it:

  <div id="carousel">
    <img src="img1.jpg" width="300" />
    <img src="img2.jpg" width="300" />
    <img src="img3.jpg" width="300" />
    <img src="img4.jpg" width="300" />
  </div>

The content can be whatever you want, including HTML. For example:

  <div id="carousel">
    <div>
      <h3>Infinity</h3>
      <p>A concept that in many fields refers to a quantity without bound or end.</p>
    </div>
    <div>
      <h3>Circular journey</h3>
      <p>An excursion in which the final destination is the same as the starting point.</p>
    </div>
    <div>
      <h3>jQuery</h3>
      <p>jQuery is a cross-browser JavaScript library designed to simplify client-side scripting.</p>
    </div>
  </div>

Hook up the carousel
Finally, you need to hook up the carousel by adding the following code after the script tags we included in the <head>:

  <script type="text/javascript">
    $(document).ready(function() {
      // Using default configuration
      $('#carousel').carouFredSel();

      // Using custom configuration
      $('#carousel').carouFredSel({
        items     : 2,
        direction : "up",
        scroll : {
          items        : 1,
          easing       : "elastic",
          duration     : 1000,
          pauseOnHover : true
        }
      });
    });
  </script>

Note: After the plugin has been executed, the container element is wrapped inside a div element with the class caroufredsel_wrapper.
Use block elements that float left
To ensure the plugin is able to measure the correct sizes, always use display: block; for the items. In a horizontal carousel, you should also make the items float: left;.
https://docs.themeisle.com/article/499-getting-started-with-the-caroufredsel-jquery-plugin
Tor Hidden RetroShare Nodes
If you want to use Tor for anonymous web browsing, please use Tor Browser. It comes with a readily configured Tor and a browser patched for better anonymity. To use a SOCKS proxy directly (for RetroShare, instant messaging, Jabber, IRC, etc.), you can point your application directly at Tor (localhost port 9050, or port 9150 for Tor Browser), but see this FAQ entry for why this may be dangerous.
RetroShare can be run behind a Tor Hidden Service for incoming connections. The outgoing connections are sent through a local Tor SOCKS proxy. This makes it possible to obfuscate your metadata, which could otherwise disclose your network friend graph. The hidden service address (e.g. ld546kr3zr462z3p.onion) replaces the IPv4 address (e.g. 192.168.1.216) as the listening address for incoming connections.
Tor allows clients and relays to offer hidden services. That is, you can offer a web server, SSH server, etc., without revealing your IP address to its users. In fact, because you don't use any public address, you can run a hidden service from behind your firewall.
Hidden Service Setup
Tor Installation
This guide requires Tor to be already installed on your system. If not, please refer to the official Tor documentation on how to install Tor.
Outgoing Tor Proxy
By default Tor will create a SOCKS proxy on localhost for outgoing connections.
Configure your hidden service
On Debian/Ubuntu systems, go to your Tor settings directory /etc/tor and open your torrc file in your favorite text editor. Go to the middle section and look for the hidden-services block (the lines starting with ###############), then add a section for your hidden service:

  HiddenServiceDir /var/lib/tor/hidden_rs_revy/
  HiddenServicePort 32111 127.0.0.1:32111

- HiddenServiceDir: tells Tor where to look for the hidden service directory containing the private key of the hidden service. Each hidden service owns its own directory. The directory needs to be created with the permissions of the Tor user.
- HiddenServicePort: tells Tor on which port the hidden service should listen (here 32111), and the local IP and local port where RetroShare is listening (here 127.0.0.1:32111).
Go to /var/lib/tor/ and create a directory for your new hidden service. Change the owner of the directory to the user running Tor:

  root@laptop:/var/lib/tor# mkdir hidden_rs_revy
  root@laptop:/var/lib/tor# chown debian-tor:debian-tor hidden_rs_revy/
  root@laptop:/var/lib/tor# chmod 0700 hidden_rs_revy/
  root@laptop:/var/lib/tor# ls -lha
  total 9.8M
  drwx--S---  5 debian-tor debian-tor 4.0K Oct 23 12:26 .
  drwxr-xr-x 74 root       root       4.0K Apr 29 20:23 ..
  drwxr-sr-x  3 debian-tor debian-tor 4.0K Oct 21 21:34 .arm
  drwx--S---  2 debian-tor debian-tor 4.0K Oct 23 12:26 hidden_rs_revy
  -rw-------  1 debian-tor debian-tor    0 Oct 23 09:25 lock
  -rw-------  1 debian-tor debian-tor 6.1K Oct 23 12:26 state

Restart your Tor daemon. Tor will create a crypto key for your hidden service in the HiddenServiceDir:

  root@laptop:/var/lib/tor# /etc/init.d/tor restart
  [ ok ] Stopping tor daemon...done.
  [ ok ] Starting tor daemon...done.
  root@laptop:/var/lib/tor# cd hidden_rs_revy/
  root@laptop:/var/lib/tor/hidden_rs_revy# ls
  hostname  private_key

The hostname of your new hidden service will be available in the file hostname just created in your hidden service directory.
  root@laptop:/var/lib/tor/hidden_rs_revy# cat hostname
  ld546kr3zr462z3p.onion

RetroShare Tor Setup
For more details, please also read Create New User for a clearnet IPv4 network node.
Network Configuration
Check your network configuration. It should look different from a normal setting:
- Hidden Node indicator is shown
- DHT is disabled; Network Mode Discovery is recommended, though Darknet (Discovery & DHT disabled) may also be used
- Local Address is locked to localhost (127.0.0.1)
- External address is hidden
- Known/previous IPs are disabled
- External IP checks are disabled
Hidden Service Configuration
Outgoing Connections
- Tor SOCKS proxy: your Tor SOCKS proxy is normally available at localhost, port 9050. The indicator shows whether the proxy is working.
- I2P SOCKS proxy: I2P also creates a SOCKS proxy, which can be used to connect to I2P hidden RetroShare nodes.
Incoming Connections
- Local Address: locked to 127.0.0.1
- Local Port: the port where RetroShare is listening
- Onion Address: here you need to enter the hostname which has been created by the Tor HiddenServiceDir
- Onion Port: the port where the hidden service is listening
For easier usage, both ports should be used symmetrically.
Tor Proxy Usage
Normal Nodes
Normal nodes may also use the Tor SOCKS proxy for outgoing connections to hidden services, alongside their usual incoming and outgoing TCP connections.
Hidden Nodes
Outgoing connections are always routed through the local Tor SOCKS proxy to .onion addresses, so all traffic is routed inside the Tor network. Hidden nodes cannot connect to IPv4/normal nodes, because Tor exit nodes are not used; the other way around, normal nodes can reach hidden nodes through the Tor SOCKS proxy. Connections to and from hidden nodes are Tor connections, both incoming and outgoing.
https://retroshare.readthedocs.io/en/latest/tutorial/tor-hidden-rs-node/
Type
Donut or Semi donut.

Style your chart
Click the gear icon after the Type field to configure the chart style options for the look and layout of the chart.

Group by
Select a field to organize data in groups from the selected table. For example, in an incident report that is grouped by Assignment group, all incidents belonging to Software, Service Desk, Network, and so on, are placed in separate groups. Make sure the name of the report reflects the selected field.

No. groups
Select the maximum number of individual values that can be represented as slices. If you select Remove Other, the Other slice is hidden.

Show Other
Select this check box if you want to display the Other slice. This check box is not available when Show all or Remove Other are selected from the No. groups list.

Change the look of your donut chart

When you create or edit a report, click the gear icon after the Type field to open the Style your chart window with options to configure the look of your chart. Chart options are automatically saved when you click Close.

Table 1. Donut chart style options

Donut Width Percent
Enter a percentage for the width of the donut or semi-donut band, ranging between 1 and 100 percent. One hundred percent equals a pie chart. The default value is 50.

Show total
Select this check box to display the total aggregation value in the center of the donut. Selecting this option automatically hides the chart legend.

Define a report drilldown.
https://docs.servicenow.com/bundle/helsinki-performance-analytics-and-reporting/page/use/reporting/concept/c_CreateDonutCharts.html
2018-03-17T16:42:11
CC-MAIN-2018-13
1521257645248.22
[]
docs.servicenow.com
How to change default confirmation email content in Pirate Forms

To change the default email content, which will be sent to the user after submitting the form:

1. For the default form: visit Pirate Forms > Settings. In the Options tab, edit the text in the "Send email confirmation to form submitter" field, then click Save Changes.

2. For previously created forms:

NOTE: This is a premium feature. If you purchased Pirate Forms Extended, you need to have both Pirate Forms Lite & Extended activated to take advantage of the plugin's premium features.

To change the email content of previously created forms, visit Pirate Forms > All Forms. Select your form and click Edit. In the Behaviour tab, edit the text in the "Send email confirmation to form submitter" field. After updating the text, click Update in the right sidebar.
https://docs.themeisle.com/article/724-how-to-change-default-email-content-in-pirate-forms
2018-03-17T16:26:04
CC-MAIN-2018-13
1521257645248.22
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/59f058960428630253623fd6/file-N9hQ588jXH.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/59f059890428630253623fe1/file-d6pRNrkp4H.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/59f03ad30428630253623f24/file-VsSbmhihA4.gif', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/59f05ae80428630253623fed/file-wGI3LyrlQF.png', None], dtype=object) ]
docs.themeisle.com
An AWS CloudFormation template is a text file whose format complies with the JSON standard. You can save these files with any extension, such as .json, .template, or .txt. For example, the following template declares an Amazon EC2 instance (MyEC2Instance) built from the ami-2f726546 AMI ID, and an Elastic IP address (MyEIP) that is associated with that instance:

{
  "Resources" : {
    "MyEC2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "ImageId" : "ami-2f726546"
      }
    },
    "MyEIP" : {
      "Type" : "AWS::EC2::EIP",
      "Properties" : {
        "InstanceId" : { "Ref" : "MyEC2Instance" }
      }
    }
  }
}

For more information, see Updating Stacks Using Change Sets.
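Assuming the template above is saved as template.json and the AWS CLI is configured with suitable credentials (the stack name here is illustrative), the stack can be created and inspected like this:

aws cloudformation create-stack \
    --stack-name my-ec2-stack \
    --template-body file://template.json

aws cloudformation describe-stacks --stack-name my-ec2-stack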
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-whatis-concepts.html
2016-07-23T11:09:37
CC-MAIN-2016-30
1469257822172.7
[]
docs.aws.amazon.com
COM/NER/jt2 Date of Issuance 6/28/2010

Decision 10-06-047 June 24, 2010

BEFORE THE PUBLIC UTILITIES COMMISSION OF THE STATE OF CALIFORNIA

DECISION ADOPTING REQUIREMENTS FOR SMART GRID DEPLOYMENT PLANS PURSUANT TO SENATE BILL 17 (PADILLA), CHAPTER 327, STATUTES OF 2009

TABLE OF CONTENTS

DECISION ADOPTING REQUIREMENTS FOR SMART GRID DEPLOYMENT PLANS PURSUANT TO SENATE BILL 17 (PADILLA), CHAPTER 327, STATUTES OF 2009
2.1. Recent Procedural History
2.2. Pursuant to SB 17, This Decision Adopts Policies Pertaining to Smart Grid Deployment Plans with the Input of the CEC and the ISO
2.3. Access to Information and Privacy Protections
2.4. Policies Pertaining to Functionality and Interoperability Standards Await Action by Standard Setting Bodies
2.4.1. Positions of Parties
2.4.2. Discussion: Interoperability Standards Should be Informed by National Actions
3. Issues before the Commission Pertaining to Use and Content of Deployment Plans
3.1. How Should the Commission use Smart Grid Deployment Plans?
3.1.1. Position of Parties
3.1.2. Discussion: Deployment Plans Can Set Smart Grid Baseline and Guide Investments
3.2. What Elements Must a Smart Grid Deployment Plan Have?
3.2.1. Position of Parties
3.2.2. Discussion: The Deployment Plan Should Have Eight Elements
3.3. What Should the Smart Grid Vision Statement Include? How Should the Vision Statement be Structured?
3.3.1. Position of Parties
3.3.2. Discussion: Vision Statement Should Present a Vision of Smart Energy Markets, Smart Consumers and a Smart Utility
3.4. What Should the Deployment Baseline Include?
3.4.1. Position of Parties
3.4.2. Discussion: Elements for Deployment Baseline
3.5. What Should the Smart Grid Strategy Include?
3.5.1. Position of Parties
3.5.2. Discussion: Smart Grid Strategy Should Provide Direction and Demonstrate Consistency with SB 17 and GO 156 Goals
3.6. What Should be in the Grid Security and Cyber Security Section of the Deployment Plan?
3.6.1. Position of Parties
3.6.2. Discussion: Deployment Plans Should Address the Security of Smart Grid
3.7. What Should be in the Smart Grid Roadmap?
3.7.1. Position of Parties
3.7.2. Discussion: A Roadmap Can Help Identify How Technology Deployment Aligns with Policy and Statutory Deadlines
3.8. What Should the Section on Cost Estimates Include?
3.8.1. Position of Parties
3.8.2. Discussion: Smart Grid Deployment Plans Should Include Cost Estimates
3.9. What Should the Section on Benefits Include?
3.9.1. Positions of Parties
3.9.2. Discussion: Smart Grid Deployment Plans Should Assess All Benefits
3.10. What Metrics Should Be Included in the Deployment Plans?
3.10.1. Positions of Parties
3.10.2. Discussion: Quantitative Metrics Should be Part of Deployment Plan, but Workshops Are Needed to Develop Metrics
4. Other Issues Pertaining to Deployment Plan and SB 17 that Require Resolution at this Time
4.1. How Should the Commission Consider/Approve Deployment Plans?
4.1.1. Positions of Parties
4.1.2. Discussion: Combined Proceeding with SCE, SDG&E and PG&E
4.2. How Should the Commission Review Proposed Revisions to Deployment Plans?
4.2.1. Positions of Parties
4.2.2. Discussion: Commission Will Update Procedure Following Review of Initial Deployment Plans
4.3. How Should the Commission Review/Consider Specific Smart Grid Investments?
4.3.1. Positions of Parties
4.3.2. Discussion: Application or GRC Offer Appropriate Procedures for Reviewing Smart Grid Investments
4.4. What Reports Should the Commission Require Pertaining to Smart Grid Investments? When Should They be Filed?
4.4.1. Positions of Parties
4.4.2. Discussion: Annual Reports Are Needed to Prepare An Annual Report to Legislature
4.5. Should the Commission Set a Demarcation Point for Utility Investments
4.5.1. Positions of Parties
4.5.2. Discussion: Commission Declines to Adopt a Demarcation Point at this Time
5. Comments on Proposed Decision
5.1. Comments on Deployment Plan Requirements and Procedures
5.2. Demarcation Point
5.3. Comments Concerning Security, Privacy and Interoperability Issues
6. Assignment of Proceeding
Attachment A - Senate Bill No. 17, Chapter 327

DECISION ADOPTING REQUIREMENTS FOR SMART GRID DEPLOYMENT PLANS PURSUANT TO SENATE BILL 17 (PADILLA), CHAPTER 327, STATUTES OF 2009

This decision provides Pacific Gas and Electric Company, San Diego Gas & Electric Company, and Southern California Edison Company with the guidance needed to file Smart Grid Deployment Plans with this Commission by July 1, 2011. As the Commission stated in Decision.

The California legislature and Governor have enshrined the importance of modernizing the state's electric grid through the enactment of Senate Bill (SB) 17 (Padilla), signed into law on October 11, 2009. SB 17 states that "[i]t is the policy of the state to modernize the state's electrical transmission and distribution system to maintain safe, reliable, efficient, and secure electrical service, with infrastructure that can meet future growth in demand" and achieve purposes specified in the law. SB 17 further requires the Commission "by July 1, 2010, and in consultation with the State Energy Resources Conservation and Development Commission (Energy Commission), the Independent System Operator (ISO), and other key stakeholders, to determine the requirements for a smart grid deployment plan consistent with the policies set forth in the bill and federal law."1

Pursuant to SB 17, this proceeding, in consultation with the Energy Commission and the ISO and other key stakeholders, sets the requirements for Smart Grid Deployment Plans. This decision requires that utilities follow a common outline in preparing their Smart Grid Deployment Plans. The outline consists of eight topics as follows:

1. Smart Grid Vision Statement;
2. Deployment Baseline;
3. Smart Grid Strategy;
4. Grid Security and Cyber Security Strategy;
5. Smart Grid Roadmap;
6. Cost Estimates;
7. Benefits Estimates; and
8. Metrics.

In addition, this decision sets requirements for each of these sections concerning the topics that the Smart Grid Deployment Plans must address, the information that the deployment plans must provide, and how the deployment plans must link each section and topic back to the policies set forth in SB 17 and in relevant federal law. Furthermore, we anticipate that workshops hosted by the Energy Commission concerning research on "Defining the Pathway to the Smart Grid of 2020" and workshops hosted by this Commission prior to the filing of the initial Smart Grid Deployment Plans will provide further opportunities for cooperation with the Energy Commission and the ISO.

The decision requires that the Smart Grid Deployment Plans present a vision of the Smart Grid consistent with legislative initiatives.
The vision must address how the plans will enable consumers to capture the benefits of a wide range of energy technologies and energy management products and services that may, or may not, be provided by the utility, while protecting consumers' privacy. The vision must also discuss how the Smart Grid will help the utility meet environmental policies already adopted by statute or Commission action, and promote innovation and competition among companies developing new products and services.

The decision requires that the Smart Grid Deployment Plans provide a deployment baseline so that we understand the character of the California grid today and articulate a strategy for achieving the adopted goals.

The decision requires each utility to address grid security and cyber security issues in their Smart Grid Deployment Plans to ensure that these issues are considered explicitly at the planning stage. The decision, consistent with the intent of SB 17, links California concerns for grid security with the security guidelines identified as under development by the National Institute of Standards and Technology. The decision also adopts security strategy requirements and principles to guide the development of Smart Grid Deployment Plans to ensure alignment with national efforts. Further, we note that we anticipate a separate decision before the end of the year adopting privacy rules prior to the Commission ordering third-party access to customer data. A ruling will follow this decision setting a schedule for resolving privacy issues.

The decision provides a discussion of the cost and benefit procedures that the Smart Grid Deployment Plans should use to enumerate, quantify, and -- to the extent feasible -- monetize the costs and benefits of Smart Grid investments. The decision requires the plans to follow cost-effectiveness analysis to meet legislatively mandated goals in a cost-effective way and requires the presentation of the "business case" analysis for other components of the Smart Grid. The decision also finds that the Smart Grid Deployment Plans should include metrics that permit the assessment of progress, but the adoption of specific metrics requires additional work by parties. A subsequent decision later this year will endorse specific metrics for inclusion in Smart Grid Deployment Plans and other reports.

This decision also proposes to review the initial deployment plans in a single proceeding. Subsequent utility requests to make specific Smart Grid-related investments, however, would occur in utility-specific proceedings where the reasonableness of particular Smart Grid investments can be determined.

Finally, this decision requires that the utilities file annual reports on their Smart Grid activities, with the first annual reports due on October 1, 2012.

1 Chapter 327, Statutes of 2009.
http://docs.cpuc.ca.gov/PUBLISHED/FINAL_DECISION/119902.htm
2016-07-23T11:03:58
CC-MAIN-2016-30
1469257822172.7
[]
docs.cpuc.ca.gov
Extending and Embedding the Python Interpreter

This document describes how to write modules in C or C++ to extend the Python interpreter, and how to embed Python in other applications.

Recommended third party tools, such as Cython, cffi and SWIG, offer both simpler and more sophisticated approaches to creating C and C++ extensions for Python; this document covers only the basic tools distributed with CPython itself.

- 1. Extending Python with C or C++
  - 1.1. A Simple Example
  - 1.2. Intermezzo: Errors and Exceptions
  - 1.3. Back to the Example
  - 1.4. The Module's Method Table and Initialization Function
  - 1.5. Compilation and Linkage
  - 1.6. Calling Python Functions from C
  - 1.7. Extracting Parameters in Extension Functions
  - 1.8. Keyword Parameters for Extension Functions
  - 1.9. Building Arbitrary Values
  - 1.10. Reference Counts
  - 1.11. Writing Extensions in C++
  - 1.12. Providing a C API for an Extension Module
- 2. Defining New Types
- 3. Building C and C++ Extensions with distutils
https://docs.python.org/3.4/extending/index.html
2016-07-23T11:07:41
CC-MAIN-2016-30
1469257822172.7
[]
docs.python.org
Help17:Menus Menu Item Search Results

Internal Link - Search. When the Search link is selected, it expands to display the Search layout. This is used to show the Search form and the Search results, as shown below.

Parameters - Basic

The Search layout has the following Basic Parameters:

- Gather Search Statistics. Whether or not to enable the gathering of search statistics. Yes/No/Use Global.
- Show Created Date. Whether to Hide or Show the Created Date for an Article. This parameter can be overridden at the Menu Item and Article level.
http://docs.joomla.org/Help17:Menus_Menu_Item_Search_Results
2012-05-24T01:42:49
crawl-003
crawl-003-005
[]
docs.joomla.org
When configuring actions for your devices and monitors, you should take a few things into consideration.

Imagine the number of messages sent if external notifications are placed on a router and on every device and monitor that uses that router for its connection to the Internet. If the router goes down, it will appear as if all of the devices are down, and messages will be sent for each of them. Consider using dependencies and limiting external notifications to the router and the most important of the devices in the group.

Sound notifications are safe to use in almost any situation, but are not the best choice for items that are monitored overnight. You may want to add device states for longer periods of downtime, perhaps creating a "Down at least 60 mins" state and sending an escalated message to show that the device is still down after an hour.

Whenever possible, it is a good idea to use action policies over actions configured for a single device. That way, you can reuse the work you put into the list, and can keep better watch over the actions that are being fired.

Unless the device is vital to the daily operation of the business or office, the state change color and shape should be enough to let you know what is going on with your monitored devices.
http://docs.ipswitch.com/NM/91_WhatsUp%20Gold%20v12.3.1/03_Help/action_strategies_1.htm
2012-05-24T00:53:22
crawl-003
crawl-003-005
[]
docs.ipswitch.com
Indicates whether a horizontal separator bar appears between item lines.

property HorzSeparator: Boolean;

__property Boolean HorzSeparator;

An action band can arrange its child controls on multiple lines. Items which cannot be displayed on one line wrap to the next, in the same way that words wrap in most word processors. Set HorzSeparator to true if you want a separator bar to appear between the lines.
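For example, assuming a toolbar component named ActionToolBar1 (the component name is illustrative), the separator can be enabled at runtime:

// Draw a separator bar between wrapped toolbar lines
ActionToolBar1.HorzSeparator := True;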
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/ActnCtrls_TActionToolBar_HorzSeparator.html
2012-05-24T00:04:28
crawl-003
crawl-003-005
[]
docs.embarcadero.com
How Splunk recognizes timestamps

Accurate timestamps are crucial for correlating events by time, using Splunk's histogram, and setting time ranges for searches. Splunk will make a best effort to assign an accurate timestamp. However, if Splunk cannot find a timestamp within a given source or event, the timestamp will be set to the current time (at indexing).

Timestamp precedence

When timestamping, Splunk sets a local variable for both the date and time. These variables are updated continuously throughout the indexing process, via the following steps:

1. Splunk looks for a time or date in the event itself.
2. If an event does not have a time or date, Splunk uses the timestamp from the previous event in the same source.
3. If no events in a source have a time or date, Splunk will look in the source (or file) name.
4. Splunk will use the indexing time and date if no other timestamp is found.

If you would like to configure Splunk to set timestamps in a different manner, please read "change how Splunk recognizes timestamps". You can also train Splunk to recognize timestamps or tune timestamping to increase Splunk's performance.

Configuration files for timestamps

Timestamp format and recognition can be configured via Splunk's configuration files.
http://docs.splunk.com/Documentation/Splunk/3.1.1/admin/HowSplunkRecognizesTimestamps
2012-05-24T00:31:38
crawl-003
crawl-003-005
[]
docs.splunk.com
Occurs when an object with custom alignment is aligned.

property OnAlignPosition: TAlignPositionEvent;

__property TAlignPositionEvent OnAlignPosition;

OnAlignPosition occurs when child controls with an Align property of alCustom are aligned. CustomAlignPosition triggers the OnAlignPosition event. If this event is defined, CustomAlignPosition uses the alignment parameters it obtains from OnAlignPosition. Defining this event allows users to set the alignment parameters without overriding CustomAlignPosition. The event handler is of type TAlignPositionEvent.
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/ActnCtrls_TActionToolBar_OnAlignPosition.html
2012-05-24T00:05:15
crawl-003
crawl-003-005
[]
docs.embarcadero.com
IIS

Install PHP into IIS

A useful tool for configuring PHP on IIS is PHP Manager. Note that the PHP versions it deploys may not be the latest available, so installation of later PHP versions may need to be performed manually. The PHP version for a site can be registered through PHP Manager.

Deploy Fusio Files on IIS

Decide which folder you wish to install the application in, e.g. C:\webapps\fusio.localhost.local\ (with a cache folder at e.g. C:\webapps\fusio.localhost.local\www\cache).

Download the release and unzip it into the target folder, e.g. C:\webapps\fusio.localhost.local\

Permissions

Ensure that the user account IUSR has read permissions on this folder and all subfolders. Ensure that the user account IUSR has read/write permissions on the cache folder; a command-line sketch follows in the xDebug section below.

Configuring IIS for Local Development

Many of the IIS settings apply to a production IIS implementation as well.

- Setting up a Local Domain
- Adding a Website to IIS
- Confirm PHP on IIS Website
- Confirm Request Methods
- Setting Up Url Rewrite
- Debugging with xDebug

Setting Up a Local Domain

For local development, update your hosts file to point to your local IIS web server. Pick a local domain name that ends in .local, as browsers will complain that the site is not secure if you choose a domain ending in any other suffix.

Add the following entry in the hosts file located at C:\Windows\System32\drivers\etc\hosts (you will need to have opened an editor as an Administrator, as this is an admin-secured file):

127.0.0.1 fusio.localhost.local

Now the URL will resolve to your local IIS web server.

Adding a Website to IIS

In Internet Information Services Manager, expand the root node, right-click "Sites" and choose "Add Website" to configure a new site. Add a site name of fusio.localhost.local. An application pool of the same name will be created automatically.

Set the physical path to the public folder within your Fusio folder, e.g. C:\webapps\fusio.localhost.local\public

Set the host name to fusio.localhost.local to enable IIS to differentiate between the different requests that it might receive on the one web server.

Optionally, the application pool for the Fusio installation can be configured to bypass the .NET handling: click the "Application Pools" node in IIS Manager, double-click the application pool (e.g. fusio.localhost.local), and set the .NET CLR version to "No managed code".

Confirm PHP on IIS

Ensure that the required PHP version is mapped to your site using PHP Manager.

Confirm Request Methods

Check the request methods available to the web server / Fusio application. Request methods can be set at a web-server-wide and at an application level. At the root node of IIS, or in the website configured for the Fusio application, open the settings "Handler Mappings".

Find the mapping for PHP in the list. It is possible to have multiple versions of PHP mapped, so be sure to choose the one you have configured for your Fusio website. Click "Request Restrictions" and choose the tab "Verbs".

The IIS default does not include all of the request methods required by Fusio. Fusio requires the following request methods to be active: GET, POST, PUT, DELETE.
Add the required verbs, or on a development machine choose "All verbs".

Setting Up URL Rewrite

IIS has a plugin for URL Rewrite which must be installed via the Microsoft Web Platform Installer before it can be configured:

- Install Web Platform Installer
- Install the "Url Rewrite" IIS plugin
- Configure "Url Rewrite" for the Fusio website

Install Web Platform Installer

If not already installed, install IIS Url Rewrite via the Web Platform Installer.

Install "Url Rewrite" IIS plugin

In IIS Manager, search for "url" and click Add to install "URL Rewrite 2.1".

Configure "Url Rewrite" for the Fusio website

Once installed, click "Url Rewrite" in the Fusio website. Url Rewrite rules can be imported from the .htaccess file, configured manually, or set directly in the web.config file in the public folder of the Fusio installation:

- Import Rules from .htaccess
- Manual Url Rewrite Configuration
- Edit the web.config file

Import Rules from .htaccess

Within Url Rewrite, click "Import Rules" in the menu on the right. Click "…" to choose the .htaccess file in the public folder of the Fusio installation: in the Fusio web application folders, choose the "public" folder, then click "Import".

Manual Url Rewrite Configuration

Match with a regular expression and the pattern (.*), add a condition so that requests for existing files are not rewritten, and set the action to Rewrite /index.php/{R:1}.

Edit the web.config file

The file is located in the public folder, e.g. C:\webapps\fusio.localhost.local\public\web.config. It is possible that this file includes other settings; the URL rewrite settings live under the <rewrite> node. A typical front-controller configuration looks like this:

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Fusio" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
          </conditions>
          <action type="Rewrite" url="/index.php/{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

Debugging with XDebug

IIS requires "Non Thread Safe" DLLs as extensions, so when downloading xDebug choose the NTS version.

Copy the required DLL to the ext folder of your PHP folder, e.g. C:\Program Files\PHP\v7.4\ext

Add the xDebug configuration to the php.ini in the PHP folder, e.g. in the file C:\Program Files\PHP\v7.4\php.ini add the line:

zend_extension="C:\Program Files\PHP\v7.4\ext\php_xdebug-2.9.4-7.4-vc15-nts-x86_64.dll"

To activate xDebug for your application, either add the following to your php.ini to activate xDebug for all web applications running on IIS:

xdebug.remote_autostart=1
xdebug.remote_enable=On
xdebug.remote_host=localhost
xdebug.remote_port=9000
xdebug.remote_handler="dbgp"

or add a file ".user.ini" to the web root with this configuration to activate xDebug for an individual application (preferred), e.g. C:\webapps\fusio.localhost.local\.user.ini

This will limit debugging to the Fusio web application. IDE configuration is dependent on the IDE you are using. Set autostart=0 and remote_enable=Off to disable debugging.

Test Installation

Enter fusio.localhost.local into your browser window and you should see the Fusio start page.
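The read and read/write permissions from the Permissions section above can also be granted from an elevated command prompt instead of the Explorer UI. A sketch using icacls, with the same paths as in the examples above:

icacls "C:\webapps\fusio.localhost.local" /grant "IUSR:(OI)(CI)RX" /T
icacls "C:\webapps\fusio.localhost.local\www\cache" /grant "IUSR:(OI)(CI)M" /T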
http://docs.fusio-project.org/docs/installation/iis/
2022-09-25T07:10:17
CC-MAIN-2022-40
1664030334515.14
[array(['/assets/images/add_website-693a91048bea367a7debe6bb2241de51.png', 'add_website'], dtype=object) array(['/assets/images/add_website_detail-11043cc1cdc1bbf34e9929616a5f5021.png', 'add_website_detail'], dtype=object) array(['/assets/images/add_website_edit_application_pool-efeee19252565748af0d863c27aca7d0.png', 'add_website_edit_application_pool'], dtype=object) array(['/assets/images/confirm_php-ded7b21e3ac018f5a0bdb1824e610963.png', 'confirm_php'], dtype=object) array(['/assets/images/confirm_php_manager-d12470a5369e28de4345496ecfa5b313.png', 'confirm_php_manager'], dtype=object) array(['/assets/images/confirm_request_methods-d9fb8fbc06c4e17b85f3109708d2d49f.png', 'confirm_request_methods'], dtype=object) array(['/assets/images/confirm_request_handle_mappings-6075ceb74663567841dd18ff4b985c08.png', 'confirm_request_handle_mappings'], dtype=object) array(['/assets/images/confirm_request_edit_module_mapping-fdbefabaf15a5a150dda5b60fd66f716.png', 'confirm_request_edit_module_mapping'], dtype=object) array(['/assets/images/confirm_request_restrictions-4522abc45d4fc1932fe86b8eec8c9805.png', 'confirm_request_restrictions'], dtype=object) array(['/assets/images/url_rewrite_install-b1276dc9552932649ba9737cbd5c7f95.png', 'url_rewrite_install'], dtype=object) array(['/assets/images/url_rewrite_install_add-8b6c6a0634fc800142dfb0199991fcaa.png', 'url_rewrite_install_add'], dtype=object) array(['/assets/images/url_rewrite_configure-2db85330caf3b59ba0d3b32c5e2382e2.png', 'url_rewrite_configure'], dtype=object) array(['/assets/images/test_installation-1844c825911b3d7d7554616116d12825.png', 'test_installation'], dtype=object) ]
docs.fusio-project.org
Introduction

Thank you very much for choosing our theme. We truly appreciate it and really hope that you'll enjoy our theme! If you like this theme, please support us by rating us 5 stars (How to rate?).

Boomrom Software & IT Solutions WordPress Theme is a powerful, easy to use, mobile friendly, highly customizable, SEO friendly IT solutions and SaaS theme.

Theme features

- Responsive Design
- Retina Ready
- Sticky Header
- Blog Page Layout Option
- Truly One Click Demo Importer
- Powerful Drag and Drop Page Builder (Elementor)
- Powerful Codestar Framework Admin Panel (save $49)
- Contact Form 7

Note: All images are used for preview purposes only. They are not part of the theme and NOT included in the final purchase files.

Need Support?

If you have any questions regarding theme issues, please email [email protected]

Requirements for Boomrom

To use Boomrom, please make sure you are running WordPress 5.1 or higher, PHP 5.6 or higher, and MySQL 5.6 or higher. We have tested it on Mac, Windows and Linux. Besides, please check the recommended server configuration for proper theme functioning:

Minimum server configuration

- PHP version - 5.6 and higher
- MySQL version - 5.6 or higher
- memory_limit - 128M
- max_execution_time - 180
- max_input_time - 60
- upload_max_filesize - 32M

Recommended server configuration

- PHP version - 7.0 and higher
- MySQL version - 5.6 or higher
- memory_limit - 128M
- max_execution_time - 180
- max_input_time - 60
- upload_max_filesize - 32M

Files included in the package

When you purchase our theme from ThemeForest, you need to download the theme package from your ThemeForest account. Navigate to your Downloads tab on ThemeForest and find Boomrom. Click the download button to see the two options. "All files & documentation" contains everything; "Installable WordPress Theme" is just the installable WordPress theme file. Below is a full list of everything that is included when you download the main files, along with a brief description of each item.

- boomrom.zip: main theme file that needs to be uploaded to your host to install the Boomrom theme.
- boomrom-child.zip: the basic child theme file for people who want to customize the theme.
- Documentation: folder that contains the documentation files.
- Demo Data: folder containing demo data files which are exported from the demo site.
- Data: folder containing content files for manually importing demo content.
  - demo-content.xml: the XML file for importing demo content
  - widgets.wie: the file containing the exported widgets
  - settings.json: the file containing the exported settings for the theme

Theme Installation

Once you purchase the theme from ThemeForest, you'll be able to download two file types:

- All Files and Documentation
- Installable WordPress Theme File

Install theme via WordPress Dashboard

- Go to the 'Appearance > Themes' section
- Click 'Add New' and select the 'Upload' option
- Upload the zip file
- All done :)

Install theme via FTP

- Access the files on your server using an FTP program
- Go to the 'wp-content/themes' folder
- Extract the zip file and put the themename-vxx_xx folder there
- Go to the 'WordPress Dashboard > Appearance > Themes' section to activate the theme
- All done :)

Install Plugins

After you install the theme, there'll be a list of suggested and recommended plugins at the top of the WordPress dashboard. If you have already hidden it, you can go to the 'Appearance > Install Plugins' section instead.
Importing Demo Content

After activating the theme, suggested plugins will be listed at the top of the dashboard (if they aren't, you can go to the 'Appearance > Install Plugins' section as well). Try installing and activating these plugins, as this affects the importing process. You can also install all suggested plugins at this step.

Note: All of your old data will be removed if you use this function.

Then, go to 'Boomrom > Import Demo Data > Import Demo Data'. Follow the steps mentioned on the screen, and you'll get a site like the demo :)

General Layout

- Preloader
- Display Back To Top
- Enable/Disable Smooth Scroll
- You can use custom CSS if you want to override the theme's styles; your customized code is kept in the database, so you can update the theme in the future without losing your changes

Header Settings

- Enable Header Sticky
- Transparent Menu
- Responsive Menu
- Logo Upload
- Menu Style

Page Header Settings

You can set up the page title, sub-title, and background for the page title.

1. Default Page Settings

- Default Page Title
- Default Page Description
- Default Page Background

Blog Settings

You can set up the blog page to display as a grid or masonry layout with a left sidebar, right sidebar, or no sidebar. You can edit the blog sidebar by navigating to Appearance > Widgets > Primary.

Sidebar Settings

You can set up the Sidebar Info.

Typography Settings

You can change the font family (using default fonts or Google Fonts), font size, font color, and heading font size.

Color Scheme

You can change the colors for the overall site, header, and footer.

404 Settings

You can set up the 404 page.

You can assign another page as the homepage by going to Dashboard > Settings > Reading and changing the front page to any page you want.

Footer Link Widget

1. Go to "Appearance > Menus" and create individual menus: "Company", "Resource", "Support".

1. Auto Update using the Envato Market plugin

- Install the Envato Market plugin.
- The theme will be updated to the most recent version.

2. Manual Update Through WordPress Admin Panel

- Navigate to Appearance > Themes and activate another theme to deactivate the Boomrom theme.
- Delete the existing installed theme (old version).
- Unzip the file you just downloaded from ThemeForest and locate the WordPress theme "Boomrom_V1.0.0.zip".
- Still within the Themes section, on the header tab, click on "Install Themes", then on the second header tab click "Upload".
- Click "Browse…" and locate the new theme file boomrom.zip.
- In the "Upgrade existing theme?" option choose "Yes" from the dropdown list.
- Click "Install Now".
- Click "Activate".

Socials Settings

You can set up your social networks.
http://docs.voidcoders.com/boomrom/
2022-09-25T08:07:08
CC-MAIN-2022-40
1664030334515.14
[array(['images/a12.png', None], dtype=object) array(['images/a13.png', None], dtype=object) array(['images/a14.png', None], dtype=object) array(['images/import-sample-data.png', None], dtype=object) array(['images/general-layout.png', None], dtype=object) array(['images/header-option.png', None], dtype=object) array(['images/page-title-setting1.png', None], dtype=object) array(['images/blog-setting.png', None], dtype=object) array(['images/sidebar-info.png', None], dtype=object) array(['images/typography-options.png', None], dtype=object) array(['images/color-scheme.png', None], dtype=object) array(['images/404-setting.png', None], dtype=object) array(['images/homepage-assign.png', None], dtype=object) array(['images/footer-link-menu.png', None], dtype=object) array(['images/theme_update.png', None], dtype=object)]
docs.voidcoders.com
You can integrate any database with Exasol as long as the external source supports a JDBC interface. You need to upload the corresponding JDBC driver into EXAoperation (see Manage JDBC Drivers) and then use the generic JDBC interface for IMPORT and EXPORT. By supporting native interfaces to Exasol and Oracle databases, we achieve even better performance.
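As a rough sketch of what a generic JDBC import might look like (the connection string, credentials, driver name and table names are all placeholders, and the exact DRIVER clause depends on the driver name you configured in EXAoperation):

IMPORT INTO my_schema.customers
FROM JDBC DRIVER='POSTGRESQL'
AT 'jdbc:postgresql://dbserver:5432/sales'
USER 'etl_user' IDENTIFIED BY 'secret'
STATEMENT 'SELECT id, name FROM customers';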
https://docs.exasol.com/db/6.2/microcontent/Resources/MicroContent/LoadingData/load-data-from-external-sources.htm
2022-09-25T07:33:44
CC-MAIN-2022-40
1664030334515.14
[]
docs.exasol.com
EditorPlugin

Used by the editor to extend its functionality.

Description

Plugins are used by the editor to extend functionality. The most common types of plugins are those which edit a given node or resource type, import plugins and export plugins. See also EditorScript to add functions to the editor.

Tutorials

Enumerations

enum CustomControlContainer:

    ...
    CONTAINER_PROJECT_SETTING_TAB_LEFT = 10
    CONTAINER_PROJECT_SETTING_TAB_RIGHT = 11

enum DockSlot:

Method Descriptions

void add_autoload_singleton ( String name, String path )

Adds a script at path to the Autoload list as name.

ToolButton add_control_to_bottom_panel ( Control control, String title )

Adds a control to the bottom panel. Returns a reference to the button added. When your plugin is deactivated, make sure to remove your custom control with remove_control_from_bottom_panel and free it with Node.queue_free.

void add_control_to_container ( CustomControlContainer container, Control control )

Adds a custom control to a container (see CustomControlContainer). When your plugin is deactivated, make sure to remove your custom control with remove_control_from_container and free it with Node.queue_free.

void add_control_to_dock ( int slot, Control control )

Adds the control to a specific dock slot (see DockSlot for options). If the dock is repositioned and as long as the plugin is active, the editor will save the dock position on further sessions. When your plugin is deactivated, make sure to remove your custom control with remove_control_from_docks and free it with Node.queue_free.

void add_custom_type ( String type, String base, Script script, Texture icon )

Adds a custom type, which will appear in the list of nodes or resources. You can use the virtual method handles to check if your custom object is being edited by checking the script or using the is keyword. During run-time, this will be a simple object with a script so this function does not need to be called then.

void add_export_plugin ( EditorExportPlugin plugin )

Registers a new EditorExportPlugin. Export plugins are used to perform tasks when the project is being exported. See add_inspector_plugin for an example of how to register a plugin.

void add_import_plugin ( EditorImportPlugin importer )

Registers a new EditorImportPlugin. Import plugins are used to import custom and unsupported assets as a custom Resource type.

Note: If you want to import custom 3D asset formats use add_scene_import_plugin instead.

See add_inspector_plugin for an example of how to register a plugin.

void add_inspector_plugin ( EditorInspectorPlugin plugin )

Registers a new EditorInspectorPlugin. Inspector plugins are used to extend EditorInspector and provide custom configuration tools for your object's properties.

Note: Always use remove_inspector_plugin to remove the registered EditorInspectorPlugin when your EditorPlugin is disabled to prevent leaks and an unexpected behavior.

const MyInspectorPlugin = preload("res://addons/your_addon/path/to/your/script.gd")
var inspector_plugin = MyInspectorPlugin.new()

func _enter_tree():
    add_inspector_plugin(inspector_plugin)

func _exit_tree():
    remove_inspector_plugin(inspector_plugin)

void add_scene_import_plugin ( EditorSceneImporter scene_importer )

Registers a new EditorSceneImporter. Scene importers are used to import custom 3D asset formats as scenes.

void add_spatial_gizmo_plugin ( EditorSpatialGizmoPlugin plugin )

Registers a new EditorSpatialGizmoPlugin. Gizmo plugins are used to add custom gizmos to the 3D preview viewport for a Spatial. See add_inspector_plugin for an example of how to register a plugin.

void add_tool_menu_item ( String name, Object handler, String callback, Variant ud=null )

Adds a custom menu item to Project > Tools as name that calls callback on an instance of handler with a parameter ud when the user activates it.

void add_tool_submenu_item ( String name, Object submenu )

Adds a custom submenu under Project > Tools > name. submenu should be an object of class PopupMenu. This submenu should be cleaned up using remove_tool_menu_item(name).

bool build ( ) virtual

This method is called when the editor is about to run the project. The plugin can then perform required operations before the project runs. This method must return a boolean. If this method returns false, the project will not run.
The run is aborted immediately, so this also prevents all other plugins' build methods from running.

void clear ( ) virtual

Clear all the state and reset the object being edited to zero. This ensures your plugin does not keep editing a currently existing node, or a node from the wrong scene.

void disable_plugin ( ) virtual

Called by the engine when the user disables the EditorPlugin in the Plugin tab of the project settings window.

void edit ( Object object ) virtual

This function is used for plugins that edit specific object types (nodes or resources). It requests the editor to edit the given object.

void enable_plugin ( ) virtual

Called by the engine when the user enables the EditorPlugin in the Plugin tab of the project settings window.

void forward_canvas_draw_over_viewport ( Control overlay ) virtual

Called by the engine when the 2D editor's viewport is updated. Use the overlay Control for drawing. You can update the viewport manually by calling update_overlays.

func forward_canvas_draw_over_viewport(overlay):
    # Draw a circle at cursor position.
    overlay.draw_circle(overlay.get_local_mouse_position(), 64, Color.white)

func forward_canvas_gui_input(event):
    if event is InputEventMouseMotion:
        # Redraw viewport when cursor is moved.
        update_overlays()
        return true
    return false

void forward_canvas_force_draw_over_viewport ( Control overlay ) virtual

This method is the same as forward_canvas_draw_over_viewport, except it draws on top of everything. Useful when you need an extra layer that shows over anything else. You need to enable calling of this method by using set_force_draw_over_forwarding_enabled.

bool forward_canvas_gui_input ( InputEvent event ) virtual

Called when there is a root node in the current edited scene, handles is implemented and an InputEvent happens in the 2D viewport. Intercepts the InputEvent; if this method returns true, EditorPlugin consumes the event, otherwise it forwards the event to other Editor classes. Example:

# Prevents the InputEvent from reaching other Editor classes
func forward_canvas_gui_input(event):
    var forward = true
    return forward

Must return false in order to forward the InputEvent to other Editor classes. Example:

# Consumes InputEventMouseMotion and forwards other InputEvent types
func forward_canvas_gui_input(event):
    var forward = false
    if event is InputEventMouseMotion:
        forward = true
    return forward

void forward_spatial_draw_over_viewport ( Control overlay ) virtual

Called by the engine when the 3D editor's viewport is updated. Use the overlay Control for drawing. You can update the viewport manually by calling update_overlays.

func forward_spatial_draw_over_viewport(overlay):
    # Draw a circle at cursor position.
    overlay.draw_circle(overlay.get_local_mouse_position(), 64, Color.white)

func forward_spatial_gui_input(camera, event):
    if event is InputEventMouseMotion:
        # Redraw viewport when cursor is moved.
        update_overlays()
        return true
    return false

void forward_spatial_force_draw_over_viewport ( Control overlay ) virtual

This method is the same as forward_spatial_draw_over_viewport, except it draws on top of everything. Useful when you need an extra layer that shows over anything else. You need to enable calling of this method by using set_force_draw_over_forwarding_enabled.

bool forward_spatial_gui_input ( Camera camera, InputEvent event ) virtual

Called when there is a root node in the current edited scene, handles is implemented and an InputEvent happens in the 3D viewport. Intercepts the InputEvent; if this method returns true, EditorPlugin consumes the event, otherwise it forwards the event to other Editor classes. Example:

# Prevents the InputEvent from reaching other Editor classes
func forward_spatial_gui_input(camera, event):
    var forward = true
    return forward

Must return false in order to forward the InputEvent to other Editor classes.
Example:

# Consumes InputEventMouseMotion and forwards other InputEvent types
func forward_spatial_gui_input(camera, event):
    var forward = false
    if event is InputEventMouseMotion:
        forward = true
    return forward

PoolStringArray get_breakpoints ( ) virtual

This is for editors that edit script-based objects. You can return a list of breakpoints in the format (script:line), for example: res://path_to_script.gd:25.

EditorInterface get_editor_interface ( )

Returns the EditorInterface object that gives you control over Godot editor's window and its functionalities.

Texture get_plugin_icon ( ) virtual

Override this method in your plugin to return a Texture in order to give it an icon. For main screen plugins, this appears at the top of the screen, to the right of the "2D", "3D", "Script", and "AssetLib" buttons. Ideally, the plugin icon should be white with a transparent background and 16x16 pixels in size.

func get_plugin_icon():
    # You can use a custom icon:
    return preload("res://addons/my_plugin/my_plugin_icon.svg")
    # Or use a built-in icon:
    return get_editor_interface().get_base_control().get_icon("Node", "EditorIcons")

String get_plugin_name ( ) virtual

Override this method in your plugin to provide the name of the plugin when displayed in the Godot editor. For main screen plugins, this appears at the top of the screen, to the right of the "2D", "3D", "Script", and "AssetLib" buttons.

ScriptCreateDialog get_script_create_dialog ( )

Gets the Editor's dialog used for making scripts.

Note: Users can configure it before use.

Warning: Removing and freeing this node will render a part of the editor useless and may cause a crash.

Dictionary get_state ( ) virtual

Override this method to provide a state data you want to be saved, like view position, grid settings, folding, etc. This is used when saving the scene (so state is kept when opening it again) and for switching tabs (so state can be restored when the tab returns). This data is automatically saved for each scene in an editstate file in the editor metadata folder. If you want to store global (scene-independent) editor data for your plugin, you can use get_window_layout instead. Use set_state to restore your saved state.

Note: This method should not be used to save important settings that should persist with the project.

Note: You must implement get_plugin_name for the state to be stored and restored correctly.

func get_state():
    var state = {"zoom": zoom, "preferred_color": my_color}
    return state

UndoRedo get_undo_redo ( )

Gets the undo/redo object. Most actions in the editor can be undoable, so use this object to make sure this happens when it's worth it.

void get_window_layout ( ConfigFile layout ) virtual

Override this method to provide the GUI layout of the plugin or any other data you want to be stored. This is used to save the project's editor layout when queue_save_layout is called or the editor layout was changed (for example changing the position of a dock). The data is stored in the editor_layout.cfg file in the editor metadata directory. Use set_window_layout to restore your saved layout.

func get_window_layout(configuration):
    configuration.set_value("MyPlugin", "window_position", $Window.position)
    configuration.set_value("MyPlugin", "icon_color", $Icon.modulate)

bool has_main_screen ( ) virtual

Returns true if this is a main screen editor plugin (it goes in the workspace selector together with 2D, 3D, Script and AssetLib).

void hide_bottom_panel ( )

Minimizes the bottom panel.

void make_bottom_panel_item_visible ( Control item )

Makes a specific item in the bottom panel visible.

void remove_autoload_singleton ( String name )

Removes an Autoload name from the list.

void remove_control_from_bottom_panel ( Control control )

Removes the control from the bottom panel. You have to manually Node.queue_free the control.
void remove_control_from_container ( CustomControlContainer container, Control control )

Removes the control from the specified container. You have to manually Node.queue_free the control.

void remove_control_from_docks ( Control control )

Removes the control from the dock. You have to manually Node.queue_free the control.

void remove_custom_type ( String type )

Removes a custom type added by add_custom_type.

void remove_export_plugin ( EditorExportPlugin plugin )

Removes an export plugin registered by add_export_plugin.

void remove_import_plugin ( EditorImportPlugin importer )

Removes an import plugin registered by add_import_plugin.

void remove_inspector_plugin ( EditorInspectorPlugin plugin )

Removes an inspector plugin registered by add_inspector_plugin.

void remove_scene_import_plugin ( EditorSceneImporter scene_importer )

Removes a scene importer registered by add_scene_import_plugin.

void remove_spatial_gizmo_plugin ( EditorSpatialGizmoPlugin plugin )

Removes a gizmo plugin registered by add_spatial_gizmo_plugin.

void remove_tool_menu_item ( String name )

Removes a menu name from Project > Tools.

void save_external_data ( ) virtual

This method is called after the editor saves the project or when it's closed. It asks the plugin to save edited external scenes/resources.

void set_force_draw_over_forwarding_enabled ( )

Enables calling of forward_canvas_force_draw_over_viewport for the 2D editor and forward_spatial_force_draw_over_viewport for the 3D editor when their viewports are updated. You need to call this method only once and it will work permanently for this plugin.

void set_state ( Dictionary state ) virtual

Restore the state saved by get_state. This method is called when the current scene tab is changed in the editor.

Note: Your plugin must implement get_plugin_name, otherwise it will not be recognized and this method will not be called.

func set_state(data):
    zoom = data.get("zoom", 1.0)
    preferred_color = data.get("my_color", Color.white)

void set_window_layout ( ConfigFile layout ) virtual

Restore the plugin GUI layout and data saved by get_window_layout. This method is called for every plugin on editor startup. Use the provided configuration file to read your saved data.

func set_window_layout(configuration):
    $Window.position = configuration.get_value("MyPlugin", "window_position", Vector2())
    $Icon.modulate = configuration.get_value("MyPlugin", "icon_color", Color.white)

int update_overlays ( )

Updates the overlays of the 2D and 3D editor viewports. Causes the methods forward_canvas_draw_over_viewport, forward_canvas_force_draw_over_viewport, forward_spatial_draw_over_viewport and forward_spatial_force_draw_over_viewport to be called.
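As a minimal sketch of how these pieces fit together (the menu item name and callback are illustrative), a plugin script might register and clean up a tool menu item like this:

tool
extends EditorPlugin

func _enter_tree():
    # Called when the plugin is activated: register the menu item.
    add_tool_menu_item("Say Hello", self, "_on_hello", null)

func _exit_tree():
    # Called when the plugin is deactivated: undo everything done above.
    remove_tool_menu_item("Say Hello")

func _on_hello(ud):
    print("Hello from the plugin")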
https://docs.godotengine.org/en/stable/classes/class_editorplugin.html
2022-09-25T08:46:08
CC-MAIN-2022-40
1664030334515.14
[]
docs.godotengine.org
Use Ephemeral volumes

Generic Ephemeral volumes

Generic ephemeral volumes are a newer feature and are in beta (enabled by default) as of Kubernetes 1.21. These volumes also work with typical storage operations such as snapshotting, cloning, resizing, and storage capacity tracking. The following steps will allow you to create a generic ephemeral volume with the Portworx CSI Driver.

Create a StorageClass spec that uses the Portworx CSI provisioner in a yaml file named portworx-sc.yaml, for example:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: portworx-csi-sc
provisioner: pxd.portworx.com
parameters:
  repl: "1"

Apply the portworx-sc.yaml spec to create the Portworx CSI Driver StorageClass:

kubectl apply -f portworx-sc.yaml

Create a pod spec that uses the Portworx CSI Driver StorageClass, declaring the ephemeral volume, in a yaml file named ephemeral-volume-pod.yaml (the container image, command and mount path below are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-volume-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /data
          name: scratch-volume
  volumes:
    - name: scratch-volume
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: "portworx-csi-sc"
            resources:
              requests:
                storage: 1Gi

Apply the ephemeral-volume-pod.yaml spec to create the pod with a generic ephemeral volume:

kubectl apply -f ephemeral-volume-pod.yaml

Migration to CSI PVCs

Currently, you cannot migrate or convert PVCs created using the native Kubernetes driver to the CSI driver. However, this is not required and both types of PVCs can co-exist on the same cluster.

Contribute

Portworx, Inc. welcomes contributions to its CSI implementation, which is open-source; the repository is at OpenStorage. In addition, we also encourage contributions to the Kubernetes-CSI open source implementation.
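To confirm the ephemeral volume was provisioned, check the pod and its automatically created PVC. Kubernetes names the PVC after the pod and the volume, so with the spec above it should appear as ephemeral-volume-pod-scratch-volume:

kubectl get pod ephemeral-volume-pod
kubectl get pvc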
https://docs.portworx.com/operations/operate-kubernetes/storage-operations/csi/ephemeral/
2022-09-25T08:11:11
CC-MAIN-2022-40
1664030334515.14
[]
docs.portworx.com
Font Manager Panel

Use your operating system's fonts in your embedded GUI as well. LVGL applies UTF-8 encoding to display Unicode characters of any language. Here, you can generate a new font for your GUI project. With the font converter tool, you can create a C array from any TTF or WOFF font. You can select ranges of Unicode characters and specify the bpp (bits per pixel).

Create New Font

You can create a new font in the Create New Font section. Before creating one, you should copy at least one font into the Assets/Fonts folder of the project. You can generate a new font only from those that are listed in the folder.

Font Name

Here, you can name your font.

Select Font Asset

You can select one of the fonts listed in the Assets/Fonts folder from a drop-down menu.

Font Size

You can define the size of the font.

Bpp

You can define the anti-aliasing depth of letter edges in bits per pixel.

Letters

Letters to include in the generated font can be selected from an ASCII character list. By default, only the selected letters are included in the generated font.

Range

You can also define custom letter ranges, meaning the ranges and/or characters you would like to include, e.g. 0x20-0x7F, 0x200, 450.

Symbols

List of characters to include. E.g. ABC0123ÁÉŐ

Font compression

Compression reduces size but results in slower rendering. Compression is more effective with larger fonts and higher bpp. However, it's about 30% slower to render compressed fonts. Therefore, it's recommended to compress only the largest fonts of a user interface, because:

- they need the most memory,
- they can be compressed better,
- and probably they are used less frequently than the medium-sized fonts, so the performance cost is smaller.

Learn more in the Documentation of LVGL.

Horizontal subpixel rendering

Subpixel rendering allows for tripling the horizontal resolution by rendering anti-aliased edges on the red, green and blue channels instead of at pixel-level granularity. This takes advantage of the position of the physical color channels of each pixel, resulting in higher-quality letter anti-aliasing. Subpixel rendering works only if the color channels of the pixels have a horizontal layout, that is, the R, G, B channels are next to each other and not above each other. Learn more in the Documentation of LVGL.

Once you have specified all parameters, you can create your font by clicking the Create Font button.

Created Fonts

You can find your fonts here, and you can modify or delete them as well.
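Once generated, the font is used from application code like any other LVGL font. A minimal sketch, assuming the converter produced a font named my_font_20 and LVGL v8 or later is used:

#include "lvgl.h"

LV_FONT_DECLARE(my_font_20);  /* declared in the generated C file */

void apply_font(lv_obj_t *label)
{
    /* Use the generated font for the label's main part */
    lv_obj_set_style_text_font(label, &my_font_20, LV_PART_MAIN);
}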
https://docs.squareline.io/docs/dev_env/fontmanager/
2022-09-25T07:48:18
CC-MAIN-2022-40
1664030334515.14
[]
docs.squareline.io
Miscellaneous

This tab contains all settings which are difficult to put in a category.

Cache DCE frontend plugin

This option activates or deactivates caching of the DCE in the frontend. Every DCE that should be available in the frontend must be initialized in the localconf.php by calling the method ExtensionUtility::configurePlugin(). This option determines whether the showAction of the DceController is registered as cached or non-cached.

Direct output

With this option enabled you bypass css_styled_content or fluid_styled_content. Instead of using lib.contentElement, the DCE controller action is used directly. This brings a significant performance boost and removes any wrappings defined by e.g. fluid_styled_content (<div id="c123" ...>). This option is enabled by default, separately for each DCE.

Disables the "div.csc-default" wrapping

Only available when EXT:css_styled_content is installed. This option disables the wrapping of the content element with the <div class="csc-default" />, which can sometimes be necessary.

Enable access tab in backend

If this option is activated, a tab with the access rights is shown in the backend. Here you can define in detail when the DCE is to be shown and who is allowed to see it. When this checkbox is enabled, the enabled fields disappear from palette fields automatically, if set.

Enable media tab in backend

This option is only available when EXT:fluid_styled_content is installed. If this option is activated, a tab with a media (FAL) field is shown in the backend. You can access the {contentObject.assets} or {contentObject.media} variable in the Fluid template. It contains an array of \TYPO3\CMS\Core\Resource\FileReference.

Enable categories tab in backend

If this option is activated, a tab with a category picker is shown in the backend. You can access the {contentObject.categories} variable in the Fluid template. It contains an array of \TYPO3\CMS\Extbase\Domain\Model\Category.

DCE palette fields

This is a list of fields which should be shown in the head area of this DCE in the backend. The default value is this: sys_language_uid, l18n_parent, colPos, spaceBefore, spaceAfter, section_frame, hidden

Prevent header copy suffix

If this checkbox is checked (enabled by default), a copied tt_content record based on this DCE will not have "Copy (1)" appended to its header; it uses the header contents of the original content record.

Fluid layout and partial root path

Layouts and partials are a part of Fluid templates and are used to avoid redundancies and keep the code cleaner. Here you can define the Fluid template folders where the layouts and the partials are located. With TypoScript you can also set up multiple folder paths for layouts and partials with priority order.
https://docs.typo3.org/p/t3/dce/main/en-us/UsersManual/Miscellaneous.html
2022-09-25T09:11:15
CC-MAIN-2022-40
1664030334515.14
[array(['../_images/misc.png', 'Miscellaneous settings'], dtype=object)]
docs.typo3.org
a!toJson( value, removeNullOrEmptyFields )

Converts a value into a JSON string. The value parameter must be a CDT, a dictionary, a map, a record, or a list.

The removeNullOrEmptyFields parameter removes all fields with values that are null, empty strings, or empty arrays from the generated JSON request body. This is important for certain web services, such as those that follow the OData protocol. Some web services treat fields with null values differently from fields that aren't included in the request body at all. Omitting a field may mean "don't modify the field," while sending a null value for that field would mean "write a null value to the field."

The behavior of removeNullOrEmptyFields can also be leveraged for integrations that send a JSON request body by selecting the checkbox labeled "Remove fields with null or empty values from generated JSON".
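A minimal sketch of the behavior described above (the map and its values are illustrative):

a!toJson(
  value: a!map(name: "Ada", email: null, tags: {}),
  removeNullOrEmptyFields: true
)

With removeNullOrEmptyFields set to true, this should produce {"name":"Ada"}; with the parameter omitted or false, the null and empty fields are kept in the output.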
https://docs.appian.com/suite/help/21.4/fnc_system_a_tojson.html
2022-09-25T07:40:56
CC-MAIN-2022-40
1664030334515.14
[]
docs.appian.com
Authentication

Overview

When visiting a web-page or requesting an HTTP service, one must often authenticate to prove the identity of the person (or agent) making the request. There are multiple techniques for authentication.

Background

Identity

Identities in CiviCRM are defined in two ways:

- CMS User
- CiviCRM Contact

Most CMS users have a corresponding CiviCRM contact record; however, the relationship is not a strict 1-1. Here are a few permutations to consider:

- When a CMS user has been authenticated, this will be flagged using the CMS's preferred API. (Thus, on D7, global $account refers to the authenticated user. On WP, wp_get_current_user() refers to the authenticated user.)
- When a CiviCRM contact has been authenticated, this will be flagged in CRM_Core_Session as the userID.

Browser Authentication

CiviCRM is a browser-based application that integrates with other browser-based applications (Drupal, WordPress, Joomla, Backdrop, etc). The most familiar authentication technique is to login through the web-browser. For example, the user may navigate to Drupal's /user/login or WordPress's /wp-admin/, fill in credentials, and initialize an authenticated session. Subsequently, the user may request any CiviCRM web-page, and it recognizes their identity. In this way, CiviCRM can be a chameleon, adapting to whichever web-based authentication process is available on the local deployment.

Service Authentication

CiviCRM also acts as a web-service provider. It offers APIv3 REST and APIv4 REST to remote applications (mobile apps, enterprise integrations, and so on). The core extension "Authx" (v5.36+) defines a portable protocol for CiviCRM service authentication. Key features:

- Support consistent authentication protocols, regardless of the CMS (Drupal, WordPress, Joomla, Backdrop)
- Integrate with CMS authentication, ensuring that Civi contact records are matched with CMS user records
- Accept multiple types of credentials (username/password, API key, JSON Web Token)
- Accept credentials through different data-flows (HTTP headers, HTTP parameters, login forms)

The remainder of this chapter explores portable authentication options for web services.

Credentials

TIP: The site administrator may enable or disable support for certain types of credentials. See also: Settings

To authenticate, one presents a credential to prove their identity. Here are a few common types of credentials:

- Username/Password: The most familiar type of credential, consisting of a username and a secret password.
- API Key: A unique, random code assigned for this person or agent. It may be used indefinitely -- until someone deletes/revokes/replaces it. (See also: Sysadmin Guide: Setup: API Keys)
- JSON Web Token (JWT): A dynamic, digitally-signed token which identifies this person. A JWT does not require any persistent storage, and it will expire automatically.

How do you generate a JSON web token? If you are developing a patch or extension for CiviCRM, then you may generate a sign-in token as follows:

$token = Civi::service('crypto.jwt')->encode([
  'exp' => time() + 5*60,  // Expires in 5 minutes
  'sub' => 'cid:203',      // Subject (contact ID)
  'scope' => 'authx',      // Allow general authentication
]);

This example $token will be valid for login during the next five minutes.

Flows

There are two general dataflows for HTTP authentication:

- Stateless / Ephemeral: The client submits a singular HTTP request (such as an API call) which includes credentials. The request is authenticated and processed, and then it is forgotten.
- Stateful / Persistent / Session: The client makes a request for a persistent session, attaching the contact ID and/or user ID. These will be used in subsequent requests.
For each general flow, we have a few variations. What is <credential>? In all cases, the <credential> is formatted per RFC-7617 ("Basic") or RFC-6750 ("Bearer"). For example:
- Common Header: Authorization: Basic dXNlcjpwYXNz or Authorization: Bearer ZYXW9876
- X-Header: X-Civi-Auth: Basic dXNlcjpwYXNz or X-Civi-Auth: Bearer ZYXW9876
- Parameter: ?_authx=Basic+dXNlcjpwYXNz or ?_authx=Bearer+ZYXW9876
What's the difference between Common Header and X-Header? The common Authorization: header is easier to integrate with external applications. However, this could be a double-edged sword; some environments may have extra or conflicting rules in how they handle Authorization:. The custom header X-Civi-Auth: may require a little extra work for the downstream application. However, it is likely to bypass any middleware conflicts. How do I test authentication? You may send a request to /civicrm/authx/id (here using the parameter flow with a bearer credential, and example.com as a placeholder host), e.g.
curl -v 'https://example.com/civicrm/authx/id?_authx=Bearer+ZYXW9876'
If successful, it will respond with a JSON document describing the authenticated user:
> HTTP/1.1 200 OK
> Content-Type: application/json
{"contact_id":"202","user_id":"1","flow":"param","cred":"bearer"}
How do I manage an explicit session? To login, submit the credential to /civicrm/authx/login. For example, this will login with username and password:
curl -v -X POST 'https://example.com/civicrm/authx/login' -d '_authx=Basic+dXNlcjpwYXNz'
If successful, it will respond with a session cookie and a summary of the logged-in user:
> HTTP/1.1 200 OK
> Set-Cookie: SESS1234=abcd1234; expires=Sat, 08-May-2021 12:28:19 GMT; Max-Age=2000000; path=/; domain=.example.com; HttpOnly
> Content-Type: application/json
{"contact_id":"202","user_id":"1","flow":"login","cred":"pass"}
You may now request any HTTP resources. Simply pass the Cookie in each HTTP request. To logout, send a request to /civicrm/authx/logout. Guards¶ TIP: The site administrator may enable or disable support for certain types of guards. See also: Settings A guard limits access to AuthX authentication services. By default, a user who presents a password or API key will only be authenticated if they can pass one of these guards:
- Permission: Check if the user has the permission authenticate with password or authenticate with api key.
- Site Key: Check if the user knows the secret site key.
Why support both guards? Traditional dataflows using APIv3 REST required the site-key as a guard. However, this is difficult to administer. The permission-guard is a more manageable alternative. Of course, we do not want to break backward-compatibility. Do guards really prevent misuse? Sort of. Strictly, no. Regardless of how the user authenticates, the user will end up with the same permissions, so they can access the same records/actions/APIs/screens. Whether the user authenticates through a CMS login-form or AuthX, the upper-bound of malicious behavior is the same. But there are still reasons why an administrator might limit access to AuthX, e.g.
- They only wish to allow skilled/administrative users to make connections with off-the-shelf applications.
- They wish to impose tighter rules (e.g. stricter password requirements) on users who have AuthX access.
Settings¶ For each authentication flow, one may toggle support for different credentials and user-links. Here is the default configuration: Example: Relax all settings to permit testing The default settings are fairly restrictive.
If you will be doing testing or development, then you might relax a number of settings: cv ev 'Civi::settings()->set("authx_guards", []);' cv ev 'Civi::settings()->set("authx_param_cred", ["jwt", "api_key", "pass"]);' cv ev 'Civi::settings()->set("authx_header_cred", ["jwt", "api_key", "pass"]);' cv ev 'Civi::settings()->set("authx_xheader_cred", ["jwt", "api_key", "pass"]);' cv ev 'Civi::settings()->set("authx_login_cred", ["jwt", "api_key", "pass"]);' cv ev 'Civi::settings()->set("authx_auto_cred", ["jwt", "api_key", "pass"]);'
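With those settings relaxed, a quick end-to-end check of the stateless flow can reuse the forms shown earlier; for instance (the example.com host and the ZYXW9876 key are the same placeholder values used above):
curl -v 'https://example.com/civicrm/authx/id' -H 'X-Civi-Auth: Bearer ZYXW9876'
If the credential is accepted and the guards pass, this returns the same JSON identity document shown in the testing section.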
https://docs.civicrm.org/dev/en/latest/framework/authx/
2022-09-25T08:38:28
CC-MAIN-2022-40
1664030334515.14
[]
docs.civicrm.org
Building EMOD source code from CentOS Docker images on Linux host machine¶ These steps walk you through building the EMOD source code from CentOS Docker images on a Linux host machine running CentOS 7.7. Linux host machine¶ To download the Docker image to your Linux host machine, type the following at the command line prompt:
docker pull docker-production.packages.idmod.org/idm/centos:dtk-build
Run Docker container from Linux host machine¶ To run the Docker image from your Linux host machine, type the following at the command line prompt:
docker run -it --user `id -u $USER`:`id -g $USER` docker-production.packages.idmod.org/idm/centos:dtk-build
bash-4.2$
To then build the EMOD executable, Eradication.exe, move to the /EMOD directory:
cd /EMOD
This directory contains the necessary build script and files. Build binary executable from Docker image running Linux CentOS 7.7 within Linux host machine¶ To build a binary executable, run the "scons" script. For more information about the build script options, type "scons --help" from within the /EMOD directory. A successful build ends with version information similar to:
luser from master(2d8a9f2) checked in on 2019-04-05 15:49:43 -0700 Supports sim_types: GENERIC.
You can then use this executable for running simulations. For more information, see Run a simulation using the command line.
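For example, a minimal build session inside the container might look like the following sketch (default options; the exact flags and the location of the resulting binary depend on your configuration, so check scons --help first):
cd /EMOD
scons    # build with default options; add flags from `scons --help` as needed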
https://docs.idmod.org/projects/emod-vector/en/2.21_a/dev-install-centos-docker-linux.html
2022-09-25T07:57:06
CC-MAIN-2022-40
1664030334515.14
[]
docs.idmod.org
Write data with developer tools Write data to InfluxDB with developer tools.
- Write data with third-party technologies: Write data to InfluxDB using third-party developer tools.
- Write CSV data to InfluxDB: Write CSV data with the influx write command or Flux. Include annotations with the CSV data to determine how the data translates into line protocol.
- Write data with client libraries: Use client libraries to write data to InfluxDB.
- Scrape Prometheus metrics: Use Telegraf, InfluxDB scrapers, or the prometheus.scrape Flux function to scrape Prometheus-formatted metrics from an HTTP-accessible endpoint and store them in InfluxDB.
- Write data with the influx CLI: Use the influx write command to write data to InfluxDB from the command line.
- Write data with the InfluxDB API: Use the /api/v2/write InfluxDB API endpoint.
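As an illustration of the CLI route (a sketch; the bucket, org, and token values are placeholders), a single line-protocol point can be written like this:
influx write --bucket my-bucket --org my-org --token $INFLUX_TOKEN 'home,room=kitchen temp=22.5'
The same line-protocol payload could instead be POSTed to the /api/v2/write endpoint.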
https://docs.influxdata.com/influxdb/v2.4/write-data/developer-tools/
2022-09-25T09:07:18
CC-MAIN-2022-40
1664030334515.14
[]
docs.influxdata.com
The Text Data Bank tool extracts values from any text content, including plain text, HTML, XML, etc., and makes the values available as parameters in other tools and configurations. Sections include: Understanding Text Data Bank The Text Data Bank can extract values from any text content (including plain text, HTML, XML, etc.). You can specify left- and right-hand boundaries that mark the beginning and end of the extracted text; the characters between the values specified in the left- and right-hand fields are extracted. You can specify which MIME Types are considered to be "text" by opening Parasoft> Preferences> MIME Types and selecting the "Text" option for the appropriate MIME Types. Adding a Text Data Bank Tool - Right-click on a tool and choose Add Output... You can also right-click on the scenario folder and choose Add New > Test... to add a standalone instance of the tool. A standalone instance enables you to extract data from a file that is being dynamically updated or created for every tool run. - Choose a traffic source type in the left-side panel. For example, if you want to extract a part of the response header, choose Response > Transport Header in SOAtest or Outgoing Response > Payload in Virtualize. - Choose Text Data Bank and click Finish. The data bank will be attached to the test or added to the suite. See Configuring the Text Data Bank Tool for next steps. Configuring the Text Data Bank If you added the tool as an output of an existing tool, configure the fields under Tool Settings to extract text data. Standalone instances of the tool will have an Input tab for specifying the source from which you want to extract content, as well as a Tool Settings tab for configuring extractions. Tool Settings - Enter the source content from which you want to extract into the Text Content field. If you added the Text Data Bank as an output, you can run the parent tool to populate the field. If you added the tool as a standalone tool, click on the Input tab to specify the file from which you want to extract content. - Select the text inside the Text Content field that you want to extract and click Create Extraction from Selection. You can also click Add to manually add and configure an extraction. See Configuring Extractions for details about manually configuring extraction settings. - You can change the default description (optional) and click OK to continue. The data bank can be used as is, but more complex scenarios may require additional configuration. The value may change during subsequent tool runs, for example, so the specific value may not always be extracted. See Configuring Extractions for additional configuration information. Configuring Extractions - Select the extraction and click Modify to open the tool configuration overlay. - Enter a description for the extraction in the Description field. The description is for identification purposes only and does not need to be unique. - Specify a name for the column holding the data being extracted in the Column Name field. The column refers to the fields in the Data Source Column Name column of the extractions table. Column names should be unique. Extractions with the same column name will be overwritten in order from first to last row in the extractions table. - Specify the start of the extraction in the Left-hand text field and the end of the extraction in the Right-hand text field. Characters that appear between the patterns specified in these fields will be extracted from the Text Content field.
You can enable the Regular expression options for each field to programmatically specify the patterns. Using regular expressions enables the tool to correctly extract characters even if the Text Content field changes, to set end-of-line characters, etc. In the following example, the end of the extraction is marked with a regular expression that matches a carriage return or semicolon. For instance, given the content id=12345;name=alice, a left-hand boundary of id= and a right-hand regular expression of \r|; would extract 12345. - If the extracted text is to be used for a URL, choose URL Encoded from the Extract Option menu. - Click OK to save your changes. Options Enable the Remove tabs and newline characters before processing option to normalize whitespace in extracted text. This option is enabled by default. Viewing the Data Bank Variables Used During Tool Execution You can configure the Console view (Window> Show View> Console) to display the data bank variables used during tool execution. For details, see Monitoring Variable Usage.
https://docs.parasoft.com/pages/?pageId=77035247&sortBy=createddate
2022-09-25T08:56:03
CC-MAIN-2022-40
1664030334515.14
[]
docs.parasoft.com
Rocky Linux Guides¶ Welcome to the Guides section of the Rocky Linux documentation. You will find a host of "how-to" documents and much more here. This section is changing all the time. There are also some longer document groups that can be found in "Books", as well as future planned educational "Labs", each of which may be found in the top menu. Most of the categories do not require any explanation. If you want to find out how to help in the ongoing development of Rocky Linux, join the Mattermost Development channel. For those wishing to be involved in documentation, join the Mattermost Documentation channel and join the discussion to find out more. If you are wanting to dive right in, you can install Rocky Linux now! Last updated: November 3, 2021
https://docs.rockylinux.org/pt/guides/
2022-09-25T07:23:31
CC-MAIN-2022-40
1664030334515.14
[]
docs.rockylinux.org
float (C# Reference) The float keyword signifies a simple type that stores 32-bit floating-point values. The following table shows the precision and approximate range for the float type. Literals By default, a real numeric literal on the right side of the assignment operator is treated as double. Therefore, to initialize a float variable, use the suffix f or F, as in the following example: float x = 3.5F; If you do not use the suffix in the previous declaration, you will get a compilation error because you are trying to store a double value into a float variable. Conversions You can mix numeric integral types and floating-point types in an expression; the integral types are then converted to floating-point types. If the expression contains no double values, it evaluates to float (or to bool in relational or Boolean expressions). Example (see the reconstructed listing after the See Also list below) C# Language Specification For more information, see the C# Language Specification. The language specification is the definitive source for C# syntax and usage. See Also Single C# Reference C# Programming Guide Casting and Type Conversions C# Keywords Integral Types Table Built-In Types Table Implicit Numeric Conversions Table Explicit Numeric Conversions Table
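The code for the Example section above did not survive extraction; the following is a minimal illustration consistent with the surrounding text (a sketch, not the original listing):
class FloatTest
{
    static void Main()
    {
        float x = 3.5F;               // the F suffix marks a float literal
        int n = 2;
        float y = x * n;              // the integral operand is converted to float
        System.Console.WriteLine(y);  // output: 7
    }
}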
https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/float
2018-05-20T14:52:18
CC-MAIN-2018-22
1526794863570.21
[]
docs.microsoft.com
Canvas Class Definition
public : class Canvas : Panel, ICanvas
struct winrt::Windows::UI::Xaml::Controls::Canvas : Panel, ICanvas
public class Canvas : Panel, ICanvas
Public Class Canvas Inherits Panel Implements ICanvas
<Canvas ...> oneOrMoreUIElements </Canvas> -or- <Canvas .../>
- Inheritance: Canvas
- Attributes
Canvas defines the attached properties Canvas.Left, Canvas.Top, and Canvas.ZIndex. Examples For example, to change the position of the child element using C#, first define the object inside a Canvas, making sure to include the Canvas.Left and Canvas.Top properties (the attribute values in this XAML were lost in extraction and are reconstructed to match the code-behind below):
<Canvas>
    <Grid x:Name="mySquare" Canvas.Left="0" Canvas.Top="0" Width="100" Height="100">
    </Grid>
</Canvas>
In the code-behind page, you can then access the position of the element, like this:
mySquare.SetValue(Canvas.LeftProperty, 100);
mySquare.SetValue(Canvas.TopProperty, 100);
https://docs.microsoft.com/en-us/uwp/api/Windows.UI.Xaml.Controls.Canvas
2018-05-20T14:04:19
CC-MAIN-2018-22
1526794863570.21
[array(['windows.ui.xaml.controls/images/controls/canvas.png', 'Canvas layout panel'], dtype=object) ]
docs.microsoft.com
- Why You Must Upgrade Your Community Templates in Spring ’16 If your community uses a pre-Winter ’16 Koa, Kokua, or Napili template, or a pre-Spring ’16 Aloha template, we strongly recommend that you update to the latest template version. Not sure what your template version is? Check the Settings area in Community Builder. Customer support for older templates is being discontinued in the Summer ’16 release (May 2016). Also, many powerful Community Builder and template features require the Winter ’16 or Spring ’16 template versions, so if you didn’t upgrade in Winter ’16, now is the time to do it. - We’ve Renamed Some Template Components We know that finding the template components you need should be easy and intuitive. So, we renamed some of the components so that you can easily figure out what they are for. We also grouped the components by type in the Page Editor so that you know where to use them. - Chatter in Community Templates Chatter Lightning components for the community templates make it easy to add full Chatter capabilities to a community. With groups and profiles, you can create a much richer collaboration experience in the community. Individual Lightning components for features such as the feed, publisher, and groups mean that it's easy to completely customize where and how Chatter is integrated into the community. - Files in Communities Using files in Communities is easier and more flexible than ever before! Attach multiple files to a single feed post, upload and select files in a unified flow, and preview files in the new Lightning multi-file preview player. - Community Templates We want your communities to be exactly what you envision. That’s why we're giving you the power to create communities that are awesome. Last release we made community templates extensible, and this release we're making them even more flexible and customizable. You can now determine the level of record detail to include and where to display it. We've made it easier for your community members to navigate in your community, create and work with records, and find the topics they're interested in. - Community Builder Community Builder gives you more control over your pages with the enhanced Page Manager. You also get custom page layouts, an improved page loading experience for members, and better error handling for components. - Community Setup Communities Setup is now available in Lightning Experience! You also get limit increases, a streamlined setup for two-factor authentication, the ability to use Platform Encryption in your community, and much more. - Community Management Managing your community is now even easier than before. You can set up community file limits and now community moderation supports all feed types, special characters in your keyword lists, the API, and much more. To give you more control and flexibility with your community recommendations, you can now specify where your recommendations appear and target specific audiences. - Community Reports and Dashboards Start the drum roll ladies and gents! You can now grant all your role-based external users permission to create and edit reports! You also get new custom report types and you can share Wave dashboards. - Community Insights and Analytics When you install the updated Salesforce Communities Management package from the AppExchange, you’re in store for quite a magic trick. 
Community Management dashboards now display as Lightning dashboards—even if your org isn’t using Lightning Experience. To top it off, you also get new and improved Insights reports. - Other Changes in Communities Other important changes to Communities include a limit increase, access to Notes and Attachments for users with a Customer Community license, more robust field-level security for guest users, and changes to how emails are sent for large community groups.
https://releasenotes.docs.salesforce.com/en-us/spring16/release-notes/rn_networks.htm
2018-05-20T13:52:09
CC-MAIN-2018-22
1526794863570.21
[]
releasenotes.docs.salesforce.com
Unofficial libraries
Python
- mypolr is a Python 3 package for interacting with the Polr 2.0 API. (Documentation)
http://polr.readthedocs.io/en/latest/developer-guide/libraries/
2018-05-20T13:23:32
CC-MAIN-2018-22
1526794863570.21
[]
polr.readthedocs.io
SalesTaxRate Table (AdventureWorks) A lookup table that contains the tax rates applicable to states, provinces, or countries/regions in which Adventure Works Cycles has a local business presence. SalesTaxRate Table Definition The SalesTaxRate table is contained in the Sales schema. See Also Reference SalesOrderHeader Table (AdventureWorks) StateProvince Table (AdventureWorks) Other Resources AdventureWorks Data Dictionary Help and Information Getting SQL Server 2005 Assistance
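To explore the table's contents (a generic sketch; only the Sales schema and the table name come from this page), you can query it directly:
SELECT TOP (10) *
FROM Sales.SalesTaxRate;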
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms124544(v=sql.90)
2018-05-20T14:22:13
CC-MAIN-2018-22
1526794863570.21
[]
docs.microsoft.com
class OESimScore
This class represents a similarity score: it holds a similarity value together with an index that identifies the fingerprint in the OEFPDatabase object from which the score is calculated.
OESimScore(size_t i, double s)
Initializes an OESimScore from an index i and a score s.
size_t GetIdx() const
Returns the index of the fingerprint corresponding to the similarity score.
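A rough usage sketch (the fpdb database object, the GetSortedScores call, and the GetScore() accessor are assumptions based on typical GraphSim TK usage, and are not documented on this page):
// Iterate over the five best-scoring fingerprints for a query fingerprint
for (OEIter<OESimScore> si = fpdb.GetSortedScores(query, 5); si; ++si)
    std::cout << "index: " << si->GetIdx() << "  score: " << si->GetScore() << std::endl;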
https://docs.eyesopen.com/toolkits/cpp/graphsimtk/OEGraphSimClasses/OESimScore.html
2018-05-20T13:50:40
CC-MAIN-2018-22
1526794863570.21
[]
docs.eyesopen.com
Detecting unmanaged configuration changes in stack sets Even as you manage your stacks and the resources they contain through CloudFormation, users can change those resources outside of CloudFormation. Users can edit resources directly by using the underlying service that created the resource. By performing drift detection on a stack set, you can determine if any of the stack instances belonging to that stack set differ, or have drifted, from their expected configuration. How CloudFormation performs drift detection on a stack set When CloudFormation performs drift detection on a stack set, it performs drift detection on the stack associated with each stack instance in the stack set. To do this, CloudFormation compares the current state of each resource in the stack with the expected state of that resource, as defined in the stack's template and any specified input parameters. If the current state of a resource varies from its expected state, that resource is considered to have drifted. If one or more resources in a stack have drifted, then the stack itself is considered to have drifted, and the stack instances that the stack is associated with are considered to have drifted as well. If one or more stack instances in a stack set have drifted, the stack set itself is considered to have drifted. Drift detection identifies unmanaged changes; that is, changes made to stacks outside of CloudFormation. Changes made through CloudFormation to a stack directly, rather than at the stack-set level, aren't considered drift. For example, suppose you have a stack that is associated with a stack instance of a stack set. If you use CloudFormation to update that stack to use a different template, that is not considered drift, even though that stack now has a different template than any other stacks belonging to the stack set. This is because the stack still matches its expected template and parameter configuration in CloudFormation. For detailed information on how CloudFormation performs drift detection on a stack, see Detecting unmanaged configuration changes to stacks and resources. Because CloudFormation performs drift detection on each stack individually, it takes any overridden parameter values into account when determining whether a stack has drifted. For more information on overriding template parameters in stack instances, see Override parameters on stack instances. If you perform drift detection directly on a stack that is associated with a stack instance, those drift results aren't available from the StackSets console page. To detect drift on a stack set using the AWS Management Console Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation/. On the StackSets page, select the stack set on which you want to perform drift detection. From the Actions menu, select Detect drifts. CloudFormation displays an information bar stating that drift detection has been initiated for the selected stack set. Optional: To monitor the progress of the drift detection operation: Select the stack set name to display the StackSet details page. Select the Operations tab, select the drift detection operation, and then select View drift details. CloudFormation displays the Operation details dialog box. Wait until CloudFormation completes the drift detection operation. When the drift detection operation completes, CloudFormation updates Drift status and Last drift check time for your stack set. These fields are listed on the Overview tab of the StackSet details page for the selected stack set. 
The drift detection operation may take some time, depending on the number of stack instances included in the stack set, and the number of resources included in the stack set. You can only run a single drift detection operation on a given stack set at one time. CloudFormation continues the drift detection operation even after you dismiss the information bar. To review the drift detection results for the stack instances in a stack set, select the Stack instances tab. The Stack name column lists the name of the stack associated with each stack instance, and the Drift status column lists the drift status of that stack. A stack is considered to have drifted if one or more of its resources have drifted. To review the drift detection results for the stack associated with a specific stack instance: Note the AWS account, Stack name, and AWS region for the stack instance. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation/. Log into the AWS account containing the stack instance. Select the AWS region containing the stack instance. From the left-hand navigation pane, select Stacks. Select the stack you wish to view, and then select Drifts. CloudFormation displays the Drifts page for the stack associated with the specified stack instance. In the Resource drift status section, CloudFormation lists each stack resource, its drift status, and the last time drift detection was initiated on the resource. The logical ID and physical ID of each resource is displayed to help you identify them. In addition, for resources with a status of MODIFIED, CloudFormation displays resource drift details. You can sort the resources based on their drift status using the Drift status column. To view the details of a modified resource, select it and then select View drift details. CloudFormation displays the drift detail page for that resource. This page lists the resource's expected and current property values, and any differences between the two. To highlight a difference, in the Differences section select the property name. Added properties are highlighted in green in the Current column of the Details section. Deleted properties are highlighted in red in the Expected column of the Details section. Properties whose values have been changed are highlighted in blue in both the Expected and Current columns. To detect drift on a stack set using the AWS CLI To detect drift on an entire stack set using the AWS CLI, use the following aws cloudformation commands: detect-stack-set-drift to initiate a drift detection operation on a stack set. describe-stack-set-operation to monitor the status of the stack drift detection operation. Once the drift detection operation has completed, use the following commands to return the drift information you want: Use describe-stack-set to return detailed information about the stack set, including detailed information about the last completed drift operation performed on the stack set. (Information about drift operations that are in progress isn't included.) Use list-stack-instances to return a list of stack instances belonging to the stack set, including the drift status and last drift time checked of each instance. Use describe-stack-instance to return detailed information about a specific stack instance, including its drift status and last drift time checked. Use detect-stack-set-drift to detect drift on an entire stack set and its associated stack instances. The following example initiates drift detection on the stack set stack-set-drift-example. 
aws cloudformation detect-stack-set-drift --stack-set-name stack-set-drift-example
{
    "OperationId": "c36e44aa-3a83-411a-b503-cb611example"
}
Because stack set drift detection can be a long-running operation, use describe-stack-set-operation to monitor the status of the drift operation. This command takes the stack set operation ID returned by the detect-stack-set-drift command. The following example uses the operation ID from the previous example to return information on the stack set drift detection operation. In this example, the operation is still running. Of the seven stack instances associated with this stack set, one stack instance has already been found to have drifted, two instances are in sync, and drift detection for the remaining four stack instances is still in progress. Because one instance has drifted, the drift status of the stack set itself is now DRIFTED.
aws cloudformation describe-stack-set-operation --stack-set-name stack-set-drift-example --operation-id c36e44aa-3a83-411a-b503-cb611example
{
    "StackSetOperation": {
        "Status": "RUNNING",
        "AdministrationRoleARN": "arn:aws:iam::012345678910:role/AWSCloudFormationStackSetAdministrationRole",
        "OperationPreferences": {
            "RegionOrder": []
        },
        "ExecutionRoleName": "AWSCloudFormationStackSetExecutionRole",
        "StackSetDriftDetectionDetails": {
            "DriftedStackInstancesCount": 1,
            "TotalStackInstancesCount": 7,
            "LastDriftCheckTimestamp": "2019-12-04T20:34:28.543Z",
            "InSyncStackInstancesCount": 2,
            "InProgressStackInstancesCount": 4
        }
    }
}
Performing the same command later, this example shows the information returned once the drift detection operation has completed. Two of the seven total stack instances associated with this stack set have drifted, rendering the drift status of the stack set itself as DRIFTED.
aws cloudformation describe-stack-set-operation --stack-set-name stack-set-drift-example --operation-id c36e44aa-3a83-411a-b503-cb611example
{
    "StackSetOperation": {
        "Status": "SUCCEEDED",
        "AdministrationRoleARN": "arn:aws:iam::012345678910:role/AWSCloudFormationStackSetAdministrationRole",
        "OperationPreferences": {
            "RegionOrder": []
        },
        "ExecutionRoleName": "AWSCloudFormationStackSetExecutionRole",
        "EndTimestamp": "2019-12-04T20:37:32.829Z",
        "StackSetDriftDetectionDetails": {
            "DriftedStackInstancesCount": 2,
            "TotalStackInstancesCount": 7,
            "LastDriftCheckTimestamp": "2019-12-04T20:36:55.612Z",
            "InSyncStackInstancesCount": 5,
            "InProgressStackInstancesCount": 0
        }
    }
}
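Since the operation must be polled until its Status leaves RUNNING, a small shell wrapper can automate the wait (a sketch reusing the names above; adjust the sleep interval as needed):
OP_ID=$(aws cloudformation detect-stack-set-drift \
  --stack-set-name stack-set-drift-example \
  --query 'OperationId' --output text)
STATUS=RUNNING
while [ "$STATUS" = "RUNNING" ]; do
  sleep 30
  STATUS=$(aws cloudformation describe-stack-set-operation \
    --stack-set-name stack-set-drift-example \
    --operation-id "$OP_ID" \
    --query 'StackSetOperation.Status' --output text)
done
echo "Drift detection finished with status: $STATUS"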
aws cloudformation describe-stack-set --stack-set-name stack-set-drift-example
{
    "StackSet": {
        "Status": "ACTIVE",
        "Description": "Demonstration of drift detection on stack sets.",
        "Parameters": [],
        "Tags": [
            {
                "Value": "Drift detection",
                "Key": "Feature"
            }
        ],
        "ExecutionRoleName": "AWSCloudFormationStackSetExecutionRole",
        "Capabilities": [],
        "AdministrationRoleARN": "arn:aws:iam::012345678910:role/AWSCloudFormationStackSetAdministrationRole",
        "StackSetDriftDetectionDetails": {
            "DriftedStackInstancesCount": 2,
            "TotalStackInstancesCount": 7,
            "LastDriftCheckTimestamp": "2019-12-04T20:36:55.612Z",
            "InProgressStackInstancesCount": 0,
            "DriftStatus": "DRIFTED",
            "DriftDetectionStatus": "COMPLETED",
            "InSyncStackInstancesCount": 5,
            "FailedStackInstancesCount": 0
        },
        "StackSetARN": "arn:aws:cloudformation:us-east-1:012345678910:stackset/stack-set-drift-example:bd1f4017-d4f9-432e-a73f-8c22example",
        "TemplateBody": [details omitted],
        "StackSetId": "stack-set-drift-example:bd1f4017-d4f9-432e-a73f-8c22ebexample",
        "StackSetName": "stack-set-drift-example"
    }
}
You can use the list-stack-instances command to return summary information about the stack instances associated with a stack set, including the drift status of each stack instance. In this example, executing list-stack-instances on the example stack set enables us to identify which two stack instances have a drift status of DRIFTED. (The fields of the third entry below were damaged in extraction; they are restored here to match the describe-stack-instance example that follows.)
aws cloudformation list-stack-instances --stack-set-name stack-set-drift-example
{
    "Summaries": [
        {
            "StackId": "arn:aws:cloudformation:ap-northeast-1:012345678910:stack/StackSet-stack-set-drift-example-29168cdd-e587-4709-8a1f-90f752ec65e1/1a8a98f0-16d4-11ea-9844-060a5example",
            "Status": "CURRENT",
            "Account": "012345678910",
            "Region": "ap-northeast-1",
            "LastDriftCheckTimestamp": "2019-12-04T20:36:18.481Z",
            "DriftStatus": "IN_SYNC",
            "StackSetId": "stack-set-drift-example:bd1f4017-d4f9-432e-a73f-8c22eexample"
        },
        {
            "StackId": "arn:aws:cloudformation:eu-west-1:012345678910:stack/StackSet-stack-set-drift-example-b0fb6083-60c0-4e39-af15-2f071e0db90c/0e4f0940-16d4-11ea-93d8-0641cexample",
            "Status": "CURRENT",
            "Account": "012345678910",
            "Region": "eu-west-1",
            "LastDriftCheckTimestamp": "2019-12-04T20:37:32.687Z",
            "DriftStatus": "DRIFTED",
            "StackSetId": "stack-set-drift-example:bd1f4017-d4f9-432e-a73f-8c22eexample"
        },
        {
            "StackId": "arn:aws:cloudformation:us-east-1:012345678910:stack/StackSet-stack-set-drift-example-.../...",
            "Status": "CURRENT",
            "Account": "012345678910",
            "Region": "us-east-1",
            "LastDriftCheckTimestamp": "2019-12-04T20:34:28.275Z",
            "DriftStatus": "DRIFTED",
            "StackSetId": "stack-set-drift-example:bd1f4017-d4f9-432e-a73f-8c22eexample"
        },
        [additional stack instances omitted]
    ]
}
The describe-stack-instance command also returns this information, but for a single stack instance, as in the example below.
The following example uses the stack ID of one of the drifted stacks, returned by the list-stack-instancescommand in the example above, to return detailed information about the resources that have been modified or deleted outside of CloudFormation. In this stack, two properties on an AWS::SQS::Queueresource, DelaySecondsand maxReceiveCount, have been modified. aws cloudformation describe-stack-resource-drifts --stack-name --stack-resource-drift-status-filters "MODIFIED" "DELETED" { "StackResourceDrifts": [ { 25a37c4", "ActualProperties": "{\"DelaySeconds\":10,\\":20},\"VisibilityTimeout\":60}", "ResourceType": "AWS::SQS::Queue", "Timestamp": "2019-12-04T20:33:57.261Z", "PhysicalResourceId": "", "StackResourceDriftStatus": "MODIFIED", "ExpectedProperties": "{\"DelaySeconds\":20,\\":10},\"VisibilityTimeout\":60}", "PropertyDifferences": [ { "PropertyPath": "/DelaySeconds", "ActualValue": "10", "ExpectedValue": "20", "DifferenceType": "NOT_EQUAL" }, { "PropertyPath": "/RedrivePolicy/maxReceiveCount", "ActualValue": "20", "ExpectedValue": "10", "DifferenceType": "NOT_EQUAL" } ], "LogicalResourceId": "Queue" } ] } Stopping drift detection on a stack set Because drift detection on a stack set can be a long-running operation, there may be instances when you want to stop a drift detection operation that is currently running on a stack set. To stop drift detection on a stack set using the AWS Management Console Open the AWS CloudFormation console at . On the StackSets page, select the name of the stack set. CloudFormation displays the StackSets details page for the selected stack set. On the StackSets details page, select the Operations tab, and then select the drift detection operation. Select Stop operation. To stop drift detection on a stack set using the the AWS CLI Use the stop-stack-set-operationcommand. You must supply both the stack set name and the operation ID of the drift detection stack set operation. aws cloudformation stop-stack-set-operation --stack-set-name stack-set-drift-example --operation-id 624af370-311a-11e8-b6b7-500cexample
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-drift.html
2021-07-24T02:55:19
CC-MAIN-2021-31
1627046150067.87
[]
docs.aws.amazon.com
{"title":"Release note - Version 0.1","slug":"release-note-version-01","body":"**Release Date: 2015-11-16\nVersion: 0.1**\n\nThese release notes introduce the Early Adopter Version of the Seven Bridges Cancer Genomics Cloud (CGC). \n\nThe CGC is a cloud platform for cancer genomics data analysis. This is the first full deploy of the platform, and is open for approved early adopter users only. \n\nWe’re looking forward to seeing what our early adopter users do with the CGC, and will be incorporating changes based on their feedback in subsequent releases. Please be aware that some features of this release are still in alpha and subject to change.\n\n##New Features \nThe following features are present in this release: \n\n###dbGaP authentication\n * Registration and authentication on the CGC using eRA Common and NIH cit credentials.\n * Authentication to use TCGA Controlled Data based on dbGaP permissions.\n\n###Collaboration features at project-level\n * Tool- and data-sharing between project members.\n * User management to set read, write, and execute permissions within projects.\n\n###Controlled Access projects to work on protected TCGA data.\n * The TCGA dataset\n * The TCGA dataset, hosted in the CGC’s secure cloud environment. This includes more than 500,000 files representing cases from all cancer types. \n * A graphical case explorer to investigate TCGA cases by tumor type and gene mutation status.\n * Introduction of a semantic triplestore for more than 120 TCGA metadata properties and an alpha version of the ‘Data Browser’. The Data Browser provides a graphical interface to construct queries and explore the TCGA dataset. \n\n###Private Data Uploads\n * Efficient upload clients to add your private data to the CGC from personal computers, clusters and FTP servers.\n\n###Bioinformatics Apps \n * Ready-to-use open source bioinformatics tools and workflows. \n * An interactive genome browser.\n * A Docker-based command-line utility for porting third-party software to the CGC. \n * A graphical interface for building, configuring, and executing a workflow.\n\n###Native CWL Support\n * Full support for the Common Workflow Language (CWL). \n * Tools can be installed on the CGC using their CWL descriptions. Alternatively, tools can be installed using a graphical tool editor, which automatically generates a CWL tool description. \n\n###Knowledge center \n * Documentation (currently in beta)\n * A collaborative question and answer site for cancer researchers.\n * A CGC blog \n\n##Known Issues\nThis release of the service has the following known issues, which will be corrected in future versions: \n\n * The ‘file’ node must be selected in the Data Browser in order to link data to a project. \n * Some planned documentation pages are still in development. \n * The Knowledge Center documentation page may require 15 seconds to load completely.","_id":"564a4390e126d40d00f0e066","__v":2,"tags":[],"user":{"name":"Seven Bridges","username":"","_id":"554290cd6592e60d00027d17"},"changelog":[{"type":"added","update":"","_id":"565f14d57f93280d0052cea8"}],"createdAt":"2015-11-16T20:58:56.358Z","project":"55faf11ba62ba1170021a9a7","metadata":{"title":"","description":"","image":[]}}
https://docs.cancergenomicscloud.org/blog/release-note-version-01
2021-07-24T02:36:46
CC-MAIN-2021-31
1627046150067.87
[]
docs.cancergenomicscloud.org
Managing Tasks via the Product (Tasks Sync)¶ Revenue Inbox synchronization ensures continuous background syncing of your work Tasks between MS Exchange/Office 365 and Salesforce. Once activated, RI synchronization runs every 30 minutes on the server side. Tip Refer to this article to learn how MS Exchange/Office 365 Task item fields are matched with Salesforce Tasks by RI. Also refer to this article for more details on configuring objects synchronization. With Revenue Inbox, you can share and update your work tasks in Salesforce right from your preferred mail client. Task reminders are synced along with them, so you will not miss any important ones. The selective automatic synchronization of Tasks with Salesforce offered by Revenue Inbox is highly useful for job reporting and coordination. After RI sync is set up, to enable Tasks downsyncing from Salesforce to your mail client, go to Synchronization Dashboard (see the article Opening the Dashboard), open Sync Settings > Filters in the pane on the left-hand side, and enable the switch button under Tasks: Note that your Tasks existing in MS Exchange/Office 365 prior to RI sync activation will not be synced with Salesforce, unless you specifically select them to be synced using either one of the methods described below: Saving a Task in Salesforce¶ You can share an MS Exchange/Office 365 Task in Salesforce in one of the following ways: - By assigning the task the custom Salesforce category. In this case, on the following sync session the task will be saved in Salesforce and automatically moved to the Salesforce Tasks folder. - By moving the task to the dedicated Salesforce Tasks folder. On the following sync session the task will be saved in Salesforce and automatically assigned the Salesforce category. As long as the task stays in the folder, any changes made to it will be conveyed to its matching Task object in Salesforce on the following sync session. Note Your personal Tasks located in the default Tasks folder or other folders in MS Exchange/Office 365 will not get saved in Salesforce. Only the Tasks assigned the Salesforce category or moved to the Salesforce Tasks folder will get saved. Editing a Salesforce Task via Revenue Inbox¶ You can edit a Salesforce task via RI in one of the following ways: - In Revenue Inbox Sidebar, using either of the ways described in the previous section. - By clicking the Cloud icon (or the equivalent icon) next to the Task's header; the Task will be opened in Salesforce in your browser. Note Changes made to a task via Revenue Inbox Sidebar are conveyed to Salesforce immediately and do not require RI sync to be active, while changes made to tasks in the email client will appear in Salesforce only after the following sync session.
https://docs.revenuegrid.com/ri/fast/articles/Synchronization-of-Tasks/
2021-07-24T01:23:43
CC-MAIN-2021-31
1627046150067.87
[array(['../../assets/images/d33v4339jhl8k0cloudfrontnet/docs/assets/57398d2e903360669faf1f0a/images/tasks_switch.png', None], dtype=object) array(['../../assets/images/d33v4339jhl8k0cloudfrontnet/docs/assets/57398d2e903360669faf1f0a/images/582db73e903360645bfa502b.png', None], dtype=object) array(['../../assets/images/d33v4339jhl8k0cloudfrontnet/docs/assets/57398d2e903360669faf1f0a/images/582db7dec697916f5d0516af.png', None], dtype=object) array(['../../assets/images/d33v4339jhl8k0cloudfrontnet/docs/assets/57398d2e903360669faf1f0a/images/582db7dec697916f5d0516af2.png', None], dtype=object) array(['../../assets/images/d33v4339jhl8k0cloudfrontnet/docs/assets/57398d2e903360669faf1f0a/images/cloud_icon.png', None], dtype=object) array(['../../assets/images/d33v4339jhl8k0cloudfrontnet/docs/assets/57398d2e903360669faf1f0a/images/582db90f903360645bfa5043.png', None], dtype=object) array(['../../assets/images/faq/fb.png', None], dtype=object)]
docs.revenuegrid.com
>> master, peers, and search heads. You must use the same key value for all cluster nodes. You set pass4SymmKey when you deploy the cluster. For details on how to set the key on the master node, see Enable the indexer cluster master: master_uri notify_scan_period On a search head: master_uri On a master!
https://docs.splunk.com/Documentation/Splunk/7.1.6/Indexer/Enableclustersindetail
2021-07-24T01:40:56
CC-MAIN-2021-31
1627046150067.87
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
How to Add a Custom Action
Adding a custom action using the Report Designer
In Design view, right-click the report item to which you want to add a link and then click Properties. In the Properties dialog box for that report item, click Action. The Edit Action dialog will open. Select Custom. An additional section appears in the dialog box, containing a button titled Select parameters. Clicking the button will open the Edit Custom Action Parameters dialog box. Add one or more parameters, defining their Name and Value properties. Click OK when ready. To test the action, preview the report and click the report item with the applied custom action. A message will appear, displaying information for the action's properties.
Adding a custom action programmatically
Telerik.Reporting.CustomAction customAction = new Telerik.Reporting.CustomAction();
customAction.Parameters.Add("param1", "=Fields.Name");
customAction.Parameters.Add("param2", "=Now()");
textBox1.Action = customAction;
Dim customAction As New Telerik.Reporting.CustomAction()
customAction.Parameters.Add("param1", "=Fields.Name")
customAction.Parameters.Add("param2", "=Now()")
textBox1.Action = customAction
https://docs.telerik.com/reporting/designing-reports-interactivity-how-to-add-custom-action
2021-07-24T01:30:43
CC-MAIN-2021-31
1627046150067.87
[]
docs.telerik.com
PHP is the most popular server-side scripting language, powering millions of websites including most WooCommerce stores. If you arrived at this page from the notice in your WooCommerce store, your store is running an outdated and unmaintained version of PHP. Not only is your website's performance (a lot) lower than it should be, you may find that things do not work as you expect and be open to security vulnerabilities! What is PHP? PHP is a scripting language which most likely powers your WooCommerce webshop. PHP, like all software, gets updated over time to patch security issues and improve its features. And like other software, it's important to keep your PHP version up to date. Updating your PHP version Contact your host In most cases you cannot update the PHP version yourself and need to contact your host about this. The upgrade process is an easy one and should be something your host can do for you without impacting your website or charging you a fee. Here's a letter you can send to your hosting company: Dear host, I'm running a WooCommerce webshop on one of your servers and WooCommerce has recommended using at least PHP 7.0. WordPress, the content management system that WooCommerce uses, has listed PHP 7.4 as the recommended version on their requirements page: https://wordpress.org/about/requirements/ Can you please let me know if my hosting supports PHP 7.0 or higher and how I can upgrade? Looking forward to your reply. VPS Server If you have a VPS server, see How to upgrade from PHP 5. My host doesn't support PHP 7.0 If your host doesn't support PHP 7.0 or higher, we recommend you find a host that does. We have a list of WordPress hosting solutions we recommend, and all support PHP 7.0 or higher. If you contact another host, be sure to ask them which PHP version your website will run on before purchasing. More information PHP has a list of unsupported versions, including dates, on their website. If you develop WordPress plugins yourself, you might want to check out the PHP library called WPupdatePHP.
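If you want to confirm which PHP version your site is actually running before writing to your host (a generic check, not from the original article), you can ask PHP itself:
<?php
// Upload this as version.php, open it in a browser once, then delete it.
echo 'PHP ' . PHP_VERSION;
Alternatively, run php -v on the command line if you have shell access.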
https://docs.woocommerce.com/document/how-to-update-your-php-version/
2021-07-24T01:57:53
CC-MAIN-2021-31
1627046150067.87
[]
docs.woocommerce.com
Configure Metric Graphing Properties You configure metric graphing properties during the installation of the CCAdv and WA modules. There is no system-wide setting that determines the time period of values displayed in graphs. Users can graph five-minute and thirty-minute data in the same graph. Use the Time Profile for Charting option on the Metric Manager page of the Administration module to enable a metric for graphing. If changes are required in the metric graphing properties after installation, use the CONFIG_PARAMETER table in the Advisors database. The following list describes the properties that govern metric graphing in the CONFIG_PARAMETER table:
- The duration of the historical values retained for graphing. The default number is 120 minutes, or 2 hours. Changing this number will increase or decrease the number of minutes that the historical data for metrics is kept in the metric graphing database. See Change the duration of historical values.
- The duration of the future values displayed for graphing. The default number is 120 minutes, or 2 hours. Changing this number increases or decreases the number of minutes that the future data of WA forecast metrics is displayed on the complete X axis (horizontal axis) of a graph. See Change the duration of future values.
- The minimum interval in seconds between graphed values in all graphs for points stored after the change. See Change the interval between values.
- Whether graphed values display from midnight. The default value is true. Changing this to false means that a graph will not show values with times from the previous day. See Retain or delete values at midnight.

Change the duration of historical values
Use the following procedure to change the duration, in minutes, of the historical values that are retained for graphing. Note that CCAdv/WA is optimized with the graphing parameters of 120 minutes of graphable values that are no closer than 60 seconds apart. If you decrease the interval in seconds between values, you must decrease the duration of values stored, so that only approximately 120 values are stored for graphing. See the procedure under Change the interval between values on this page.

Change the duration of future values
Use the following procedure to change the duration, in minutes, of the future values that are displayed for graphing. Only WA contact group forecast metrics have future values.

Change the interval between values
The supported amount of historical data that CCAdv/WA stores for one graphed metric is 120 values. By default, CCAdv/WA keeps 120 values that are not closer than one minute apart. If you decrease the interval in seconds between values, you must decrease the duration of values stored, so that only approximately 120 values are stored for graphing. Use the following procedure to change the minimum number of seconds between values in a graph. Example You want to display a graph of values for one day all the way back to midnight; that is, at most 24 hours. We can calculate that (24 hours * 60 minutes per hour / 120 data points) means 1 data point will be graphed not more than every 12 minutes. - At installation, set the Store snapshots for graphing interval to 720 seconds (12 minutes * 60 seconds per minute). This setting corresponds to warehoused.metrics.min.interval.secs in CONFIG_PARAMETER.NAME in the Advisors database. - Manually, in the CONFIG_PARAMETER table in the Advisors database, set PARAM_VALUE to 1440 for the warehoused.metrics.max.minutes.kept parameter. 
That is the result of 24 hours * 60 minutes per hour, for 1440 minutes. After CCAdv/WA has been running for 24 hours, a newly opened graph would display the last 24 hours of values, with values spaced at least 12 minutes apart.

Retain or delete values at midnight
Use this procedure to specify whether graphs display values from the previous day.
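In SQL terms, the manual step in the example above amounts to a single update to the CONFIG_PARAMETER table (a sketch; the NAME and PARAM_VALUE columns are named in the text, and anything beyond them is an assumption):
UPDATE CONFIG_PARAMETER
SET PARAM_VALUE = '1440'
WHERE NAME = 'warehoused.metrics.max.minutes.kept';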
https://docs.genesys.com/Documentation/PMA/8.5.0/PMADep/MetricGraphConfig
2021-07-24T02:50:19
CC-MAIN-2021-31
1627046150067.87
[]
docs.genesys.com
New Features - Async Mode: Added automatic Browser Monitoring support. Fixes - Async mode: Fixed a bug where using HttpClient with a BaseAddress would result in transactions being lost or corrupted. Upgrading - For upgrade instructions, see Upgrade the .NET agent. - If you are upgrading from a particularly old agent, see Upgrading legacy .NET agents for a list of major changes to the .NET agent.
https://docs.newrelic.com/jp/docs/release-notes/agent-release-notes/net-release-notes/net-agent-512130/?q=
2021-07-24T02:39:32
CC-MAIN-2021-31
1627046150067.87
[]
docs.newrelic.com
Our offering of UiPath Automation Cloud™ requires no installation on your part. In order to access it, you only need an internet connection with TLS and a compatible browser. Browser Compatibility The table below lists the supported browsers and their minimum versions compatible with Automation Cloud™. Other browsers that are not listed above, older versions of the ones listed above, and mobile browsers are not supported. If you access Automation Cloud™ with any of these, you could receive an error. End of support for Internet Explorer UiPath Automation Cloud™ no longer supports Internet Explorer as of May 2021. Please use another supported browser to access our cloud products. For more information, see this forum post. Compatibility with Other Products For information about the compatible versions of Robot and Studio that work with Automation Cloud™ and Orchestrator, see the technical compatibility matrix. Activating TLS on Your Machine Make sure that you connect to the internet using TLS. - On your machine, open Control Panel > Network and Internet > Internet Options. - Select the Advanced tab. - Scroll down to the Security section. - Select the Use TLS 1.2 check box. - Click Apply.
https://docs.uipath.com/getting-started/docs/cloud-software-requirements
2021-07-24T02:24:32
CC-MAIN-2021-31
1627046150067.87
[array(['https://files.readme.io/bcababd-TLS_options3.png', 'TLS options3.png'], dtype=object) array(['https://files.readme.io/bcababd-TLS_options3.png', 'Click to close...'], dtype=object) ]
docs.uipath.com
To avoid malicious activities such as SPIT (SPam over Internet Telephony), TDoS (Telephony Denial-of-Service), fuzzing, and war dialing, please do the following to keep your server and service secure. - From Brekeke SIP Server version 3.2, the Block List feature can be used to define a filtering policy and a blocking policy for malicious activity. If any request matches a rule defined in the [Block List] > [Filtering Policy] page and/or the rules in the [Block List] > [Setting] page, the request will be blocked and its remote IP address and/or user name will be added automatically to the [Blocked IP Address] and/or [Blocked User Name] pages. You can also manually define IP addresses and user names to be blocked. - Refer to the Brekeke SIP Server administrator guide, section "9. Security". - Change the default administrator "sa" password, and set strong passwords for system administrators. - Secure SIP authentication information. Using the auto-provisioning function is recommended to minimize breaches of this information. - If possible, use a firewall in front of the SIP Server to block unknown remote IP addresses. - Use the latest version of Brekeke products. - Change client SIP devices' SIP port to a port other than 5060. - Use the following DialPlan rules to block "friendly-scanner". For Brekeke ver3.2 and later products: Add the following rules to the Brekeke admintool > [SIP SERVER] > [Dial Plan] > [Preliminary] page. By using $action = block in [Deploy Patterns], Brekeke SIP Server will block matched requests and put the remote IP address in the [Block List] > [Blocked IP Address] page. For Brekeke PBX 3.8 or later, the default preliminary rule ($pbx.precheck = ^true in the Matching Patterns and $pbx.preprocess in the Deploy Patterns) is mostly equivalent to the rules below.
[Matching Patterns]
$str.lowercase(User-Agent) = friendly-scanner|sundayddr|vaxsipuseragent|sipcli|custom|pplsip|sipscan|sipvicious|sipptk
$request = ^(\S+)
[Deploy Patterns]
$action = block
$param = Method=%1 UA=%{User-Agent}
[Matching Patterns]
From = sipsscuser|sipvicious
$request = ^(\S+)
[Deploy Patterns]
$action = block
$param = Method=%1 UA=%{User-Agent}
With the above rule in the [Dial Plan] > [Preliminary] page, the remote IP address of blocked requests from sipsscuser or sipvicious will be put in the [Block List] > [Blocked IP Address] page, [IP Address] column; the [Reason] column shows the DialPlan rule name followed by the blocked request's Method and User-Agent information, which is recorded by $param in the rule; and the time the block was added is recorded in the [Time Added] column. Also, there are several sample DialPlan rules for blocking malicious packets under the Honeypot topic. Note: These DialPlan rules should be listed under the [Dial Plan] > [Preliminary] page, not [Dial Plan] > [Rules]. For Brekeke ver3.0 and ver3.1 products: Add the following rules to the Brekeke SIP Server admintool > [Dial Plan] > [Preliminary] page. By using $accept = false in [Deploy Patterns], Brekeke SIP Server will not accept matched requests, and no response will be sent from Brekeke products either.
[Matching Patterns]
$str.lowercase(User-Agent) = friendly-scanner|sundayddr|vaxsipuseragent|sipcli|custom|pplsip|sipscan|sipvicious|sipptk
[Deploy Patterns]
$accept = false
[Matching Patterns]
From = sipsscuser|sipvicious
[Deploy Patterns]
$accept = false
Related Links: - Block List - Honeypot - Automatically Block Unwanted Calls - Authenticate caller by IP address - Reject non-registered caller's call
https://docs.brekeke.com/avoid-attacks/
2021-07-24T01:11:19
CC-MAIN-2021-31
1627046150067.87
[]
docs.brekeke.com
Message-ID: <[email protected]> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_8730_1182925711.1627087875251" ------=_Part_8730_1182925711.1627087875251 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: This section describes accessory tools that can be used to manage the WC= S server. The WCS server may be behind NAT and as such it will require a port rang= e opened for the external network, for instance, UDP 31000-32000. This m= eans that a UDP packet sent from the external network to the port in that r= ange should reach the server where WCS is placed. Hence, we have a simple test. Send a UDP packet from outside using nc an= d receive it on the server using tcpdump. If the packet reached, the por= t is open. echo -n "hello" |= nc -4u -w1 wcs1.com 31000=20 or for Debian: echo -n "hello" |= nc -u -w1 wcs1.com 31000=20 This command sends a simple UDP packet in the given direction. tcpdump udp port = 31000=20 This command makes the server listen for a particular port and immediate= ly outputs information about packet arrival to the console: 17:50:21.932509 I= P myhost.39194 > host.31000: UDP, length 5=20 This is Java utility that provides important information about a Java pr= ocess and execution threads. When you run jstack from the console, a bri= ef information about jstack is shown: If the information is not shown or the jstack utility is not found, use = the installation instruction to latest versio= n of JDK. After installing jdk you should create a symbolical link to j= stack to quickly run it: ln -sf /usr/java/= default/bin/jstack /usr/bin/jstack=20 Example: jstack 8888 > = jstack.report=20 Here, 8888 is the ID of the Java process. Since build 5.2.801, = WCS is running from 'flashphoner' user for security reasons. Therefore, jst= ack should be launched from the same user if using JDK 8: sudo -u `ps -o un= ame=3D -p $(pgrep java)` `which jstack` `pgrep java`=20 A stream published picture quality depends on channel bandwidth between = publisher and server, the same for subscriber. Channel bandwidth can be che= cked using iperf utility. This program is implemented for all m= ajor OS: Windows, MacOS, Ubuntu/Debian, CentOS. iperf in server mode can be= installed and running with WCS, that allows to check whole channel bandwit= h from publisher to viewer. iperf can be installed on CentOS 7 as follows: yum install iperf= 3=20 Run iperf in server mode iperf3 -s -p 5201==20 where 5201 is iperf port for testing client connections On client side iperf can be launched as follows: 1. To test upload channel bandwith via UDP (Windows example) iperf3.exe -c tes= t2.flashphoner.com -p 5201 -u=20 Where The result of the command above should look like this: 2. To test download channel bandwidth via UDP iperf3.exe -c tes= t2.flashphoner.com -p 5201 -u -R=20 Where The result of the command above should look like this: By default, iperf tests the channel for 10 seconds. This interval should= be increased, for example, to 120 second iperf3.exe -c tes= t2.flashphoner.com -p 5201 -u -t 120=20 The upload channel bandwidth test via UDP result shows the maximum video= publishing bitrate without packet losses. In the sample above bitrate shou= ld be limited with 1000 kbps, on server side for example webrtc_cc_max_bit= rate=3D1000000=20 Note that iperf major versions on server and on testing client should be= the same. Today version 3 is actual, but ther is also version 2 in reposit= ories.
https://docs.flashphoner.com/exportword?pageId=9242017
2021-07-24T02:03:05
CC-MAIN-2021-31
1627046150067.87
[]
docs.flashphoner.com
How to fix "Your request timed out. Please retry the request" in WordPress ? img source : publicdomainpictures.net You may have encountered the following error message when you are uploading an image, installing a Theme or doing something as simple as trying to edit your post in your WordPress admin. Your request timed out. Please retry the request This happens when your server takes too long to respond or complete a task that you are trying to accomplish. Possible causes - Your server do not have not enough resources to perform that task that you have executed. - There could be a script somewhere that is written badly, which results in loops and causing the server to terminate it before completing. - There could be a script making request to an external remote server, which takes too long to respond. - Your PHP values max_input_time (Maximum amount of time each script may spend parsing request data) and max_execution_time (Maximum execution time of each script, in seconds) could have been set too low. Troubleshooting The following are some troubleshooting suggestions. Contact your Web Hosting Company Contact your web hosting company for help. They should be able to provide some useful information regarding your issue. They may point to you a file that is causing this issue. Ask your web hosting company to increase your PHP max_input_time and max_execution_time to allow you to complete your task. Disable Plugins - Disable your plugins and retry your task. - If this works, you may have a plugin that's taking too long to execute it's code, or you may have not enough resources to run a large collection of plugins. - Disable one plugin at a time and retry your task. This is to eliminate any plugin that's taking too long to execute it's code. Increase your PHP values max_input_time and max_execution_time - You can try increasing your PHP max_input_time and max_execution_time to a greater value. - Editing php.ini - If you have access to php.ini file on your web server. You can access it using your FTP program. - Open it up and search for max_input_time set it to 60 max_input_time = 60; max_execution_time = 60; - Try setting to a high value, if you are still unable to complete your task. - Editing .htaccess file - Use your FTP program and login to your server. - Find your .htaccess file which is located at the same level as wp-config.php - Make a backup copy before attempting to edit it. - Open it and add the following codes at the end of the file. php_value max_input_time 60 php_value max_execution_time 60 - Save and re-load your WordPress admin. - Proceed to try out your task. - Increase the values, if it still does not work. Contact your Web Hosting company for help, if you do not have access to .htaccess file or php.ini
https://docs.presscustomizr.com/article/183-your-request-timed-out-please-retry-the-request
2021-07-24T01:01:09
CC-MAIN-2021-31
1627046150067.87
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5c80ee9a04286350d088be8d/file-fcW2jP8eAh.jpg', None], dtype=object) ]
docs.presscustomizr.com
Azure Marketplace Virtual Machine Image Publication

This article defines some basic concepts of the Azure Marketplace and introduces how to publish VM images on the CPP (Cloud Partner Portal). Within this article, the scope of "Azure Marketplace" is limited to China, "image" and "mirror" are used interchangeably, and the terms "product," "product/service," and "offer" all refer to VM images unless otherwise specified.

1. Prerequisites and Preparations

1.1 You must have an Azure account and have registered as an Azure Marketplace service provider. Please refer to the Azure Marketplace Publisher Guide.

1.2 Before you publish an image, please carefully read the Azure China Marketplace Participation Policy and the Azure Marketplace Publisher Agreement, paying particular attention to the sections on product pricing models, product permissions and support, and user data privacy protection.

1.3 To prepare pre-published data and virtual machine image files, please refer to the Virtual Image Creation Guide. Note that you are responsible for the product's software license and any third-party software dependencies.

1.4 After image creation and local testing are completed, you need to upload images to an Azure Storage account. You can perform testing with tools such as PowerShell or Azure CLI. Please also refer to the testing and uploading sections of the VM Image Creation Guide.

2. Sign in to the Cloud Partner Portal and create a VM offer

As shown below, click "New offer" -> "Virtual Machines" in the left-hand navigation bar, then start the image publishing process. The image content includes four forms: "Offer Settings", "SKUs", "Marketplace", and "Support." Each form consists of a set of text fields to be filled in. Required fields are marked with a red asterisk (*). After each form is filled out, you must click "Save" to prevent content loss.

2.1 Define the offer and SKU

In the Azure Marketplace, each virtual machine Offer corresponds to a particular Offer type. Each Offer is the "parent" of all its SKUs, and can contain one or more SKUs. Publishers can have multiple Offers and decide how to structure them. The SKU is the smallest purchasable unit in the Azure Marketplace. The SKU allows you to differentiate between images based on features, type, price, and billing model under the same Offer. A virtual machine image contains one operating system disk and zero or more data disks. It is essentially the complete storage profile for a virtual machine. Each disk must have one VHD, and even a blank data disk needs to have a VHD created.

An Offer is displayed under Offers in the Azure Marketplace. Please consider Offer IDs carefully as they will be displayed in the URL:{Publisher}.{OfferIdentifier}?tab=Overview

2.2 Marketplace

The Marketplace primarily includes marketing-related content and legal assets, as well as sales lead management assets and specifications.

2.3 Support

Support includes Support Department contact details and technical support information. These four areas are described in detail below.

3. Offer Settings

Offer ID
The Offer ID is the unique identifier that represents the offer within Azure Marketplace. The Offer ID is generally the name of the product or service that the seller intends to sell on the Azure Marketplace. The Offer ID can only include lowercase letters, numbers, dashes or underscores, may only end with lowercase letters or numbers, and may not exceed 50 characters in length. This identifier will be displayed in the product URL and ARM template.
Note that once the Offer request is submitted, the field cannot be modified.

Publisher ID
The Publisher ID dropdown box allows you to select the publisher that will publish the Offer. Note that once the Offer request is submitted, the field cannot be modified.

Name
Offer names are used to identify offers on the Azure Marketplace platform. They are only displayed within the Publishing Platform, and are not displayed externally or to users. Offer names must be no longer than 50 characters or 25 Chinese characters. If possible, please include an identifiable trademark. The offer name may be the same as the title in section 5. Select "Save" to save your progress. You will add the SKUs for your Offer in the next tab.

4. SKUs

From the "SKU" tab, you can create one or multiple SKUs. Solutions can be differentiated by SKU based on their feature sets, the VM image type, throughput/scalability, billing model, or other specific features. Click "New SKU" to create an SKU.

SKU ID
A SKU requires an ID in the URL, and the ID must be unique within the Publishing Platform. SKU names may include lowercase letters, numbers and dashes, but may not end with a dash. The maximum length is 50 characters, and Chinese characters are not supported. Note that once the Offer request is submitted, the field cannot be modified.

SKU Title
The SKU name is the name that appears publicly in the Azure Marketplace and Azure portal. It cannot exceed 25 Chinese characters or 50 characters. If possible, please include an identifiable trademark and avoid including the company name.

Summary
The summary is visible to customers, so it should be easy to read and no longer than 100 characters or 50 Chinese characters. The title and summary description of the SKU are displayed on the Plan + Pricing tab of the Azure Marketplace product pages, as well as on the Azure portal product pages. Azure Marketplace examples are given below:

Description
The description field is visible to customers. It is displayed on the product page in the Azure portal. Descriptions should generally include a simple explanation of the SKU. We recommend a length of no more than 100 characters. Descriptions are displayed in the Azure portal as shown below:

Hide this SKU
This flag allows you to set whether this particular SKU is visible to customers in Azure Marketplace and the Azure portal. You may need to hide the SKU if you only want to provide SKUs through the solution template instead of allowing them to be purchased separately.

Price
There are two price models: Free and BYOL. For the free model, Microsoft only charges an infrastructure cost and not a software cost. Please refer to the list of Virtual Machine Pricing. Regarding the BYOL model, the publisher administers the license to run the virtual machine software, and Microsoft only charges for infrastructure costs. In this mode, the customer must contact the service provider by email, phone, or some other means to obtain a license. The service provider can set whether the customer can get a free trial for 30 or 90 days, or no free trial. This is calculated from the start of the deployment of the virtual machine.

Operating system family and operating system type
Choose whether your VM is based on a Windows or Linux (or Linux-like) operating system and select the operating system version based on this platform.

OS Friendly Name
Fill in an operating system name such as "Windows Server 2016" or "Linux Ubuntu 16."

Recommended VM Sizes
This must contain no more than six types.
Given customer selection habits and the display of the Azure Management Platform, we recommend that you choose three recommended virtual machine configurations. These virtual machine configurations are specially displayed in the Azure portal when users deploy a virtual machine.

Open port
When the VM is deployed, the default port and communications protocol will be provided. These settings take effect during the process of deploying the VM. You can modify the port configuration after the SKU has been published. Please note that the public port number and private port number are generally set to the same value. Port number 22 TCP is the default SSH login port for Linux virtual machines, while port numbers 3389 TCP and 5986 TCP are the remote login ports for Windows virtual machines. These three port numbers do not need to be added manually.

New disk image version and operating system VHD URL address
The version number of the image must be added in semantic version format, i.e. the version number should be in the form "X.Y.Z", where X, Y, and Z are integers. The SKU version should only be increased incrementally. While you can add up to eight versions of each SKU, only the latest version of the SKU will be displayed in Azure Marketplace; the other versions will only be shown via the API.

During the publishing and authentication process, the URI where the VHD file is located must be in Azure Blob Storage. If URI access permissions are not set correctly, an error message saying "this image does not exist" will appear during the image publishing process. After successful publication of the VHD image, Azure Marketplace will no longer require access to the VHD file. At that point, the URI properties can be restored to the original settings and the VHD source file can also be deleted. For details about preparing VHD files, please see the Azure Marketplace VM Image Preparation Guide.

First, if you are creating a Linux/Windows image on Azure: after the image is created, delete the VM, go to Storage and select the VHD file, then click "Break lease" so the file can be used for publishing. Next, set URI address permissions using one of the following two methods.

a. Method 1: Get the VM image shared access signature (SAS) URL from the Azure portal
A simpler method of generating the SAS URL for the VHD file is to go to the Azure portal and look up the Storage account and Blob where VHD files are stored, then select the file you want to configure. Next, select "Generate SAS", then enter the configuration information to generate the SAS URL. See below for details.

b. Method 2: Get the SAS URL from Azure Storage Explorer
Another method of generating the SAS URL is to generate it from Azure Storage Explorer. First, download and install Azure Storage Explorer and familiarize yourself with it. Assuming your VHD file is already present in a particular storage account and container, left-click on the storage account, container or Blob, then select the VHD file that you need to publish. Right-click on "Get a shared access signature", then select the start time and expiry time. The permissions for the SAS URL must be set to at least read. Select "Create", then copy the URL details from the next screen. Of course, you can also refer to Create an SAS for an Azure Storage Blob, and use PowerShell or other command-line methods to generate the SAS URL.
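As a sketch of the command-line route, the snippet below uses the azure-storage-blob Python package (v12) to generate a read-only SAS URL. The account name, key, container, and blob name are placeholders to replace with your own values; this illustrates the approach rather than the only supported tooling.

from datetime import datetime, timedelta
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Placeholder values - substitute your own storage account details.
account_name = "mystorageaccount"
account_key = "<storage-account-key>"
container = "vhds"
blob_name = "myimage.vhd"

sas_token = generate_blob_sas(
    account_name=account_name,
    container_name=container,
    blob_name=blob_name,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),       # read-only, as required
    start=datetime.utcnow() - timedelta(days=1),    # day before, for UTC skew
    expiry=datetime.utcnow() + timedelta(days=60),  # at least 60 days
)
sas_url = (f"https://{account_name}.blob.core.windows.net/"
           f"{container}/{blob_name}?{sas_token}")
print(sas_url)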
Whichever method you use, please ensure that you pay attention to the following parameters when you generate the SAS URL for the VHD file:
- "Read" permissions are sufficient. Please do not give "write" or "delete" access permissions.
- To ensure that data is accessible throughout the publishing period, access must last for at least 60 days from the date when the SAS URL was created. To ensure UTC time, select the day before the current date. For example, if the current date is October 6, 2014, then select 10/5/2014.
- Once configuration is complete, enter the SAS URL in your browser, press enter, then check that you can download the VHD file.

5. Marketplace

The Azure Marketplace primarily includes marketing-related content and legal assets, as well as the sales lead management policy and practices. This section of the content will mainly be shown in Azure Marketplace.
- Marketing assets include the offer name, description and logo.
- Legal assets include the privacy policy, terms of use, and other legal documents.
- The sales lead management policy allows you to specify how to handle information about potential end users from Azure Marketplace.

Title
The title of the offer is the official name of the offer that appears publicly in the Azure Marketplace. It cannot exceed 25 Chinese characters or 50 characters. If possible, you should include an identifiable trademark, such as "XXX Cloud Platform Firewall 2018".

Summary
The summary is visible to customers, so it should be easy to read and no longer than 50 Chinese characters or 100 characters.

Description
The detailed description is visible to the customer and is displayed on the image's product page. The detailed description generally includes a product introduction, user guide, and technical details. It makes up the main portion of the page, so please pay attention to the overall layout and the following points:
- The product introduction generally includes a product overview, function features, technical architecture, and application scenarios.
- The user guide generally includes product deployment instructions, login instructions, and usage methods.
- The technical details generally include installation location, system startup and shutdown, log management, system maintenance, and other considerations.
- The user guide and technical details can also be written as a product manual and added to Related Links.
- Pictures cannot be inserted in the detailed description. To add pictures, please refer to the Picture module.
- The description must be no longer than 3,000 characters or 1,500 Chinese characters, but we recommend a minimum length of 500 characters.

You can add HTML tags to the detailed description, but only basic HTML tags such as p, em, ul, li, ol, strong and b are supported (you can also use plain text, but it will look messier). You can edit using a rich text editor. We recommend two simple rich text editors: HtmlCleaner and Simditor. Both allow you to use simple HTML tags. If you are using HtmlCleaner, as shown below, please edit the text in the Visual Editor, then in the "HTML Editor", select "Copy to clipboard" to copy the HTML source code. If you are using Simditor, as shown below, once you finish editing the text, select the text with the cursor and right-click. You can select "View selected source code" and copy the source code directly. Alternatively, you can select "View element" (or press F12), right-click to select the DIV tag in which the text block is located, and then press "Copy" -> "Internal HTML" to finish copying.
Please note that the source code cannot include complex HTML tags such as DIV or CSS.

Preview subscription ID
The preview subscription ID is the subscription ID used by the Publisher for the interface preview and deployment testing of the image in the Azure portal and Azure Marketplace when the image is published to the "Publisher signoff" status. The subscription ID can be viewed by clicking on the "Cost management + billing" menu in the Azure portal.

Related links
You can add up to five links to useful information such as user guides. These links will be displayed after the ARM user guide.

Choose categories
From the category list, select up to 3 categories related to the image. The selected categories will be used to map your Offer to the product categories in Azure Marketplace and Azure Portal.

Upload a Logo Icon
All logos uploaded to Cloud Partner Portal must comply with the following rules:
- The Azure layout uses a simple color palette, so please keep the number of main and secondary colors in the logo to a minimum.
- The main colors in the Azure portal are black and white, so you should avoid using these colors as your logo background, and instead choose colors that will make your logo stand out in the Azure portal. We recommend that you use a simple main color, and if you are using a transparent background, please ensure that the logo or text is not in black, white or blue.
- Avoid using gradient backgrounds in your logo background.
- Avoid putting any text on your logo, even if the text consists of your company or brand name. The logo should appear "smooth", and gradients should be avoided.
- The logo should not be stretched.
- The logo must comply with the resolution requirements (40x40 for small, 90x90 for medium, 115x115 for large, 255x115 for wide).

Upload Screenshots
The first "cover" image will appear on both the Azure portal and the Azure Marketplace image details page. Any further images will only be displayed on the Azure Marketplace image details page. You can upload a maximum of five images. Please refer to Upload a Logo in the previous section for details of image specifications. Images must also conform to the 533x324 resolution requirement.

Lead management
If you want to collect user information and manage sales leads, you need to provide an Azure storage connection string. The system will then save the user data in your storage table. For details, see Azure Marketplace Sales Leads.

We suggest that the privacy statement URL point to a page on the service provider's website. We suggest that the End User License Agreement URL likewise point to a page on the service provider's company website.

Is open-source software
Please note that not all free software is open source.

Product Authorization Certificate
Providing a Product Authorization Certificate is mandatory for ISVs publishing offers that they do not own.

Examples of Main Marketplace Information

6. Support

Please note that the technical and customer support information for this service is displayed on the Offer details page. The contact information that you supplied when you registered for Azure Marketplace will not be displayed on the Offer details page.

Fill in the technical support information
Please provide contact information for Technical Support. As far as possible, contact details should be for the company rather than an individual, to prevent this information becoming invalid due to personnel changes.
Fill in customer service support information
Please provide contact details for Customer Support. As far as possible, contact details should be for the company rather than an individual, to prevent this information becoming invalid due to personnel changes. After you have finished entering the information, please select "Save" to save your progress.

7. Problems with Publishing Images for the Global Azure Marketplace on Azure China Marketplace

The image descriptions in the Global Azure Marketplace are generally in English. We recommend that when images are published to the Azure China market, the text of the summary and description is mainly in Chinese. Please also note that if an image passes Global Azure testing, it is not guaranteed that it will pass China Azure testing. Images can only be published on the Azure China Marketplace after they have passed China Azure testing.

Next steps
After you have finished creating a VM image, the next steps are to submit an onboarding request and obtain administrator approval.

Feedback
- If you have any questions about this documentation, please submit user feedback in the Azure Marketplace.
- You can also look for solutions in the FAQs.
https://docs.azure.cn/en-us/articles/azure-marketplace/imagepublishguide
2021-07-24T01:09:24
CC-MAIN-2021-31
1627046150067.87
[array(['media/imagepublishguide/2.png', None], dtype=object) array(['media/imagepublishguide/3.png', None], dtype=object) array(['media/imagepublishguide/4.png', None], dtype=object) array(['media/imagepublishguide/5.png', None], dtype=object) array(['media/imagepublishguide/8.png', None], dtype=object)]
docs.azure.cn
Crate iced_winit

A windowing shell for Iced, on top of winit. iced_winit offers some convenient abstractions on top of iced_native to quickstart development when using winit. It exposes a renderer-agnostic Application trait that can be implemented and then run with a simple call. The use of this trait is optional. Additionally, a conversion module is available for users that decide to implement a custom event loop.
https://docs.rs/iced_winit/0.3.0/iced_winit/
2021-07-24T02:34:55
CC-MAIN-2021-31
1627046150067.87
[array(['https://github.com/hecrj/iced/blob/0525d76ff94e828b7b21634fa94a747022001c83/docs/graphs/native.png?raw=true', 'The native path of the Iced ecosystem'], dtype=object) ]
docs.rs
Policies define the set of IP addresses or domain names (i.e. IOCs, or Indicators of Compromise) that will be applied to the device to block malicious IP traffic and DNS lookups, or to allow whitelisted IP and DNS traffic. Policies are assembled with:
- Targets: ThreatSTOP tracks IOCs and groups them as objects named targets. Targets can contain IP addresses, subnets, and DNS domains, as well as DNS server names and IP addresses. Targets are the atomic building block of policies. Target attributes are described here.
- Target bundles: Bundles are groups of targets defined based on target attributes. For example, a bundle can track all targets tied to the same type of threat. When adding a bundle to your policy, the targets will track changes made by ThreatSTOP. For example, if you add a bundle that tracks Botnet targets and new botnet targets are added to the system, your policy will incorporate the targets automatically without requiring you to edit it. Using bundles is the recommended method of blocking threats based on their type or severity, to ensure new threats are rapidly added to your policy while avoiding the minutiae of tracking newly published targets.
- User-Defined Lists: User-Defined Lists are lists of IP addresses or domain names. They can be added to your policy to block connections and DNS lookups, or to whitelist them.

Policies are assigned to devices. As their contents are updated throughout the day, firewall and DNS rules are reconfigured to filter the latest IOCs. Policies can also be loaded on SIEM devices using the optional ThreatList feature.

IP and DNS Policies

There are two types of policies, which are assigned to devices based on the device type:
- IP Defense devices use IP Policies, built with IP targets. They block traffic (inbound and outbound) on firewalls and routers based on source and destination IP addresses.
- DNS Defense devices and Roaming Defense devices use DNS Policies that prevent DNS requests for malicious domains. DNS policies can filter both domains and IP addresses (i.e. IP records returned by DNS lookups) and can use both domain and IP targets. IP policies use IP targets only.
- IP Policies can apply the following actions:
- Block traffic from and to the IP subnets currently present in the Block target.
- Allow (whitelist) traffic from and to the IP subnets configured in a set of Allow targets.
- Block or allow traffic from IP User-Defined Lists. The action is a setting of the UDL itself.
- DNS Policies can enforce one of multiple RPZ Behaviors. In addition to blocking or whitelisting DNS lookups, RPZ behaviors can also override the records in the DNS responses. The RPZ behavior is selected when adding a target, bundle or User-Defined List to the policy.

Predefined and custom policies

Predefined policies are managed by ThreatSTOP and provide several default options to start enforcing a policy quickly. Predefined policies cannot include user-defined lists and can't be edited by customers. Custom policies are created by customers to be tailored to their environment and can include User-Defined Lists.

User Interface

List view

The list of policies available (predefined and custom) in your account is shown in the Policies menu. The list displays the following details:
- Policy name, Policy Description and Type.
- Predefined: set to Yes if this policy is a predefined policy managed by ThreatSTOP.
- Number of addresses: for IP Policies, the number of IP addresses covered (block and allow) by the policy; this is different from the number of records because it includes subnets, which can block large groups of IP addresses (e.g. Geographical targets).
- Number of records: number of subnets (IP Policies) or domains and IP addresses (DNS Policies).

You can edit, view or delete policies from this page. A policy must be inactive (not assigned to a device) before it can be deleted.

Creating a new policy

After clicking the Create new Policy button on the Policy list page, you will be prompted to choose:
- The type of Policy (IP or DNS) - this cannot be changed.
- A blank policy, or the policy (custom or predefined) that you want to duplicate as a starting point.

After creating the policy, you can configure the following settings:
- Its name (up to 8 alphanumeric characters).
- An optional description.
- For DNS Policies, a default RPZ Action. This sets the default in the GUI when adding bundles and targets. This setting doesn't control the action of the policy, which is set on a per-target basis.

To configure the policy contents, follow the instructions in the Editing section below. Once the policy has been created, you can assign it to one or multiple devices or generate ThreatList files for your SIEM device.

Editing policy contents

The policy editor is composed of 4 sections.

Tabs

The tabs switch the editor between settings and the type of objects:
- The Settings tab is used to rename the policy or change its description. Important: Web Automation devices will reflect a change of policy name automatically within 5 minutes. For CLI devices, the policy name must be updated on each device that uses it so that the device continues retrieving the policy.
- Target Bundles: select the bundles to add to (or remove from) your policy.
- Using bundles is the recommended method of blocking threats based on target attributes without dealing with the minutiae of tracking new targets.
- When you select and add a bundle, your policy will track changes to the bundle made by ThreatSTOP (such as adding or retiring targets that match the bundle configuration).
- You can also add a bundle 'as targets', which will add the current targets that make up the bundle, but will not track future changes.
- Targets: add or remove individual targets.
- Marketplace: a selection of targets provided by our partners.
- Excluded targets: ensure a target will never be added to your policy, even if included (now or later) in a bundle present in your policy. For example, you might want to include the ITAR bundle but never block the subnets associated with one of the countries in the bundle.
- User-Defined Lists: UDLs in your account or shared with your account by another user.

List of objects

A list of policy objects, their attributes and filters is displayed below the tabs.
- The list is paginated.
- The list can be sorted by clicking on the table header.
- The Search field performs a text search in the name and description of the objects.

You can also use the filters on the right-hand side to find targets and bundles based on:
- their attributes (Severity, Confidence, Risk, IOC type, Threat Type, Traffic type).
- whether they are already added to your policy (In Policy filter).

The filters can be collapsed with the blue arrow and reset at once with the Reset All Filters button. To view the full details of a target/bundle and add it to your policy, click on it in the list.
Object details

The details of the currently selected object (target, bundle or User-Defined List, based on the active tab) are displayed in the lower left corner.

Bundles
- The name and description of the bundle.
- Its current size (number of records).
- The list of targets currently included by this bundle.
- There are 4 possible actions in the header:
- Adding the bundle: select an action (Block, Allow, or RPZ Behavior) and click Add Bundle. Your policy will track changes made to the bundle.
- Adding the targets currently present in the bundle. Your policy will not track changes made to the bundle.
- Change the action (Block > Allow, Allow > Block, or new RPZ Behavior).
- Remove the bundle from the policy.

Marketplace
- The marketplace showcases targets available through our Threat Intelligence Partners.
- These targets may require additional subscriptions. Please contact customer service for access.

Additional targets
- The name and description of the target.
- Its current size (number of records). For an IP device, this is approximately the number of firewall rules. Also note that the size of a target will change over time based on the number of IPs or domains currently associated with the threats.
- The type of threat tracked by the target (see Target Attributes).
- Danger, Risk and Confidence levels (see Target Attributes).
- There are 3 possible actions on targets:
- Add the target: select an action and click Add Target. Note: most targets can only be added as a block list, not an allow list.
- Change the action from block to allow, or vice-versa, when the target supports both actions.
- Remove the target from the policy.

Excluded targets
- This displays the same settings as for individual targets.
- There are two possible actions:
- Excluding the target, which will remove it from your policy if added using a bundle or as an additional target.
- Clearing the target, which removes it from the exclusion list.

User-Defined Lists
- The name and description of the User-Defined List.
- The number of records (subnets) and addresses (subnets are expanded to count the number of IP addresses).
- A link to the details page for the User-Defined List.

Current Policy Contents

The contents of the policy are displayed in the bottom right corner. They are:
- The name of the policy.
- The bundles, targets, excluded targets and UDLs in the policy.
- The list of devices currently assigned this policy, if any.
- The DNS zone names associated with the policy. The ThreatSTOP subsystem distributes policies using the DNS protocol; these names are used to configure the policy on CLI devices. There are two zone names for IP devices and one for DNS devices.

Clicking on the information icon next to an object will display its details in the Details section.
https://docs.threatstop.com/portal_policies.html
2021-07-24T00:22:36
CC-MAIN-2021-31
1627046150067.87
[]
docs.threatstop.com
Most Popular Articles
- How to fix maximum upload and php memory limit issues in WordPress?
- Fixing errors when uploading images in WordPress
- How to manually Install or Update a WordPress theme via FTP?
- How to fix a "cURL error 28: Connection timed out" in WordPress?
- How to create Multilevel / Hierarchical menus?
- First steps with the Customizr WordPress theme
https://docs.presscustomizr.com/collection/5-wordpress-issues-faq
2021-07-24T00:27:15
CC-MAIN-2021-31
1627046150067.87
[]
docs.presscustomizr.com
You're reading the documentation for an older, but still supported, version of ROS 2. For information on the latest version, please have a look at Galactic.

Installing ROS 2 on macOS

This page explains how to install ROS 2 on macOS.

System requirements

We support macOS Mojave (10.14).

Installing prerequisites

You need the following things installed before installing ROS 2.

brew (needed to install more stuff; you probably already have this): follow the installation instructions at brew.sh.

Optional: check that brew is happy with your system configuration by running:

brew doctor

Fix any problems that it identifies.

Use brew to install more stuff:

brew install [email protected]
# Unlink in case you have [email protected] installed already
brew unlink python
# Make the python command be Python 3.8
brew link --force [email protected]
# install asio and tinyxml2 for Fast-RTPS
brew install asio tinyxml2
# install dependencies for robot state publisher
brew install tinyxml eigen pcre poco
# OpenCV isn't a dependency of ROS 2, but it is used by some demos.
brew install opencv
# install OpenSSL for DDS-Security
brew install openssl
# if you are using ZSH, then replace '.bashrc' with '.zshrc'
echo "export OPENSSL_ROOT_DIR=$(brew --prefix openssl)" >> ~/.bashrc
# install Qt for RViz
brew install qt freetype assimp
# install console_bridge for rosbag2
brew install console_bridge
# install dependencies for rcl_logging_log4cxx
brew install log4cxx spdlog
# install CUnit for Cyclone DDS
brew install cunit

Install ros2 doctor dependencies:

python3 -m pip install rosdistro

Install rqt dependencies:

brew install sip pyqt5

Fix some path names when looking for sip stuff during install (see the ROS 1 wiki):

ln -s /usr/local/share/sip/Qt5 /usr/local/share/sip/PyQt5

brew install graphviz
python3 -m pip install pygraphviz pydot

Note: You may run into an issue installing pygraphviz, "error: Error locating graphviz". Try the following install command instead:

python3 -m pip install --install-option="--include-path=/usr/local/include/" --install-option="--library-path=/usr/local/lib/" pygraphviz

Install SROS2 dependencies:

python3 -m pip install lxml

Install additional runtime dependencies for command-line tools:

python3 -m pip install catkin_pkg empy ifcfg lark-parser lxml netifaces numpy pyparsing pyyaml setuptools argcomplete

Downloading ROS 2

Go to the releases page. Download the latest package for macOS; let's assume that it ends up at ~/Downloads/ros2-release-distro-date-macos-amd64.tar.bz2. Note: there may be more than one binary download option, which might cause the file name to differ.

Unpack it:

mkdir -p ~/ros2_foxy
cd ~/ros2_foxy
tar xf ~/Downloads/ros2-release-distro-date-macos-amd64.tar.bz2

Set up your environment by sourcing the setup file:

. ~/ros2_foxy/ros2-osx/setup.bash

Try some examples

In one terminal, set up the ROS 2 environment as described above and then run a C++ talker:

ros2 run demo_nodes_cpp talker

In another terminal, set up the ROS 2 environment and then run a Python listener:

ros2 run demo_nodes_py listener
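As an optional sanity check (a suggestion, not part of the official steps above), the ros2 doctor tool whose dependencies were installed earlier can report on the setup once the environment is sourced:

. ~/ros2_foxy/ros2-osx/setup.bash
ros2 doctor

ros2 doctor typically prints warnings about network configuration, platform support, and package versions; a clean report is a good sign before moving on to the demos.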
https://docs.ros.org/en/ros2_documentation/foxy/Installation/macOS-Install-Binary.html
2021-07-24T01:49:15
CC-MAIN-2021-31
1627046150067.87
[]
docs.ros.org
Converting a Manual Test into an Automated Test

Using Sofy, you don't have to write scripting code to create Automated Test Cases. Once created, an automated test case is resilient to UI changes in the app, screen form factors, and dynamic content, using the power of Sofy's AI/ML algorithms. While you can record an automated test case, you can also take a Manual Run and convert it to an Automated Test Case using the following steps:

- Verify the Test Case: Once converted, verify the newly created Automated Test Case under Main Menu => Automated => Test Runs.
- Execute the Test Case: Verify the Automated Test Case executes as expected. To do so, acquire a device, go to the Automated tab on the Device Portal, select the Test Case, and execute the test.
https://docs.sofy.ai/mobile-manual-testing/converting-a-manual-test-into-automated-test
2021-07-24T01:08:05
CC-MAIN-2021-31
1627046150067.87
[array(['https://files.helpdocs.io/kr0v3v4pmt/articles/std5cj2dvi/1616619713237/image.png', None], dtype=object) array(['https://files.helpdocs.io/kr0v3v4pmt/articles/std5cj2dvi/1616619729457/image.png', None], dtype=object) array(['https://files.helpdocs.io/kr0v3v4pmt/articles/std5cj2dvi/1616619742636/image.png', None], dtype=object) ]
docs.sofy.ai
ThreatSTOP Portal Device Settings

Network requirements:
- UDP (optional, but recommended for DNS notifications): IP range 192.124.129.0/24, inbound UDP port 53
- DNS over TCP: IP range 192.124.129.0/24, outbound TCP port 53 or 5353
- DNS over TLS / configuration service: hostname ts-ctp.threatstop.com, IP range 204.68.97.208/28, outbound TCP port 5353

Adding an ISC BIND Device with Web Automation

Adding the BIND device via web automation is the easiest method, as all the settings are handled in the portal and sent down to the TSCM. After filling out the portal form, you can just run the following command on the TSCM CLI to add the device:

$ tsadmin add --type auto --device_id=[Device ID] --auto_key=[Device Key]

You will be prompted for all device settings listed above. The TSCM will then create and configure the device using the settings defined in the portal.
https://docs.threatstop.com/webauto_bind9_tscm.html
2021-07-24T01:51:56
CC-MAIN-2021-31
1627046150067.87
[]
docs.threatstop.com
Using Services.exe from the Command Line (Windows CE 5.0)

You can use services.exe to control a service from the command line. The commands below can be used for most services.

Note: <service instance> stands for the instantiation of a service, e.g., HTP0. <service name> stands for the service's name as it is displayed in the protected registry, e.g., HTTPD.

To control a Telnet server using services.exe, type the following at a command prompt:

services list
services start tel0:
services stop tel0:
services refresh tel0:

The first command lists all the available services on your device. Services.exe will then start the Telnet server, stop it, and refresh its configuration parameters. Adding the "-d" flag (for example, services -d list) will generate debug output instead of console output.

See Also: Services.exe | Handling Command Line Parameters
https://docs.microsoft.com/en-us/previous-versions/windows/embedded/ms885782(v=msdn.10)?redirectedfrom=MSDN
2020-03-28T12:44:33
CC-MAIN-2020-16
1585370491857.4
[]
docs.microsoft.com
... Follow steps one and two as above, then choose the "permanently delete" option from the actions menu. This will permanently and unrecoverably purge the accounts from the system.

Anything to watch out for? (applies up to 22 March 2018)

Accounts deleted in the 30 days before 20 February 2018 cannot be restored or permanently deleted via the interface; however, our service desk will be able to help.
https://docs.openathens.net/pages/diffpagesbyversion.action?pageId=329441&selectedPageVersions=1&selectedPageVersions=2
2020-03-28T12:25:15
CC-MAIN-2020-16
1585370491857.4
[]
docs.openathens.net
Customizing Factories

Warning: Some factories may already be decorated in the Sylius Core. You need to check which factory (Component or Core) your resource is using before decorating it.

Why would you customize a Factory?

Differently configured versions of resources may be needed in various scenarios in your application. You may need, for instance, to:
- create a Product with a Supplier (which is your own custom entity)
- create a disabled Product (for further modifications)
- create a ProductReview with a predefined description
and many, many more.

How to customize a Factory?

Tip: You can browse the full implementation of this example on this GitHub Pull Request.

Let's assume that you would like the possibility to create disabled products.

1. Create your own factory class in the App\Factory namespace. Remember that it has to implement a proper interface. How can you check that? For the ProductFactory run:

$ php bin/console debug:container sylius.factory.product

As a result you will get Sylius\Component\Product\Factory\ProductFactory - this is the class that you need to decorate. Take its interface (Sylius\Component\Product\Factory\ProductFactoryInterface) and implement it.

<?php

declare(strict_types=1);

namespace App\Factory;

use Sylius\Component\Product\Model\ProductInterface;
use Sylius\Component\Product\Factory\ProductFactoryInterface;

final class ProductFactory implements ProductFactoryInterface
{
    /** @var ProductFactoryInterface */
    private $decoratedFactory;

    public function __construct(ProductFactoryInterface $factory)
    {
        $this->decoratedFactory = $factory;
    }

    public function createNew(): ProductInterface
    {
        return $this->decoratedFactory->createNew();
    }

    public function createWithVariant(): ProductInterface
    {
        return $this->decoratedFactory->createWithVariant();
    }

    public function createDisabled(): ProductInterface
    {
        /** @var ProductInterface $product */
        $product = $this->decoratedFactory->createWithVariant();

        $product->setEnabled(false);

        return $product;
    }
}

2. In order to decorate the base ProductFactory with your implementation, you need to configure it as a decorating service in config/services.yaml.

services:
    app.factory.product:
        class: App\Factory\ProductFactory
        decorates: sylius.factory.product
        arguments: ['@app.factory.product.inner']
        public: false

3. You can use the new method of the factory in routing. Now that sylius.factory.product has been decorated, it has the new createDisabled() method. To actually use it, override the sylius_admin_product_create_simple route as below in config/routes.yaml:

# config/routes.yaml
sylius_admin_product_create_simple:
    path: /products/new/simple
    methods: [GET, POST]
    defaults:
        _controller: sylius.controller.product:createAction
        _sylius:
            section: admin
            factory:
                method: createDisabled # like here for example
            template: SyliusAdminBundle:Crud:create.html.twig
            redirect: sylius_admin_product_update
            vars:
                subheader: sylius.ui.manage_your_product_catalog
                templates:
                    form: SyliusAdminBundle:Product:_form.html.twig
                route:
                    name: sylius_admin_product_create_simple
https://docs.sylius.com/en/latest/customization/factory.html
2020-03-28T10:49:45
CC-MAIN-2020-16
1585370491857.4
[]
docs.sylius.com
Features

Short-term activity (see the JIRA open tickets - ETA: June 2012):
- Improve the PHP CodeSniffer rule repository (adding missing parameters, descriptions, ...)
- Work on the "Sonar Way", PEAR and Zend profiles (<= for PHP gurus!)

PHP 2.0 - mid-term activity:
- Handle multiple files with the same name
- Consider root folders as "Projects"
- Non-structured PHP files

If it turns out that these tickets are technically difficult or long to implement, they can be postponed.

Changelog
http://docs.codehaus.org/pages/viewpage.action?pageId=229738567
2013-05-18T17:20:00
CC-MAIN-2013-20
1368696382584
[array(['/s/fr_FR/3278/15/_/images/icons/emoticons/warning.png', None], dtype=object) ]
docs.codehaus.org
Manual 1 6/07 Globals/03 System Settings (page was abandoned, no work since 2009, obsolete now)
- 14:06, 3 November 2009 Chris Davenport (Talk | contribs) marked revision 16476 of page Manual 1 6/07 Globals/03 System Settings patrolled
http://docs.joomla.org/index.php?title=Special:Log&page=Manual+1+6%2F07+Globals%2F03+System+Settings
2013-05-18T17:20:11
CC-MAIN-2013-20
1368696382584
[]
docs.joomla.org
Creation Date: 1912

Description: This china platter, believed to have been brought in 1636 from England by the Roger Williams family, was presented to the Roger Williams Family Association (RWFA) in 1912 by Marie E. Cutter Law. A note in the original inventory of material taken from the Brown home in 1997 indicated this platter was "at the Betsy Williams Cottage." The RWFA thinks this platter, which was taken from a trunk in the Brown home, is the one mentioned above.
http://docs.rwu.edu/association_artifacts/6/
2013-05-18T17:36:54
CC-MAIN-2013-20
1368696382584
[array(['http://docs.rwu.edu/association_artifacts/1000/preview.jpg', 'image preview'], dtype=object) ]
docs.rwu.edu
This worksheet should be used by companies to create a financial model. Financial modeling, from an accounting perspective, is a means to help make cash flow projections and forecasts. A financial model can be used to conduct business valuations, assist with management decision making, and assist with financial statement analysis. This worksheet comes with instructions about how to enter information, view results, customize the financial model, and how to print the financial statements. The worksheet enables the user to create detailed financial statements, conduct sales projections and margin analysis, make staffing plans, and plan equipment purchases.
http://premium.docstoc.com/docs/356529/Financial-Model
2013-05-18T17:18:57
CC-MAIN-2013-20
1368696382584
[]
premium.docstoc.com
The following tutorial is part of the course Understanding advanced molecular simulation held at EPF Lausanne during the spring semester 2018.

Understanding advanced molecular simulation

This tutorial makes use of the AiiDA plugins for the zeo++ and RASPA2 codes. It is meant to be run inside the Quantum Mobile virtual machine.

AiiDA tutorial

Screening nanoporous materials

Task: Screen a set of metal-organic frameworks (MOFs) for their performance in storing methane at room temperature by computing their deliverable capacities, i.e. the difference between the amount of methane stored in a fully loaded tank (at 65 bar) and an empty tank (at 5.8 bar) per volume.

Report: Write a short report (1 page) outlining your approach and identifying the five MOFs with the highest deliverable capacities. Include an export of your AiiDA database [1].

Note: This exercise requires a basic knowledge of python. If you are not familiar with python, partner with someone who is.

[1] Upload to a file hosting service like SWITCHdrive or Dropbox and include a download link in the report.
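As a sketch of the post-processing step, the deliverable capacity is simply the difference of the two simulated uptakes. The snippet below assumes you have already extracted volumetric uptakes (e.g. in cm³ STP per cm³ of framework) from the RASPA outputs; the MOF labels and numbers are illustrative placeholders, not real results.

def deliverable_capacity(uptake_65bar, uptake_5_8bar):
    """Deliverable capacity: loading at 65 bar minus loading at 5.8 bar.

    Both uptakes are volumetric loadings, e.g. in cm^3 (STP) of methane
    per cm^3 of framework.
    """
    return uptake_65bar - uptake_5_8bar

# Hypothetical results: {mof_label: (uptake at 65 bar, uptake at 5.8 bar)}
results = {
    "MOF-A": (230.0, 75.0),
    "MOF-B": (200.0, 45.0),
}
ranked = sorted(results, key=lambda m: deliverable_capacity(*results[m]),
                reverse=True)
for mof in ranked[:5]:
    print(mof, deliverable_capacity(*results[mof]))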
https://aiida-tutorials.readthedocs.io/en/latest/pages/2018_EPFL_molsim/index.html
2019-10-14T07:06:29
CC-MAIN-2019-43
1570986649232.14
[]
aiida-tutorials.readthedocs.io
This page gives information on the Object ID Render Element.

Overview

The Object ID Render Element isolates individual objects with colors or integer values for compositing purposes. In 3ds Max, each object can be assigned an Object ID in the Object Properties dialog's G-Buffer section. This value is also sometimes called the G-Buffer value. Two or more objects in the scene can share the same Object ID.

The Object ID Render Element creates selection masks based on Object ID. This render element either shows each object (by Object ID) in a solid unshaded color, or stores the Object IDs as integer values within the EXR format or V-Ray Image Format file. The Color (with AA) method supports antialiasing at the edges of objects. The Integer (no AA) method, which assigns an Object ID to each pixel in the render element, does not support antialiasing, as each pixel either has an integer assigned to it or it does not.

More than one VRayObjectID pass can be generated for a single rendering if desired; for example, if you wish to have both solid-shaded color and integer-based render elements for maximum flexibility during compositing.

UI Path: Render Setup window > Render Elements tab > Add button > VRayObjectID

output type – The type of output desired:
Integer (no AA) – Object IDs are exported within the EXR or V-Ray Image Format file as integer values, one integer per pixel. Colors shown in the V-Ray Frame Buffer are for visual representation only, and are not saved as part of the render element file itself. Antialiasing around the edges of the object or material is not represented.
Color (with AA) – Each object renders as a solid color depending on the Object ID set in the 3ds Max Object Properties dialog. A wide range of preset colors is used (red, green, blue, pink, purple, orange, etc.) to accommodate a potentially large number of Object IDs in the scene. The edges of objects or materials are antialiased where they meet other parts of the scene. The colors red, green, and blue are used first, followed by additional colors if there are more than three Object IDs in use in the scene.

Common Uses

The Object ID Render Element is useful for isolating geometry in a scene based on the Object ID. This means that items can be isolated by use of a matte created from the solid colors within the Object ID pass. In the example below, the sofa has an Object ID of 1, the dangling light has an Object ID of 4, and the sideboard has an Object ID of 2.

Object ID Render Element
Matte created in composite by using only the blue channel
Original beauty render
Dangling lamp color-corrected using a matte created from the Object ID Render Element blue channel (as a mask in composite)

Notes

If antialiasing is needed for a scene, setting output type to Color (with AA) gives the best results. Setting the output type to Integer (no AA) will render slightly faster, although with no antialiasing.
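Outside of a dedicated compositing package, the matte trick above can be sketched in a few lines of Python with numpy. This is a rough illustration, not a V-Ray workflow: the file names are placeholders, it assumes 3-channel float images, and it requires an imageio backend with EXR support (e.g. the FreeImage plugin).

import imageio
import numpy as np

beauty = imageio.imread("render_beauty.exr").astype(np.float32)
objid = imageio.imread("render_objectid.exr").astype(np.float32)

# The blue channel isolates the object assigned the blue ID colour;
# antialiased edges in the Color (with AA) output give soft matte borders.
matte = objid[..., 2:3]

# Apply a simple warm grade only where the matte is non-zero.
graded = beauty * np.array([1.1, 1.0, 0.9], dtype=np.float32)
comp = beauty * (1.0 - matte) + graded * matte
imageio.imwrite("render_graded.exr", comp)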
https://docs.chaosgroup.com/display/VRAY3MAX/Object+ID+%7C+VRayObjectID
2019-10-14T06:52:43
CC-MAIN-2019-43
1570986649232.14
[]
docs.chaosgroup.com
Logging to elmah.io from Orchard CMS

Orchard CMS is a free, open source, community-focused content management system built on the ASP.NET MVC and ASP.NET Core platforms. This tutorial is written for the ASP.NET Core version of Orchard. If you want to log to elmah.io from the MVC version, you should follow our tutorial for MVC.

To start logging to elmah.io, install the following two packages (in that order):

Install-Package Elmah.Io.Client
Install-Package Elmah.Io.AspNetCore

Then modify your Startup.cs file:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...
        services.AddElmahIo(o =>
        {
            o.ApiKey = "API_KEY";
            o.LogId = new Guid("LOG_ID");
        });
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        ...
        app.UseElmahIo();
        ...
    }
}

Replace API_KEY with your API key (Where is my API key?) and LOG_ID with the id of the log (Where is my log ID?) where you want errors logged. As with any other ASP.NET Core application, it's important to call the UseElmahIo method after setting up other middleware that handles exceptions (like UseDeveloperExceptionPage).

Orchard uses NLog as its internal logging framework. Hooking into this pipeline is a great way to log warnings and errors through NLog to elmah.io as well. Install the Elmah.Io.Nlog NuGet package:

Install-Package Elmah.Io.NLog -Pre

Add the elmah.io target to the NLog.config file:

<?xml version="1.0" encoding="utf-8" ?>
<nlog ...>
  <extensions>
    ...
    <add assembly="Elmah.Io.NLog"/>
  </extensions>
  <targets>
    ...
    <target name="elmahio" type="elmah.io" apiKey="API_KEY" logId="LOG_ID"/>
  </targets>
  <rules>
    ...
    <logger name="*" minlevel="Warn" writeTo="elmahio" />
  </rules>
</nlog>

Make sure not to log Trace and Debug messages to elmah.io, which would quickly use up the included storage.
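With the target and rule in place, anything logged through NLog at Warn level or above flows to elmah.io. A minimal usage sketch using the standard NLog API follows; the class and message are hypothetical and nothing here is Orchard-specific.

using System;
using NLog;

public class InventoryService
{
    private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

    public void Reorder(string sku)
    {
        try
        {
            // ... business logic ...
        }
        catch (Exception e)
        {
            // Logged at Error level, so the rule above forwards it
            // to the elmah.io target.
            Logger.Error(e, "Reorder failed for SKU {0}", sku);
            throw;
        }
    }
}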
https://docs.elmah.io/logging-to-elmah-io-from-orchard/
2019-10-14T06:40:20
CC-MAIN-2019-43
1570986649232.14
[]
docs.elmah.io
Welcome Back to C++ (Modern C++)

Modern C++ code favors:
- std::string and std::wstring types (see <string>) instead of raw char[] arrays.
- C++ Standard Library containers like vector, list, and map instead of raw arrays or custom containers. See <vector>, <list>, and <map>.
- C++ Standard Library algorithms instead of manually coded ones.
- Exceptions, to report and handle error conditions.
- Lock-free inter-thread communication using C++ Standard Library std::atomic<> (see <atomic>) instead of other inter-thread communication mechanisms.
- Inline lambda functions instead of small functions implemented separately.
- Range-based for loops to write more robust loops that work with arrays and C++ Standard Library containers.

Here's how a typical task was accomplished in old-style C++:

#include <vector>

void f()
{
    // Assume circle and shape are user-defined types
    circle* p = new circle( 42 );
    vector<shape*> v = load_shapes();

    for( vector<shape*>::iterator i = v.begin(); i != v.end(); ++i ) {
        if( *i && **i == *p )
            cout << **i << " is a match\n";
    }

    // CAUTION: If v's pointers own the objects, then you
    // must delete them all before v goes out of scope.
    // If v's pointers do not own the objects, and you delete
    // them here, any code that tries to dereference copies
    // of the pointers will cause null pointer exceptions.
    for( vector<shape*>::iterator i = v.begin(); i != v.end(); ++i ) {
        delete *i; // not exception safe
    }

    // Don't forget to delete this, too.
    delete p;
} // end f()

Here's how the same thing is accomplished in modern C++:

#include <memory>
#include <vector>

void f()
{
    // ...
    auto p = make_shared<circle>( 42 );
    vector<shared_ptr<shape>> v = load_shapes();

    for( auto& s : v ) {
        if( s && *s == *p ) {
            cout << *s << " is a match\n";
        }
    }
}

In modern C++, you don't have to use new/delete or explicit exception handling because you can use smart pointers instead. When you use auto type deduction and lambda functions, you can write code quicker, tighten it, and understand it better. And a range-based for loop is cleaner, easier to use, and less prone to unintended errors than a C-style for loop.

In this section:
- Uniform Initialization and Delegating Constructors
- Object Lifetime and Resource Management
- Objects Own Resources (RAII)
- Pimpl for Compile-Time Encapsulation
- String and I/O Formatting (Modern C++)
- Errors and Exception Handling
- Portability at ABI Boundaries

For more information, see the Stack Overflow article "Which C++ idioms are deprecated in C++11".

See also: C++ Language Reference | Lambda Expressions | C++ Standard Library | Visual C++ language conformance
https://docs.microsoft.com/en-us/cpp/cpp/welcome-back-to-cpp-modern-cpp?redirectedfrom=MSDN&view=vs-2019
2019-10-14T05:45:02
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
UIElement.MouseLeftButtonDown Event

Occurs when the left mouse button is pressed while the mouse pointer is over this element.

Important: Some control classes may have inherent class handling for mouse button events, in which case the event can be marked handled before instance handlers are invoked. Override OnMouseLeftButtonDown to implement class handling for this event in derived classes.
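A minimal sketch of class handling in a derived control follows; it uses the standard WPF override pattern, and the class name is hypothetical.

using System.Windows.Controls;
using System.Windows.Input;

public class ClickAwareControl : Control
{
    protected override void OnMouseLeftButtonDown(MouseButtonEventArgs e)
    {
        // Class handling runs before instance handlers attached in XAML
        // or code; set e.Handled = true here only if the event should be
        // suppressed for listeners further along the route.
        base.OnMouseLeftButtonDown(e);
        // ... custom press logic ...
    }
}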
https://docs.microsoft.com/en-us/dotnet/api/system.windows.uielement.mouseleftbuttondown?redirectedfrom=MSDN&view=netframework-4.8
2019-10-14T05:46:40
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
Durban Makes Push to Bring African Docs to Global Audiences

FilmMart carves out space for local voices

"… developing content and really understanding how to tell stories to audiences through cinema," said Toni Monty, head of the Durban Film Office and the DFM. She added: "We really want to increase that focus and create a bigger space for documentaries in Durban. We inched forward on that this year, with the intention of really building that in the future." Eight documentaries were among the 16 African projects in development taking part in this year's finance forum at the DFM. Before pitching to an audience of leading broadcasters, financiers, funding bodies, and other potential investors, the filmmakers took part in an intensive mentorship process led by veteran South African documentarians Don Edkins and Xoliswa Sithole; Joost Daamen, from the Intl. Documentary Festival Amsterdam (IDFA); and Elizabeth Radshaw, Olena Decock, Angela Tucker, Kristi Jacobson, and Ricardo Acosta, from the Hot Docs-Blue Ice Group. The process began six weeks before the DFM, as Edkins and Sithole – often working over patchy Skype connections across the continent – helped guide the filmmakers through the key elements of their Durban pitches. Edkins said the goal was to focus on both the visuals of the pitch, in the form of a teaser or trailer, as well as its narrative arc. "The process is really interesting because it provides for real dialogue around the film," he said. "We discuss the story, how they see it in cinematic terms, why they are making the film, audiences, and how to finance it. We view the teaser or trailer for the film and discuss that." In Durban, an intensive two-day mentorship took place. "For some teams, this is the first time they have pitched publicly, so it was important to provide guidance on key elements of a documentary pitch," Decock and Radshaw explained by email. "You want to outright address creative and production questions in the pitch, but leave enough of a teaser to secure a follow-up meeting with decision-makers. Most importantly, they address key film story questions and position the film for funders and audiences." "I find a real sense of forward movement during the mentoring process," said Edkins. "Some of the filmmakers are still busy shooting research material and finding new elements for their story, and more visual material which has to be cut into the teaser. It's an opportunity for open discussion." Fine-tuning a pitch ahead of the DFM isn't just a way to land crucial sources of financing. "Articulating your story for funders is essentially articulating it for your audience," said Decock and Radshaw. "Early on in the documentary filmmaking process, it's important to understand who would be most interested in your film. Not everyone will get a chance to see your documentary, so you want to ensure you reach those who have the appetite for it." "The African documentary landscape has been making real strides in recent years," said Nataleah Hunter-Young, who's worked on programming at the Toronto Intl. Film Festival and Hot Docs, and curated the doc program of the Durban Intl. Film Festival this year. "I think that is directly linked to increased resourcing and development, both on a national infrastructure level, and in terms of investments in the development of filmmakers, specifically.
Continentally-based organizations like DocuBox, in Kenya, STEPS, in South Africa, and the Ouaga Film Lab, in Burkina Faso, to name a few, have specifically sought to support African documentarians in telling authentic stories in new and challenging ways through mentorship, funding, and skills development.” She added, “A number of major international festivals have also demonstrated their commitments to supporting African filmmakers through investing dollars, time, and resources into emerging artists.” Among the steady supporters who again had a strong presence at this year’s DFM were the Berlinale, the Int’l. Documentary Film Festival Amsterdam, the Intl. Film Festival Rotterdam, and the Hot Docs-Blue Ice Group, whose documentary fund, which is currently accepting online submissions, provides development funds to approximately 4-10 African projects each year, and has awarded funding to 53 projects from 19 countries. “Despite all of this, more initiatives are needed,” Hunter-Young continued. “Some of that is currently being built on the local and continental level, but there is still room for more international partners and a generally stronger international commitment. I think it’s safe to say that we’re also starting to see a push from African filmmakers themselves to document their stories without necessarily needing to rely on European or North American dollars.” Hunter-Young continued: “I anticipate we’ll be seeing many more intra-continental co-productions in the years to come, including North-Sub-Saharan partnerships, which is long overdue!” “What is really necessary is to access finance from different sources to make the film, and have professional support available where it is needed. And have distribution platforms to reach audiences,” added Edkins. For African filmmakers, there’s perhaps never been a better time to find those viewers. “I think international audiences have woken up to the fact that they’ve been fed a lot of lies about the continent through film and television – something that the documentary genre can certainly take a lot of credit for – and I am still hearing filmmakers and funders talking about how ready they are to move away from films commissioned by NGOs, or made by non-African filmmakers with little to no investment in the communities they’re portraying,” said Hunter-Young. “That said, I believe that this awakening amongst audiences has led to a hunger for African stories, told by African people, in ways that ideally no longer have to conform to the traditional genre tropes,” she added. “The challenge, I think, is more with supply than it is with demand. In a city like Toronto – one of the most multicultural cities in the world – demand for African stories is never short. We programmers know the demand is there, so it’s our job to fill it and ensure audiences are aware of it.” Pictured: Nataleah Hunter-Young
http://afridocs.net/durban-makes-push-bring-african-docs-global-audiences/
2019-10-14T05:39:41
CC-MAIN-2019-43
1570986649232.14
[array(['http://afridocs.net/wp-content/uploads/2018/07/Untitled-design-1.jpg', None], dtype=object) ]
afridocs.net
Depending on the selected routing method, load balancer, and QoS setting, a slave will automatically be chosen when the host connects. The maximum allowed latency can be set to limit the connection to only use a slave that is within the specified maximum applied latency limit. This can be specified in the connection string, and enables slave selection based on the slave which has a latency within the specified limit. For example, using the connection string:

jdbc:mysql://connector1:3306/database?maxAppliedLatency=5

will specify that a host with a replication latency of less than 5 seconds should be selected. The option can be set globally by configuring the JDBC options used by the connector via the --connector-max-slave-latency option to tpm (in seconds):

shell> ./tools/tpm update alpha --connector-max-slave-latency=10

The Connector computes latency by polling the Replicator every 3 seconds for the current replication-view latency. This gives the Connector an accuracy of +/- 3 seconds, which means that values of 3 or less will not function as expected. For any queries that have a very low tolerance for replication latency, we strongly suggest you read directly from the master database server only. This ensures the latest data is being read. The --connector-max-slave-latency flag does not ensure the slave has applied the latest sequence number, just that its latency at the last commit was under the specified number. This behavior can be adjusted by specifying --use-relative-latency=true in the configuration. NOTE: --use-relative-latency=true is a cluster-wide setting; cctrl and trepctl will also report relative latency based on this setting.

Applied Latency
The applied latency is the measured time between a transaction committing on the master and that transaction being applied (written) to the slave. See Section E.2.6, "Terminology: Fields appliedLatency" for more information.

Relative Latency
The relative latency is the time since the slave last committed a transaction, i.e. since the applied sequence number last advanced. See Section E.2.70, "Terminology: Fields relativeLatency" for more information.

Comparing Relative and Applied Latencies
Both relative and applied latency are visible via trepctl status. For example, a slave might show an applied latency (time to write to the slave DB) of 0.571 seconds from the time it committed on the master, and have last committed something to the slave DB 8.944 seconds ago. If relative latency increases significantly in a busy system, it may be a sign that replication is stalled. This is a good parameter to check in monitoring scripts.

For more information, see --connector-max-slave-latency and Section 4.1.6.3, "Relative Latency".

To troubleshoot the latency-based routing decisions the connector makes, uncomment the below lines in /opt/continuent/tungsten/tungsten-connector/conf/log4j.properties:

#log4j.logger.com.continuent.tungsten.router.resource.loadbalancer=debug, stdout
#log4j.additivity.com.continuent.tungsten.router.resource.loadbalancer=false

The log will then show and explain which node the connector chose and why. No connector restart is required to enable logging. If you re-comment the lines, a restart will be required. To disable without a connector restart, replace the word debug with the word info.
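For illustration, the same option can be used from application code. The following is a minimal, hypothetical Java sketch: the host connector1, the database name, the table, and the credentials are placeholders, and it assumes the standard MySQL JDBC driver is on the classpath (the connector speaks the MySQL wire protocol).

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LatencyAwareRead {
    public static void main(String[] args) throws Exception {
        // maxAppliedLatency=5 asks the connector to route this connection
        // to a slave whose applied latency was under 5 seconds at its last
        // commit; host, database, and credentials below are placeholders.
        String url = "jdbc:mysql://connector1:3306/database?maxAppliedLatency=5";
        try (Connection conn = DriverManager.getConnection(url, "app_user", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM orders")) {
            while (rs.next()) {
                System.out.println("rows = " + rs.getLong(1));
            }
        }
    }
}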
https://docs.continuent.com/tungsten-clustering-6.1/connector-routing-latency.html
2019-10-14T06:15:19
CC-MAIN-2019-43
1570986649232.14
[]
docs.continuent.com
: ) LC Bill Hearing Materials Wisconsin Ethics Commission information 2015 Assembly Bill 96 - Tabled
http://docs-preview.legis.wisconsin.gov/2015/proposals/sb74
2019-10-14T06:19:31
CC-MAIN-2019-43
1570986649232.14
[]
docs-preview.legis.wisconsin.gov
Install Microsoft Teams App for elmah.io
To install the integration with Microsoft Teams, go to Teams and click the Store menu item. Search for "elmah.io" and click the app:
Select your team and click the Install button:
Select which channel you want elmah.io messages in and click the Set up button:
A new webhook URL is generated. Click the Copy Text button followed by the Save button:
The elmah.io integration is now configured on Microsoft Teams and you should see the following screen:
The final step is to paste the webhook URL that you just copied into elmah.io. Log into elmah.io and go to the log settings. Click the Apps tab. Locate the Microsoft Teams app and click the Install button. In the overlay, paste the URL from the previous step:
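As an optional sanity check that is not part of the official guide, a test message can usually be posted to an Office 365 incoming webhook with a simple JSON payload; the URL below is a placeholder for the webhook URL you copied:

curl -H "Content-Type: application/json" \
     -d '{"text": "Test message from the elmah.io setup"}' \
     "https://outlook.office.com/webhook/<your-webhook-id>"

If the message shows up in the selected channel, the webhook URL was copied correctly.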
https://docs.elmah.io/elmah-io-apps-teams/
2019-10-14T06:18:46
CC-MAIN-2019-43
1570986649232.14
[array(['../images/apps/teams/step1.png', 'Search for elmah.io'], dtype=object) array(['../images/apps/teams/step2.png', 'Add to a team'], dtype=object) array(['../images/apps/teams/step3.png', 'Select the channel'], dtype=object) array(['../images/apps/teams/step4.png', 'Copy the webhook URL'], dtype=object) array(['../images/apps/teams/step5.png', 'Configured'], dtype=object) array(['../images/teams_installapp.png', 'Install Microsoft Teams app'], dtype=object) ]
docs.elmah.io
Aggregations in Table and Chart Components
There are specific ways that you can take advantage of aggregations in both table and chart components.

In a Table Component
Column summaries
Summaries are a standard Table component element that appears at the bottom of a table's column, and they display the result of an aggregation while disregarding groupings.

Skuid on Salesforce
Salesforce imposes a limit on how many records can be returned in a query. If a Skuid on Salesforce page throws an Apex heap size error, this limit has been exceeded. Using an aggregate model is necessary for the visualization to display correctly.
https://docs.skuid.com/latest/en/skuid/models/aggregate-model/aggregation-table-chart.html
2019-10-14T07:08:34
CC-MAIN-2019-43
1570986649232.14
[]
docs.skuid.com
All content with label aws+client+composition+custom_interceptor+distribution+gatein+gridfs+infinispan+installation+post+query+write_through. Related Labels: expiration, datagrid, coherence, interceptor, server, rehash, replication, transactionmanager, dist, release, partitioning, deadlock, future, archetype, jbossas, lock_striping, nexus, guide, schema, listener, editor, state_transfer, cache, amazon, s3, grid, memcached, jcache, test, api, xsd, ehcache, maven, documentation, wcm, page, write_behind, 缓存, ec2, s, hibernate, getting, templates, interface, clustering, setup, eviction, template, concurrency, jboss_cache, examples, tags, import, index, events, batch, hash_function, configuration, buddy_replication, loader, colocation, cloud, remoting, mvcc, notification, tutorial, murmurhash2, xml, read_committed, started, cachestore, data_grid, hibernate_search, resteasy, cluster, br, development, websocket, async, transaction, xaresource, build, hinting, searchable, categories, demo, scala, command-line, migration, non-blocking, rebalance, filesystem, jpa, design, tx, eventing, shell, content, client_server, testng, murmurhash, infinispan_user_guide, standalone, repeatable_read, snapshot, hotrod, webdav, tasks, docs, batching, consistent_hash, store, jta, faq, 2lcache, as5, downloads, jsr-107, jgroups, lucene, locking, rest, uploads, hot_rod more » ( - aws, - client, - composition, - custom_interceptor, - distribution, - gatein, - gridfs, - infinispan, - installation, - post, - query, - write_through )
https://docs.jboss.org/author/label/aws+client+composition+custom_interceptor+distribution+gatein+gridfs+infinispan+installation+post+query+write_through
2019-10-14T07:26:23
CC-MAIN-2019-43
1570986649232.14
[]
docs.jboss.org
freud
Danger: freud v2.0 is currently in beta on the master branch. That's right – freud just got a whole lot… beta. (Pun intended.) This major rewrite of the freud library brings new APIs, speedups, and many new features. Please report any issues you encounter while using the beta or learning the new API.

freud uses NumPy arrays for input and output, enabling integration with the scientific Python ecosystem for many typical materials science workflows.

Resources
Some other helpful links for working with freud:
Find examples of using freud on the examples page.
Find detailed tutorials and reference information at the official freud documentation.
View and download the code on the GitHub repository.
Ask for help on the freud-users Google Group.
Report issues or request features using the issue tracker.

Support and Contribution
Please visit our repository on GitHub for the library source code. Any issues or bugs may be reported at our issue tracker, while questions and discussion can be directed to our forum. All contributions to freud are welcomed via pull requests!

Table of Contents
Getting Started
Reference API
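As a quick taste of that NumPy-centric workflow, here is a minimal sketch using the v2 API; the box size, particle positions, and bin settings are arbitrary placeholder values:

import numpy as np
import freud

# A cubic simulation box and some random points as placeholder input.
box = freud.box.Box.cube(L=10.0)
points = np.random.uniform(-5.0, 5.0, size=(1000, 3)).astype(np.float32)

# Compute a radial distribution function; inputs and outputs are NumPy arrays.
rdf = freud.density.RDF(bins=50, r_max=4.0)
rdf.compute(system=(box, points))

print(rdf.bin_centers.shape, rdf.rdf.shape)  # plain NumPy arrays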
https://freud.readthedocs.io/en/latest/?badge=latest
2019-10-14T06:50:49
CC-MAIN-2019-43
1570986649232.14
[]
freud.readthedocs.io
This API call applies to both Armor Complete and Armor Anywhere users. The Add Recipient to a Ticket API adds a recipient to a ticket.

Sample Request
POST
{ "id": 2152 }

Input
The following table describes the different parts of this API call:
The following table describes the parameter (or parameters) for this API call:
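Because the endpoint path is not preserved above, the following Python sketch is purely illustrative: the base URL, ticket path, and auth header are hypothetical placeholders; only the request body comes from the sample above.

import requests

# Hypothetical endpoint and token; consult the Armor API reference for
# the real path and authentication scheme.
url = "https://api.armor.example/tickets/12345/recipients"
headers = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

# The body matches the sample request: the id of the recipient to add.
response = requests.post(url, headers=headers, json={"id": 2152})
response.raise_for_status()
print(response.status_code)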
https://docs.armor.com/pages/viewpage.action?pageId=14483975
2019-10-14T07:20:48
CC-MAIN-2019-43
1570986649232.14
[]
docs.armor.com
older version of V-Ray to V-Ray 3.6, visit the Migrating from Previous Versions page for V-Ray 3.6 for SketchUp. V-Ray for SketchUp licenses also work with V-Ray 3.6, allowing anyone who purchases an upgrade to complete unfinished projects using V-Ray 3.6 if needed.

Opening a Scene
When a scene saved with V-Ray version 3.6 or older is loaded in SketchUp with V-Ray Next, it is updated automatically. When the scene is saved, the updates are saved with it.

Materials
Material Structure
The internal material structure used in V-Ray for SketchUp

Project Material ID Colors Tool
Found in Extensions > V-Ray > Tools, this tool allows you to assign a random color ID to your materials. This provides a workaround for materials from v2.0 scenes, which all have black color IDs. Note that it will not affect Material ID numbers, which by default are set to.

Textures
Triplanar texture
Triplanar texture is now affected by SketchUp material size. In previous V-Ray versions, it was projected at a UV size of 1x1 inch and multiplied by its Size parameter (default value 10). To restore the way it looked in version 3.6, adjust either the SketchUp material size to 1 by 1 inch or decrease the Triplanar's Size value accordingly.

Lights
Light viewport widgets
The viewport widgets for all V-Ray Lights and the V-Ray Infinite Plane are updated. They now come with additional lines to help with snapping, positioning, or rotating. To switch back from their new look to the old solid display, a new toolbar button is implemented: the Enable Solid Widgets button.

Light Intensity Tool
The Light Intensity tool is no longer available. Its functionality is now part of the new V-Ray Scene Interaction Tool.
https://docs.chaosgroup.com/display/VNFS/Migrating+from+Previous+Versions
2019-10-14T07:05:55
CC-MAIN-2019-43
1570986649232.14
[]
docs.chaosgroup.com
Overview of Generics in the .NET Framework
[This documentation is for preview only, and is subject to change in later releases. Blank topics are included as placeholders.]
This topic provides an overview of generics in the .NET Framework and a summary of generic types or methods. It also defines the terminology used to discuss generics.

Public Class Generic(Of T)
    Public Field As T
End Class

public class Generic<T>
{
    public T Field;
}

generic<typename T> public ref class Generic
{
public:
    T Field;
};

Dim g As New Generic(Of String)
g.Field = "A string"

Generic<string> g = new Generic<string>();
g.Field = "A string";

Generic<String^>^ g = gcnew Generic<String^>();
g->Field = "A string";

Generics Terminology
Constraints are limits placed on generic type parameters. For example, you might limit a type parameter to types that implement the IComparer<T> generic interface, to ensure that instances of the type can be ordered. You can also constrain type parameters to types that have a particular base class, or that have a default constructor.

Function Generic(Of T)(ByVal arg As T) As T
    Dim temp As T = arg
    ...
End Function

T Generic<T>(T arg) { T temp = arg; ...}

generic<typename T> T Generic(T arg) { T temp = arg; ...};

For more information, see Generics in Visual Basic and Introduction to Generics (C# Programming Guide).

See Also
Tasks: How to: Define a Generic Type with Reflection Emit
Reference: System.Collections.Generic, System.Collections.ObjectModel, Introduction to Generics (C# Programming Guide)
Concepts: When to Use Generic Collections, Generic Types in Visual Basic, Overview of Generics in Visual C++, Generic Collections in the .NET Framework, Generic Delegates for Manipulating Arrays and Lists, Advantages and Limitations of Generics
Other Resources: Commonly Used Collection Types, Generics in the .NET Framework
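To make the constraints discussion concrete, here is a short illustrative C# sample (not part of the original topic) that combines an interface constraint with the default-constructor constraint:

using System;
using System.Collections.Generic;

public static class GenericHelpers
{
    // T must implement IComparable<T> so instances can be ordered, and
    // must have a public parameterless constructor (the new() constraint).
    public static T MaxOrDefault<T>(IEnumerable<T> items)
        where T : IComparable<T>, new()
    {
        T best = new T(); // allowed because of the new() constraint
        foreach (var item in items)
        {
            if (item.CompareTo(best) > 0)
            {
                best = item;
            }
        }
        return best;
    }
}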
https://docs.microsoft.com/en-us/previous-versions/ms172193(v=vs.100)?redirectedfrom=MSDN
2019-10-14T05:26:37
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
This is an older version of Search Guard.

User Impersonation
The Search Guard user impersonation feature lets you submit requests on behalf of another user. It means that a user can log in with his or her credentials, and then impersonate as another user, without having to know this user's username or password. For example, this can be useful when an admin needs to debug permission problems for a particular user.
In order for user impersonation to work, you must be able to retrieve the user from one of the configured authentication backends. The Active Directory/LDAP and the Internal Users authentication backends support impersonation out of the box. If you use a custom authentication backend, make sure to implement the AuthenticationBackend#exists(User user) method correctly:

/**
 * Lookup for a specific user in the authentication backend
 *
 * @param user The user for which the authentication backend should be queried
 * @return true if the user exists in the authentication backend, false otherwise
 */
boolean exists(User user);

Permission settings
To give a user permission to impersonate as another user, configure the following setting:

searchguard.authcz.rest_impersonation_user.<allowed_user>:
  - <impersonated_user_1>
  - <impersonated_user_2>
  - ...

For example:

searchguard.authcz.rest_impersonation_user.admin:
  - user_1
  - user_2

In this example, the user admin has the permission to impersonate as user user_1 and user_2. Wildcards are supported, so the following snippet grants the user admin the permission to impersonate as any user that starts with user_.

searchguard.authcz.rest_impersonation_user.admin:
  - user_*

Using impersonation on the REST layer
To impersonate as another user, specify the username in the sg_impersonate_as HTTP header of the REST call, for example:

curl -u admin:password \
  -H "sg_impersonate_as: user_1" \
  -XGET

Effects on audit- and compliance logging
When using impersonation, the audit and compliance events will track both the initiating user and the impersonated user.
https://docs.search-guard.com/6.x-21/user-impersonation
2019-10-14T06:50:18
CC-MAIN-2019-43
1570986649232.14
[]
docs.search-guard.com
Inherits: CanvasItem < Node < Object
Inherited By: TextureRect, ColorRect, Label, Tabs, GraphEdit, VideoPlayer, NinePatchRect, LineEdit, Container, TextEdit, BaseButton, Popup, Tree, Separator, ReferenceRect, Panel, TabContainer, Range, RichTextLabel, ItemList
Category: Core

All User Interface nodes inherit from Control. Features anchors and margins to adapt its position and size to its parent.

Signals
Emitted when the node gains keyboard focus.
Emitted when the node loses keyboard focus.
Emitted when the node receives an InputEvent.
Emitted when the node's minimum size changes.
Emitted when a modal Control is closed. See show_modal.
Emitted when the mouse enters the control's Rect area, provided its mouse_filter lets the event reach it.
Emitted when the mouse leaves the control's Rect area, provided its mouse_filter lets the event reach it.
Emitted when the control changes size.
Emitted when one of the size flags changes. See size_flags_horizontal and size_flags_vertical.

Member Variables
ANCHOR_* constants. Default value: ANCHOR_BEGIN.
ANCHOR_* constants. Default value: ANCHOR_BEGIN.
ANCHOR_* constants. Default value: ANCHOR_BEGIN.
ANCHOR_* constants. Default value: ANCHOR_BEGIN.
Control. If this property is not set, Godot will give focus to the closest Control to the bottom of this one. If the user presses Tab, Godot will give focus to the closest node to the right first, then to the bottom. If the user presses Shift+Tab, Godot will look to the left of the node, then above it.
Control. If this property is not set, Godot will give focus to the closest Control to the left of this one.
Control. If this property is not set, Godot will give focus to the closest Control to the bottom of this one.
Control. If this property is not set, Godot will give focus to the closest Control to the bottom of this one.
Margins are often controlled by one or multiple parent Container nodes. Margins update automatically when you move or resize the node.
MOUSE_FILTER_* constants. See the constants to learn what each does.
SIZE_* constants to change the flags. See the constants to learn what each does.
SIZE_EXPAND size flag, the parent Container will let it take more or less space depending on this property. If this node has a stretch ratio of 2 and its neighbour a ratio of 1, this node will take two thirds of the available space.
SIZE_* constants to change the flags. See the constants to learn what each does.
Control children use.
Rect area.
Rect area.
Control.
Happens when you call one of the add_*_override methods.

Enumerations
enum SizeFlags
Control based on its bounding box, so it doesn't work with the fill or expand size flags. Use with size_flags_horizontal and size_flags_vertical.

enum CursorShape
CURSOR_BDIAGSIZE. It tells the user they can resize the window or the panel both horizontally and vertically.
CURSOR_VSIZE.
CURSOR_HSIZE.

enum FocusMode
enum GrowDirection
enum LayoutPresetMode
enum LayoutPreset
Control will fit its parent container. Use with set_anchors_preset.

enum MouseFilter
enum Anchor
Rect, in the top left. Use it with one of the anchor_* member variables, like anchor_left. To change all 4 anchors at once, use set_anchors_preset.
Rect, in the bottom right. Use it with one of the anchor_* member variables, like anchor_left. To change all 4 anchors at once, use set_anchors_preset.

Description
Base class for all User Interface or UI related nodes. Control features a bounding rectangle that defines its extents, an anchor position relative to its parent and margins that represent an offset to the anchor.
The margins update automatically when the node, any of its parents, or the screen size change. For more information on Godot's UI system, anchors, margins, and containers, see the related tutorials in the manual. To build flexible UIs, you'll need a mix of UI elements that inherit from Control and Container nodes.

User Interface nodes and input
Godot sends input events to the scene's root node first, by calling Node._input. Node._input forwards the event down the node tree to the nodes under the mouse cursor, or on keyboard focus. To do so, it calls MainLoop._input_event. Call accept_event so no other node receives the event. Once you accept an input, it becomes handled so Node._unhandled_input will not process it. Only one Control node can be in keyboard focus. Only the node in focus will receive keyboard events. To get the focus, call grab_focus. Control nodes lose focus when another node grabs it, or if you hide the node in focus.
Set mouse_filter to MOUSE_FILTER_IGNORE to tell a Control node to ignore mouse or touch events. You'll need it if you place an icon on top of a button.
Theme resources change the Control's appearance. If you change the Theme on a Control node, it affects all of its children. To override some of the theme's parameters, call one of the add_*_override methods, like add_font_override. You can override the theme with the inspector.

Member Function Description
Returns the minimum size this Control can shrink to. The node can never be smaller than this minimum size.
The node's parent forwards input events to this method. Use it to process and accept inputs on UI elements. See accept_event. Replaces Godot 2's _input_event.
Marks an input event as handled. Once you accept an input event, it stops propagating, even to nodes listening to Node._unhandled_input or Node._unhandled_key_input.
Overrides the color in the theme resource the node uses.
Overrides an integer constant in the Theme resource the node uses. If the constant is invalid, Godot clears the override. See Theme.INVALID_CONSTANT for more information.
Overrides the name font in the theme resource the node uses. If font is empty, Godot clears the override.
Overrides the name icon in the theme resource the node uses. If icon is empty, Godot clears the override.
Overrides the name shader in the theme resource the node uses. If shader is empty, Godot clears the override.
Overrides the name Stylebox in the theme resource the node uses. If stylebox is empty, Godot clears the override.
Godot calls this method to test if data from a control's get_drag_data can be dropped at position. position is local to this control. This method should only be used to test the data. Process the data in drop_data.

extends Control

func can_drop_data(position, data):
    # check position if it is relevant to you
    # otherwise just check data
    return typeof(data) == TYPE_DICTIONARY and data.has('expected')

Godot calls this method to pass you the data from a control's get_drag_data result. Godot first calls can_drop_data to test if data is allowed to drop at position where position is local to this control.

extends ColorRect

func can_drop_data(position, data):
    return typeof(data) == TYPE_DICTIONARY and data.has('color')

func drop_data(position, data):
    color = data['color']

Forces drag and bypasses get_drag_data and set_drag_preview by passing data and preview. Drag will start even if the mouse is neither over nor pressed on this control. The methods can_drop_data and drop_data must be implemented on controls that want to receive drop data.
Returns the mouse cursor shape the control displays on mouse hover, one of the CURSOR_* constants.
Godot calls this method to get data that can be dragged and dropped onto controls that expect drop data. Return null if there is no data to drag. Controls that want to receive drop data should implement can_drop_data and drop_data. position is local to this control. Drag may be forced with force_drag. A preview that will follow the mouse and that should represent the data can be set with set_drag_preview. A good time to set the preview is in this method.

extends Control

func get_drag_data(position):
    var mydata = make_data()
    set_drag_preview(make_preview(mydata))
    return mydata

Returns MARGIN_LEFT and MARGIN_TOP at the same time. This is a helper (see set_margin).
Returns the control that currently owns the keyboard focus, or null if none.
Return position and size of the Control, relative to the top-left corner of the window Control. This is a helper (see get_global_position, get_size).
Return the minimum size this Control can shrink to. A control will never be displayed or resized smaller than its minimum size.
Return position and size of the Control, relative to the top-left corner of the parent Control. This is a helper (see get_position, get_size).
Return the rotation (in radians).
Return the tooltip, which will appear when the cursor is resting over this control.
Steal the focus from another control and become the focused control (see set_focus_mode).
Return whether the Control is the current focused control (see set_focus_mode).
Give up the focus; no other control will be able to receive keyboard input.
Sets MARGIN_LEFT and MARGIN_TOP at the same time. This is a helper (see set_margin).
Forwards the handling of this control's drag and drop to target control. Forwarding can be implemented in the target control similar to the methods get_drag_data, can_drop_data, and drop_data but with two differences:

# ThisControl.gd
extends Control

func _ready():
    set_drag_forwarding(target_control)

# TargetControl.gd
extends Control

func can_drop_data_fw(position, data, from_control):
    return true

func drop_data_fw(position, data, from_control):
    my_handle_data(data)

func get_drag_data_fw(position, from_control):
    set_drag_preview(my_preview)
    return my_data()

Shows the given control at the mouse pointer. A good time to call this method is in get_drag_data.
Sets MARGIN_RIGHT and MARGIN_BOTTOM at the same time. This is a helper (see set_margin).
Set the rotation (in radians).
Display a Control as modal. Control must be a subwindow. Modal controls capture the input signals until closed or the area outside them is accessed. When a modal control loses focus, or the ESC key is pressed, they automatically hide. Modal controls are used extensively for popup dialogs and menus.

© 2014–2018 Juan Linietsky, Ariel Manzur, Godot Engine contributors Licensed under the MIT License.
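As a small, hypothetical sketch of the focus and theme-override calls described above (the red font color and the focus mode are arbitrary choices):

extends Label

func _ready():
    # Override one theme parameter locally instead of editing the Theme resource.
    add_color_override("font_color", Color(1, 0, 0))
    # Allow this control to take keyboard focus, then grab it.
    focus_mode = Control.FOCUS_ALL
    grab_focus()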
https://docs.w3cub.com/godot~3.0/classes/class_control/
2019-10-14T05:23:18
CC-MAIN-2019-43
1570986649232.14
[]
docs.w3cub.com
Semi-fuzz(Fuzz) testing¶ Unexpected input can lead ctags to enter an infinite loop. The fuzz target tries to identify these conditions by passing semi-random (semi-broken) input to ctags. $ make fuzz LANGUAGES=LANG1[,LANG2,...] With this command line, ctags is run for random variations of all test inputs under Units/*/input.* of languages defined by LANGUAGES macro variable. In this target, the output of ctags is ignored and only the exit status is analyzed. The ctags binary is also run under timeout command, such that if an infinite loop is found it will exit with a non-zero status. The timeout will be reported as following: [timeout C] Units/test.vhd.t/input.vhd This means that if C parser doesn’t stop within N seconds when Units/test.vhd.t/input.vhd is given as an input, timeout will interrupt ctags. The default duration can be changed using TIMEOUT=N argument in make command. If there is no timeout but the exit status is non-zero, the target reports it as following: [unexpected-status(N) C] Units/test.vhd.t/input.vhd The list of parsers which can be used as a value for LANGUAGES can be obtained with following command line $ ./ctags --list-languages Besides LANGUAGES and TIMEOUT, fuzz target also takes the following parameters: VG=1 Run ctags under valgrind. If valgrind finds a memory error it is reported as:[valgrind-error Verilog] Units/array_spec.f90.t/input.f90 The valgrind report is recorded at Units/\*/VALGRIND-${language}.tmp. As the same as units target, this semi-fuzz test target also calls misc/units shrink when a test case is failed. See “Units test facility” about the shrunk result.
http://docs.ctags.io/en/latest/semifuzz.html
2019-10-14T06:18:42
CC-MAIN-2019-43
1570986649232.14
[]
docs.ctags.io
Retrieves a list that describes the streaming sessions for a specified stack and fleet. If a UserId is provided for the stack and fleet, only streaming sessions for that user are described. If an authentication type is not provided, the default is to authenticate users using a streaming URL.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.

Synopsis

describe-sessions
  --stack-name <value>
  --fleet-name <value>
  [--user-id <value>]
  [--authentication-type <value>]
  [--cli-input-json <value>]
  [--starting-token <value>]
  [--page-size <value>]
  [--max-items <value>]
  [--generate-cli-skeleton <value>]

Options

--stack-name (string)
  The name of the stack. This value is case-sensitive.
--fleet-name (string)
  The name of the fleet. This value is case-sensitive.
--user-id (string)
  The user identifier.
--authentication-type (string)
  The authentication method. Specify API for a user authenticated using a streaming URL or SAML for a SAML federated user. The default is to authenticate users using a streaming URL.

Output

Sessions -> (list)
  Information about the streaming sessions.
  (structure)
    Describes a streaming session.
    Id -> (string) The identifier of the streaming session.
    UserId -> (string) The identifier of the user for whom the session was created.
    StackName -> (string) The name of the stack for the streaming session.
    FleetName -> (string) The name of the fleet for the streaming session.
    State -> (string) The current state of the streaming session.
    ConnectionState -> (string) Specifies whether a user is connected to the streaming session.
    StartTime -> (timestamp) The time when a streaming instance is dedicated for the user.
    MaxExpirationTime -> (timestamp) The time when the streaming session is set to expire. This time is based on the MaxUserDurationInSeconds value, which determines the maximum length of time that a streaming session can run. A streaming session might end earlier than the time specified in SessionMaxExpirationTime, when the DisconnectTimeOutInSeconds elapses or the user chooses to end his or her session. If the DisconnectTimeOutInSeconds elapses, or the user chooses to end his or her session, the streaming instance is terminated and the streaming session ends.
    AuthenticationType -> (string) The authentication method. The user is authenticated using a streaming URL (API) or SAML 2.0 federation (SAML).
    NetworkAccessConfiguration -> (structure)
      The network details for the streaming session.
      EniPrivateIpAddress -> (string) The private IP address of the elastic network interface that is attached to instances in your VPC.
      EniId -> (string) The resource identifier of the elastic network interface that is attached to instances in your VPC. All network interfaces have the eni-xxxxxxxx resource identifier.
NextToken -> (string)
  The pagination token to use to retrieve the next page of results for this operation. If there are no more pages, this value is null.
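For example, a call for a specific SAML federated user might look like the following; the stack, fleet, and user values are placeholders:

$ aws appstream describe-sessions \
    --stack-name MyStack \
    --fleet-name MyFleet \
    --user-id jane@example.com \
    --authentication-type SAML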
https://docs.aws.amazon.com/cli/latest/reference/appstream/describe-sessions.html
2019-08-17T11:34:03
CC-MAIN-2019-35
1566027312128.3
[]
docs.aws.amazon.com
Operating Systems
We recommend using the following operating systems. Note: if you are using the pre-configured Virtual Appliance from our site, all of the following requirements are already satisfied.
- Squid 4 to act as an internet proxy server. The Squid web proxy project is open source and is available out of the box on almost all Linux distributions.
- 10 and Ubuntu 18.
- Python programming language version 3.6+, which is usually directly available on all supported operating systems.
- Python Django 2.2 web framework for correct functioning of the Web UI.
https://docs.diladele.com/administrator_guide_stable/install/operating_systems.html
2019-08-17T11:21:35
CC-MAIN-2019-35
1566027312128.3
[]
docs.diladele.com
Configure environment security
Assign security roles to users
To assign a user to an environment role, an Environment Admin can take these steps in the PowerApps Admin center:
Note: Currently, roles can only be assigned to users. Please check back for when assigning a role to a security group is available.
1. Select the environment in the environments table.
2. Select the Security tab.
3. Check whether the user already exists in the environment by viewing the list of users in the environment. If the user doesn't exist, you can add the user from the PowerApps Admin center.
4. Add the user by entering the email address of the user in your organization and selecting Add user.
5. Wait a few minutes, then check whether the user is available in the list of users in the environment.
6. Select the user from the list of users in the environment.
7. Assign the role to the user.
8. Select OK to update the assignments to the environment role.

Predefined security roles
The PowerApps environment includes predefined security roles that reflect common user tasks, with access levels defined to match the security best-practice goal of providing access to the minimum amount of business data required to use the app.
*Privilege is global scope unless specified otherwise.
The Environment Maker role can not only create resources within an environment, but can also distribute the apps they build in an environment to other users in your organization. They can share the app with individual users. For more information, see Share an app in PowerApps.
Users making apps that connect to the database and need to create or update entities and security roles should be assigned the System Customizer role as well, along with Environment Maker, as the Environment Maker role has no privileges on the database.

Create or configure a custom security role
If your app uses a custom entity, its privileges must be explicitly granted in a security role before your app can be used. You can either add these privileges in an existing security role or create a custom security role. There is a set of minimum privileges that are required in order for the new security role to be used - see Minimum privileges to run app.
Tip: If you want to create a custom security role with the minimum required privileges to run an app, check out the section below: Minimum privileges to run app.
An environment might maintain records which can be used by multiple apps, so you might need multiple security roles to access the data with different privileges. For example:
- Some of the users (Type A) might only need to read, update, and attach other records, so their security role will have read, write, and append privileges.
- Other users might need all the privileges that users of Type A have, plus the ability to create, append to, delete, and share, so their security role will have create, read, write, append, delete, assign, append to, and share privileges.
For more information about access and scope privileges, see Security roles.
1. In PowerApps Admin center, select the environment where you want to update a security role.
2. Click on the Dynamics 365 Administration Center link in the Details tab to manage the environment in the Dynamics 365 admin center.
3. Select the instance (with the same name as the environment) and select Open.
4. If you see published apps and tiles, look in the upper-right corner and select the Gear icon. Then select Advanced settings.
5. In the menu bar, select Settings > Security.
6. Select Security roles.
7. Select New.
From the security role designer, enter a role name in the Details tab. From the other tabs, you'll select the actions and the scope for performing that action.
1. Select a tab and search for your entity; for example, the Custom Entities tab for setting permissions on a custom entity.
2. Select the privileges Read, Write, Append.
3. Select Save and Close.

Minimum privileges to run app
When you create a custom security role, you need to include a set of minimum privileges into the security role in order for a user to run an app. We've created a solution you can import that provides a security role with the required minimum privileges.
Start by downloading the solution from the Download Center: Common Data Service minimum privilege security role. Then, follow the directions to import the solution: Import, update, and export solutions.
When you import the solution, it creates the min prv apps use role, which you can copy (see: Create a security role by Copy Role). When copying the role is complete, navigate to each tab - Core Records, Business Management, Customization, etc. - and set the appropriate privileges.
Important: You should try out the solution in a development environment before importing into a production environment.
https://docs.microsoft.com/de-de/power-platform/admin/database-security
2019-08-17T12:03:06
CC-MAIN-2019-35
1566027312128.3
[]
docs.microsoft.com
Embedding a policy on your website is easy. Simply log in to your dashboard, hover over the policy you want to embed, and click on the embed button that appears. Then, copy the public link (if you want your policy hosted by Termly) or the code snippet (if you want to display your policy directly on your site), and paste it into an <a> tag (public link) or into the <body> (code snippet) of your website's HTML.
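For instance, the public link option is just an ordinary anchor anywhere in your HTML; the URL below is a made-up placeholder for the link copied from the dashboard:

<!-- Public link option: a plain anchor anywhere in your page -->
<a href="https://app.termly.io/document/privacy-policy/00000000-0000"
   target="_blank" rel="noopener">Privacy Policy</a>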
https://docs.termly.io/how-to-embed-a-policy-on-your-website
2019-08-17T10:46:45
CC-MAIN-2019-35
1566027312128.3
[]
docs.termly.io
sample() function
The sample() function selects a subset of the records from the input table.
Function type: Selector
Output data type: Object

sample(n: 5, pos: -1)

Parameters
n
Sample every Nth element.
Data type: Integer

pos
The position offset from the start of results where sampling begins. pos must be less than n. If pos is less than 0, a random offset is used. Defaults to -1 (random offset).
Data type: Integer

Examples
from(bucket: "example-bucket")
  |> range(start: -1d)
  |> filter(fn: (r) =>
    r._measurement == "cpu" and
    r._field == "usage_system"
  )
  |> sample(n: 5, pos: 1)
https://v2.docs.influxdata.com/v2.0/reference/flux/functions/built-in/transformations/selectors/sample/
2019-08-17T10:57:27
CC-MAIN-2019-35
1566027312128.3
[]
v2.docs.influxdata.com