Dataset columns: content (string), url (string), timestamp (timestamp[ms]), dump (string), segment (string), image_urls (string), netloc (string).
UIElement.UpdateLayout Method. Note: Microsoft Silverlight will reach end of support after October 2021. Namespace: System.Windows Assembly: System.Windows (in System.Windows.dll) Syntax (Visual Basic): Public Sub UpdateLayout; (C#): public void UpdateLayout() Remarks: Frequent calls to InvalidateArrange, or in particular to UpdateLayout, have significant performance consequences if large numbers of elements exist in the UI. Avoid calling this method unless you absolutely require precise layout state for subsequent calls to other APIs in your code.
https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/ms599327%28v%3Dvs.95%29
2019-10-14T07:03:06
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
Last updated 20th August, 2019. Before you begin: This tutorial presupposes that you already have a working OVH Managed Kubernetes cluster, and that you have deployed an application there using the OVH Managed Kubernetes LoadBalancer. If you want to know more about those topics, please look at the corresponding OVH Managed Kubernetes guides. The installation manifest first creates the Kubernetes objects needed for the Ingress Controller and then deploys the controller itself (the kubectl apply output includes, among other objects, a namespace and a deployment.apps/nginx-ingress-controller labelled app.kubernetes.io/part-of); the controller is then exposed behind a LoadBalancer URL of the form xxxxxxxxx.lb.c1.gra.k8s.ovh.net. Copy the next YAML snippet into a patch-ingress-configmap.yml file: data: use-proxy-protocol: "true" proxy-real-ip-cidr: "0.0.0.0/32" use-forwarded-headers: "false" http-snippet: | geo $realip_remote_addr $is_lb { default 0; 10.108.0.0/14 1; } server-snippet: | if ($is_lb != 1) { return 403; } And apply it in your cluster: kubectl -n ingress-nginx patch configmap nginx-configuration -p "$(cat patch-ingress-configmap.yml)" After applying the patch, you need to restart the Ingress Controller: kubectl -n ingress-nginx get pod | grep 'ingress' | cut -d " " -f1 - | xargs -n1 kubectl -n ingress-nginx delete pod You should see the configuration being patched and the controller pod deleted (and recreated): $ kubectl -n ingress-nginx patch configmap nginx-configuration -p "$(cat patch-ingress-configmap.yml)" configmap/nginx-configuration patched $ kubectl -n ingress-nginx get pod | grep 'ingress' | cut -d " " -f1 - | xargs -n1 kubectl -n ingress-nginx delete pod pod "nginx-ingress-controller-86449c74bb-cfwnv" deleted When you now query your LoadBalancer URL (xxxxxxxxx.lb.c1.gra.k8s.ovh.net), you should get the HTTP parameters of your request, including the right source IP in the x-real-ip header: { "path": "/", "headers": { "host": "6d6rslnrn8.lb.c1.gra.k8s.ovh.net", "x-request-id": "2126b343bc837ecbd07eca904c33daa3", "x-real-ip": "XXX.XXX.XXX.XXX", "x-forwarded-for": "XXX.XXX.XXX.XXX", "x-forwarded-host": "xxxxxxxxxx.lb.c1.gra.k8s.ovh.net", "x-forwarded-port": "80", "x-forwarded-proto": "http", "x-original-uri": "/", "x-scheme": "http", "user-agent": "curl/7.58.0", "accept": "*/*" }, "method": "GET", "body": "", "fresh": false, "hostname": "6d6rslnrn8.lb.c1.gra.k8s.ovh.net",
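For illustration, here is a minimal Python sketch of the same patch-and-restart sequence using the official kubernetes client library instead of kubectl; it mirrors the commands above, and selecting controller pods by the substring "ingress" in their names is an assumption matching the grep pipeline, not a guaranteed convention.

from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

# Same data keys as patch-ingress-configmap.yml above
patch = {
    "data": {
        "use-proxy-protocol": "true",
        "proxy-real-ip-cidr": "0.0.0.0/32",
        "use-forwarded-headers": "false",
        "http-snippet": "geo $realip_remote_addr $is_lb {\n  default 0;\n  10.108.0.0/14 1;\n}\n",
        "server-snippet": "if ($is_lb != 1) {\n  return 403;\n}\n",
    }
}
v1.patch_namespaced_config_map(name="nginx-configuration",
                               namespace="ingress-nginx", body=patch)

# Restart the controller by deleting its pods (mirrors the grep/xargs pipeline)
for pod in v1.list_namespaced_pod(namespace="ingress-nginx").items:
    if "ingress" in pod.metadata.name:
        v1.delete_namespaced_pod(name=pod.metadata.name, namespace="ingress-nginx")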
https://docs.ovh.com/gb/en/kubernetes/getting-source-ip-behind-loadbalancer/
2019-10-14T06:14:46
CC-MAIN-2019-43
1570986649232.14
[]
docs.ovh.com
In order to set up a custom SMTP server as an alternate delivery channel, ToutApp does require that you utilize some form of authentication for security purposes. You can set up any SMTP server on your SMTP configuration page. To set up an Office365 SMTP server, Microsoft recommends the following configuration: SMTP Server: smtp.office365.com Server Port: Port 587 - Secured Authentication Method: Login (SSL/TLS) Username or Login: your Office365 email address Password: your Office365 email password Your Domain: leave blank If you're still having issues setting up your SMTP server, partner with your Exchange Admin to ensure the right credentials are being used.
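As a quick illustration, here is a short Python sketch using the standard smtplib module with the settings listed above (smtp.office365.com on port 587 with STARTTLS, your Office365 address as the login); the addresses and password are placeholders.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "you@yourcompany.com"        # placeholder Office365 address
msg["To"] = "recipient@example.com"        # placeholder recipient
msg["Subject"] = "SMTP configuration test"
msg.set_content("Sent through smtp.office365.com on port 587.")

with smtplib.SMTP("smtp.office365.com", 587) as server:
    server.starttls()                      # port 587 uses STARTTLS before login
    server.login("you@yourcompany.com", "your Office365 password")
    server.send_message(msg)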
https://docs.marketo.com/plugins/viewsource/viewpagesrc.action?pageId=14746287
2019-10-14T06:28:26
CC-MAIN-2019-43
1570986649232.14
[]
docs.marketo.com
Download and run the Office 365 IdFix tool. Once the tool reports no errors, you can synchronize your directory. If there are errors in your directory, it is recommended that you fix them before you synchronize. See Prepare directory attributes for synchronization with Office 365 for more information. The tool runs from the folder where you extracted IdFix, which by default is C:\Users\<your user name>. Additional resources on IdFix: Video training. For more information, see the lesson Install and use the IdFix tool, brought to you by LinkedIn Learning.
https://docs.microsoft.com/en-us/office365/enterprise/install-and-run-idfix?redirectSourcePath=%252fen-gb%252farticle%252fInstall-and-run-the-Office-365-IdFix-tool-f4bd2439-3e41-4169-99f6-3fabdfa326ac
2019-10-14T07:06:01
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
Crate agilulf. A simple but fast KV database (like ). This crate provides an abstraction layer, AsyncDatabase. Users can easily select which database to use: any struct that implements the AsyncDatabase trait can be used to construct a TCP server. The crate also ships some implementations of the trait: Database stores data on disk in an LSM structure. MemDatabase uses a lock-free skiplist to store data in memory. Note: as the data size grows, it becomes slower and slower. The TCP server is built with romio and futures-preview. It spawns every connected TCP stream on a ThreadPool.
https://docs.rs/agilulf/0.1.0/agilulf/
2019-10-14T06:08:41
CC-MAIN-2019-43
1570986649232.14
[]
docs.rs
Product documentation: PATROL for Light Weight Protocols documentation helps new and experienced users implement or use this product. Based on your role, the following sections of the documentation are recommended. All users should view and set up a watch on the Release notes and notices for the latest product information and documentation updates. When working in the product, click the Help icons to link directly to the relevant topic. The PATROL for Light Weight Protocols product is a component of the Base repository. For information about downloading and installing the repository, see Downloading the repository. Related product documentation: When working with the PATROL for Light Weight Protocols product, you might also need to refer to the following documentation: - BMC PATROL Agent 11.0 or later - Infrastructure Management–PATROL Repository Console and product compatibility: You can use the KMs and Monitoring Solutions with several BMC products.
https://docs.bmc.com/docs/PATROL4lwp/21/orientation-721195451.html
2019-10-14T07:23:47
CC-MAIN-2019-43
1570986649232.14
[]
docs.bmc.com
moveit_msgs /AllowedCollisionMatrix Message File: moveit_msgs/AllowedCollisionMatrix.msg Raw Message Definition # The list of entry names in the matrix string[] entry_names # The individual entries in the allowed collision matrix # square, symmetric, with same order as entry_names AllowedCollisionEntry[] entry_values # In addition to the collision matrix itself, we also have # the default entry value for each entry name. # If the allowed collision flag is queried for a pair of names (n1, n2) # that is not found in the collision matrix itself, the value of # the collision flag is considered to be that of the entry (n1 or n2) # specified in the list below. If both n1 and n2 are found in the list # of defaults, the result is computed with an AND operation string[] default_entry_names bool[] default_entry_values Compact Message Definition string[] entry_names moveit_msgs/AllowedCollisionEntry[] entry_values string[] default_entry_names bool[] default_entry_values autogenerated on Thu, 15 Aug 2019 03:50:33
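The lookup rule described in the message comments (check the square matrix first, then fall back to the per-name defaults combined with AND) can be sketched in Python as below; the per-entry enabled field name and the final fallback value for completely unknown pairs are assumptions, not taken from the message definition itself.

def allowed_collision(n1, n2, entry_names, entry_values,
                      default_entry_names, default_entry_values):
    # 1. Both names in the square, symmetric matrix: read the flag directly.
    if n1 in entry_names and n2 in entry_names:
        i, j = entry_names.index(n1), entry_names.index(n2)
        return entry_values[i].enabled[j]      # assumed per-entry bool[] field
    # 2. Otherwise use the default entry values; if both names have a
    #    default, the result is the AND of the two defaults.
    defaults = [default_entry_values[default_entry_names.index(n)]
                for n in (n1, n2) if n in default_entry_names]
    if defaults:
        return all(defaults)
    return False                               # assumption: unknown pairs stay checked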
http://docs.ros.org/api/moveit_msgs/html/msg/AllowedCollisionMatrix.html
2019-08-17T23:57:39
CC-MAIN-2019-35
1566027313501.0
[]
docs.ros.org
Make sure that all tests are passing and that your patch is in line with the contribution guidelines. Also see the Development section. Documentation: Good documentation is just as important as good code. Check out the Documentation section of this guide and consider adding or improving Cop descriptions. Working on the Manual: The manual is generated from the markdown files in the doc folder of RuboCop's GitHub repo and is published to Read the Docs. The MkDocs tool is used to convert the markdown sources to HTML. To make changes to the manual you simply have to change the files under manual. The manual will be regenerated automatically when changes to those files are merged into master (or the latest stable branch). You can install MkDocs locally and use the command mkdocs serve to see the result of changes you make to the manual locally: $ cd path/to/rubocop/repo $ mkdocs serve If you want to make changes to the manual's page structure you'll have to edit mkdocs.yml. Funding: RuboCop accepts financial contributions via Salt, Patreon, PayPal, and Open Collective.
http://docs.rubocop.org/en/stable/contributing/
2019-08-17T23:31:45
CC-MAIN-2019-35
1566027313501.0
[]
docs.rubocop.org
9.0.006.08 Journey Optimization Platform Release Notes What's New This release includes only resolved issues. Resolved Issues This release contains the following resolved issues: This release corrects issues with the initialization script provided in release 9.0.006.07 of the Predictive Routing Server (JOP). It changes the way data is preloaded for the first login to the Predictive Routing application. In the previous release, 9.0.006.07, the script that preloaded data into the database did not set all the required fields for account collection correctly and caused an error when you added Predictive Routing to the list of Configurable Apps. For details about this procedure, see Adding the Predictive Routing application to your default account. For complete deployment instructions, see Deploying: Journey Optimization Platform. (PRR-1344) Upgrade Notes No special procedure is required to upgrade to release 9.0.006.08.
https://docs.genesys.com/Documentation/RN/9.0.x/gpm-jop90rn/gpm-jop9000608
2019-08-17T22:40:57
CC-MAIN-2019-35
1566027313501.0
[]
docs.genesys.com
In this example, the session collation is ASCII. CASE_N (a<'b', a>='ba' and a<'dogg' and b<>'cow', c<>'boy', NO CASE OR UNKNOWN) The following table shows the result value returned by the above CASE_N function given the specified values for a, b, and c. x and y represent any value or NULL. The value 4 is returned when all the conditions are FALSE, or a condition is UNKNOWN with all preceding conditions evaluating to FALSE.
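To make the partitioning rule concrete, here is a small Python model of how CASE_N picks a partition; this is only an illustration of the semantics quoted above, not Teradata code. None stands for UNKNOWN, and partition 4 is the NO CASE OR UNKNOWN partition.

def case_n(conditions):
    # Partition number is the 1-based index of the first TRUE condition.
    # If a condition is UNKNOWN (None) with all preceding conditions FALSE,
    # or if every condition is FALSE, the NO CASE OR UNKNOWN partition
    # (here: len(conditions) + 1) is returned.
    for i, cond in enumerate(conditions, start=1):
        if cond is True:
            return i
        if cond is None:
            return len(conditions) + 1
    return len(conditions) + 1

# Example values where all three conditions are FALSE -> partition 4
a, b, c = 'zebra', 'cow', 'boy'
conds = [a < 'b',
         a >= 'ba' and a < 'dogg' and b != 'cow',
         c != 'boy']
print(case_n(conds))   # 4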
https://docs.teradata.com/reader/756LNiPSFdY~4JcCCcR5Cw/XTAFqB2ZGUAkpFquG3bEbg
2019-08-17T22:53:44
CC-MAIN-2019-35
1566027313501.0
[]
docs.teradata.com
FileSystemInfo.CreationTime Property. Definition: Gets or sets the creation time of the current file or directory. The following example demonstrates the CreationTime property. Note: This method may return an inaccurate value because it uses native functions whose values may not be continuously updated by the operating system. The value of the CreationTime property is pre-cached if the current instance of the FileSystemInfo object was returned from any of the following DirectoryInfo methods:
https://docs.microsoft.com/en-us/dotnet/api/system.io.filesysteminfo.creationtime?view=netframework-4.8
2019-08-17T23:17:04
CC-MAIN-2019-35
1566027313501.0
[]
docs.microsoft.com
MoveWindow function. Changes the position and dimensions of the specified window. For a top-level window, the position and dimensions are relative to the upper-left corner of the screen. For a child window, they are relative to the upper-left corner of the parent window's client area. Syntax BOOL MoveWindow( HWND hWnd, int X, int Y, int nWidth, int nHeight, BOOL bRepaint ); Parameters hWnd Type: HWND A handle to the window. X Type: int The new position of the left side of the window. Y Type: int The new position of the top of the window. nWidth Type: int The new width of the window. nHeight Type: int The new height of the window. bRepaint Type: BOOL Indicates whether the window is to be repainted.
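On Windows, the function can be called from Python through ctypes as a quick sketch; the window title passed to FindWindowW is a placeholder, and the example simply moves and resizes that window with a repaint.

import ctypes

user32 = ctypes.windll.user32   # Windows only

# Placeholder title: any top-level window you own
hwnd = user32.FindWindowW(None, "Untitled - Notepad")
if hwnd:
    # Move to (100, 100), resize to 800x600, and repaint (bRepaint = True)
    if not user32.MoveWindow(hwnd, 100, 100, 800, 600, True):
        raise ctypes.WinError()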
https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-movewindow
2019-08-17T22:48:45
CC-MAIN-2019-35
1566027313501.0
[]
docs.microsoft.com
Introduction Running Kill Bill on AWS using our official CloudFormation template is the easiest and fastest way to get started. It is also the only method of installation that is certified by the core developers for a Highly Available, horizontally scalable and production-ready installation. With the click of a button, the template will install and configure: Kill Bill and Kaui on a custom AMI optimized for AWS workloads (integrated with CloudWatch, SQS, SES, X-Ray and more) Auto Scaling Groups, to automatically scale up and down the number of instances as needed (such as when batches of invoices are generated) A load balancer, integrated with our internal healthchecks to promptly take out of rotation unhealthy instances A RDS Aurora Cluster with automatic failover Kill Bill comes with the Analytics plugin pre-configured: you get a subscription billing management solution as feature-rich as popular SaaS platforms, but you are in control. Installation For installation support along the way, reach out to Configuration options The installation supports the following configuration options: VpcId: the VPC to use for the installation. In your AWS Console, go to Services and search for VPC. Under Your VPCs, locate the VPC ID you would like to use or create a new one. Subnets: the subnets to use, associated with at least two different availability zones. In the VPC Dashboard, go to Subnets and find two subnets in your VPC in two different availability zones. Alternatively, create new ones (use 10.0.0.0/24 and 10.0.1.0/24 as the IPv4 CIDR for instance). KeyName: name of an existing EC2 KeyPair to enable SSH access to the instances. You can create a new one by going to Key Pairs in your EC2 Dashboard. HTTPLocation: IP address range allowed to access the load balancer (you can always use 0.0.0.0/0 initially and adjust access later on). EnvType: environment purpose (test, prod, etc.) InstanceType: the EC2 instance type to use for Kill Bill. Here are some guidelines: m5.large: test environment c5.xlarge: up to 5,000 subscriptions c5.2xlarge: up to 50,000 subscriptions c5.4xlarge: beyond 50,000 subscriptions KillBillServerCapacity: the initial number of Kill Bill instances in the Auto Scaling group. Here are some guidelines: 1: test environment 2: up to 5,000 subscriptions 3: up to 50,000 subscriptions 4: beyond 50,000 subscriptions KauiServerCapacity: the initial number of Kaui instances in the Auto Scaling group. We recommend using the default value 2. DBClass: the database instance type to use for RDS. Here are some guidelines: db.t3.medium: test environment db.r5.xlarge: up to 5,000 subscriptions db.r5.2xlarge: up to 50,000 subscriptions db.r5.4xlarge: beyond 50,000 subscriptions DBName: database name for Kill Bill. We recommend using the default value killbill. KauiDBName: database name for Kaui. We recommend using the default value kaui. DBUser: database admin username DBPassword: database admin password EnableCloudWatchMetrics: whether to record Kill Bill metrics in CloudWatch. Strongly recommended for production. When enabled, a default monitoring dashboard will be created. Setup steps Start the installation process by going to AWS Marketplace: click Continue to Subscribe and populate the configuration options in the CloudFormation form. Launch the stack. Upon success, the Outputs tab will display the load balancer URL. Kill Bill is available on port 80 while Kaui on port 9090. 
You can log in to Kaui by going to http://<LOAD_BALANCER_URL>:9090 (make sure your IP address can access the load balancer, as defined by the parameter HTTPLocation, or add it to the security group as needed). Default credentials are: admin/password. Take a look at our Getting Started guide for an introduction to Kaui. Upgrade steps The Kill Bill core team will provide new AMIs whenever necessary. Because the CloudFormation template from AWS Marketplace will always reflect the latest AMI ids, you can simply update the stack with the latest CloudFormation template and the instances in the Auto Scaling groups will be updated automatically. We strongly recommend always testing the upgrade in a test environment first.
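If you prefer to launch the stack programmatically rather than through the console, a rough boto3 sketch is shown below; the template URL, region, and every parameter value are placeholders to replace with the values from AWS Marketplace and the sizing guidance above, and the required capabilities may differ for your account.

import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

response = cloudformation.create_stack(
    StackName="killbill",
    TemplateURL="https://s3.amazonaws.com/your-bucket/killbill-template.json",  # placeholder
    Parameters=[
        {"ParameterKey": "VpcId", "ParameterValue": "vpc-0123456789abcdef0"},
        {"ParameterKey": "Subnets", "ParameterValue": "subnet-aaaa1111,subnet-bbbb2222"},
        {"ParameterKey": "KeyName", "ParameterValue": "my-keypair"},
        {"ParameterKey": "HTTPLocation", "ParameterValue": "0.0.0.0/0"},
        {"ParameterKey": "EnvType", "ParameterValue": "test"},
        {"ParameterKey": "InstanceType", "ParameterValue": "m5.large"},
        {"ParameterKey": "KillBillServerCapacity", "ParameterValue": "1"},
        {"ParameterKey": "DBClass", "ParameterValue": "db.t3.medium"},
        {"ParameterKey": "DBUser", "ParameterValue": "killbilladmin"},
        {"ParameterKey": "DBPassword", "ParameterValue": "change-me"},
    ],
    Capabilities=["CAPABILITY_IAM"],   # assumption; check the template's requirements
)
print(response["StackId"])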
http://docs.killbill.io/latest/aws.html
2019-08-17T22:45:16
CC-MAIN-2019-35
1566027313501.0
[array(['https://github.com/killbill/killbill-docs/raw/v3/userguide/assets/img/aws/cloudwatch.png', 'cloudwatch'], dtype=object) array(['https://github.com/killbill/killbill-docs/raw/v3/userguide/assets/img/aws/analytics.png', 'analytics'], dtype=object) ]
docs.killbill.io
The Axsun Integrated Engine enhances the Axsun SS-OCT Laser Engine with Balanced Photoreceivers, a K-clock, and a Data Acquisition Board (DAQ), plus other optional OCT system components such as a Variable Delay Line (VDL) and an OCT interferometer. This Getting Started Guide covers the installation and basic operation of an Axsun Integrated Engine based on the Axsun Ethernet/PCIe DAQ (EDAQ). The instructions can be followed effectively without a basic understanding of the EDAQ and its associated host software architecture; however, the Architecture & Interface Background section includes critical knowledge for users and system integrators planning to employ more than very basic Integrated Engine and EDAQ functionality. Additional information can be found in the SS-OCT Laser Engine Getting Started Guide, the SS-OCT Laser Engine Reference Manual and the Ethernet/PCIe DAQ Board Reference Manual. NOTE: For instructions on installing an Integrated Engine based on the Axsun Camera Link DAQ, use the SS-OCT Laser Engine Getting Started Guide and then refer to the CameraLink DAQ Board Reference Manual. Congratulations, your Axsun SS-OCT Integrated Laser Engine has been delivered! First inspect the packaging for external signs of damage. If there is any obvious physical damage, take photographs and request that the carrier's agent be present when the packaging is opened. NOTE: Retain the shipping container and packing materials so that they may be reused if necessary. Remove your Engine from its packaging and confirm that all components are present and not damaged. If a component is missing, notify Axsun. If a component appears damaged, notify Axsun and the carrier, then follow the carrier's instructions for damage claims. NOTE: For integrated engines with a PCIe DAQ, which is included as a separate item, 2 USB cables and 2 SATA cables (18 inches long each) are included. Proceed to the next page in this guide, Installing Software, for communicating with and capturing data from your Axsun Integrated Engine.
https://docs.axsun.com/axsun-technologies-knowledge-base/guides/integrated-engine
2019-08-17T22:58:07
CC-MAIN-2019-35
1566027313501.0
[]
docs.axsun.com
Writing a lexicon entry Here's a quick video: If you prefer reading: - Log in to Interana docs (click Sign In in the upper-right corner). - On the Lexicon page, click the (green) Create a new entry button. - In the new page that appears, enter a title for the topic (the word or phrase that you're defining). - Now add content! - Add a concise definition. - Link to related terms, if applicable: use the Link option in the editing menu or press Command-K. Save the page when prompted, then search for the term to link to. - Link to howtos or reference topics about this term, if they exist. (Use the Search tab in the Link dialog.) - Delete any sections that don't apply. - Click Save and you're done! What about the link from the Lexicon page? There are a few more steps for this: - Go back to the Interana Lexicon. - Click Edit, then Edit live. - Find the term you created a new entry for. Select the word or phrase and click Link (or Command-K) to open the Link dialog. - Select your page from the list and click Save Link. - You're back in the Edit window. Click Save in the upper-left (or press Command-S). - Now you're really done!
https://docs.interana.com/lexicon/Writing_a_lexicon_entry
2019-08-17T22:31:57
CC-MAIN-2019-35
1566027313501.0
[]
docs.interana.com
As of 2018 we have changed our support platform, and now issue all mails through Intercom. This means that mails from addresses like these are in fact mails from Jottacloud (support, or simply information about updates, new features, payments, etc.) and not spam. Make sure to whitelist mails from these domains/addresses so that you don't miss important updates or information from us!
https://docs.jottacloud.com/en/articles/1689432-emails-from-jottacloud
2019-08-17T22:58:52
CC-MAIN-2019-35
1566027313501.0
[]
docs.jottacloud.com
Easily Deploy Models to Salesforce Objects Where: This change applies to Einstein Analytics in Lightning Experience and Salesforce Classic. Einstein Analytics and Einstein Discovery are available for an extra cost in Enterprise, Performance, and Unlimited editions, and they’re also available in Developer Edition. Who: To deploy a model to a Salesforce object, you need the Connect Einstein Discovery Model user permission that is included in the Einstein Analytics Plus license. Why: Deploying Einstein Discovery predictions to a Salesforce object gives you the power of data science where you need it. But sometimes, the field names in your model don't exactly match the field names on your Salesforce object. Use the mapping screen to tell Einstein Discovery how the model's fields map to the Salesforce object's fields. How: On the Story toolbar, click Deploy Model and follow the instructions to select your model and object. On the Confirm field mapping screen, map the fields in your model to the corresponding fields in your Salesforce object. Fields with the same name are mapped automatically but you can easily change mappings as needed.
https://docs.releasenotes.salesforce.com/en-us/spring19/release-notes/rn_bi_edd_wb.htm
2019-08-17T23:00:19
CC-MAIN-2019-35
1566027313501.0
[array(['release_notes/images/rn_edd_mapping.png', 'Mapping story fields to Salesforce Object fields'], dtype=object)]
docs.releasenotes.salesforce.com
Einstein Insights: Create Reports Based on Insights You can now create reports and dashboards related to account and opportunity insights. Get a better understanding of your customers by, for example, running reports that show all insights for specific accounts or opportunities. Where: This change applies to Lightning Experience and Salesforce Classic in Enterprise, Performance, and Unlimited editions. How: To get started, create a custom report type. To report on opportunity insights, use Opportunities as the primary object and Einstein Opportunity Insights as the secondary object. To report on account insights, use Accounts as the primary object and Einstein Account Insights as the secondary object. Then add extra criteria to focus on the data you need. Be aware of the following limitations. - When insights are no longer relevant, we remove them from the Einstein component, so they’re not available in reports. For example, an insight about no communication with a prospect is removed after communication is reestablished. - You can run reports only for the objects that insights are associated with. You can’t run reports for insights based on what appears on the Home page.
https://docs.releasenotes.salesforce.com/en-us/spring19/release-notes/rn_sales_einstein_insights_parent.htm
2019-08-17T23:32:00
CC-MAIN-2019-35
1566027313501.0
[]
docs.releasenotes.salesforce.com
Open the operation log. When the product server is running, the log displays the message "WSO2 Carbon started in 'n' seconds". Accessing the management console: Once the server has started, you can run the Management Console by opening a Web browser and typing in the management console's URL. The URL is displayed as the last line in the start script's console and log.
https://docs.wso2.com/pages/diffpagesbyversion.action?pageId=40439561&selectedPageVersions=5&selectedPageVersions=4
2019-08-17T23:02:56
CC-MAIN-2019-35
1566027313501.0
[]
docs.wso2.com
Action Group Best Practices We strive to write tests using only action groups. Fortunately, we have built up a large set of action groups to get started. We can make use of them and extend them for our own specific needs. In some cases, we may never even need to write action groups of our own. We may be able to simply chain together calls to existing action groups to implement our new test case. Why use Action Groups? Action groups simplify maintainability by reducing duplication. Because they are re-usable building blocks, odds are that they are already made use of by existing tests in the Magento codebase. This proves their stability through real-world use. Take for example the action group named LoginAsAdmin: Logging in to the admin panel is one of the most used action groups. It is used around 1,500 times at the time of this writing. Imagine if this was not an action group and instead we were to copy and paste these 5 actions every time. In that scenario, if a small change was needed, it would require a lot of work. But with the action group, we can make the change in one place. How to extend action groups Again using LoginAsAdmin as our example, we trim away metadata to clearly reveal that this action group performs 5 actions: This works against the standard Magento admin panel login page. But imagine we are working on a Magento extension that adds a CAPTCHA field to the login page. If we create and activate this extension and then run all existing tests, we can expect almost everything to fail because the CAPTCHA field is left unfilled. We can overcome this by making use of MFTF's extensibility. All we need to do is to provide a "merge" that modifies the existing LoginAsAdmin action group. Our merge file will look like: Because the name of this merge is also LoginAsAdmin, the two get merged together and an additional step happens every time this action group is used. To continue this example, imagine someone else is working on a 'Two-Factor Authentication' extension and they also provide a merge for the LoginAsAdmin action group. Their merge looks similar to what we have already seen. The only difference is that this time we fill a different field: Bringing it all together, our resulting LoginAsAdmin action group becomes this: No one file contains this exact content as above, but instead all three files come together to form this action group. This extensibility can be applied in many ways. We can use it to affect existing Magento entities such as tests, action groups, and data. Not so obvious is that this technique can be used within your own entities to make them more maintainable as well.
https://devdocs.magento.com/mftf/v2/docs/guides/action-groups.html
2021-07-23T19:16:30
CC-MAIN-2021-31
1627046150000.59
[]
devdocs.magento.com
The Ethereum node provides the JSON RPC interface, which is used by libraries such as Web3, as well as by applications directly. The goal is to implement an interface that will work with requests and responses of a similar format. This is necessary to facilitate the migration of applications from the Ethereum platform to Echo. In Echo, JSON RPC is implemented via an additional plugin that is enabled with --plugins=ethrpc. If you enable the plugin, you must also specify the endpoint that the webserver should listen on with the option --ethrpc-endpoint, for example --ethrpc-endpoint=0.0.0.0:8092. The plugin starts a webserver and binds the implemented API on every connection. That API implements the described methods, converting the Echo data format to the specified one.
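As a quick illustration, assuming the plugin is listening on the endpoint from the example above, a standard JSON-RPC 2.0 request can be sent with Python's requests library; the method name eth_blockNumber is an assumption about which Ethereum-style methods the plugin exposes.

import requests

# Endpoint assumed to match --ethrpc-endpoint=0.0.0.0:8092 from the example above
url = "http://127.0.0.1:8092"

payload = {
    "jsonrpc": "2.0",
    "method": "eth_blockNumber",   # assumed Ethereum-style method name
    "params": [],
    "id": 1,
}
response = requests.post(url, json=payload, timeout=10)
print(response.json())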
https://docs.echo.org/api-reference/ethrpc
2021-07-23T19:40:48
CC-MAIN-2021-31
1627046150000.59
[]
docs.echo.org
Media Manager file details: Date: 2019/05/16 14:09; Filename: ios_eduroam-install_eng_02.png; Format: PNG; Size: 812KB; Width: 1125; Height: 2436; References for: eduroam with iOS (iOS 5+).
https://docs.gwdg.de/doku.php?id=en:services:network_services:eduroam:android&tab_files=upload&do=media&tab_details=view&image=en%3Aservices%3Anetwork_services%3Aeduroam%3Aios_eduroam-install_eng_02.png&ns=en%3Amanual
2021-07-23T20:08:39
CC-MAIN-2021-31
1627046150000.59
[]
docs.gwdg.de
B505: weak_cryptographic_key¶ B505: Test for weak cryptographic key use¶ As computational power increases, so does the ability to break ciphers with smaller key lengths. The recommended key length size for RSA and DSA algorithms is 2048 and higher. 1024 bits and below are now considered breakable. EC key length sizes are recommended to be 224 and higher with 160 and below considered breakable. This plugin test checks for use of any key less than those limits and returns a high severity error if lower than the lower threshold and a medium severity error for those lower than the higher threshold. >> Issue: DSA key sizes below 1024 bits are considered breakable. Severity: High Confidence: High Location: examples/weak_cryptographic_key_sizes.py:36 35 # Also incorrect: without keyword args 36 dsa.generate_private_key(512, 37 backends.default_backend()) 38 rsa.generate_private_key(3, See also New in version 0.14.0.
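For reference, here is a short sketch of key generation at the recommended sizes with the cryptography package, so that B505 is not triggered; depending on the library version a backend argument may also be required.

from cryptography.hazmat.primitives.asymmetric import rsa, dsa, ec

# RSA and DSA: 2048 bits or more
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
dsa_key = dsa.generate_private_key(key_size=2048)

# EC: at least 224 bits, e.g. the 256-bit NIST P-256 curve
ec_key = ec.generate_private_key(ec.SECP256R1())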
https://bandit.readthedocs.io/en/latest/plugins/b505_weak_cryptographic_key.html
2021-07-23T19:32:13
CC-MAIN-2021-31
1627046150000.59
[]
bandit.readthedocs.io
Alternate restore Methods of restoring files to an alternate node: 1. Let the owner node delegate access to its backup to another node. The restore node then uses its own configuration, but tells TSM to access the data from the owner node. 2. Use the owner node's configuration. If you are doing restore testing, then method 1 is preferred. This method however requires that the owner node is still alive and able to delegate access to its data. Steps to use method 1. On the owner node: The owner node has to delegate access to its backups, otherwise the restore node will not have permissions to access the data. To authorize another node to restore or retrieve your files using the GUI: Click Utilities → Node Access List from the main window. In the Node Access List window, click the Add button. Type the NODENAME of the node that should get access in the Grant Access to Node field. In the Filespace and Directory fields, select the file space and the directory that the user can access. You can select one file space and one directory at a time. If you want to give the user access to another file space or directory, you must create another access rule. If you want to limit the user to specific files in the directory, type the name or pattern of the files on the server that the other user can access in the Filename field. You can make only one entry in the Filename field. It can either be a single file name or a pattern that matches one or more files. You can use a wildcard character as part of the pattern. Your entry must match files that have been stored on the server. If you want to give access to all files that match the file name specification within the selected directory including its subdirectories, click Include subdirectories. Click OK to save the access rule and close the Add Access Rule window. The access rule that you created is displayed in the list box in the Node Access List window. When you have finished working with the Node Access List window, click OK. If you do not want to save your changes, click Cancel or close the window. If you prefer the command line client, then use the set access command to authorize another node to restore or retrieve your files. You can also use the query access command to see your current list, and delete access to delete nodes from the list. Example prompt> dsmc set access backup \\landsort\e$\* GYMMIX prompt> dsmc query access Type Node User Path ---- ---------------------------- Backup FILE * \\landsort\e$\* Backup GYMMIX * \\landsort\e$\* prompt> dsmc delete access Index Type Node User Path ----- ---- ---------------------------- 1 Backup FILE * \\landsort\e$\* 2 Backup GYMMIX * \\landsort\e$\* Enter Index of rule(s) to delete, or quit to cancel: 1,2 prompt> dsmc query access ANS1302E No objects on server match query On the restore node: Start the GUI and under the Utilities menu select Access Another Node and enter the node name. Now you can do a restore as usual, but the client is reading the data from the owner node's backup. When doing a command line restore the option is -fromnode=NODENAME. Example dsmc restore -fromnode=XXXXXXXXXX \\cougar\d$\projx\* d:\projx\ Steps to use method 2. Windows: Copy dsm.opt from the owner node to the restore node, but change the name of the opt file to another name, e.g. dsm-restore.opt. The config file can also be downloaded from the portal or API if the owner node is dead.
On the restore node, start the TSM GUI with the option -optfile=dsm-restore.opt Example dsm -optfile=dsm-restore.opt Warning: Sometimes admins replace dsm.opt on the restore node with the owner node's dsm.opt. This is dangerous since if you forget to undo the change, then two nodes with the same configuration will run backup under the same nodename. This will corrupt the backups. Linux/UNIX: Copy the content from the owner node's dsm.sys and append it to the restore node's dsm.sys, but change the SERVERNAME line to SERVERNAME RESTORE_OWNERNODE. On the restore node, start the TSM GUI or command line with this option: -server=RESTORE_OWNERNODE Example dsmc -server=RESTORE_OWNERNODE
https://docs.safespring.com/backup/howto/restore-alternate/
2021-07-23T19:11:13
CC-MAIN-2021-31
1627046150000.59
[]
docs.safespring.com
, Teradata Database assigns the export definition that is currently defined in the Export Width Table ID parameter of DBS Control to that user. For more information, see Utilities, B035-1102.
https://docs.teradata.com/r/eWpPpcMoLGQcZEoyt5AjEg/mv9in52NzilKN_HZsqrgIQ
2021-07-23T17:57:07
CC-MAIN-2021-31
1627046150000.59
[]
docs.teradata.com
Note: These instructions always explain the steps required to upgrade from the last stable release to the current stable release. However, each section also provides links to instructions for older versions. Version upgrade: If you have upgraded the HRM in the past, you will know that some steps must be performed in addition to replacing the old HRM code with the new one: some entries might have been added or changed in the configuration files (hrm_{server|client}_config.inc), and the database structure might have been changed. Stop the Queue Manager: Significant parts of the configuration as well as the database are usually changed during an upgrade, so the Queue Manager needs to be stopped first. Shut down the Queue Manager with: sudo /etc/init.d/hrmd stop if you are using System-V or upstart, or with: sudo systemctl stop hrmd.service if you are using systemd. In some rare situations, the Queue Manager might get stuck. To ensure the stop command did work properly, run the following check. On System-V or upstart: sudo /etc/init.d/hrmd status On systemd: sudo systemctl status hrmd.service Alternatively, you can use: ps aux | grep -i [r]unHuygens which should return nothing. Download and extract the new HRM release: To install the new HRM version you need to download the .zip file from the website or GitHub as explained in downloading the standard archive. Warning: Please do not extract the new archive on top of the previous HRM installation! See details below. Clean up previous installations: Because of changes in the structure of the code and of the external dependencies (starting from version 3.4), we highly recommend cleaning up the previous installation rather than extracting the new archive on top of it. Note: Please follow these instructions first if you are upgrading from older versions. Update the configuration files: There were no configuration changes between versions 3.6.x and 3.7.x of HRM. Check the configuration files: An easy way to check for modifications is by running the $HRM_HOME/resources/checkConfig.php script. From the shell, run: cd $HRM_HOME php resources/checkConfig.php config/hrm_server_config.inc php resources/checkConfig.php config/hrm_client_config.inc There were no configuration changes between versions 3.6.x and 3.7 of HRM. The output of the checkConfig.php script should be: Checking against HRM v3.7. Check completed successfully! Your configuration file is valid! Please make sure to fix all problems you might have! The sample files and the Manual installation instructions will help you set the correct parameters. Note: Please follow these instructions first if you are upgrading from older versions. Update the database: Newer versions of the HRM might use slightly different/updated versions of the database back-end than previous ones. For this reason, the first time you run the HRM after an update you will be told that the database must be updated and that you are not allowed to continue until this has been done! Note: Database updates are supported across HRM versions, i.e. it is possible to upgrade the database from revision 7 to 18 in one step. The following describes two possible ways to update the database. Note: Although we test this procedure quite carefully, it is highly recommended to back up the database before updating! Updating from the web interface: Log in to the HRM as the admin user: you will be brought directly to the Database update page. Click on the update button. If everything works properly (as it should…), the following message should be displayed. Needed database revision for HRM v3.7 is number 18.
Current database revision is number 17. Updating... Database successfully updated to revision 18. The database is now at the latest revision. Updating from the console: Alternatively, the database can be updated from the console (see create or update database). Please pay attention to what the update process will report! The output should be the same as the one listed in the previous section, but if the update fails, you might want to report it. Re-start the Queue Manager: After processing the described upgrade steps, the Queue Manager needs to be started again, with: sudo /etc/init.d/hrmd start if you are using System-V or upstart, or with: sudo systemctl start hrmd.service if you are using systemd. Upgrade from previous releases: The following pages are linked to from the relevant sections above, but are listed here again for convenience.
https://huygens-remote-manager.readthedocs.io/en/latest/admin/upgrade.html
2021-07-23T17:54:29
CC-MAIN-2021-31
1627046150000.59
[]
huygens-remote-manager.readthedocs.io
cupy.append(arr, values, axis=None) Append values to the end of an array. Parameters: arr (array_like) – Values are appended to a copy of this array. values (array_like) – These values are appended to a copy of arr. It must be of the correct shape (the same shape as arr, excluding axis). If axis is not specified, values can be any shape and will be flattened before use. axis (int or None) – The axis along which values are appended. If axis is not given, both arr and values are flattened before use. Returns: cupy.ndarray – A copy of arr with values appended to axis. Note that append does not occur in-place: a new array is allocated and filled. If axis is None, out is a flattened array. See also
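A minimal usage sketch, assuming CuPy is installed and a GPU is available:

import cupy

a = cupy.array([1, 2, 3])
b = cupy.array([[4, 5, 6], [7, 8, 9]])

# No axis: both inputs are flattened and a new 1-D array is returned
print(cupy.append(a, b))                       # [1 2 3 4 5 6 7 8 9]

# With axis: shapes must match except along the appended axis
m = cupy.array([[1, 2], [3, 4]])
print(cupy.append(m, cupy.array([[5, 6]]), axis=0))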
https://docs.cupy.dev/en/stable/reference/generated/cupy.append.html
2021-07-23T20:42:39
CC-MAIN-2021-31
1627046150000.59
[]
docs.cupy.dev
How do push notifications work? You can log into your dashboard anytime and either schedule a push notification to be sent later or send one immediately. We also offer push categories that let your subscribers choose which categories of content they always want a notification for. Example: a 'Breaking News' category. Send or schedule a manual push. Opt in to push categories.
https://docs.zeen101.com/article/113-how-do-push-notifications-work
2021-07-23T20:11:51
CC-MAIN-2021-31
1627046150000.59
[array(['https://d2pxilza3u6xqt.cloudfront.net/wp-content/uploads/2016/08/Add_New_Push_Notification_%E2%80%B9_Pub_1_%E2%80%94_WordPress.jpg', 'Add_New_Push_Notification_‹_Pub_1_—_WordPress'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5bc7551e042863158cc78aed/images/5bdb24ac2c7d3a01757aafc5/file-Y6KuHlygcs.jpg', None], dtype=object) ]
docs.zeen101.com
Many desktop applications offer the possibility for the user to undo their last changes: this undo feature has now been integrated into the CubicWeb framework. This document will introduce you to the undo feature both from the end-user and the application developer point of view. But because a semantic web application and a common desktop application are not the same thing at all, especially as far as undoing is concerned, we will first introduce what the undo feature is for now. What an undo feature means is quite intuitive in the context of a desktop application. But it is a bit subtler in the context of a Semantic Web application. This section introduces some of the main differences between a classical desktop application and a Semantic Web application to keep in mind in order to state precisely what we want. A CubicWeb application acts upon an Entity-Relationship model, described by a schema. This allows some data integrity properties to be ensured. It also implies that changes are made in all-or-none groups called transactions, such that the data integrity is preserved whether the transaction is completely applied or none of it is applied. A transaction can thus include more actions than just those directly required by the main purpose of the user. For example, when a user just writes a new blog entry, the underlying transaction holds several actions as illustrated below: Because of the very nature (all-or-none) of the transactions, the “undoable stuff” are the transactions and not the actions! Actually, within the transaction “Created Blog entry : Torototo”, two of those actions are said to be public and the others are said to be private. Public here means that the public actions (1 and 3) were directly requested by the end user; whereas private means that the other actions (2, 4, 5) were triggered “under the hood” to fulfill various requirements for the user operation (ensuring integrity, security, ...). And because quite a lot of actions can be triggered by a “simple” end-user request, most of which the end-user is not (and does not need or wish to be) aware of, only the so-called public actions will appear [1] in the description of an undoable transaction. But note that both public and private actions will be undone together when the transaction is undone. A CubicWeb application can be used simultaneously by different users (whereas a single user works on a given office document at a given time), so that there is not always a single history time-line in the CubicWeb case. Moreover CubicWeb provides security through the mechanism of permissions granted to each user. This can lead to some transactions not being undoable in some contexts. In the simple case two (unprivileged) users Alice and Bob make relatively independent changes: then both Alice and Bob can undo their changes. But in some cases there is a clean dependency between Alice's and Bob's actions or between actions of one of them. For example let's suppose that: Then it is clear that Alice can undo her content changes and Bob can undo his post creation independently. But Alice can not undo her post creation while she has not first undone her changes. It is also clear that Bob should not have the permissions to undo any of Alice's transactions. But more surprising things can quickly happen. Going back to the previous example, Alice can undo the creation of the blog after Bob has published his post in it! But this is possible only because the schema does not require a post to be in a blog.
Had the blog "entry of" relation been mandatory, then Alice could not have undone the blog creation because it would have broken an integrity constraint for Bob's post. When a user attempts to undo a transaction the system will check whether a later transaction has an explicit dependency on the would-be-undone transaction. In this case the system will not even attempt the undo operation and will inform the user. If no such dependency is detected the system will attempt the undo operation but it can fail, typically because of integrity constraint violations. In such a case the undo operation is completely [3] rolled back. The exposure of the undo feature to the end-user through a Web interface is still quite basic and will be improved toward greater usability. But it is already fully functional. For now there are two ways to access the undo feature, as long as it has been activated in the instance configuration file with the option undo-support=yes. Immediately after having done the change to be canceled, through the undo link in the message. This allows a hasty action to be undone immediately. For example, just after having validated the creation of the blog entry A second blog entry we get the following message, allowing the creation to be undone. At any time we can access the undo-history view accessible from the start-up page. This view will provide inspection of the transactions and their (public) actions. This is all for the end-user side of the undo mechanism: it is quite simple indeed! Now, in the following section, we are going to introduce the developer side of the undo mechanism. A word of warning: this section is intended for developers who already have some knowledge of what's under CubicWeb's hood. If that is not yet the case, please refer to the CubicWeb documentation. The core of the undo mechanism is at work in the native source, beyond the RQL. This means that transactions and actions are not entities. Instead they are represented at the SQL level and exposed through the DB-API supported by the repository Connection objects. Once the undo feature has been activated in the instance configuration file with the option undo-support=yes, each mutating operation (cf. [2]) will be recorded in a special SQL table along with its associated transaction. Transactions are identified by a txuuid through which the functions of the DB-API handle them. On the web side the last committed transaction txuuid is remembered in the request's data to allow for immediate undoing, whereas the undo-history view relies upon the DB-API to list the accessible transactions. The actual undoing is performed by the UndoController accessible at a URL of the form... Please refer to the files cubicweb/server/sources/native.py and cubicweb/transaction.py for the details. The undoing information is mainly stored in three SQL tables: When the undo support is activated, entries are added to those tables for each mutating operation on the data repository, and are deleted on each transaction undoing. Those tables are accessible through the following methods of the repository Connection object: Returns the list of Action objects for the given txuuid. NB: By default it only returns public actions.
That's why the file cubicweb/web/views/undohistory.py defines some dedicated views to access the undo information: Apart from this main undo-history view, a txuuid is stored in the request's data last_undoable_transaction in order to allow immediate undoing of a hastily validated operation. This is handled in cubicweb/web/application.py in the main_publish and add_undo_link_to_msg methods, for the storing and displaying respectively. Once the undo information is accessible, typically through a txuuid in an undo URL, the actual undo operation can be performed by the UndoController defined in cubicweb/web/views/basecontrollers.py. This controller basically extracts the txuuid and performs a call to undo_transaction and, in case of an undo-specific error, lets the top level publisher handle it as a validation error. The undo mechanism relies upon a low level recording of the mutating operations on the repository. Those records are accessible through some methods added to the DB-API and are exposed to the end-user either through a whole history view or through an immediate undoing link in the message box. The undo feature is functional but the interface and configuration options are still quite limited. One major improvement would be to be able to filter with a finer grain which transactions or actions one wants to see in the undo-history view. Another critical improvement would be to enable the undo feature on only a part of the entity-relationship schema, to avoid storing too much useless data and to reduce the underlying overhead. But both functionalities are related to the strong design choice not to represent transactions and actions as entities and relations. This has huge benefits in terms of safety and conceptual simplicity but prevents using lots of convenient CubicWeb features such as facets to access undo information. Before developing the undo feature further or eventually revising this design choice, it appears that some real-world feedback is strongly needed. So don't hesitate to try the undo feature in your application and send us some feedback.
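A minimal sketch of what the undo call looks like from code, using the undo_transaction DB-API method and the error handling mentioned above; cnx stands for an existing repository connection, the txuuid value is a placeholder, and exact signatures may vary between CubicWeb versions.

from cubicweb import ValidationError

txuuid = "..."                     # e.g. remembered from the request data after a commit

try:
    cnx.undo_transaction(txuuid)   # undoes the public and private actions together
    cnx.commit()
except ValidationError:
    # an integrity constraint was violated: the undo attempt is rolled back
    cnx.rollback()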
https://docs.cubicweb.org/book/additionnal_services/undo.html
2018-08-14T11:22:06
CC-MAIN-2018-34
1534221209021.21
[]
docs.cubicweb.org
Breadcrumbs are a navigation component that helps users locate themselves along a path of entities. Breadcrumbs are displayed by default in the header section (see Layout and sections). With the default main template, the header section is composed of the logo, the application name, breadcrumbs and, at the far right, the login box. Breadcrumbs are displayed just next to the application name, thus they begin with a separator. Here is the header section of the CubicWeb forge: There are three breadcrumbs components defined in cubicweb.web.views.ibreadcrumbs: The IBreadCrumbsAdapter adapter is defined in the cubicweb.web.views.ibreadcrumbs module. It specifies that an entity which implements this interface must have a breadcrumbs and a parent_entity method. A default implementation for each is provided. This implementation exploits the ITreeAdapter. Note: Redefining the breadcrumbs is the hammer way to do it. Another way is to define an ITreeAdapter adapter on an entity type. If available, it will be used to compute breadcrumbs. Here is the API of the IBreadCrumbsAdapter class: breadcrumbs returns a list defining the path from a root to the current view. The main view is given as an argument so breadcrumbs may vary according to the displayed view (it may be None). When recursing on a parent entity, the recurs argument should be a set of already traversed nodes (infinite loop safety belt). If the breadcrumbs method returns a list of entities, the cubicweb.web.views.ibreadcrumbs.BreadCrumbView is used to display the elements. By default, for any entity, if recurs=True, the breadcrumbs method returns a list of entities, else a list with a simple string. In order to see hierarchical breadcrumbs, entities must have a parent method which returns the parent entity. By default this method doesn't exist on entities, given that it cannot be guessed.
https://docs.cubicweb.org/book/devweb/views/breadcrumbs.html
2018-08-14T11:22:13
CC-MAIN-2018-34
1534221209021.21
[]
docs.cubicweb.org
spam score A score assigned to a message by the anti-spam engine indicating the relative likelihood that the message is spam. Anti-spam rules consist of a test definition and a "weight". If the test matches the message, the corresponding weight is added to the message’s total spam score. Generally, multiple rules must be triggered by a message in order to result in a spam score high enough for an action to be taken. SophosLabs constantly analyzes emerging spam techniques and updates the Email Appliance anti-spam rule sets accordingly.
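The scoring mechanism can be illustrated with a small Python sketch; the rules, weights, and threshold below are invented for illustration and are not Sophos anti-spam rules.

import re

RULES = [
    (re.compile(r"viagra", re.I), 3.0),        # (test, weight) pairs, made up
    (re.compile(r"100% free", re.I), 1.5),
    (re.compile(r"click here", re.I), 0.8),
]

def spam_score(message: str) -> float:
    # Sum the weights of every rule whose test matches the message
    return sum(weight for test, weight in RULES if test.search(message))

msg = "Click HERE for 100% free offers"
score = spam_score(msg)
print(score, "-> quarantine" if score >= 2.0 else "-> deliver")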
https://docs.sophos.com/msg/sea/help/en-us/msg/sea/references/spam_score.html
2018-08-14T10:37:18
CC-MAIN-2018-34
1534221209021.21
[]
docs.sophos.com
Query Element in Microsoft.Search.Query Schema for Windows SharePoint Services Search This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist. The Query element is the parent element for the child elements that define the query. Applies to the Search Query Web service. For more information, see Windows SharePoint Services Query Web Service. <Query> <QueryId /> <SupportedFormats> <Format /> </SupportedFormats> <Context> <QueryText /> <OriginatorContext /> </Context> <Range> <StartAt /> <Count /> </Range> <EnableStemming /> <TrimDuplicates /> <IgnoreAllNoiseQuery /> <IncludeRelevantResults /> </Query> Attributes Child Elements Parent Elements Remarks Search in Windows SharePoint Services supports only one Query element in the request query. Schema name: Microsoft.Search.Query Applies to: QueryService.Query web method (Microsoft.Search.Response.Document), QueryService.QueryEx web method (System.Data.Dataset) Optional: No See Also Concepts Windows SharePoint Services Query Web Service Microsoft.Search.Query Schema
https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-services/ms463471(v=office.12)
2018-08-14T10:59:54
CC-MAIN-2018-34
1534221209021.21
[]
docs.microsoft.com
Mark: either complete the incomplete task or tasks, or cancel the incomplete task or tasks. When all tasks on the certification instance are Closed Complete or Cancelled: The system sets the Completed date field on the certification instance record to the current date and time. The Percent complete field on the certification instance record is set to 100 percent.
https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/product/data-certification/task/t_MarkCertTaskClosedIncmp.html
2018-08-14T10:38:57
CC-MAIN-2018-34
1534221209021.21
[]
docs.servicenow.com
Calls a method of a service component, Java object or COM control. char *sm_obj_call(char *method_spec); - method_spec - A string specifying the method and its parameters, consisting of the following: - object_id - An integer handle identifying the component whose method you want to call. Object handles are returned by sm_obj_create for component objects, by sm_prop_id for ActiveX controls, and by sm_obj_call. - method - The name of the method. Periods are allowed as part of the method specification, as in: Application.Quit - p1, p2, ... - (Optional) A comma-delimited list of the method's parameters. Unused parameters can be omitted, as in: sm_obj_call ("TreeView, \"Add\" , , , , 'First node'") COM, EJB, Java Client - The value returned by the component, converted to a string. - A null string if an error occurred. For a COM error code, call sm_com_result. COM error codes are defined in winerror.h. sm_obj_call calls methods that are part of the component's interfaces. To find which methods are available, refer to the documentation supplied with the COM component, use the Panther AxView utility, or use the View Component Interface option in the Panther Editor for service components. This function returns a string; the component itself can return different types of data. For calling methods of Java objects, the method name can include an optional type-specifier to eliminate ambiguity for overloaded methods (see Working with Java Objects for further information on using type-specifiers). @obj() may be used to pass in Java objects as parameters for the Java method. For primitive and String valued method return types, this function returns the value as a string. Otherwise, a Panther object ID is returned. This object ID should be passed to sm_obj_delete_id when the associated Java object is no longer needed in order to allow for garbage collection by the JVM. For COM components, if the typelib cannot be used to determine the parameter's type, @obj() may be needed if any of the parameters must be passed as objects. The syntax of sm_obj_call is different in JPL from that in
http://docs.prolifics.com/panther/html/prg_html/libfu270.htm
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Describes the assessment targets that are specified by the ARNs of the assessment targets. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to DescribeAssessmentTargetsAsync. Namespace: Amazon.Inspector Assembly: AWSSDK.Inspector.dll Version: 3.x.y.z Container for the necessary parameters to execute the DescribeAssessmentTargets service method. Describes the assessment targets that are specified by the ARNs of the assessment targets.

var response = client.DescribeAssessmentTargets(new DescribeAssessmentTargetsRequest
{
    AssessmentTargetArns = new List<string>
    {
        "arn:aws:inspector:us-west-2:123456789012:target/0-0kFIPusq"
    }
});
List<AssessmentTarget> assessmentTargets = response.AssessmentTargets;
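A minimal sketch of the asynchronous form mentioned above (the only form available on .NET Core and PCL); it assumes the AWSSDK.Inspector package is referenced and credentials and region are configured, and the target ARN is a placeholder:

// Hedged example: the assessment target ARN below is a placeholder value.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.Inspector;
using Amazon.Inspector.Model;

class DescribeTargetsExample
{
    static async Task Main()
    {
        var client = new AmazonInspectorClient();

        var response = await client.DescribeAssessmentTargetsAsync(new DescribeAssessmentTargetsRequest
        {
            AssessmentTargetArns = new List<string>
            {
                "arn:aws:inspector:us-west-2:123456789012:target/0-0kFIPusq"
            }
        });

        // Print the name and ARN of each described assessment target.
        foreach (var target in response.AssessmentTargets)
        {
            Console.WriteLine(target.Name + ": " + target.Arn);
        }
    }
}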
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Inspector/MInspectorDescribeAssessmentTargetsDescribeAssessmentTargetsRequest.html
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Exports the contents of an Amazon Lex resource in a specified format. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to GetExportAsync. Namespace: Amazon.LexModelBuildingService Assembly: AWSSDK.LexModelBuildingService.dll Version: 3.x.y.z Container for the necessary parameters to execute the GetExport service method. .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
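A minimal synchronous sketch, assuming the AWSSDK.LexModelBuildingService package is referenced and credentials are configured; the bot name and version below are placeholders, not values from the original page:

// Hedged example: "OrderFlowers" and version "1" are placeholder values.
using System;
using Amazon.LexModelBuildingService;
using Amazon.LexModelBuildingService.Model;

class GetExportExample
{
    static void Main()
    {
        var client = new AmazonLexModelBuildingServiceClient();

        var response = client.GetExport(new GetExportRequest
        {
            Name = "OrderFlowers",
            Version = "1",
            ResourceType = ResourceType.BOT,
            ExportType = ExportType.ALEXA_SKILLS_KIT
        });

        // When the export status is READY, Url is a pre-signed S3 link to the export archive.
        Console.WriteLine(response.ExportStatus);
        Console.WriteLine(response.Url);
    }
}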
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/LexModelBuildingService/MLexModelBuildingServiceGetExportGetExportRequest.html
End User Management This section describes the options that can be made available to PureMessage end users, the people within your organization who are the senders and recipients of email that is processed by PureMessage. PureMessage allows end users to manage messages via a web page. From the End User Web Interface (EUWI), users can create their own lists of Approved (whitelisted) Senders and Blocked (blacklisted) Senders, and manage their own quarantined messages. The EUWI is administered using a variety of PureMessage Manager features. The HTTPD (RPC/UI) service runs the PureMessage End User Web Interface (EUWI). The status of this service is viewed on the Local Services tab, which also provides access to EUWI-related configuration options. Alternatively, the HTTPD (RPC/UI) service can be controlled and tested using the pmx-httpd and pmx-rpc-enduser command-line programs. A list configured via the Policy tab determines which end users have access to the EUWI. See “Editing Lists” in the Policy Tab section of the PureMessage Manager Reference to change the pre-configured list of approved end users. By default, on installation, all users can access the EUWI due to the “*” wildcard setting in the enduser-users list located under the opt/pmx/etc directory. To restrict user access, use email glob syntax matching in the enduser-users list. See “Email Globs” in the “Match Types” section of the Manager Reference for more information. Many EUWI options are configurable via the Quarantine tab. See “Setting End User Options” in the Manager Reference to set the location and session options. See “Configuring End User Features” in the Manager Reference to configure end user access to specific components (for example, per-user whitelists). See “Managing End User Whitelists” and “Managing End User Blacklists” in the Quarantine Tab section of the Manager Reference to manage whitelists and blacklists for individual end users. EUWI per-user list changes are synchronized to all PureMessage hosts. Add PureMessage hosts to the RPC list via the Policy tab. See the “RPC Hosts” entry in the “About PureMessage Default Lists” section of the Policy Tab documentation in the Manager Reference to configure the IP addresses of all PureMessage servers.
https://docs.sophos.com/msg/pmx/help/en-us/msg/pmx/concepts/AdmQuarantine-EndUser.html
Contribute. Most of the base code is contained in the plugins. Each plugin can be disabled, modified, or replaced by new ones. If you implement your own functionality and keep it generic and compatible with the base platform, I can give you a space as a collaborator in the KeplerJs Organization on GitHub. If you also want to help me correct some translation errors or add new languages, see the i18n core package. Alternatively, if you want to create your own personal adapter, I recommend you make it a KeplerJs plugin and publish it under the same license (MIT) in your own public repository, then publish it at Atmospherejs.com! A good way to begin understanding the basic functionality is to disable all plugins, using meteor remove for each Meteor plugin package. If you want to start adapting Kepler to your use case right now, take one of the plugin code examples, for example keplerjs:share, and study its code.
http://docs.keplerjs.io/contribute.html
Datomic Cloud Releases
This page provides CloudFormation templates, client maven coordinates, and release notes that you can use when planning an upgrade.
Current Release
- The ion and ion-dev libraries are available on the datomic-cloud repository.
- The client-cloud library is available on the maven Central Repository.
- The Storage, Solo, and Production templates require that you sign up via the AWS Marketplace, and are linked directly in the table below.
Release History
Blanks in this table indicate that a component was not upgraded in a particular release.
Release Notes
2018-07-10 ion-dev 0.9.173 and ion 0.9.14
- New Feature: datomic.ion.cast library for monitoring ions
- Bugfix: fixed race condition in ion code loading that could allow ion invocation before the namespace is completely loaded
- Enhancement: warn on dependency conflicts
- Improvement: prefer shell-friendly symbols instead of strings as arguments to datomic.ion.dev CLI commands
- Improvement: better error messaging when deploying to the wrong region
- Improvement: list available deploy groups in push output
- Improvement: enforce the requirement for :uname when the project has a :local/root in deps.edn
0.8.56: Jul 02 2018 client-cloud Update
- Enhancement: added sync to the client API
- Better error message when unable to connect to cluster or proxy
402-8396: Jun 29 2018 Compute Template Update
- Upgraded AWS libs to 1.11.349
- Upgraded Jackson libs to 2.9.5
- Fixed cache problem where all d/with databases deriving from a common initial call to d/with-db had the same common value
0.8.54: Jun 06 2018 client-cloud Update
- Enhancement: added :server-type :ion
- Enhancement: ensure recentness of d/db conn
397-8384: Jun 06 2018 Storage and Compute Template Update
- Enhancement: Datomic Ions
- Improvement: Replaced Application Load Balancer with Network Load Balancer. If your applications run in a separate VPC you will need to configure a VPC endpoint.
303-8300: Feb 21 2018 Storage and Compute Template Update
- Bugfix: doubles and floats allowed in transactions
- Bugfix: avoid unnecessary ":AdopterSkippedOlder" alert when creating a new database
- Update: latest Amazon Linux patches
- Improvement: better error handling in Storage template
- Improvement: reduce memcached timeout
https://docs.datomic.com/cloud/releases.html
Login security
Login security refers to the security settings you can configure to control access to your instance.
Specify a login landing page: By default, users see their homepage upon login. You can specify a different login landing page by using a system property or the content management system.
Enable the logout confirmation prompt: You can enable a logout confirmation prompt to prevent users from inadvertently logging themselves out.
Remove the Logout button: You can remove the Logout button to prevent inadvertent logouts.
IP range based authentication: One way to secure a web-based application is to restrict access based on the IP address.
Implementing a nonce: You can implement a nonce to be used with single sign-on digest authentication.
Installation exits: Installation exits are customizations that exit from Java to call a script before returning back to Java.
Strengthening password validation rules: You can customize password strength validation rules for the change password screen by overriding the installation exit associated with password validation.
Login scenarios: Describes different login scenarios.
Specify lockout for failed login attempts: The system provides inactive script actions that enable you to specify the number of failed login attempts before a user account is locked and to reset the count after a successful login.
Change settings for the Remember me check box and cookie: When the Remember me check box is selected at login, a cookie is stored on the user's computer. This cookie automatically authenticates the user upon subsequent visits.
Self service password reset: The Self Service Password Reset plugin enables end users who are locally authenticated to reset their own passwords.
ServiceNow access control: This SNC Access Control plugin (com.snc.snc_access_control) enables customers to control which ServiceNow employees may access their instance, and when.
https://docs.servicenow.com/bundle/helsinki-platform-administration/page/administer/security/concept/c_LoginSecurity.html
Additional Network Routes
The Additional Network Routes dialog box allows you to specify routing of requests to specified IP ranges via specified gateways. Additional routes can enable the Email Appliance to process requests from client machines whose IP addresses reside outside of the native subnet of the Email Appliance.
- To add a route:
- Enter a descriptive Route Name.
- Enter the requested Destination IP Range in CIDR format. Important: This range must not include the static IP address of the Email Appliance and must be outside the subnet of the Email Appliance.
- Enter the Gateway IP Address to which you want the requested IP addresses routed. This represents the next hop that can be used to reach the destination IP specified, and should be on the same subnet as the Email Appliance.
- Click Add.
To disable a route, you must delete it. To modify a route, you must delete it and re-add the modified route information.
- To delete a route:
- Select the check box beside the route that you want to delete.
- Click Delete. The route is de-activated and removed from the routing table.
Note: If you change your network configuration or topology, it may be necessary to alter any additional routes you have created.
Note: If a route is specified that makes the administrative user interface inaccessible, you must connect a laptop to the configuration port on the back of the appliance and access the Email Appliance via the IP address 172.24.24.172 to gain access to the appliance and delete the incorrect routes.
https://docs.sophos.com/msg/sea/help/en-us/msg/sea/tasks/DBIPRouting.html
You can collect installation or upgrade log files for ESXi. If an installation or upgrade fails, checking the log files can help you identify the source of the failure. Solution - Enter the vm-support command in the ESXi Shell or through SSH. - Navigate to the /var/tmp/ directory. - Retrieve the log files from the .tgz file.
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.upgrade.doc/GUID-854B64D8-3C08-4127-9019-AC41CFE5B6FE.html
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Container for the parameters to the SetQueueAttributes operation. In the future, new attributes might be added. If you write code that calls this action, we recommend that you structure your code so that it can handle new attributes gracefully. Namespace: Amazon.SQS.Model Assembly: AWSSDK.SQS.dll Version: 3.x.y.z The SetQueueAttributesRequest type exposes the following members. This example shows how to set queue attributes.

var client = new AmazonSQSClient();

var attrs = new Dictionary<string, string>();

// Maximum message size of 128 KiB (1,024 bytes * 128 KiB = 131,072 bytes).
int maxMessage = 128 * 1024;

attrs.Add(QueueAttributeName.DelaySeconds,
  TimeSpan.FromSeconds(5).TotalSeconds.ToString());
attrs.Add(QueueAttributeName.MaximumMessageSize, maxMessage.ToString());
attrs.Add(QueueAttributeName.MessageRetentionPeriod,
  TimeSpan.FromDays(1).TotalSeconds.ToString());
attrs.Add(QueueAttributeName.ReceiveMessageWaitTimeSeconds,
  TimeSpan.FromSeconds(5).TotalSeconds.ToString());
attrs.Add(QueueAttributeName.VisibilityTimeout,
  TimeSpan.FromHours(1).TotalSeconds.ToString());

// Dead-letter queue attributes.
attrs.Add(QueueAttributeName.RedrivePolicy,
  "{\"maxReceiveCount\":" + "\"5\"," +
  "\"deadLetterTargetArn\":" +
  "\"arn:aws:sqs:us-east-1:80398EXAMPLE:MyTestDeadLetterQueue\"}");

var request = new SetQueueAttributesRequest
{
  Attributes = attrs,
  QueueUrl = ""
};

client.SetQueueAttributes(request);
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/SQS/TSetQueueAttributesRequest.html
Contrasting Silverlight and WPF Silverlight and Windows Presentation Foundation (WPF) both allow you to develop rich user experience applications based on XAML and the .NET Framework. However, there are some differences between these platforms, and these differences have to be carefully considered when transitioning an application between Silverlight and WPF or when building an application that targets both WPF and Silverlight. Note This topic describes differences between Silverlight 2.0 and WPF that is part of the .NET Framework 3.5. These differences are expected to be reduced in future versions of Silverlight and WPF. Silverlight and WPF Architectural Overview Windows Presentation Foundation (WPF) provides developers with a unified programming model for building rich Windows Forms applications that incorporate user interface (UI), media, and documents. WPF enables software developers to deliver a new level of "user experience" (UX) by providing a declarative-based language (XAML) for specifying vector-based graphics that can scale and take advantage of hardware acceleration. Silverlight is a cross-browser, cross-platform implementation of the .NET Framework for delivering next-generation rich interactive media and content over the Web and for developing browser-hosted Rich Internet Applications (RIAs) that can integrate data and services from many sources. Silverlight enables developers to build applications that significantly enhance the typical end user experience, compared with traditional Web applications. Like WPF, Silverlight provides a XAML-based language to specify user interfaces. Silverlight and WPF share many of the same features and capabilities, but they are built on top of different run-time stacks, as illustrated in Figure 1. WPF leverages the full .NET Framework and executes on the common language runtime (CLR). Silverlight is based on a subset of XAML and the full .NET Framework, and it executes on a browser-hosted version of the CLR. Figure 1 WPF and Silverlight For more information about WPF architecture, see WPF Architecture on MSDN. For more information about Silverlight architecture, see Silverlight Architecture on MSDN. Differences Between Silverlight and WPF To keep Silverlight small and lightweight, some WPF and .NET Framework features are not available in Silverlight. Because of this, there can be subtle—and not so subtle—differences that have to be carefully considered when moving an application between Silverlight and WPF or when building an application that targets both WPF and Silverlight. For a summary of these differences, see WPF Compatibility on MSDN. This section describes some of the major differences the patterns & practices team encountered during the development of the Composite Application Guidance for WPF and Silverlight. These differences relate to Silverlight 2.0 and WPF 3.5, the current versions at the time of this writing. Resources Resources are simple collections of key-value pairs that can store almost any element (strings, brushes, styles, data sources, and many others). Resources can be associated with almost any element class that exposes a Resources property of type ResourceDictionary. The following are the main differences between Silverlight and WPF concerning resources: - Resources can contain static or dynamic content. Dynamic content can be changed at any time and consumers of the resource will be automatically updated. 
Dynamic resource references are not supported in Silverlight; therefore, only static resource references are available. - Merged dictionaries are useful for separating resources so that they can be shared within the application as if they were in a single logical location. Silverlight does not currently support MergedDictionaries. Global resources can be defined in the App.xaml file or locally in each user control that will use the resource. Styles There are several differences between Silverlight and WPF when using styles. You should be aware of the following limitations in Silverlight: - After you set a style to a FrameworkElement at run time, it cannot be subsequently changed. - Style inheritance is not supported, because the BasedOn property is not available. - Silverlight does not support implicit styles applied using the TargetType attribute value. If you define an application-level style for a specific user control, it will not be automatically applied to all instances of the user control; instead, you must explicitly reference the style by its key for each control instance. Triggers Triggers allow designers to define the visual behavior of a control by declaratively specifying how its properties change in response to events or property changes, such as highlighting a button when it is clicked. Typically, triggers are fired when a property of a control changes and results in one or more other properties of that control also changing. Triggers are defined inside a style and can be applied to any object of the specified target type. Silverlight does not support triggers in Styles, ControlTemplates, or DataTemplates (the Triggers collection does not exist on these elements). However, similar behavior can be achieved by using the Silverlight Visual State Manager (VSM) to define interactive control templates. Using VSM, the visual behavior of a control, including any animations and transitions, are defined in the control template. This can be easily done by using Blend 2.5. However, be aware that the XAML file will get more complex and that control templates built for Silverlight are not yet compatible with WPF. For more information about the Visual State Manager, see Creating a Control That Has a Customizable Appearance on MSDN. Note At the time of writing, the VSM is planned to be supported in the next version of WPF. A preview of a Visual State Manager for WPF is available on CodePlex in the WPF Toolkit. Data Binding Both WPF and Silverlight provide data binding support. The following are the main differences between Silverlight and WPF data binding: - In Silverlight, there is no support for the ElementName property in the binding syntax, so the property of a control cannot be bound to a property in another control. In addition, Silverlight does not support the RelativeSource property, which is useful when the binding is specified in a ControlTemplate or a Style. - In Silverlight, there is no OneWayToSource data flow mode. - In Silverlight, there is no UpdateSourceTrigger property. All updates to the source in the two-way binding mode occur immediately, except in the case of TextBox, in which case changes are propagated to the source when the focus is lost. - In Silverlight, you cannot bind directly to XML data. A possible workaround for this is to convert the XML to CLR objects, and then bind to the CLR object. - In Silverlight, there is no XMLDataProvider class. - In Silverlight, there is no support for binding validation rules. 
Controls in Silverlight can be configured to raise an event to indicate that a validation error has occurred. - In Silverlight, there is no ReadOnlyObservableCollection; however, ObservableCollection is supported. ReadOnlyObservableCollection is a read-only wrapper around the ObservableCollection. The ObservableCollection represents a dynamic data collection that provides notifications when items get added, removed, or when the whole collection gets refreshed. - In Silverlight, there is no CollectionView class. In WPF, this class represents a view for grouping, sorting, filtering, and navigating a data collection. The ICollectionView interface is available in Silverlight; by using this, developers can create their own CollectionView implementations, although most Silverlight controls do not automatically interact with this interface. - In Silverlight, there is no DataTemplateSelector class. In WPF, this class provides a way to choose a DataTemplate based on the data object and the data-bound element. Commanding The following are the differences between Silverlight and WPF regarding commanding: - Routed commands are not available in Silverlight. However, the ICommand interface is available in Silverlight, allowing developers to create their own custom commands. The Composite Application Library provides the DelegateCommand and CompositeCommand classes to simplify command implementation. For more information, see the Commands technical concept. - In WPF, controls can be hooked up to commands through their Command property. By doing this, developers can declaratively associate controls and commands. This also means a control can interact with a command so that it can invoke the command and have its enabled status automatically updated. Silverlight controls do not currently support the Command property. To help work around this issue, the Composite Application Library provides an attached property-based extensibility mechanism that allows commands and controls to be declaratively associated with each other. For more information, see "Using Commands in Silverlight Commands" in the Commands technical concept. - There is no input gesture or input binding support in Silverlight. Miscellaneous The following are some miscellaneous differences between Silverlight and WPF: - In Silverlight, the UIElement class has an internal constructor; therefore, you cannot create a control inheriting from it. - In Silverlight, there is no x:type markup extension support or support for custom markup extensions. - In Silverlight, items added to a TabControl control are not automatically wrapped inside a TabItem type, as is the case with WPF. More Information For information about creating multi-targeted applications in WPF and Silverlight, see the following topics: - Multi-Targeting design concept - Multi-Targeting technical concept - Multi-Targeting QuickStart - Project Linker: Synchronization Tool Home page on MSDN | Community site
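To illustrate the commanding workaround described above, here is a minimal custom ICommand sketch in the spirit of the Composite Application Library's DelegateCommand; this is illustrative code, not the library's actual implementation, and the class name is an assumption.

// Hedged sketch of a custom ICommand for Silverlight; not the Composite
// Application Library's DelegateCommand itself.
using System;
using System.Windows.Input;

public class RelayCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Func<object, bool> _canExecute;

    public RelayCommand(Action<object> execute) : this(execute, null) { }

    public RelayCommand(Action<object> execute, Func<object, bool> canExecute)
    {
        if (execute == null) { throw new ArgumentNullException("execute"); }
        _execute = execute;
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        _execute(parameter);
    }

    // Call this when the enabled state may have changed, since Silverlight has
    // no CommandManager to requery command status automatically.
    public void RaiseCanExecuteChanged()
    {
        EventHandler handler = CanExecuteChanged;
        if (handler != null) { handler(this, EventArgs.Empty); }
    }
}

Because Silverlight 2 controls lack a Command property, a command like this is typically invoked from an event handler or wired up through the attached-property mechanism mentioned above.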
https://docs.microsoft.com/en-us/previous-versions/msp-n-p/ff921107(v=pandp.20)
Make sure content can be found The content must be crawled and added to the search index for your users to find what they're searching for in SharePoint Online. Make site content searchable When users search on a site, results can come from many places such as columns, libraries, and pages. A site owner can change search settings to decide whether or not content should appear in search results. Users only see search results for content they have access to. Setting the right permissions for content ensure that people can see the right documents and sites in the search results. Learn more. Crawl site content. Learn more. Search across on-premises and online content Hybrid search lets your users search for files and documents across SharePoint Server and Office 365 at the same time. Hybrid search in SharePoint. Remove search results temporarily You can temporarily remove documents, pages and sites from search results with immediate effect. Learn more.
https://docs.microsoft.com/en-us/sharepoint/make-sure-content-can-be-found?redirectSourcePath=%252fzh-tw%252farticle%252f%2525E7%2525A2%2525BA%2525E4%2525BF%25259D%2525E4%2525BD%2525BF%2525E7%252594%2525A8%2525E8%252580%252585%2525E5%25258F%2525AF%2525E4%2525BB%2525A5%2525E6%252589%2525BE%2525E5%252588%2525B0%2525E5%252585%2525A7%2525E5%2525AE%2525B9-16fb530a-ed64-4fea-ab15-adf4b5fe96e9
You can use the vdmutil command-line interface to configure and manage the True SSO feature. Location of the Utility By default, the path to the vdmutil command executable file is C:\Program Files\VMware\VMware View\Server\tools\bin. To avoid entering the path on the command line, add the path to your PATH environment variable. Syntax and Authentication Use the following form of the vdmutil command from a Windows command prompt. vdmutil authentication options --truesso additional options and arguments The additional options that you can use depend on the command option. This topic focuses on the options for configuring True SSO (--truesso). Following is an example of a command for listing connectors that have been configured for True SSO: vdmUtil --authAs admin-role-user --authDomain domain-name --authPassword admin-user-password --truesso --list --connector The vdmutil command includes authentication options to specify the user name, domain, and password to use for authentication. You must use the authentication options with all vdmutil command options except for --help and --verbose. Command Output The vdmutil command returns 0 when an operation succeeds and a failure-specific non-zero code when an operation fails. The vdmutil command writes error messages to standard error. When an operation produces output, or when verbose logging is enabled by using the --verbose option, the vdmutil command writes output to standard output, in US English.
https://docs.vmware.com/en/VMware-Horizon-7/7.2/com.vmware.horizon-view.administration.doc/GUID-29D20B33-CE0D-4A45-9F17-0DA27E6B148A.html
Quick Install JBM using WHMCS Module
You can easily install Server Backup Manager from inside WHMCS. WHMCS Top menu -> Addons -> Jetbackupmanager. From Jetbackupmanager, navigate to the “Easy deployment” tab and select “Clean install”. You will need to select servers for the installation; the server list is generated from WHMCS Top menu -> Setup -> Product Services -> Servers. You can install JBM into a single server, or multiple servers at once.
https://docs.jetapps.com/jetbackup/jbm-easydeployment
Lock screen overview (Windows Runtime apps) This topic discusses the concepts and terminology for an app's presence on the lock screen. The lock screen is shown when you lock your device,.. A toast shown on the lock screen includes both image (if present) and text. The toast is shown for the length of a long-duration toast. Note For Windows Phone Store apps, the toast is shown for the normal length of time. be given to the possibility of your app being selected to show detailed status (tile notification content) on the lock screen. Do not declare lock screen capabilities just to play sound or show information about what is playing—that happens automatically. This applies particularly to music players. Doing so would pointlessly occupy one of the limited lock screen slots, possibly Quickstart: Showing tile and badge updates... Here's how: - Required Provide a badge logo through the LockScreenBadgeLogo property. This badge logo should be a different image than the parent app's badge logo, but it must meet the same requirements. - Optional. Note For Windows Store apps, for Windows Phone Store apps. Quickstart: Showing tile and badge updates on the lock screen Guidelines and checklist for tiles and badges Quickstart: Sending a tile update Quickstart: Sending a toast notification Quickstart: Pinning a secondary tile
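As a rough illustration of the badge updates discussed above, the following sketch sends a numeric badge notification; whether it appears on the lock screen still depends on the user having selected the app for a lock screen slot, and the helper class and method names are illustrative assumptions.

// Hedged sketch: sends a numeric badge update via Windows.UI.Notifications.
using Windows.Data.Xml.Dom;
using Windows.UI.Notifications;

public static class BadgeHelper
{
    public static void ShowCount(int count)
    {
        // Get the badge XML template and set its numeric value.
        XmlDocument badgeXml = BadgeUpdateManager.GetTemplateContent(BadgeTemplateType.BadgeNumber);
        XmlElement badgeElement = (XmlElement)badgeXml.SelectSingleNode("/badge");
        badgeElement.SetAttribute("value", count.ToString());

        // Send the notification; it updates the app tile badge and, if the app
        // occupies a lock screen slot, the badge shown next to the badge logo.
        BadgeUpdateManager.CreateBadgeUpdaterForApplication().Update(new BadgeNotification(badgeXml));
    }
}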
https://docs.microsoft.com/en-us/previous-versions/windows/desktop/apps/hh779720(v=win.10)
The following constants are always available as part of the PHP core.
PASSWORD_BCRYPT (integer) PASSWORD_BCRYPT is used to create new password hashes using the CRYPT_BLOWFISH algorithm. This will always result in a hash using the "$2y$" crypt format, which is always 60 characters wide. Supported Options: salt (string) - to manually provide a salt to use when hashing the password. Note that this will override and prevent a salt from being automatically generated. If omitted, a random salt will be generated by password_hash() for each password hashed. This is the intended mode of operation, and as of PHP 7.0.0 the salt option has been deprecated.
PASSWORD_ARGON2I (integer) PASSWORD_ARGON2I is used to create new password hashes using the Argon2 algorithm. Supported Options: memory_cost (integer) - Maximum memory (in bytes). Available as of PHP 7.2.0.
PASSWORD_ARGON2_DEFAULT_MEMORY_COST (integer) Default amount of memory in bytes that Argon2lib will use while trying to compute a hash. Available as of PHP 7.2.0.
PASSWORD_ARGON2_DEFAULT_TIME_COST (integer) Default amount of time that Argon2lib will spend trying to compute a hash. Available as of PHP 7.2.0.
PASSWORD_ARGON2_DEFAULT_THREADS (integer) Default number of threads that Argon2lib will use. Available as of PHP 7.2.0.
http://docs.php.net/manual/zh/password.constants.php
All public logs Combined display of all available logs of Joomla! Documentation. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive). - 05:09, 8 May 2013 Wilsonge (Talk | contribs) deleted page JModelItem/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JModelItem== ===Description=== {{Description:JModelItem}} <span class="editsection" style="font-size:76%;"> <nowiki>[</nowiki>Descripti..." (and the only contributor was "Doxiki2")) - 13:10, 26 April 2011 Doxiki2 (Talk | contribs) automatically marked revision 52846 of page JModelItem/1.6 patrolled - 14:34, 19 April 2011 Doxiki2 (Talk | contribs) automatically marked revision 43040 of page JModelItem/1.6 patrolled
https://docs.joomla.org/index.php?title=Special:Log&page=JModelItem%2F1.6
Changes related to "Help35:Menus Menu Item Article Category Blog" ← Help35:Menus Menu Item Article Category Blog This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. 28 April 2016:30Help35:Content Featured Articles (diff; hist; +3) MATsxm 24 April 2016 21:13Help35:Content Article Manager (4 changes; hist; -62) [MATsxm; MarijkeS×3] 20:28Help35:Content Featured Articles (3 changes; hist; -909) [MarijkeS; MATsxm×2] 20:26Help35:Components Content Categories (4 changes; hist; -951) [MATsxm×2; MarijkeS×2] m 20:11Help35:Menus Menu Manager (diff; hist; +3) MATsxm
https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20121211083511&target=Help30%3AMenus_Menu_Item_Article_Category_Blog
New in version 1.6. The array iterator encapsulates many of the key features in ufuncs, allowing user code to support features like output parameters, preservation of memory layouts, and buffering of data with the wrong alignment or type, without requiring difficult coding. This page documents the API for the iterator. The C-API naming convention chosen is based on the one in the numpy-refactor branch, so it will integrate naturally into the refactored code base. The iterator is named NpyIter and functions are named NpyIter_*. The existing iterator API includes functions like PyArrayIter_Check, PyArray_Iter* and PyArray_ITER_*. The multi-iterator array includes PyArray_MultiIter*, PyArray_Broadcast, and PyArray_RemoveSmallest. The new iterator design replaces all of this functionality with a single object and associated API. One goal of the new API is that all uses of the existing iterator should be replaceable with the new iterator without significant effort. In 1.6, the major exception to this is the neighborhood iterator, which does not have corresponding features in this iterator. Here is a conversion table for which functions to use with the new iterator: The best way to become familiar with the iterator is to look at its usage within the NumPy codebase itself. For example, here is a slightly tweaked version of the code for PyArray_CountNonzero, which counts the number of non-zero elements in an array.

npy_intp PyArray_CountNonzero(PyArrayObject* self)
{
    /* Nonzero boolean function */
    PyArray_NonzeroFunc* nonzero = PyArray_DESCR(self)->f->nonzero;

    NpyIter* iter;
    NpyIter_IterNextFunc *iternext;
    char** dataptr;
    npy_intp nonzero_count = 0;
    npy_intp* strideptr,* innersizeptr;

    /* Handle zero-sized arrays specially */
    if (PyArray_SIZE(self) == 0) {
        return 0;
    }

    /*
     * Create and use an iterator to count the nonzeros.
     *   flag NPY_ITER_READONLY
     *     - The array is never written to.
     *   flag NPY_ITER_EXTERNAL_LOOP
     *     - Inner loop is done outside the iterator for efficiency.
     *   flag NPY_ITER_REFS_OK
     *     - Reference types are acceptable.
     *   order NPY_KEEPORDER
     *     - Visit elements in memory order, regardless of strides.
     *       This is good for performance when the specific order
     *       elements are visited is unimportant.
     *   casting NPY_NO_CASTING
     *     - No casting is required for this operation.
     */
    iter = NpyIter_New(self, NPY_ITER_READONLY|
                             NPY_ITER_EXTERNAL_LOOP|
                             NPY_ITER_REFS_OK,
                        NPY_KEEPORDER, NPY_NO_CASTING,
                        NULL);
    if (iter == NULL) {
        return -1;
    }

    /*
     * The iternext function gets stored in a local variable
     * so it can be called repeatedly in an efficient manner.
     */
    iternext = NpyIter_GetIterNext(iter, NULL);
    if (iternext == NULL) {
        NpyIter_Deallocate(iter);
        return -1;
    }
    /* The location of the data pointer which the iterator may update */
    dataptr = NpyIter_GetDataPtrArray(iter);
    /* The location of the stride which the iterator may update */
    strideptr = NpyIter_GetInnerStrideArray(iter);
    /* The location of the inner loop size which the iterator may update */
    innersizeptr = NpyIter_GetInnerLoopSizePtr(iter);

    /* The iteration loop */
    do {
        /* Get the inner loop data/stride/count values */
        char* data = *dataptr;
        npy_intp stride = *strideptr;
        npy_intp count = *innersizeptr;

        /* This is a typical inner loop for NPY_ITER_EXTERNAL_LOOP */
        while (count--) {
            if (nonzero(data, self)) {
                ++nonzero_count;
            }
            data += stride;
        }

        /* Increment the iterator to the next inner loop */
    } while(iternext(iter));

    NpyIter_Deallocate(iter);

    return nonzero_count;
}

Here is a simple copy function using the iterator.
The order parameter is used to control the memory layout of the allocated result; typically NPY_KEEPORDER is desired.

PyObject *CopyArray(PyObject *arr, NPY_ORDER order)
{
    NpyIter *iter;
    NpyIter_IterNextFunc *iternext;
    PyObject *op[2], *ret;
    npy_uint32 flags;
    npy_uint32 op_flags[2];
    npy_intp itemsize, *innersizeptr, innerstride;
    char **dataptrarray;

    /*
     * No inner iteration - inner loop is handled by CopyArray code
     */
    flags = NPY_ITER_EXTERNAL_LOOP;
    /*
     * Tell the constructor to automatically allocate the output.
     * The data type of the output will match that of the input.
     */
    op[0] = arr;
    op[1] = NULL;
    op_flags[0] = NPY_ITER_READONLY;
    op_flags[1] = NPY_ITER_WRITEONLY | NPY_ITER_ALLOCATE;

    /* Construct the iterator */
    iter = NpyIter_MultiNew(2, op, flags, order, NPY_NO_CASTING,
                            op_flags, NULL);
    if (iter == NULL) {
        return NULL;
    }

    /*
     * Make a copy of the iternext function pointer and
     * a few other variables the inner loop needs.
     */
    iternext = NpyIter_GetIterNext(iter, NULL);
    innerstride = NpyIter_GetInnerStrideArray(iter)[0];
    itemsize = NpyIter_GetDescrArray(iter)[0]->elsize;
    /*
     * The inner loop size and data pointers may change during the
     * loop, so just cache the addresses.
     */
    innersizeptr = NpyIter_GetInnerLoopSizePtr(iter);
    dataptrarray = NpyIter_GetDataPtrArray(iter);

    /*
     * Note that because the iterator allocated the output,
     * it matches the iteration order and is packed tightly,
     * so we don't need to check it like the input.
     */
    if (innerstride == itemsize) {
        do {
            memcpy(dataptrarray[1], dataptrarray[0],
                   itemsize * (*innersizeptr));
        } while (iternext(iter));
    } else {
        /* For efficiency, should specialize this based on item size... */
        npy_intp i;
        do {
            npy_intp size = *innersizeptr;
            char *src = dataptrarray[0], *dst = dataptrarray[1];
            for(i = 0; i < size; i++, src += innerstride, dst += itemsize) {
                memcpy(dst, src, itemsize);
            }
        } while (iternext(iter));
    }

    /* Get the result from the iterator object array */
    ret = NpyIter_GetOperandArray(iter)[1];
    Py_INCREF(ret);

    if (NpyIter_Deallocate(iter) != NPY_SUCCEED) {
        Py_DECREF(ret);
        return NULL;
    }

    return ret;
}

The iterator layout is an internal detail, and user code only sees an incomplete struct. This is an opaque pointer type for the iterator. Access to its contents can only be done through the iterator API. This is the type which exposes the iterator to Python. Currently, no API is exposed which provides access to the values of a Python-created iterator. If an iterator is created in Python, it must be used in Python and vice versa. Such an API will likely be created in a future version. This is a function pointer for the iteration loop, returned by NpyIter_GetIterNext. This is a function pointer for getting the current iterator multi-index, returned by NpyIter_GetGetMultiIndex. Creates an iterator for the given numpy array object op. Flags that may be passed in flags are any combination of the global and per-operand flags documented in NpyIter_MultiNew, except for NPY_ITER_ALLOCATE. Any of the NPY_ORDER enum values may be passed to order. For efficient iteration, NPY_KEEPORDER is the best option, and the other orders enforce the particular iteration pattern. If dtype isn’t NULL, then it requires that data type. If copying is allowed, it will make a temporary copy if the data is castable. If NPY_ITER_UPDATEIFCOPY is enabled, it will also copy the data back with another cast upon iterator destruction. Returns NULL if there is an error, otherwise returns the allocated iterator. To make an iterator similar to the old iterator, this should work.
iter = NpyIter_New(op, NPY_ITER_READWRITE, NPY_CORDER, NPY_NO_CASTING, NULL); If you want to edit an array with aligned double code, but the order doesn’t matter, you would use this. dtype = PyArray_DescrFromType(NPY_DOUBLE); iter = NpyIter_New(op, NPY_ITER_READWRITE| NPY_ITER_BUFFERED| NPY_ITER_NBO| NPY_ITER_ALIGNED, NPY_KEEPORDER, NPY_SAME_KIND_CASTING, dtype); Py_DECREF(dtype); Creates an iterator for broadcasting the nop array objects provided in op, using regular NumPy broadcasting rules. Any of the NPY_ORDER enum values may be passed to order. For efficient iteration, NPY_KEEPORDER is the best option, and the other orders enforce the particular iteration pattern. When using NPY_KEEPORDER, if you also want to ensure that the iteration is not reversed along an axis, you should pass the flag NPY_ITER_DONT_NEGATE_STRIDES. op_dtypes isn’t NULL, it specifies a data type or NULL for each op[i]. Returns NULL if there is an error, otherwise returns the allocated iterator. Flags that may be passed in flags, applying to the whole iterator, are: - NPY_ITER_C_INDEX¶ Causes the iterator to track a raveled flat index matching C order. This option cannot be used with NPY_ITER_F_INDEX. - NPY_ITER_F_INDEX¶ Causes the iterator to track a raveled flat index matching Fortran order. This option cannot be used with NPY_ITER_C_INDEX. - NPY_ITER_MULTI_INDEX¶ Causes the iterator to track a multi-index. This prevents the iterator from coalescing axes to produce bigger inner loops. - NPY_ITER_EXTERNAL_LOOP¶ Causes the iterator to skip iteration of the innermost loop, requiring the user of the iterator to handle it. This flag is incompatible with NPY_ITER_C_INDEX, NPY_ITER_F_INDEX, and NPY_ITER_MULTI_INDEX. - NPY_ITER_DONT_NEGATE_STRIDES¶ This only affects the iterator when NPY_KEEPORDER is specified for the order parameter. By default with NPY_KEEPORDER, the iterator reverses axes which have negative strides, so that memory is traversed in a forward direction. This disables this step. Use this flag if you want to use the underlying memory-ordering of the axes, but don’t want an axis reversed. This is the behavior of numpy.ravel(a, order='K'), for instance. - NPY_ITER_COMMON_DTYPE¶ Causes the iterator to convert all the operands to a common data type, calculated based on the ufunc type promotion rules. Copying or buffering must be enabled. If the common data type is known ahead of time, don’t use this flag. Instead, set the requested dtype for all the operands. - NPY_ITER_REFS_OK¶ Indicates that arrays with reference types (object arrays or structured arrays containing an object type) may be accepted and used in the iterator. If this flag is enabled, the caller must be sure to check whether :cfunc:`NpyIter_IterationNeedsAPI`(iter) is true, in which case it may not release the GIL during iteration. - NPY_ITER_ZEROSIZE_OK¶ Indicates that arrays with a size of zero should be permitted. Since the typical iteration loop does not naturally work with zero-sized arrays, you must check that the IterSize is non-zero before entering the iteration loop. - NPY_ITER_REDUCE_OK¶ Permits writeable operands with a dimension with zero stride and size greater than one. Note that such operands must be read/write. When buffering is enabled, this also switches to a special buffering mode which reduces the loop length as necessary to not trample on values being reduced. 
Note that if you want to do a reduction on an automatically allocated output, you must use NpyIter_GetOperandArray to get its reference, then set every value to the reduction unit before doing the iteration loop. In the case of a buffered reduction, this means you must also specify the flag NPY_ITER_DELAY_BUFALLOC, then reset the iterator after initializing the allocated operand to prepare the buffers. - NPY_ITER_RANGED¶ Enables support for iteration of sub-ranges of the full iterindex range [0, NpyIter_IterSize(iter)). Use the function NpyIter_ResetToIterIndexRange to specify a range for iteration. This flag can only be used with NPY_ITER_EXTERNAL_LOOP when NPY_ITER_BUFFERED is enabled. This is because without buffering, the inner loop is always the size of the innermost iteration dimension, and allowing it to get cut up would require special handling, effectively making it more like the buffered version. - NPY_ITER_BUFFERED¶ Causes the iterator to store buffering data, and use buffering to satisfy data type, alignment, and byte-order requirements. To buffer an operand, do not specify the NPY_ITER_COPY or NPY_ITER_UPDATEIFCOPY flags, because they will override buffering. Buffering is especially useful for Python code using the iterator, allowing for larger chunks of data at once to amortize the Python interpreter overhead. If used with NPY_ITER_EXTERNAL_LOOP, the inner loop for the caller may get larger chunks than would be possible without buffering, because of how the strides are laid out. Note that if an operand is given the flag NPY_ITER_COPY or NPY_ITER_UPDATEIFCOPY, a copy will be made in preference to buffering. Buffering will still occur when the array was broadcast so elements need to be duplicated to get a constant stride. In normal buffering, the size of each inner loop is equal to the buffer size, or possibly larger if NPY_ITER_GROWINNER is specified. If NPY_ITER_REDUCE_OK is enabled and a reduction occurs, the inner loops may become smaller depending on the structure of the reduction. - NPY_ITER_GROWINNER¶ When buffering is enabled, this allows the size of the inner loop to grow when buffering isn’t necessary. This option is best used if you’re doing a straight pass through all the data, rather than anything with small cache-friendly arrays of temporary values for each inner loop. - NPY_ITER_DELAY_BUFALLOC¶ When buffering is enabled, this delays allocation of the buffers until NpyIter_Reset or another reset function is called. This flag exists to avoid wasteful copying of buffer data when making multiple copies of a buffered iterator for multi-threaded iteration. Another use of this flag is for setting up reduction operations. After the iterator is created, and a reduction output is allocated automatically by the iterator (be sure to use READWRITE access), its value may be initialized to the reduction unit. Use NpyIter_GetOperandArray to get the object. Then, call NpyIter_Reset to allocate and fill the buffers with their initial values. Flags that may be passed in op_flags[i], where 0 <= i < nop: - NPY_ITER_READWRITE¶ - NPY_ITER_READONLY¶ - NPY_ITER_WRITEONLY¶ Indicate how the user of the iterator will read or write to op[i]. Exactly one of these flags must be specified per operand. - NPY_ITER_COPY¶ Allow a copy of op[i] to be made if it does not meet the data type or alignment requirements as specified by the constructor flags and parameters. 
- NPY_ITER_UPDATEIFCOPY¶ Triggers NPY_ITER_COPY, and when an array operand is flagged for writing and is copied, causes the data in a copy to be copied back to op[i] when the iterator is destroyed. If the operand is flagged as write-only and a copy is needed, an uninitialized temporary array will be created and then copied to back to op[i] on destruction, instead of doing the unecessary copy operation. - NPY_ITER_NBO¶ - NPY_ITER_ALIGNED¶ - NPY_ITER_CONTIG¶ Causes the iterator to provide data for op[i] that is in native byte order, aligned according to the dtype requirements, contiguous, or any combination. By default, the iterator produces pointers into the arrays provided, which may be aligned or unaligned, and with any byte order. If copying or buffering is not enabled and the operand data doesn’t satisfy the constraints, an error will be raised. The contiguous constraint applies only to the inner loop, successive inner loops may have arbitrary pointer changes. If the requested data type is in non-native byte order, the NBO flag overrides it and the requested data type is converted to be in native byte order. - NPY_ITER_ALLOCATE¶ This is for output arrays, and requires that the flag NPY_ITER_WRITEONLY be set. If op[i] is NULL, creates a new array with the final broadcast dimensions, and a layout matching the iteration order of the iterator. When op[i] is NULL, the requested data type op_dtypes[i] may be NULL as well, in which case it is automatically generated from the dtypes of the arrays which are flagged as readable. The rules for generating the dtype are the same is for UFuncs. Of special note is handling of byte order in the selected dtype. If there is exactly one input, the input’s dtype is used as is. Otherwise, if more than one input dtypes are combined together, the output will be in native byte order. After being allocated with this flag, the caller may retrieve the new array by calling NpyIter_GetOperandArray and getting the i-th object in the returned C array. The caller must call Py_INCREF on it to claim a reference to the array. - NPY_ITER_NO_SUBTYPE¶ For use with NPY_ITER_ALLOCATE, this flag disables allocating an array subtype for the output, forcing it to be a straight ndarray. TODO: Maybe it would be better to introduce a function NpyIter_GetWrappedOutput and remove this flag? - NPY_ITER_NO_BROADCAST¶ Ensures that the input or output matches the iteration dimensions exactly. Extends NpyIter_MultiNew with several advanced options providing more control over broadcasting and buffering. If 0/NULL values are passed to oa_ndim, op_axes, itershape, and buffersize, it is equivalent to NpyIter_MultiNew. The parameter oa_ndim, when non-zero, specifies the number of dimensions that will be iterated with customized broadcasting. If it is provided, op_axes and/or itershape must also be provided. The op_axes parameter let you control in detail how the axes of the operand arrays get matched together and iterated. In op_axes, you must provide an array of nop pointers to oa_ndim-sized arrays of type npy_intp. If an entry in op_axes is NULL, normal broadcasting rules will apply. In op_axes[j][i] is stored either a valid axis of op[j], or -1 which means newaxis. Within each op_axes[j] array, axes may not be repeated. The following example is how normal broadcasting applies to a 3-D array, a 2-D array, a 1-D array and a scalar. 
int oa_ndim = 3; /* # iteration axes */ int op0_axes[] = {0, 1, 2}; /* 3-D operand */ int op1_axes[] = {-1, 0, 1}; /* 2-D operand */ int op2_axes[] = {-1, -1, 0}; /* 1-D operand */ int op3_axes[] = {-1, -1, -1} /* 0-D (scalar) operand */ int* op_axes[] = {op0_axes, op1_axes, op2_axes, op3_axes}; The itershape parameter allows you to force the iterator to have a specific iteration shape. It is an array of length oa_ndim. When an entry is negative, its value is determined from the operands. This parameter allows automatically allocated outputs to get additional dimensions which don’t match up with any dimension of an input. If buffersize is zero, a default buffer size is used, otherwise it specifies how big of a buffer to use. Buffers which are powers of 2 such as 4096 or 8192 are recommended. Returns NULL if there is an error, otherwise returns the allocated iterator. Makes a copy of the given iterator. This function is provided primarily to enable multi-threaded iteration of the data. TODO: Move this to a section about multithreaded iteration. The recommended approach to multithreaded iteration is to first create an iterator with the flags NPY_ITER_EXTERNAL_LOOP, NPY_ITER_RANGED, NPY_ITER_BUFFERED, NPY_ITER_DELAY_BUFALLOC, and possibly NPY_ITER_GROWINNER. Create a copy of this iterator for each thread (minus one for the first iterator). Then, take the iteration index range [0, NpyIter_GetIterSize(iter)) and split it up into tasks, for example using a TBB parallel_for loop. When a thread gets a task to execute, it then uses its copy of the iterator by calling NpyIter_ResetToIterIndexRange and iterating over the full range. When using the iterator in multi-threaded code or in code not holding the Python GIL, care must be taken to only call functions which are safe in that context. NpyIter_Copy cannot be safely called without the Python GIL, because it increments Python references. The Reset* and some other functions may be safely called by passing in the errmsg parameter as non-NULL, so that the functions will pass back errors through it instead of setting a Python exception. Removes an axis from iteration. This requires that NPY_ITER_MULTI_INDEX was set for iterator creation, and does not work if buffering is enabled or an index is being tracked. This function also resets the iterator to its initial state. This is useful for setting up an accumulation loop, for example. The iterator can first be created with all the dimensions, including the accumulation axis, so that the output gets created correctly. Then, the accumulation axis can be removed, and the calculation done in a nested fashion. WARNING: This function may change the internal memory layout of the iterator. Any cached functions or pointers from the iterator must be retrieved again! Returns NPY_SUCCEED or NPY_FAIL. If the iterator is tracking a multi-index, this strips support for them, and does further iterator optimizations that are possible if multi-indices are not needed. This function also resets the iterator to its initial state. WARNING: This function may change the internal memory layout of the iterator. Any cached functions or pointers from the iterator must be retrieved again! After calling this function, :cfunc:`NpyIter_HasMultiIndex`(iter) will return false. Returns NPY_SUCCEED or NPY_FAIL. If NpyIter_RemoveMultiIndex was called, you may want to enable the flag NPY_ITER_EXTERNAL_LOOP. 
This flag is not permitted together with NPY_ITER_MULTI_INDEX, so this function is provided to enable the feature after NpyIter_RemoveMultiIndex is called. This function also resets the iterator to its initial state. WARNING: This function changes the internal logic of the iterator. Any cached functions or pointers from the iterator must be retrieved again! Returns NPY_SUCCEED or NPY_FAIL. Deallocates the iterator object. This additionally frees any copies made, triggering UPDATEIFCOPY behavior where necessary. Returns NPY_SUCCEED or NPY_FAIL. Resets the iterator back to its initial state, at the beginning of the iteration range. Resets the iterator and restricts it to the iterindex range [istart, iend). See NpyIter_Copy for an explanation of how to use this for multi-threaded iteration. This requires that the flag NPY_ITER_RANGED was passed to the iterator constructor. If you want to reset both the iterindex range and the base pointers at the same time, you can do the following to avoid extra buffer copying (be sure to add the return code error checks when you copy this code).

/* Set to a trivial empty range */
NpyIter_ResetToIterIndexRange(iter, 0, 0);
/* Set the base pointers */
NpyIter_ResetBasePointers(iter, baseptrs);
/* Set to the desired range */
NpyIter_ResetToIterIndexRange(iter, istart, iend);

Resets the iterator back to its initial state, but using the values in baseptrs for the data instead of the pointers from the arrays being iterated. This function is intended to be used, together with the op_axes parameter, by nested iteration code with two or more iterators. TODO: Move the following into a special section on nested iterators. Creating iterators for nested iteration requires some care. All the iterator operands must match exactly, or the calls to NpyIter_ResetBasePointers will be invalid. This means that automatic copies and output allocation should not be used haphazardly. It is possible to still use the automatic data conversion and casting features of the iterator by creating one of the iterators with all the conversion parameters enabled, then grabbing the allocated operands with the NpyIter_GetOperandArray function and passing them into the constructors for the rest of the iterators. WARNING: When creating iterators for nested iteration, the code must not use a dimension more than once in the different iterators. If this is done, nested iteration will produce out-of-bounds pointers during iteration. WARNING: When creating iterators for nested iteration, buffering can only be applied to the innermost iterator. If a buffered iterator is used as the source for baseptrs, it will point into a small buffer instead of the array and the inner iteration will be invalid. The pattern for using nested iterators is as follows.

NpyIter *iter1, *iter2;
NpyIter_IterNextFunc *iternext1, *iternext2;
char **dataptrs1;

/*
 * With the exact same operands, no copies allowed, and
 * no axis in op_axes used both in iter1 and iter2.
 * Buffering may be enabled for iter2, but not for iter1.
 */
iter1 = ...; iter2 = ...;

iternext1 = NpyIter_GetIterNext(iter1, NULL);
iternext2 = NpyIter_GetIterNext(iter2, NULL);
dataptrs1 = NpyIter_GetDataPtrArray(iter1);

do {
    NpyIter_ResetBasePointers(iter2, dataptrs1);
    do {
        /* Use the iter2 values */
    } while (iternext2(iter2));
} while (iternext1(iter1));

Adjusts the iterator to point to the ndim indices pointed to by multi_index. Returns an error if a multi-index is not being tracked, the indices are out of bounds, or inner loop iteration is disabled. Returns NPY_SUCCEED or NPY_FAIL.
Adjusts the iterator to point to the index specified. If the iterator was constructed with the flag NPY_ITER_C_INDEX, index is the C-order index, and if the iterator was constructed with the flag NPY_ITER_F_INDEX, index is the Fortran-order index. Returns an error if there is no index being tracked, the index is out of bounds, or inner loop iteration is disabled. Returns NPY_SUCCEED or NPY_FAIL. Returns the number of elements being iterated. This is the product of all the dimensions in the shape. Gets the iterindex of the iterator, which is an index matching the iteration order of the iterator. Gets the iterindex sub-range that is being iterated. If NPY_ITER_RANGED was not specified, this always returns the range [0, NpyIter_IterSize(iter)). Adjusts the iterator to point to the iterindex specified. The IterIndex is an index matching the iteration order of the iterator. Returns an error if the iterindex is out of bounds, buffering is enabled, or inner loop iteration is disabled. Returns NPY_SUCCEED or NPY_FAIL. Returns 1 if the flag NPY_ITER_DELAY_BUFALLOC was passed to the iterator constructor, and no call to one of the Reset functions has been done yet, 0 otherwise. Returns 1 if the caller needs to handle the inner-most 1-dimensional loop, or 0 if the iterator handles all looping. This is controlled by the constructor flag NPY_ITER_EXTERNAL_LOOP or NpyIter_EnableExternalLoop. Returns 1 if the iterator was created with the NPY_ITER_MULTI_INDEX flag, 0 otherwise. Returns 1 if the iterator was created with the NPY_ITER_C_INDEX or NPY_ITER_F_INDEX flag, 0 otherwise. Returns 1 if the iterator requires buffering, which occurs when an operand needs conversion or alignment and so cannot be used directly. Returns 1 if the iterator was created with the NPY_ITER_BUFFERED flag, 0 otherwise. Returns 1 if the iterator was created with the NPY_ITER_GROWINNER flag, 0 otherwise. If the iterator is buffered, returns the size of the buffer being used, otherwise returns 0. Returns the number of dimensions being iterated. If a multi-index was not requested in the iterator constructor, this value may be smaller than the number of dimensions in the original objects. Returns the number of operands in the iterator. Gets the array of strides for the specified axis. Requires that the iterator be tracking a multi-index, and that buffering not be enabled. This may be used when you want to match up operand axes in some fashion, then remove them with NpyIter_RemoveAxis to handle their processing manually. By calling this function before removing the axes, you can get the strides for the manual processing. Returns NULL on error. Returns the broadcast shape of the iterator in outshape. This can only be called on an iterator which is tracking a multi-index. Returns NPY_SUCCEED or NPY_FAIL. This gives back a pointer to the nop data type Descrs for the objects being iterated. The result points into iter, so the caller does not gain any references to the Descrs. This pointer may be cached before the iteration loop, calling iternext will not change it. This gives back a pointer to the nop operand PyObjects that are being iterated. The result points into iter, so the caller does not gain any references to the PyObjects. This gives back a reference to a new ndarray view, which is a view into the i-th object in the array :cfunc:`NpyIter_GetOperandArray`(), whose dimensions and strides match the internal optimized iteration pattern. A C-order iteration of this view is equivalent to the iterator’s iteration order. 
For example, if an iterator was created with a single array as its input, and it was possible to rearrange all its axes and then collapse it into a single strided iteration, this would return a view that is a one-dimensional array. Fills nop flags. Sets outreadflags[i] to 1 if op[i] can be read from, and to 0 if not. Fills nop flags. Sets outwriteflags[i] to 1 if op[i] can be written to, and to 0 if not. Builds a set of strides which are the same as the strides of an output array created using the NPY_ITER_ALLOCATE flag, where NULL was passed for op_axes. This is for data packed contiguously, but not necessarily in C or Fortran order. This should be used together with NpyIter_GetShape and NpyIter_GetNDim with the flag NPY_ITER_MULTI_INDEX passed into the constructor. A use case for this function is to match the shape and layout of the iterator and tack on one or more dimensions. For example, in order to generate a vector per input value for a numerical gradient, you pass in ndim*itemsize for itemsize, then add another dimension to the end with size ndim and stride itemsize. To do the Hessian matrix, you do the same thing but add two dimensions, or take advantage of the symmetry and pack it into 1 dimension with a particular encoding. This function may only be called if the iterator is tracking a multi-index and if NPY_ITER_DONT_NEGATE_STRIDES was used to prevent an axis from being iterated in reverse order. If an array is created with this method, simply adding ‘itemsize’ for each iteration will traverse the new array matching the iterator. Returns NPY_SUCCEED or NPY_FAIL. Returns a function pointer for iteration. A specialized version of the function pointer may be calculated by this function instead of being stored in the iterator structure. Thus, to get good performance, it is required that the function pointer be saved in a variable rather than retrieved for each loop iteration.. The typical looping construct is as follows. NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL); char** dataptr = NpyIter_GetDataPtrArray(iter); do { /* use the addresses dataptr[0], ... dataptr[nop-1] */ } while(iternext(iter)); When NPY_ITER_EXTERNAL_LOOP is specified, the typical inner loop construct is as follows.op, nop = NpyIter_GetNOp(iter); do { size = *size_ptr; while (size--) { /* use the addresses dataptr[0], ... dataptr[nop-1] */ for (iop = 0; iop < nop; ++iop) { dataptr[iop] += stride[iop]; } } } while (iternext()); Observe that we are using the dataptr array inside the iterator, not copying the values to a local temporary. This is possible because when iternext() is called, these pointers will be overwritten with fresh values, not incrementally updated. If a compile-time fixed buffer is being used (both flags NPY_ITER_BUFFERED and NPY_ITER_EXTERNAL_LOOP), the inner size may be used as a signal as well. The size is guaranteed to become zero when iternext() returns false, enabling the following loop construct. Note that if you use this construct, you should not pass NPY_ITER_GROWINNER as a flag, because it will cause larger sizes under some circumstances. /* The constructor should have buffersize passed as this value */ #define FIXED_BUFFER_SIZE 1024, iop, nop = NpyIter_GetNOp(iter); /* One loop with a fixed inner size */ size = *size_ptr; while (size == FIXED_BUFFER_SIZE) { /* * This loop could be manually unrolled by a factor * which divides into FIXED_BUFFER_SIZE */ for (i = 0; i < FIXED_BUFFER_SIZE; ++i) { /* use the addresses dataptr[0], ... 
dataptr[nop-1] */ for (iop = 0; iop < nop; ++iop) { dataptr[iop] += stride[iop]; } } iternext(); size = *size_ptr; } /* Finish-up loop with variable inner size */ if (size > 0) do { size = *size_ptr; while (size--) { /* use the addresses dataptr[0], ... dataptr[nop-1] */ for (iop = 0; iop < nop; ++iop) { dataptr[iop] += stride[iop]; } } } while (iternext()); Returns a function pointer for getting the current multi-index of the iterator. Returns NULL if the iterator is not tracking a multi-index. It is recommended that this function pointer be cached in a local variable before the iteration loop.. This gives back a pointer to the nop data pointers. If NPY_ITER_EXTERNAL_LOOP was not specified, each data pointer points to the current data item of the iterator. If no inner iteration was specified, it points to the first data item of the inner loop. This pointer may be cached before the iteration loop, calling iternext will not change it. This function may be safely called without holding the Python GIL. Gets the array of data pointers directly into the arrays (never into the buffers), corresponding to iteration index 0. These pointers are different from the pointers accepted by NpyIter_ResetBasePointers, because the direction along some axes may have been reversed. This function may be safely called without holding the Python GIL. This gives back a pointer to the index being tracked, or NULL if no index is being tracked. It is only useable if one of the flags NPY_ITER_C_INDEX or NPY_ITER_F_INDEX were specified during construction. When the flag NPY_ITER_EXTERNAL_LOOP is used, the code needs to know the parameters for doing the inner loop. These functions provide that information. Returns a pointer to an array of the nop strides, one for each iterated object, to be used by the inner loop. This pointer may be cached before the iteration loop, calling iternext will not change it. This function may be safely called without holding the Python GIL. Returns a pointer to the number of iterations the inner loop should execute. This address may be cached before the iteration loop, calling iternext will not change it. The value itself may change during iteration, in particular if buffering is enabled. This function may be safely called without holding the Python GIL. Gets an array of strides which are fixed, or will not change during the entire iteration. For strides that may change, the value NPY_MAX_INTP is placed in the stride. Once the iterator is prepared for iteration (after a reset if NPY_DELAY_BUFALLOC was used), call this to get the strides which may be used to select a fast inner loop function. For example, if the stride is 0, that means the inner loop can always load its value into a variable once, then use the variable throughout the loop, or if the stride equals the itemsize, a contiguous version for that operand may be used. This function may be safely called without holding the Python GIL.
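For quick experiments, the same iteration machinery is exposed at the Python level through numpy.nditer, which can be handy for prototyping an iteration strategy before committing it to the C functions documented above. The sketch below is purely illustrative (the arrays and the doubling operation are arbitrary examples, not taken from this page); the 'external_loop' flag corresponds to NPY_ITER_EXTERNAL_LOOP, and a 'buffered' flag could be added to mirror NPY_ITER_BUFFERED.
import numpy as np

a = np.arange(6.0).reshape(2, 3)
out = np.empty_like(a)

# external_loop: the iterator hands back 1-D chunks instead of scalars,
# matching the "inner loop" handled by the caller in the C API.
it = np.nditer([a, out],
               flags=['external_loop'],
               op_flags=[['readonly'], ['writeonly']])
for x, y in it:
    y[...] = 2.0 * x   # process a whole inner-loop chunk at once
print(out)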
http://docs.scipy.org/doc/numpy-1.6.0/reference/c-api.iterator.html
2016-04-29T04:07:57
CC-MAIN-2016-18
1461860110372.12
[]
docs.scipy.org
Interpret the input as a matrix. Unlike matrix, asmatrix does not make a copy if the input is already a matrix or an ndarray. Equivalent to matrix(data, copy=False). Examples >>> x = np.array([[1, 2], [3, 4]]) >>> m = np.asmatrix(x) >>> x[0,0] = 5 >>> m matrix([[5, 2], [3, 4]])
http://docs.scipy.org/doc/numpy-1.7.0/reference/generated/numpy.mat.html
2016-04-29T04:06:32
CC-MAIN-2016-18
1461860110372.12
[]
docs.scipy.org
public class PassThroughSourceExtractor extends Object implements SourceExtractor
SourceExtractor implementation that just passes the candidate source metadata object through for attachment. Using this implementation means that tools will get raw access to the underlying configuration source metadata provided by the tool. This implementation should not be used in a production application since it is likely to keep too much metadata in memory (unnecessarily).
public PassThroughSourceExtractor()
public Object extractSource(Object sourceCandidate, Resource definingResource)
Returns the supplied sourceCandidate as-is.
Specified by: extractSource in interface SourceExtractor
Parameters: sourceCandidate - the source metadata; definingResource - the resource that defines the given source object (may be null)
Returns: the supplied sourceCandidate
https://docs.spring.io/spring/docs/3.2.0.BUILD-SNAPSHOT/javadoc-api/org/springframework/beans/factory/parsing/PassThroughSourceExtractor.html
2016-04-29T04:20:44
CC-MAIN-2016-18
1461860110372.12
[]
docs.spring.io
JCacheStorageXcache::clean
From Joomla! Documentation
Description
Clean cache for a group given a mode.
public function clean ( $group, $mode=null )
Defined on line 131 of libraries/joomla/cache/storage/xcache.php
See also
JCacheStorageXcache::clean source code on BitBucket
Class JCacheStorageXcache
Subpackage Cache
https://docs.joomla.org/index.php?title=API17:JCacheStorageXcache::clean&direction=next&oldid=56072
2016-04-29T04:10:59
CC-MAIN-2016-18
1461860110372.12
[]
docs.joomla.org
This Powerpoint Project Dashboard Template is easy to edit. Download it now! The dials & status graphics are simple to change. See discount deals for this Template.
What it does
Using graphics, dials and well thought out formats this allows you to easily show project status to stakeholders, wider teams or bored loved-ones. It has:
- A dashboard template
- RAG status (Red, Amber, Green)
- Highlight report
- RAID update (Risks, Assumptions, Issues and Dependencies)
- SWOT (Strengths, Weaknesses, Opportunities and Threats)
Please note that as it is written in powerpoint, the dials do not update automatically but need to be done by hand – try as we might powerpoint just won't play ball with us on this.
We like this document a lot, but if you do have a little more budget we would personally go for this one. It's bigger and better and has more stuff in it like graphs.
https://business-docs.co.uk/downloads/powerpoint-project-dashboard-with-status-template/
2016-04-29T03:59:24
CC-MAIN-2016-18
1461860110372.12
[]
business-docs.co.uk
Developing a MVC Component/Using the language filter facility
From Joomla! Documentation
This article is part of the "Developing a MVC Component" tutorial for Joomla! 1.6. You are encouraged to read the previous parts of the tutorial before reading this.
Using the language filter facility
This part is open for editing.
Prev: Adding an install/uninstall/update script file
Next: Adding an update server
https://docs.joomla.org/index.php?title=Developing_a_Model-View-Controller_Component/2.5/Using_the_language_filter_facility&oldid=62625
2016-04-29T04:57:00
CC-MAIN-2016-18
1461860110372.12
[]
docs.joomla.org
Difference between revisions of "Media form field type" From Joomla! Documentation Revision as of 11:38, 20 May 2011 Attributes: name, type[=media], label, description, directory The directory attribute should be relative to the top level /images/ folder, so to have the media selector opened with the directory /images/stories/ already selected: <field name="image0" type="media" directory="stories" />
https://docs.joomla.org/index.php?title=Media_form_field_type&diff=58341&oldid=40817
2016-04-29T05:07:50
CC-MAIN-2016-18
1461860110372.12
[]
docs.joomla.org
Difference between revisions of "Can I add registration fields?" From Joomla! Documentation Revision as of 07:16, 29 April 2013 In Joomla! 1.6 and newer you can add extended registration fields by using a profile plugin. A sample profile plugin is included in the standard installation, offering a number of commonly requested fields such as mailing address, telephone number and date-of-birth. See: What is a profile plugin?
https://docs.joomla.org/index.php?title=Can_I_add_registration_fields%3F&diff=prev&oldid=85842
2016-04-29T05:00:18
CC-MAIN-2016-18
1461860110372.12
[]
docs.joomla.org
Difference between revisions of "Screen.sections.15" From Joomla! Documentation Revision as of 20:09,) 8 sections.. Function. Quick Tips -..
https://docs.joomla.org/index.php?title=Help15:Screen.sections.15&diff=5197&oldid=5196
2016-04-29T04:45:27
CC-MAIN-2016-18
1461860110372.12
[]
docs.joomla.org
Na. In some cases, applying simplify() may actually result in some more complicated expression. By default ratio=1.7 prevents more extreme cases: if (result length)/(input length) > ratio, then input is returned unmodified (count_ops() is used to measure length). For example, if ratio=1, simplify output can’t be longer than input. >>> from sympy import S, simplify, count_ops, oo >>> root = S("(5/2 + 21**(1/2)/2)**(1/3)*(1/2 - I*3**(1/2)/2)" ... "+ 1/((1/2 - I*3**(1/2)/2)*(5/2 + 21**(1/2)/2)**(1/3))") Since simplify(root) would result in a slightly longer expression, root is returned inchanged instead: >>> simplify(root, ratio=1) is root True If ratio=oo, simplify will be applied anyway: >>> count_ops(simplify(root, ratio=oo)) > count_ops(root) True Note that the shortest expression is not necessary the simplest, so setting ratio to 1 may not be a good idea. Heuristically, default value ratio=1.7 seems like a reasonable choice. Collect additive terms with respect to a list of symbols up to powers with rational exponents. By the term symbol here are meant arbitrary expressions, which can contain powers, products, sums etc. In other words symbol is a pattern which will be searched for in the expression’s terms. This function will not apply any redundant expanding to the input expression, so user is assumed to enter expression in final form. This makes ‘collect’ more predictable as there is no magic behind the scenes. However it is important to note, that powers of products are converted to products of powers using ‘separate’ function. There are two possible types of output. First, if ‘evaluate’ flag is set, this function will return a single expression or else it will return a dictionary with separated symbols up to rational powers as keys and collected sub-expressions as values respectively. >>> from sympy import collect, sympify, Wild >>> from sympy.abc import a, b, c, x, y, z This function can collect symbolic coefficients in polynomial[sympify(1)] c You can also work with multi-variate) (a + b)*(x**2)**c Note also that all previously stated facts about ‘collect’ function apply to the exponential function, so you can get: >>> from sympy import exp >>> collect(a*exp(2*x) + b*exp(2*x), exp(x)) (a + b)*exp(2*x) If you are interested only in collecting specific powers of some symbols then set ‘exact’ flag=True to,x) Note: arguments are expected to be in expanded form, so you might have to call expand() prior to calling this function. A wrapper to expand(power_base=True) which separates a power with a base that is a Mul into a product of powers, without performing any other expansions, provided that assumptions about the power’s base and exponent allow. deep=True (default is False) will do separations inside functions. force=True (default is False) will cause the expansion to ignore assumptions about the base and exponent. When False, the expansion will only happen if the base is non-negative or the exponent is an integer. >>> from sympy.abc import x, y, z >>> from sympy import separate, sin, cos, exp >>> (x*y)**2 x**2*y**2 >>> (2*x)**y (2*x)**y >>> separate(_) 2**y*x**y >>> separate((x*y)**z) (x*y)**z >>> separate((x*y)**z, force=True) x**z*y**z >>> separate(sin((x*y)**z)) sin((x*y)**z) >>> separate(sin((x*y)**z), deep=True, force=True) sin(x**z*y**z) >>> separate((2*sin(x))**y + (2*cos(x))**y) 2**y*sin(x)**y + 2**y*cos(x)**y >>> separate((2*exp(y))**x) 2**x*exp(x*y) >>> separate((2*cos(x))**y) 2**y*cos(x)**y Notice that summations are left untouched. 
If this is not the desired behavior, apply 'expand' to the expression: >>> separate(((x+y)*z)**2) z**2*(x + y)**2 >>> (((x+y)*z)**2).expand() x**2*z**2 + 2*x*y*z**2 + y**2*z**2 >>> separate((2*y)**(1+z)) 2**(z + 1)*y**(z + 1) >>> ((2*y)**(1+z)).expand() 2*2**z*y*y**z Rationalize the denominator. >>> from sympy import radsimp, sqrt, Symbol >>> radsimp(1/(2+sqrt(2))) -2**(1/2)/2 + 1 >>> x,y = map(Symbol, 'xy') >>> e = ((2+2*sqrt(2))*x+(2+sqrt(8))*y)/(2+sqrt(2)) >>> radsimp(e) 2**(1/2)*x + 2**(1/2)*y Put an expression over a common denominator, cancel and reduce. >>> from sympy import ratsimp >>> from sympy.abc import x, y >>> ratsimp(1/x + 1/y) (x + y)/(x*y) == Usage == trigsimp(expr) -> reduces expression by using known trig identities == Notes == deep: - Apply trigsimp inside functions recursive: - Use common subexpression elimination (cse()) and apply trigsimp recursively (recursive==True is quite an expensive operation if the expression is large) >>> from sympy import trigsimp, sin, cos, log >>> from sympy.abc import x, y >>> e = 2*sin(x)**2 + 2*cos(x)**2 >>> trigsimp(e) 2 >>> trigsimp(log(e)) log(2*sin(x)**2 + 2*cos(x)**2) >>> trigsimp(log(e), deep=True) log(2) Simplify combinatorial expressions. This function takes as input an expression containing factorials, binomials, Pochhammer symbol and other "combinatorial" functions, and tries to minimize the number of those functions and reduce the size of their arguments. The result is given in terms of the more familiar binomials and factorials. Replace numbers with simple representations. If rational=True then numbers are simply replaced with their rational equivalents. If rational=False, a simple formula that numerically matches the given expression is sought (and the input should be possible to evalf to a precision of at least 30 digits). Optionally, a list of (rationally independent) constants to include in the formula may be given. A lower tolerance may be set to find less exact matches. With full=True, a more extensive search is performed (this is useful to find simpler numbers when the tolerance is set low). Examples: >>> from sympy import nsimplify, sqrt, GoldenRatio, exp, I, exp, pi >>> nsimplify(4/(1+sqrt(5)), [GoldenRatio]) -2 + 2*GoldenRatio >>> nsimplify((1/(exp(3*pi*I/5)+1))) 1/2 - I*(5**(1/2)/10 + 1/4)**(1/2) >>> nsimplify(I**I, [pi]) exp(-pi/2) >>> nsimplify(pi, tolerance=0.01) 22/7 Denests an expression that contains nested square roots. This algorithm is based on <>. Perform common subexpression elimination on an expression. Parameters: Returns: Expand hypergeometric functions. If allow_hyper is True, allow partial simplification (that is a result different from input, but still containing hypergeometric functions). >>> from sympy.simplify.hyperexpand import hyperexpand >>> from sympy.functions import hyper >>> from sympy.abc import z >>> hyperexpand(hyper([], [], z)) exp(z) Non-hypergeometric parts of the expression and hypergeometric expressions that are not recognised are left unchanged: >>> hyperexpand(1 + hyper([1, 1, 1], [], z)) 1 + hyper((1, 1, 1), (), z)
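The parameter and return descriptions for the common subexpression elimination routine mentioned above did not survive in this extract; as an illustrative sketch of its calling convention (the expression is an arbitrary example, and the exact printed form of the results, such as term ordering or sqrt versus **(1/2), can vary between SymPy versions), cse() returns a list of (symbol, sub-expression) replacement pairs together with the input rewritten in terms of those symbols:
from sympy import cse, sqrt
from sympy.abc import x, y

replacements, reduced = cse((x + y)**2 + sqrt(x + y))
# replacements -> [(x0, x + y)]          list of (symbol, sub-expression) pairs
# reduced      -> [x0**2 + sqrt(x0)]     input expressed in terms of x0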
http://docs.sympy.org/0.7.1/modules/simplify/simplify.html
2016-04-29T03:57:31
CC-MAIN-2016-18
1461860110372.12
[]
docs.sympy.org
You can build different apps for different users within an organization: one app for troubleshooting email servers, one app for analyzing business trends, and so on. This way, everyone uses the same Splunk instance. There are several options for building your own custom pages in Splunk, including the simplified XML syntax and advanced views. Learn more about advanced views. Apps are permissionable and controlled via Splunk's access control layer.
http://docs.splunk.com/Documentation/Splunk/4.3.1/Developer/Appintro
2016-04-29T04:01:23
CC-MAIN-2016-18
1461860110372.12
[]
docs.splunk.com
Configuring Eclipse and Xdebug From Joomla! Documentation Revision as of 11:56, 17 October 2012 by Dextercowley (Talk | contribs) Contents. Debug Joomla! From Eclipse Let's do a quick debug session in Eclipse. Set a Breakpoint First, we'll set a breakpoint inside Joomla!. Go to the PHP Explorer view and find the Joomla! file "components/com_content/views/frontpage/tmpl/default.php" as shown below. Double-click the file name to open this file for editing. Double-click in the blank area just to the left of line 2, as shown below. A small blue circle will display. This sets a breakpoint at this line of code. When running in debug mode, Eclipse will suspend the program and we can debug from this point. Create a Launch Configuration Now, let's set up what's called a Launch Configuration so we can more easily run the front-end in debug mode. Select the menu Run / Debug Configurations . Select "PHP Web Page" in the left-hand tree list. Right-click and select "New" to display the screen below. Notice that the "Break at First Line" option is checked by default. Keep this setting. Change the Name to "Debug Front End" and press the Browse button and browse to the "index.php" file in the top-level folder of the Joomla 1.5 Source project, as shown below. Click OK and Close buttons to save the launch configuration. You can use this same procedure to create a launch configuration for the Joomla! back end. Just call it "Debug Back End" and browse to the "index.php" file under the administrator folder. Run a Debug Session We can select our launch configuration by pressing the drop-down arrow next to the debug icon in the toolbar, as shown below. If our just created launch configuration with the name "Debug Front End" doesn't show up, we have to add it to our favourites by pressing "Organize Favorites..." in the same drop down and adding our launch configuration to the favorite list. At this point, two things happen. First, a new browser session starts with an empty window. This is because Joomla! is suspended at the first line of code (since we chose "Break at First Line"). Second, inside Eclipse, the PHP Debug perspective is opened automatically for us, showing the line where we are suspended. Press the Resume button (green right arrow) in the toolbar to take us to the next breakpoint. This time we suspend at line 2 of the "default.php" file, where we set our break point. The screen should look like the one below. To end the debug session, just press the red Terminate button. Eclipse will again display the PHP perspective and you will get a "teminated" window in your browser. Since we created a Debug launch configuration, we can re-run the debug session for the front end just by using the Debug drop-down in the toolbar. (Note: If you don't want to worry about launch configurations, you can always just highlight the "index.php" file, right-click, and select Run / Debug As / PHP Web Page. Using launch configurations is just a convenience.) When we launch the debug session, we again go to the PHP Debug perspective. Now, press the Resume button once to take us to line 2 of "default.php". Press Resume a second time. Now the Joomla! front page displays and the debugger doesn't show any active frames in the debug view. This is because Joomla! is now just waiting for the user to do something. Press the link "Joomla! Overview". Now the debugger has again stopped at the first line of code (line 15) of "index.php", again because we have the "Break at First Line" option set. Press Resume again, and the "Joomla! 
Overview" article displays and again we have no active frames in the Debug view. Let's take a quick look at some other debugger features. Press the "Home" link and press the Resume button once. Again, you should be suspended at line 2 of "default.php", where we set our manual breakpoint. Press the "Step Over" button in the debug toolbar. The screen should display as shown below, with the current line now being line 3. Two other "step" buttons are "Step Into" and "Step Return". These are used to navigate down to a called method and navigate back. Let's try them. Notice that line 3 includes a call to the "get" method of the "$this->params" object. Press "Step Into" and now we navigate to this method, as shown below. This method is defined in the file "libraries/joomla/html/parameter.php" file and is a member of the "JParameter" class (since "$this" was a JParameter object). Notice that the current line also calls a method. Press "Step Into" again and we navigate to the getValue method of the JRegistry class in the "registry.php" file. As you might guess, the "Step Return" navigates to the line following the "return" statement of the current method. So if we press "Step Return" once we go back to line 121 of "parameter.php" file. Let's look at two other debugger features. Hover the mouse on the $key variable in line 135. You should see the value of this variable, as shown below. Look at the Variables view to the right of the Debug view and you can see the current value of all of the variables, as shown below. Click on the second frame in the Debug view, as shown below. Notice that the Variables view changes to show the variables for this frame and the edit window now shows the file for this frame. Click on some other frames to get the idea of this. The "frame stack" allows you to see all of the levels of the program and how we got to the current line of code. We can also see the value of variables at each level in the program. When we step into a method, we add a new frame on the top of the stack. When we step return out of a method, the frame for this method is removed and we go back to the previous stack. Press Step Return and we now go back to the line in "default.php" where we called the "get" method. Sometimes it is handy to evaluate an entire expression. Highlight the expression on line 5 "$this->escape($this->params->get('page_title'))", being careful to get the entire expression but no extra characters. Right-click and select Watch from the context menu. This expression is now added to the Expressions view and we can see what it evaluates to, as shown below. You can also type in an expression by right-clicking inside the Expressions view and selecting "Add Watch Expression". Important Note: There appears to be a bug when you try to launch a debug session with existing Watch Expressions. You get an error "Unexpected termination of script, debugging ended". To avoid this error, just delete all Watch expressions, using the "Remove All Expressions" button in the toolbar, prior to starting a new debug session. (Above solution did not work in my case but to not show superglobals was the solution. - User:Rolandd) To finish up, delete the Watch expression and press the red Terminate button to stop the debug session.
https://docs.joomla.org/index.php?title=Configuring_Eclipse_and_Xdebug&direction=prev&oldid=102074
2016-04-29T05:22:09
CC-MAIN-2016-18
1461860110372.12
[]
docs.joomla.org
Revision history of "JLanguage::exists"::exists (cleaning up content namespace and removing duplicated API references)
https://docs.joomla.org/index.php?title=JLanguage::exists&action=history
2016-04-29T05:46:09
CC-MAIN-2016-18
1461860110372.12
[]
docs.joomla.org
public interface RowCallbackHandler
An interface used by JdbcTemplate for processing rows of a ResultSet on a per-row basis. In contrast to a ResultSetExtractor, a RowCallbackHandler object is typically stateful: It keeps the result state within the object, to be available for later inspection. See RowCountCallbackHandler for a usage example. Consider using a RowMapper instead if you need to map exactly one result object per row, assembling them into a List.
See Also: JdbcTemplate, RowMapper, ResultSetExtractor, RowCountCallbackHandler
void processRow(ResultSet rs) throws SQLException
This method should not call next() on the ResultSet; it is only supposed to extract values of the current row. Exactly what the implementation chooses to do is up to it: A trivial implementation might simply count rows, while another implementation might build an XML document.
Parameters: rs - the ResultSet to process (pre-initialized for the current row)
Throws: SQLException - if a SQLException is encountered getting column values (that is, there's no need to catch SQLException)
https://docs.spring.io/spring/docs/3.0.6.RELEASE/javadoc-api/org/springframework/jdbc/core/RowCallbackHandler.html
2016-04-29T04:12:36
CC-MAIN-2016-18
1461860110372.12
[]
docs.spring.io
maxTouchPoints
W3C Working Draft
Summary
The maximum number of simultaneous touch contacts supported by the device.
Property of dom/Navigator
Syntax
Note: This property is read-only.
var result = navigator.maxTouchPoints;
Return Value
Returns an object of type Number
Examples
Basic HTML5 Canvas painting application (JavaScript)
Related specifications
See also
Related articles: Pointer Events
Attribution
This article contains content originally from external sources. Portions of this content come from the Microsoft Developer Network: [maxTouchPoints property Article]
http://docs.webplatform.org/wiki/dom/Navigator/maxTouchPoints
2016-04-29T03:58:44
CC-MAIN-2016-18
1461860110372.12
[]
docs.webplatform.org
After issuing these modification operations, MongoDB allows applications to determine the level of acknowledgment returned from the database. See Write Concern.
Create operations add new documents to a collection. In MongoDB, the db.collection.insert() method performs create operations. The following diagram highlights the components of a MongoDB insert operation: The components of a MongoDB insert operation. The following diagram shows the same query in SQL: The components of a SQL INSERT statement.
Example
The following operation inserts a new document into the users collection. The new document has four fields: name, age, status, and an _id field. MongoDB always adds the _id field to the new document if that field does not exist.
db.users.insert( { name: "sue", age: 26, status: "A" } )
See also
SQL to MongoDB Mapping Chart for additional examples of MongoDB write operations and the corresponding SQL statements.
Update operations modify existing documents in a collection. In MongoDB, the db.collection.update() and db.collection.save() methods perform update operations. The db.collection.update() method can accept query criteria to determine which documents to update. The following diagram highlights the components of a MongoDB update operation. The following diagram shows the same query in SQL: The components of a SQL UPDATE statement.
Example
db.users.update( { age: { $gt: 18 } }, { $set: { status: "A" } }, { multi: true } )
This update operation on the users collection sets the status field to A for the documents that match the criteria of age greater than 18. For more information, see db.collection.update() and db.collection.save(), and Modify Documents for examples.
Delete operations remove documents from a collection. In MongoDB, the db.collection.remove() method performs delete operations. The db.collection.remove() method can accept query criteria to determine which documents to remove. The following diagram highlights the components of a MongoDB remove operation: The components of a MongoDB remove operation. The following diagram shows the same query in SQL: The components of a SQL DELETE statement.
Example
db.users.remove( { status: "D" } )
This delete operation on the users collection removes all documents that match the criteria of status equal to D. For more information, see the db.collection.remove() method and Remove Documents.
By default, the db.collection.remove() method removes all documents that match its query. However, the method can accept a flag to limit the delete operation to a single document.
The modification of a single document is always atomic, even if the write operation modifies multiple sub-documents within that document. For write operations that modify multiple documents, the operation as a whole is not atomic, and other operations may interleave. No other operations are atomic. You can, however, attempt to isolate a write operation that affects multiple documents using the isolation operator. To isolate a sequence of write operations from other read and write operations, see Perform Two Phase Commits.
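The shell methods above map closely onto most drivers. As a rough illustration, a PyMongo equivalent (assuming a mongod reachable on localhost and a database named test; note the driver uses insert_one/update_many/delete_many rather than the shell's insert/update/remove) might look like this:
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
users = client.test.users

# Create: shell db.users.insert(...)
users.insert_one({"name": "sue", "age": 26, "status": "A"})

# Update: shell db.users.update({age: {$gt: 18}}, {$set: {status: "A"}}, {multi: true})
users.update_many({"age": {"$gt": 18}}, {"$set": {"status": "A"}})

# Delete: shell db.users.remove({status: "D"})
users.delete_many({"status": "D"})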
http://cn.docs.mongodb.org/master/core/write-operations/
2013-12-05T08:46:23
CC-MAIN-2013-48
1386163042430
[]
cn.docs.mongodb.org
http://docs.blackberry.com/en/smartphone_users/deliverables/25417/Move_an_application_icon_46_441451_11.jsp
2013-12-05T08:59:16
CC-MAIN-2013-48
1386163042430
[]
docs.blackberry.com
(xml).. For ‘common case’ and one value is an ‘exceptional documentation for the contextlib module. Exception. documentation for the ctypes module. A new hashlib module, written by Gregory P. Smith, has been added to replace the md5 and sha modules. hashlib adds support for additional secure hashes (SHA-224, SHA-256, SHA-384, and SHA-512). When available, the module uses OpenSSL for fast platform optimized implementations of algorithms. The old md5 and sha modules still exist as wrappers around hashlib to preserve backwards compatibility. The new module’s interface is very close to that of the old modules, but not identical. The most significant difference is that the constructor functions for creating new hashing objects are named differently. # pysqlite module (), a wrapper for the SQLite embedded database, has been added to the standard library under the package name. pysqlite was written by Gerhard Häring and provides a SQL interface compliant with the DB-API 2.0 specification described by PEP 249.. To use the module, you must first create a Connection object that represents the database. Here the data will be stored in the /tmp/example file: conn = sqlite3.connect('/tmp/example') You can also supply the special name :memory: to create a database in RAM. Once you have a Connection, you can create a Cursor object and call its execute() method to perform SQL commands: c = conn.cursor() # Create table c.execute('''create table stocks (date text, trans text, symbol text, qty real, price real)''') # Insert a row of data c.execute("""insert into stocks values ('2006-01-05','BUY','RHAT',100,35.14)""")) To retrieve data after executing a SELECT statement, you can either treat the cursor as an iterator, call the cursor’s fetchone() method to retrieve a single matching row, or call fetchall() to get a list of the matching rows. This example uses the iterator form: >>> c = conn.cursor() >>> c.execute('select * from stocks order by price') >>> for row in c: ... print row ... (u'2006-01-05', u'BUY', u'RHAT', 100, 35.140000000000001) (u'2006-03-28', u'BUY', u'IBM', 1000, 45.0) (u'2006-04-06', u'SELL', u'IBM', 500, 53.0) (u'2006-04-05', u'BUY', u'MSOFT', 1000, 72.0) >>> For more information about the SQL dialect supported by SQLite, see. See also The documentation for the sqlite3 module..
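Returning to the hashlib change mentioned earlier, here is a minimal illustrative sketch of the naming difference between the old and new constructors (Python 2 syntax to match the article; the input string is an arbitrary example):
import hashlib

# Old interface (md5 module):   h = md5.new('abc')
# New interface (hashlib module):
h = hashlib.md5('abc')
print h.hexdigest()
print hashlib.sha256('abc').hexdigest()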
http://docs.python.org/2/whatsnew/2.5.html
2013-12-05T08:45:20
CC-MAIN-2013-48
1386163042430
[]
docs.python.org
LibreOffice » jvmfwk
View module in: cgit Doxygen
Wrappers so you can use all the Java Runtime Environments with their slightly incompatible APIs with more ease. Used to use an over-engineered "plugin" mechanism although there was only one "plugin", called "sunmajor", that handles all possible JREs.
https://docs.libreoffice.org/jvmfwk.html
2020-08-03T22:57:18
CC-MAIN-2020-34
1596439735836.89
[]
docs.libreoffice.org
The" See also Notifications to learn more information about subscriptions in the Governance Registry. Overview Content Tools Activity
https://docs.wso2.com/display/Governance510/API-Level+Access+to+the+Subscription+Manager
2020-08-03T23:50:48
CC-MAIN-2020-34
1596439735836.89
[]
docs.wso2.com
Mark participant as "not attended" In this article Mark participant as "not attended" This article is relevant for: Meeting organizer Mark participant as "not attended" All confirmed participants will automatically have their status set to Attended when the meeting start date is reached (meeting status = Ongoing). The Meeting Organizer can then change the status (during or after the meeting) to Not Attended according to the signed attendance list. The aim is to have a complete attendance list in ISO Meetings with the final attendance. The Meeting Organizer has the option to add a comment to explain why a participant has not attended (e.g. excused or not excused) To mark a participant as Not Attended, click on the status of the participant and select Not Attended
https://helpdesk-docs.iso.org/article/84-mark-participant-as-not-attended
2020-08-03T23:30:51
CC-MAIN-2020-34
1596439735836.89
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5a293aa00428631b6b6dbb4e/images/5b855ca62c7d3a03f89e326d/file-YNEVfuouS7.png', None], dtype=object) ]
helpdesk-docs.iso.org
While you are using the non-destructive texturing features in Mixer, you may start utilizing more resources frequently as you proceed with adding details through the layer stack. In order to keep the viewport running smoothly, consider using the following tips:
- Under the Setup tab: Use a lower Working Resolution. This can be increased again at any time and texture maps can be exported at a resolution of your choice regardless of the selected working one.
- Under the Display tab: Lower the Shadow Resolution and the GPU Tessellation.
- Under the Performance tab: Turn off High Quality Blur, increase Downsample Textures, decrease Normal Matching and enable Bake Layers.
To shorten the export time, lower the resolution of the textures being exported under the Export tab.
http://docs.quixel.com/mixer/1/en/topic/improving-performance
2020-10-23T21:13:33
CC-MAIN-2020-45
1603107865665.7
[]
docs.quixel.com
Mixer is free and easily available for use at Quixel’s web page. If you require assistance with setting up Mixer, you can view a step by step process below. Downloading Mixer Once you’re on the Quixel Mixer page, click Download for free.The website will automatically download an installer for your computer. If needed, you can access all versions of Mixer available here. The downloaded installer can be launched to initiate the setup which will take you to the following options. The user will have the latest version of Mixer available for download, sample mixes and new Smart Material packs published on regular intervals. Clicking on these options will show additional information on the right. Setting Paths The installer will then guide the user to set up their Mixer folders’ destinations. The installer will confirm three different paths for the following three aspects during the setup. - Quixel Mixer Application - Local Library - Mixer Files Launching Mixer for the First time When you will be working on your projects using Mixer, you may find yourself dealing with a lot of files and data. To help organize the content, Mixer uses two file paths each with their own purpose. These are: - Mixer Files: This is where your Mixes, and the project folders that hold Mixes, will be stored. - Local Library: This is where your asset library, with downloaded assets from the Megascans Library, will be stored. If you have an existing folder of assets from Bridge, for example, point to that folder instead. Setting this to the same folder as the Bridge Library will ensure that assets downloaded in each application will all be stored in the same location, and accessible by both. Depending on the number of assets and their resolution, this folder can be very large. See the following page for more information on the folder structure Mixer creates on your drive. Folder Structure Mixer is currently allowing the following ways to login. You will be directed towards the Sign In dialogue box where you can use your Epic Games credentials to start utilizing mixer. If you have an existing Quixel account, you can access Mixer using those credentials. After registering or logging in, you will be presented with the following dialogue from where you can either create a new project or select a previously saved one. Offline Activation On the bottom right of the Sign In dialogue box you can see a link to Use License File. From your account settings on the Megascans site, download the Mixer offline license (.lic) file. Under License File, browse to the .lic file and click Activate. Additional Setup Options - Integration with Bridge : See the following page if you would like to integrate Mixer with Quixel Bridge. - Mixer Network Setup : See the following page if you would like to set up a network installation of Mixer. Post your comment on this topic.
http://docs.quixel.com/mixer/1/en/topic/setting-up-mixer
2020-10-23T22:07:43
CC-MAIN-2020-45
1603107865665.7
[]
docs.quixel.com
INFO: Analyzed target //foo:foo (14 packages loaded, 48 targets configured). INFO: Found 1 target... Target //foo:foo up-to-date: bazel-bin/foo/foo INFO: Elapsed time: 9.905s, Critical Path: 3.25s INFO: Build completed successfully, 6 total actions INFO: Analyzed target //foo:foo (0 packages loaded, 0 targets configured). INFO: Found 1 target... Target //foo:foo up-to-date: bazel-bin/foo/foo INFO: Elapsed time: 0.144s, Critical Path: 0.00s INFO: Build completed successfully, 1 total action We see a “null” build: in this case, there are no packages to re-load, since nothing has changed, and no build steps to execute. (If something had changed in “foo” or some of its dependencies, resulting in the re-execution. Once it has been run, you should not need to run it again until the WORKSPACE file changes. The distribution directory is another Bazel mechanism to avoid unnecessary downloads. Bazel searches distribution directories before the repository cache. The primary difference is that the distribution directory requires manual preparation. Using the --distdir=/path/to-directory. This only works if the file hash is specified in the WORKSPACE declaration. While the condition on the file name is not necessary for correctness, it reduces the number of candidate files to one per specified directory. In this way, specifying distribution files directories remains efficient, even if the number of files in such a directory grows large. Running Bazel in an airgapped environment To keep Bazel’s binary size small, Bazel’s implicit dependencies are fetched over the network while running for the first time. These implicit dependencies contain toolchains and rules that may not be necessary for everyone. For example, Android tools are unbundled and fetched only when building Android projects. However, these implicit dependencies may cause problems when running Bazel in an airgapped environment, even if you have vendored all of your WORKSPACE dependencies. To solve that, you can prepare a distribution directory containing these dependencies on a machine with network access, and then transfer them to the airgapped environment with an offline approach. To prepare the distribution directory, use the --distdir flag. You will need to do this once for every new Bazel binary version, since the implicit dependencies can be different for every release. To build these dependencies outside of your airgapped environment, first checkout the Bazel source tree at the right version: git clone "$BAZEL_DIR" cd "$BAZEL_DIR" git checkout "$BAZEL_VERSION" Then, build the tarball containing the implicit runtime dependencies for that specific Bazel version: bazel build @additional_distfiles//:archives.tar Export this tarball to a directory that can be copied into your airgapped environment. Note the --strip-components flag, because --distdir can be quite finicky with the directory nesting level: tar xvf bazel-bin/external/additional_distfiles/archives.tar \ -C "$NEW_DIRECTORY" --strip-components=3 Finally, when you use Bazel in your airgapped environment, pass the --distdir flag pointing to the directory. For convenience, you can add it as an .bazelrc entry: build --distdir=path/to/directory We do not recommend this option. - If you frequently make changes to your request configuration, such as alternating between -c optand -c dbgbuilds, or between simple- and cross-compilation, you will typically rebuild the majority of your codebase each time you switch. 
When this option is false, the host and request configurations are identical: all tools required during the build will be built in exactly the same way as target programs. This setting means that no libraries need to be built twice during a single build.-executed, and so on, resulting in a very large rebuild. Also, please note: if your host architecture is not capable of running your target binaries, your build will not work. -, and. 1: Hermeticity means that the action only uses its declared input files and no other files in the filesystem, and it only produces its declared output files.rc, the Bazel configuration file Bazel accepts many options. Some options are varied frequently (for example, --subcommands) while others stay the same across several builds (such as --package_path). To avoid specifying these unchanged options for every build (and other commands), you can specify options in a configuration file. Where are the .bazelrc files? Bazel looks for optional configuration files in the following locations, in the order shown below. The options are interpreted in this order, so options in later files can override a value from an earlier file if a conflict arises. All options that control which of these files are loaded are startup options, which means they must occur after bazel and before the command ( build, test, etc). The system RC file, unless --nosystem_rcis present. Path: - On Linux/macOS/Unixes: /etc/bazel.bazelrc - On Windows: %ProgramData%\bazel.bazelrc It is not an error if this file does not exist. If another system-specified location is required, you must build a custom Bazel binary, overriding the BAZEL_SYSTEM_BAZELRC_PATHvalue in //src/main/cpp:option_processor. The system-specified location may contain environment variable references, such as ${VAR_NAME}on Unix or %VAR_NAME%on Windows. The workspace RC file, unless --noworkspace_rcis present. Path: .bazelrcin your workspace directory (next to the main WORKSPACEfile). It is not an error if this file does not exist. The home RC file, unless --nohome_rcis present. Path: - On Linux/macOS/Unixes: $HOME/.bazelrc - On Windows: %USERPROFILE%\.bazelrcif exists, or %HOME%/.bazelrc It is not an error if this file does not exist. The user-specified RC file, if specified with --bazelrc=file This flag is optional. However, if the flag is specified, then the file must exist. In addition to this optional configuration file, Bazel looks for a global rc file. For more details, see the global bazelrc section. .bazelrc syntax and semantics Like all UNIX “rc” files, the .bazelrc file is a text file with a line-based grammar. Empty lines and lines starting with line contains a sequence of words, which are tokenized according to the same rules as the Bourne shell. Imports Lines that start with import or try-import are special: use these to load other “rc” files. To specify a path that is relative to the workspace root, write import %workspace%/path/to/bazelrc. The difference between import and try-import is that Bazel fails if the import‘ed file is missing (or can’t be read), but not so for a try-import‘ed file. Import precedence: - Options in the imported file take precedence over options specified before the import statement. - Options specified after the import statement take precedence over the options in the imported file. - Options in files imported later take precedence over files imported earlier..) 
For example, the lines: build --test_tmpdir=/tmp/foo --verbose_failures build --test_tmpdir=/tmp/bar are combined as: build --test_tmpdir=/tmp/foo --verbose_failures --test_tmpdir=/tmp/bar so the effective flags are --verbose_failures and --test_tmpdir=/tmp/bar. Option precedence: - Options on the command line always take precedence over those in rc files. For example, if a rc file says build -c optbut the command line flag is -c dbg, the command line flag takes precedence. Within the rc file, precedence is governed by specificity: lines for a more specific command take precedence over lines for a less specific command. Specificity is defined by inheritance. Some commands inherit options from other commands, making the inheriting command more specific than the base command. For example testinherits from the buildcommand, so all bazel buildflags are valid for bazel test, and all buildlines apply also to bazel testunless there’s a testline for the same option. If the rc file says: test -c dbg --test_env=PATH build -c opt --verbose_failures then bazel build //foowill use -c opt --verbose_failures, and bazel test //foowill use --verbose_failures -c dbg --test_env=PATH. The inheritance (specificity) graph is: - Every command inherits from common - The following commands inherit from (and are more specific than) build: test, run, clean, mobile-install, info, print_action, config, cquery, and aquery coverageinherits from test Two lines specifying options for the same command at equal specificity are parsed in the order in which they appear within the file. - Because this precedence rule does not match the file order, it helps readability if you follow the precedence order within rc files: start with commonoptions at the top, and end with the most-specific commands at the bottom of the file.. This syntax does not extend to the use of startup to set startup options, e.g. setting startup:config-name --some_startup_option in the .bazelrc will be ignored.. The global bazelrc file In addition to your personal .bazelrc file, Bazel reads global bazelrc files in this order: $workspace/tools/bazel.rc, .bazelrc next to the Bazel binary, and /etc/bazel.bazelrc. (It’s fine if any are missing.) You can make Bazel ignore the global bazelrcs by passing the --nomaster_bazelrc startup option.. 41-44- Reserved for Google-internal use. 45- Error publishing results to the Build Event Service. Return codes for commands bazel build, bazel test: 1- Build failed. 3- Build OK, but some tests failed or timed out. 4- Build successful but no tests were found even though testing was requested. For bazel run: 1- Build failed. - If the build succeeds but the executed subprocess returns a non-zero exit code it will be the exit code of the command as well. For reads See the Performance Profiling section.
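To make the command:config-name grouping described above concrete, here is a small illustrative .bazelrc fragment (the group name memcheck and the particular flags are arbitrary examples, not prescribed settings):
build:memcheck --strip=never
build:memcheck --test_timeout=3600
test:memcheck --test_output=errors
With this in place, bazel test --config=memcheck //foo:bar applies the build:memcheck lines (because test inherits from build) as well as the test:memcheck line, while a plain bazel test //foo:bar ignores all three.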
https://docs.bazel.build/versions/master/guide.html
2020-10-23T22:25:17
CC-MAIN-2020-45
1603107865665.7
[]
docs.bazel.build
Upgrading between major versions (for example, from Fedora 30 to Fedora 31) is not fully integrated into Silverblue at this time. However, Silverblue can be upgraded between major versions using the ostree command. For example, to upgrade to Silverblue 32, the commands are:
$ sudo ostree remote gpg-import fedora -k /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-32-primary
$ rpm-ostree rebase fedora:fedora/32
https://docs.fedoraproject.org/en-US/fedora-silverblue/updates-upgrades-rollbacks/
2020-10-23T23:08:20
CC-MAIN-2020-45
1603107865665.7
[]
docs.fedoraproject.org
.md title, example: - Repository name: bazelbuild/rules_go - Repository description: Go rules for Bazel - Repository tags: golang, bazel README.md: / LICENSE your to ci.bazel.build. Documentation See the Stard.md section above. We used to have all of the rules in the Bazel repository (under //tools/build_rules or //tools/build_defs). We still have a couple rules there, but we are working on moving the remaining rules out.
https://docs.bazel.build/versions/master/skylark/deploying.html
2020-10-23T22:31:36
CC-MAIN-2020-45
1603107865665.7
[]
docs.bazel.build
This page explains each mask component and its properties. As the building blocks of a mask stack, components generate a texture for the mask stack. To add a component to the stack, click on the component icon and select one. Solid This component fills the mask with a greyscale value. You can use this as a base when starting your mask stack. Map Load a custom Map as a mask. If the image is in color, it will be converted to grayscale. Map Type Custom Image Upload a custom image as a map from your disk Layer Map Select map from a layer’s channel present in the layer stack. Library Asset Select map of a library asset. Noise Noise can help with creating a more natural or random Mix by using noise generators. - Noise Generator: Dropdown to chose between available noise generators. - Seed: Iterate through different versions of the noise with the same values. - Amplitude: Control the intensity of noise waves. - Frequency: Control the amount of noise. - Octaves: Control the number of octaves. A higher value will produce more fine irregularities. - Lacunarity: Control the frequency of the octaves. - Persistence: Control the amplitude of the octaves. Pattern Create a patterned mask in a grid with a range of different shapes. Pattern Type: Choose between Square, Circle, Checker, or Gradient. Placement - Repeat: Specify how many times to repeat the pattern. - Spacing: Control the space between the tiles. - Offset: Lets you offset the tiling vertically or horizontally. Jitter This lets you add random variation to your material. There are three types of jitter available. - Jitter: Toggle to enable a type of jitter. - Brightness Jitter: Randomize the brightness of individual cells. - Gradient Jitter: Generate a random gradient on individual cells. - Size Jitter: Randomize the size of individual cells. - Amount: Control the amount of jitter. - Angle: Control the angle of the gradient jitter. - Threshold: Control the fraction of cells affected by jitter. A higher value means fewer cells are affected by the jitter. - Random Seed: Choose a variation of the same jitter values. Bevel - Bevel: Adjust the size of the bevel on each cell. - Bevel Curve: Control the curve of the bevel effect. Cut Out - Amount: Control the fraction of cells that are not visible. A higher value means more cells will not be visible. - Random Seed: Choose a variation of the cut out effect. Normal This component directionally masks using the Mix’s normals. - Angle: The angle of normals to include in the mask, where 0 is to the right when looking top down. - Tilt: The tilt in the vertical direction of normals to include. - Range: Specify the range of normals to apply to the mask. - You can also choose between using the Underlying Normals or the current Layer Normals for normal values. Curvature This component retrieves edge information from the Mix to accentuate or modify. This can be used to mask on edges to show erosion. Types of Curvature - Edges Only: Apply on edges of the mesh only - Cavities Only: Apply on cavities of the mesh only - Edges & Cavities: Apply on both edges and cavities Targeted Layer - Mesh & Underlying Mix: Apply curvature on the Mesh and the Underlying Mix - Base Normals: Apply curvature on the normals of the base layer. - Current Layer: Apply curvature on the current layer - Mesh Only: Apply curvature on mesh only. Curvature Controls - Tightness: Control the number of edges that are sampled. - Levels: Set a range of edge information to modify. 
- Soft Mesh: Attempt to reduce hard triangulation edges in the mesh curvature. - Anti-aliasing: Minimize the distortion artifacts that appear when representing a high-resolution image at a lower resolution. - Invert: Can be used to retrieve edge information from the crevices instead of the edges. Position Gradient Similar to the normal component, this creates a gradient using displacement and normal directions. In addition to setting them manually, the angle and tilt values can be modified by clicking and dragging using the two smaller circles. - Angle: The angle of normals to include in the mask, where 0 is to the right when looking top down. - Tilt: The tilt in the vertical direction of normals to include. A 0-degree tilt is directly horizontal, resulting in a plain gradient. - Range: Control the range of the gradient’s greyscale values. - You can also choose from the drop down between using the Mesh & Underlying Mix, Mesh and Base Displacement, or the Mesh Only option for displacement information.
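The Octaves, Lacunarity, and Persistence controls of the Noise component described above follow the usual fractal (fBm) noise pattern. The following is a small conceptual sketch in Python of how such a generator typically layers octaves; it only illustrates the parameters, it is not Mixer's actual implementation, and the value_noise helper is an assumed stand-in.

    import math
    import random

    def value_noise(x, seed=0):
        # Smoothly interpolate between pseudo-random lattice values at the integer points around x.
        x0 = math.floor(x)
        t = x - x0
        t = t * t * (3 - 2 * t)  # smoothstep fade
        def lattice(i):
            random.seed(i * 1_000_003 + seed)
            return random.random()
        return lattice(x0) * (1 - t) + lattice(x0 + 1) * t

    def fractal_noise(x, amplitude=1.0, frequency=1.0, octaves=4, lacunarity=2.0, persistence=0.5, seed=0):
        # Each octave is finer (frequency grows by the lacunarity) and fainter (amplitude shrinks by the persistence).
        total, amp, freq = 0.0, amplitude, frequency
        for i in range(octaves):
            total += amp * value_noise(x * freq, seed + i)
            freq *= lacunarity
            amp *= persistence
        return total

    samples = [fractal_noise(i * 0.1, octaves=5, seed=42) for i in range(8)]
    print([round(v, 3) for v in samples])

Raising Octaves adds ever finer irregularities, while lowering Persistence makes those finer octaves fainter, which is the behaviour the sliders above expose.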
http://docs.quixel.com/mixer/1/en/topic/components
2020-10-23T22:19:13
CC-MAIN-2020-45
1603107865665.7
[]
docs.quixel.com
When you open a new project you will see a blank project window with a single plane in the Viewport tab. This is where all your work will happen. To interact with your Mix within the viewport: - Middle Mouse: Scroll to zoom in and out of your Mix. - Alt + Right Mouse: Move vertically to zoom in and out of your Mix. - Alt + Left Mouse: Rotate the camera around your Mix. Render Settings Within the Viewport you have options on the top left to control your scene settings: The dropdown lets you change the render mode of the preview displayed in the viewport. - PBR Metalness: Physically-Based Rendering in Metalness Workflow. - Diffuse/Albedo: Color of the surface. - Metalness: Metallicity of the surface. - PBR Specular: Specular reflections map of the surface. - Gloss: Smoothness of the surface. - Roughness: Roughness of the surface. - Normal: Normals of the surface. - Displacement: Vertical displacement of the surface. - Occlusion: Soft shadows on the surface. - Layer Mask: Mask (visibility) of the selected layer. - Active Mask: Shows the mask of the type selected. - If a layer is selected, it will show the same view as Layer Mask. - If a mask stack is selected, it will show the corresponding mask. - If a mask layer is selected, it will show the resulting mask of that component or modifier. - Material ID: Shows Material ID colors on your mesh. For explanation purposes, the image shown is based on a custom 3D mesh. Camera View and Environments Camera View This dropdown allows you to select any of the given orientations for your camera in the viewport in order to create Mixes efficiently. Environment/HDRI Select any HDRI from the dropdown list to change the scene’s lighting for your Mix with different backgrounds.
http://docs.quixel.com/mixer/1/en/topic/viewport
2020-10-23T21:33:29
CC-MAIN-2020-45
1603107865665.7
[]
docs.quixel.com
Concept of Authoring and Publishing. AEM and the Dispatcher are used to publish this AEM documentation. Author Environment Additionally, there are administrative tasks that help you manage your content: - workflows that control how changes are managed; for example, enforcing a review before publication - projects that coordinate individual tasks AEM is also administered (for a majority of tasks) from the author environment. Publish. Dispatcher To optimize performance for visitors to your website, the dispatcher implements load balancing and caching.
https://docs.adobe.com/content/help/en/experience-manager-64/authoring/essentials/author.html
2020-10-23T23:19:41
CC-MAIN-2020-45
1603107865665.7
[array(['/content/dam/help/experience-manager-64.en/help/sites-authoring/assets/chlimage_1-289.png', None], dtype=object) ]
docs.adobe.com
Warehouse: Stock Tab Use the Stock tab as a report to see the in-stock balance of items stored in the warehouse (items to which the warehouse was linked), amounts reserved for sale or expected from purchase. For newly created warehouses, no item balances will be provided under this tab because there is no inventory data to be processed by the report yet. Balances will be displayed only when relevant inventory transactions take place for a new warehouse. To view the in-stock balance of the items, select a type of balance you want to display, and then click Show. Types of balance: - Stock balances If you select the stock balances type, you will get the following information about all the items to which the warehouse is linked: - Extended stock balances For the extended stock balances type, all the details present in the stock balances report are displayed as well along with the following information: More information
https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427399990
2020-10-23T22:11:00
CC-MAIN-2020-45
1603107865665.7
[]
docs.codejig.com
PeriodType - Type - Abstract Class - Namespace - craft\enums - Inherits - craft\enums\PeriodType - Since - 2.0.0 The PeriodType class is an abstract class that defines the various time period lengths that are available in Craft. This class is a poor man's version of an enum, since PHP does not have support for native enumerations.
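Since the class only exists to group named constants, the pattern it implements looks roughly like the following language-agnostic sketch in Python; the constant names here are invented for illustration and are not taken from Craft's source.

    class PeriodType:
        # A "poor man's enum": a class that only groups named constants,
        # used when the language offers no native enumeration type.
        SECONDS = 'seconds'
        MINUTES = 'minutes'
        HOURS = 'hours'
        DAYS = 'days'

    def label(period):
        # Callers compare against the named constants instead of raw strings.
        return 'every ' + period

    print(label(PeriodType.HOURS))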
https://docs.craftcms.com/api/v3/craft-enums-periodtype.html
2020-10-23T22:22:21
CC-MAIN-2020-45
1603107865665.7
[]
docs.craftcms.com
Diversity and inclusion in Fedora The Fedora Project welcomes and encourages participation by everyone. Our community is based on mutual respect, tolerance, and encouragement, and we work to help each other live up to these principles. Please refer to our Code of Conduct for more information. The Fedora Diversity and Inclusion team (Fedora D&I team) is committed to fostering diversity and inclusion in the Fedora community. You can watch our recent video showcasing the Fedora community here. Goals The goal of this initiative is to help foster diversity and inclusion in the Fedora community. The Fedora D&I team works towards this goal by focusing on efforts including (but not limited to): Creating content and organizing events to spread awareness about diversity within Fedora and outside the Fedora community. Supporting community building and engagement activities to foster inclusion. Coordinating with Fedora SIGs and subprojects to further foster diversity and inclusion in the community. Supporting programs committed to building diversity and inclusion in Free and Open Source Software communities. Current Projects Fedora Women Fedora Women aims to foster involvement of women (cis and trans) and genderqueer people in Fedora and Free and Open Source Software. Events The Fedora D&I team organizes and supports a variety of events which help us achieve our goals. Here is the list of events. Please send us an email if you are interested in helping us organize events in your local community related to promoting diversity and inclusion in open source communities including Fedora. Meetings Get in touch with us Are you an individual contributor who wants to help us shape diversity and inclusion in the Fedora community? Are you a part of a local hackerspace, LUG or STEM community? Are you involved in an open source project beyond Fedora? Did you say YES to any one of these? Diversity efforts require that we work with all parts of Fedora. We feel that everyone can bring in valuable inputs to this conversation. We have diverse and flexible contribution opportunities available for everyone. If you’d like to help out, please introduce yourself on our Diversity & Inclusion mailing list. You can join the conversation via IRC #fedora-diversity on Freenode. We also have a Telegram bridge to IRC here. We are especially looking for other open source projects and local communities to share resources and collaborate together. Please get in touch via the above mentioned ways or open a new ticket here. Feel like you don’t have much time to contribute? Subscribe to our Diversity & Inclusion mailing list to receive updates about our work.
https://docs.fedoraproject.org/ar/diversity-inclusion/
2020-10-23T23:16:33
CC-MAIN-2020-45
1603107865665.7
[]
docs.fedoraproject.org
The Featurebox is the perfect opportunity to turn your visitors into subscribers. It shows up right at your Blog Homepage, above all posts. You have the following Options Enable the Featurebox You find the Featurebox Options within the WordPress Customizer » Optin Form & Sections » Featurebox There you can simply enable or disable the Featurebox with […] Optin Forms & Sections Author Byline The You find the Author Byline Options […] Notificationbar The You find the Notificationbar Options within the […] Inside Content The […] After Content The After Content Optin is an opt-in box that appears under all your posts. You have the following Options Settings for individual posts You can enable or disable the After Content Optin for individual posts. To find this setting scroll to the bottom of your post edit screen of an individual post inside your wordpress dashboard. Note The settings […] Between Posts Optin The If you enable the two step optin, the […] Post Specific Bonuses Post Specific Bonuses are the fastes and easiest way to build your email list. The Growtheme makes it really easy to set up a post specific bonuses. You find the option to upload a bonus to a post on the post edit screen inside your wordpress admin panel. Just click on any post where you […]
https://docs.growtheme.com/category/wordpress-customizer/optin-forms/
2020-10-23T20:51:35
CC-MAIN-2020-45
1603107865665.7
[]
docs.growtheme.com
DebuggerTypeProxy for IronPython Old-style Class and Instance.

    [assembly: DebuggerTypeProxy(
        typeof(IronPython.DebuggingSupport.OldInstanceProxy),
        TargetTypeName = "IronPython.Runtime.Types.OldInstance, IronPython, Version=2.0.0.300, ...")
    ]
    public class OldInstanceProxy
    {
        [DebuggerBrowsable(DebuggerBrowsableState.RootHidden)]
        public object m_oldInstance;

        public OldInstanceProxy(object target)
        {
            try
            {
                OldInstance obj = target as OldInstance;
                TypeBuilder tb = CodeGen.CreateTemporaryType();
                ConstructorBuilder cb = tb.DefineConstructor(MethodAttributes.Public,
                    CallingConventions.Standard, new Type[] { typeof(OldInstance) });
                ILGenerator il = cb.GetILGenerator();
                il.Emit(OpCodes.Ldarg_0);
                il.Emit(OpCodes.Call, typeof(Object).GetConstructor(Type.EmptyTypes));
                foreach (KeyValuePair<object, object> pair in obj.Dictionary)
                {
                    string key = pair.Key.ToString();
                    Type fieldType = pair.Value == null ? typeof(object) : pair.Value.GetType();
                    FieldBuilder fb = tb.DefineField(key, fieldType, FieldAttributes.Public);
                    // emit code for the ctor ...
                }
                il.Emit(OpCodes.Ret);
                m_oldInstance = Activator.CreateInstance(tb.CreateType(), obj);
                // ... (the rest of the constructor, including the catch block that saves any exception, is not shown here)

A few discussions:
- The attached solution also includes another proxy type: OldClassProxy for IronPython.Runtime.Types.OldClass. With that, viewing class attributes feels like viewing C# class static fields. Such a coding pattern can be applied to other similar scenarios.
- It is a bit hard to apply this for IronPython new style instances. For each new style class, IronPython creates a new type on the fly. For example, "class C(object): pass" generates an IronPython.Runtime.NewTypes.Object_1. The proxy type (which will create the "real" proxy type) needs to be created at the same time.
- Types created by Reflection.Emit are not GC-collectible. In my proxy type implementation, a simple caching mechanism is implemented to reuse those "real" proxy types; but considering the Python dynamism, object attributes can be added or removed, and their values' types can be changed, so such a "real" proxy type may have to be created frequently; each click to expand in the debugger window could cause one type creation, so use wisely if you use such a proxy type in real world scenarios.
- Unlike DebuggerVisualizer, where VS provides a way to debug it, it is a bit inconvenient to debug DebuggerTypeProxy code. So I found having the constructor code wrapped inside try-catch and saving the exception if thrown is useful. The exception object can be viewed in the debugger window if the type proxy dll is compiled with the "Debug" configuration. The messages from Debug.WriteLine are helpful too.
- The proxy type implementation depends on the rapidly changing code of the DLR/IronPython project. It is not surprising that the attached solution could be broken in the future.
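For context, the kind of object this proxy targets is an old-style instance under Python 2 / IronPython 2 semantics, i.e. an instance of a class that does not derive from object. A minimal Python example of such an instance (whose attribute dictionary the proxy flattens into fields) might look like this:

    # An old-style class: it does not derive from object, so IronPython 2
    # represents its instances as IronPython.Runtime.Types.OldInstance.
    class Point:
        def __init__(self, x, y):
            self.x = x
            self.y = y

    p = Point(3, 4)
    p.label = "sample"          # attributes can be added at any time
    print(p.__dict__)           # {'x': 3, 'y': 4, 'label': 'sample'} -- the dictionary the proxy turns into fields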
https://docs.microsoft.com/en-us/archive/blogs/haibo_luo/debuggertypeproxy-for-ironpython-old-style-class-and-instance
2020-10-23T22:59:20
CC-MAIN-2020-45
1603107865665.7
[]
docs.microsoft.com
Yes, your data is secure, and is always stored by you. We provide Jet Bridge that makes our architecture secure. Jet Bridge is a free and open-source app that generates an API and proxies the requests to databases and business apps. We don’t collect or host your private data on our side. Jet encrypts all data and credentials that go through our servers using an HTTPS connection. Host Jet Bridge on your servers. You can place it behind your VPN, in your own VPC. We won’t get access to your data, however, you will still receive interfaces updates normally. If your infrastructure doesn't have access to the internet, you can use on-premise on your own servers and block all network connections. You can place it behind your VPN, in your own VPC. Is there such a thing as too much security? Definitely not when it comes to your company’s private data. In Jet, you can additionally secure your workspace by setting up enforced two-factor authentication for each team member. Once the feature is activated, every time someone tries accessing your admin panel, they will be asked to verify the login attempt by typing in a security code sent to either their mobile phone or email. To prevent bad consequences for your business, Jet Admin automatically creates a backup of your interface, so you can always restore it in case of an incident. Simply push the “Recover” button at the top right corner of your screen and select what you would like to backup. Since Jet Admin doesn’t require access to your data, you are free to host your admin’s API under DMZ or VPN network. Once you do that, your admin panel will be separated from your public network, leaving no chance for malicious attacks or remote rooting. This might be on a checklist for some large healthcare and financial companies that can be held liable for clients’ personal information. In most cases though, it is not a necessity. When your company grows to an enterprise, it is especially hard to track how many people end up having access to your data. Not to be dramatic, but there is always a possibility for an insider attack. Thanks to the IP Whitelisting feature, you can create a list of the IP addresses you trust to interact with your Jet admin panel.
https://docs.jetadmin.io/data-privacy-and-security
2020-10-23T22:14:21
CC-MAIN-2020-45
1603107865665.7
[]
docs.jetadmin.io
VPN configuration (macOS device policy) With the VPN configuration you define VPN settings for network connections.
Connection name: The name of the connection shown on the device.
Connection type: The type of the VPN connection: Cisco AnyConnect, Cisco Legacy AnyConnect, IPsec (Cisco), F5, Check Point, Custom SSL/TLS. Different entry fields are shown on the VPN page depending on the connection type you select here.
Identifier (reverse DNS format): The custom identifier in reverse DNS format.
Server: The host name or the IP address of the server.
Account: The user account for the authentication of the connection.
Third-party settings: If your vendor has specified custom connection properties, you can enter them in this field. To enter a property, click Add and then enter Key and Value of the property in the dialog box.
Send all traffic through VPN: All traffic is sent through VPN.
Group: The group that may be required for the authentication of the connection.
User authentication: The type of user authentication for the connection, either Password or Certificate. If you select Password, the Password field is shown below the User authentication field; enter the password for authentication. If you select Certificate, the Certificate field is shown below the User authentication field; select a certificate.
Device authentication: The type of device authentication, either Keys (Shared Secret)/Group name or Certificate. If you select Keys (Shared Secret)/Group name, the fields Group name, Keys (Shared Secret), Use hybrid authentication and Request password are displayed below the Device authentication field; enter the required authentication information in the Group name and Keys (Shared Secret) fields, and select Use hybrid authentication and Request password as required. If you select Certificate, the fields Certificate and Including user PIN are displayed below the Device authentication field; in the Certificate list, select the required certificate, and select Including user PIN to include the user PIN in device authentication.
Proxy: The proxy settings for the connection, either No proxy or Manually. If you select Manually, the Proxy server URL field is displayed; enter the URL of the server with the proxy setting in this field.
Provider type: The VPN connection type. App proxy: Network traffic is sent through a VPN tunnel at the application layer. Packet tunnel: Network traffic is sent through a VPN tunnel at the network layer.
https://docs.sophos.com/central/Mobile/help/en-us/esg/Sophos-Mobile/references/ConfigurationVPNMacOSD.html
2020-10-23T21:43:27
CC-MAIN-2020-45
1603107865665.7
[]
docs.sophos.com
mutes View your mutes. See also accounts/:id/{mute,unmute} getMuted accounts Accounts the user has muted. Returns: Array of Account OAuth: User token + read:mutes or Version history: - 0.0.0 - added Request Headers Authorization required string Bearer <user token> Form Data Parameters limit optional string Maximum number of results to return per page. Defaults to 40. max_id optional string Internal parameter. Use the HTTP Link header for pagination instead. since_id optional string Internal parameter. Use the HTTP Link header for pagination instead. Response 200: Success Sample response with limit=2. The id of mutes is private, so parse the HTTP Link header to find links to next and previous pages. Link: <>; rel="next", <>; rel="prev" [ { "id": "963076", "username": "Simia91", "acct": "Simia91", "display_name": "", "locked": false, "bot": false, "created_at": "2019-11-07T10:31:17.428Z", "note": "<p></p>", "url": "", "avatar": "", "avatar_static": "", "header": "", "header_static": "", "followers_count": 18, "following_count": 73, "statuses_count": 640, "last_status_at": "2019-11-19T15:14:47.088Z", "emojis": [], "fields": [] }, { "id": "1001524", "username": "hakogamae", "acct": "hakogamae", "display_name": "Hakogamae 🔞", "locked": false, "bot": false, "created_at": "2019-11-15T13:01:55.538Z", "note": "<p>This blog is going to be about what I don't know -- what's the diff between good for me and not? </p><p>I always to make reasonable choices, but I've been wrong many times. Maybe I'll get better by simply working at it slowly.</p><p>"If I have the belief that I can do it,<br />I shall surely acquire the capacity to<br />do it even if I may not have it at the<br />beginning." -- Gandhi</p><p>My name -- Hakogamae -- comes from the Japanese Kanji Radical 22 匚部 meaning "box." I'm in a box now.</p><p>At Humblr, I was Fslowly</p>", "url": "", "avatar": "", "avatar_static": "", "header": "", "header_static": "", "followers_count": 23, "following_count": 0, "statuses_count": 137, "last_status_at": "2019-11-21T18:44:25.570Z", "emojis": [], "fields": [ { "name": "Men", "value": "living, alive", "verified_at": null }, { "name": "Carpe diem", "value": "匚部", "verified_at": null }, { "name": "Photographs", "value": "capturing time", "verified_at": null }, { "name": "Feedback", "value": "always helps", "verified_at": null } ] } ] 401: Unauthorized If the Authorization header is not provided or contains an invalid token, the request will fail. { "error": "The access token is invalid" } Last updated January 1, 2020 · Improve this page
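As a rough illustration, the endpoint can be called from Python with the requests library along the following lines; the instance URL and token are placeholders, the path assumes the standard Mastodon GET /api/v1/mutes endpoint, and requests exposes the parsed Link header via response.links for the pagination described above.

    import requests

    INSTANCE = "https://mastodon.example"      # placeholder instance URL
    TOKEN = "YOUR_USER_TOKEN"                  # user token with the read:mutes scope

    def fetch_muted_accounts(limit=2):
        # GET /api/v1/mutes returns the accounts the authenticated user has muted.
        resp = requests.get(
            f"{INSTANCE}/api/v1/mutes",
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"limit": limit},
            timeout=10,
        )
        resp.raise_for_status()
        accounts = resp.json()
        # Pagination: follow the 'next' relation from the HTTP Link header rather than max_id/since_id.
        next_page = resp.links.get("next", {}).get("url")
        return accounts, next_page

    accounts, next_page = fetch_muted_accounts()
    for account in accounts:
        print(account["id"], account["acct"])
    print("next page:", next_page)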
https://docs-hello.2heng.xin/methods/accounts/mutes/
2020-10-23T22:02:53
CC-MAIN-2020-45
1603107865665.7
[]
docs-hello.2heng.xin
9.0.000.18 Billing Data Server Release Notes Helpful Links Releases Info Product Documentation Genesys Products What's New This release contains the following new features and enhancements: - GIM-based voicemail support — BDS now supports tracking of SIP voicemail usage based on data supplied by Genesys Info Mart. The following changes support this feature: - Two new datasets are added: agent_login and agent_group. - The DN dataset is updated. - A new metric, voicemail_boxes_gim, is added. - For more information, see GIM-based voicemail in the Billing Data Server User's Guide. - Security Enhancements: - BDS now supports authentication through Web Services and Applications (GWS). - BDS now performs server-server GWS authorization. - Logging Enhancements: - BDS now generates AUDIT log-records in the Bds.log file, for all manual CRUD operations. - BDS logger now supports a new severity level, AUDIT. AUDIT log-records are written to the main log file ( Bds.log), and also into a new bds-audit.log file. - BDS logger now excludes sensitive data from all logs. - Miscellaneous Enhancements: - A new statistic, peak_time, is now added to the statistics log file for all applicable metrics. - In the tenant configuration, the tenant_name parameter is no longer required, and is removed from the tenant template. - You can now define a blacklist for business unit names. Business units that are added to the blacklist cannot be added or imported; this prevents errors during business unit name detection. For more information about this feature, see the Billing Data Server User's Guide. Resolved Issues This release contains the following resolved issues: On premise deployments, if there is no requirement to break down data based on business unit, BDS now generates GVP metrics (such as GVP Ports, GVP ASR Ports, and GVP TTS Ports) even when Genesys Info Mart datasets are not available. As a result, test datasets (test_voice_interactions) are removed from the GVP ASR minutes / GVP TTS minutes, and GVP Minutes metrics. (CBILL-2255) The transformation process is optimized to more quickly handle GVP data in scenarios where business unit billing is required. (CBILL-2237) During tenant processing in scenarios where business unit billing is used, BDS now creates a statistic record in the statistics log in the file for each business unit, with the following metric value: _2018-09-24 13:15:26,951: metric=gvp_minutes;stat_type=business_units_breakdown;business_unit=WFO1;value=2_ For each business unit, the business_unit field is populated with the unique name assigned to that unit, and: - The business_unit field in the ‘total value’ file is populated with the tier3_name, which is a tenant name. - For tenants who do not use business unit breakdown, the business_unit field is always populated with the tier3_name, which is a tenant name. (CBILL-2096) The extraction process is optimized to more quickly handle the extraction of voice_sessions and mm_sessions datasets. Previously, extraction queries used the LIKE operator, which slowed processing. (CBILL-2067) BDS now works correctly with new versions of Python packages (numpy, pandas, and dateutils) , and the new versions of the Python packages are now included in the BDS container. Previously, BDS was unable to work with new versions of numpy, pandas, and dateutils Python packages, and logged errors similar to the following: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. 
Expected 96, got 88 from pandas._libs (CBILL-2014) Upgrade Notes No special procedure is required to upgrade to release 9.0.000.18.
https://docs.genesys.com/Documentation/RN/latest/bds90rn/bds9000018
2020-10-23T23:19:56
CC-MAIN-2020-45
1603107865665.7
[]
docs.genesys.com
Shortcut tips for Visual Studio You can navigate in Visual Studio more easily by using the shortcuts in this article. These shortcuts include keyboard and mouse shortcuts as well as text you can enter to help accomplish a task more easily. For a complete list of command shortcuts, see Default keyboard shortcuts. Note This topic applies to Visual Studio on Windows. For Visual Studio for Mac, see Common keyboard shortcuts in Visual Studio for Mac.
https://docs.microsoft.com/en-us/visualstudio/ide/productivity-shortcuts?view=vs-2019
2020-10-23T23:15:30
CC-MAIN-2020-45
1603107865665.7
[]
docs.microsoft.com
In general, results shown in dashboards are based upon pivot tables. If you are already familiar with pivot tables from spreadsheets, working with Resultra's dashboards should be straightforward. When creating new dashboard components, you'll be asked how you want to group and summarize the results. Results can be grouped either by a field's values or into time-intervals. Grouped results are then summarized using a summation, average or other calculation.
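As a purely conceptual illustration (Resultra dashboards are configured through its UI, not with code), the grouping-and-summarizing step behaves much like a spreadsheet or pandas pivot table; the field names below are invented for the example.

    import pandas as pd

    # Invented sample records: one row per tracked item.
    records = pd.DataFrame({
        "status":   ["Open", "Open", "Closed", "Closed", "Open"],
        "created":  pd.to_datetime(["2020-01-03", "2020-01-20", "2020-02-02", "2020-02-15", "2020-03-01"]),
        "estimate": [5, 3, 8, 2, 1],
    })

    # Group by a field's values and summarize with a summation and an average.
    by_status = records.pivot_table(index="status", values="estimate", aggfunc=["sum", "mean"])
    print(by_status)

    # Group into time intervals (monthly) and summarize with a count.
    by_month = records.set_index("created").resample("M")["estimate"].count()
    print(by_month)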
https://docs.resultra.org/doku.php?id=dashboards
2020-10-23T21:11:32
CC-MAIN-2020-45
1603107865665.7
[]
docs.resultra.org
Ansible Vault¶ Topics - Ansible Vault - What Can Be Encrypted With Vault - Creating Encrypted Files - Editing Encrypted Files - Rekeying Encrypted Files - Encrypting Unencrypted Files - Decrypting Encrypted Files - Viewing Encrypted Files - Use encrypt_string to create encrypted variables to embed in yaml - Vault Ids and Multiple Vault Passwords - Providing Vault Passwords - Speeding Up Vault Operations - Vault Format - Vault Payload Format 1.1. What Can Be Encrypted With Vault¶). As of version 2.3, Ansible supports encrypting single values inside a YAML file, using the !vault tag to let YAML and Ansible know it uses special processing. This feature is covered in more details below. Creating Encrypted Files¶). Editing Encrypted Files¶ To edit an encrypted file in place, use the ansible-vault edit command. This command will decrypt the file to a temporary file and allow you to edit the file, saving it back when done and removing the temporary file: ansible-vault edit foo.yml Rekeying Encrypted Files¶. Encrypting Unencrypted Files¶ If you have existing files that you wish to encrypt, use the ansible-vault encrypt command. This command can operate on multiple files at once: ansible-vault encrypt foo.yml bar.yml baz.yml Decrypting Encrypted Files¶ Viewing Encrypted Files¶ If you want to view the contents of an encrypted file without editing it, you can use the ansible-vault view command: ansible-vault view foo.yml bar.yml baz.yml Use encrypt_string to create encrypted variables to embed in yaml¶ The ansible-vault encrypt_string command will encrypt and format a provided string into a format that can be included in ansible-playbook YAML files. To encrypt a string provided as a cli arg: ansible-vault encrypt_string --vault-id] --stdin-name 'db_password' --stdin-name 'new_user_password' Output: Reading plaintext input from stdin. (ctrl-d to end input) User enters ‘hunter2’ and hits ctrl-d. Vault Ids and Multiple Vault Passwords¶. Providing Vault Passwords¶. Multiple vault passwords¶:. Note Prior to Ansible 2.4, only one vault password could be used in each Ansible run. The --vault-id option is not support prior to Ansible 2.4. Speeding Up Vault Operations¶ By default, Ansible uses PyCrypto to encrypt and decrypt vault files. If you have many encrypted files, decrypting them at startup may cause a perceptible delay. To speed this up, install the cryptography package: pip install cryptography Vault Format¶ A vault encrypted file is a UTF-8 encoded txt file. The file format includes a newline terminated header. For example: $ANSIBLE_VAULT;1.1;AES256 The header contains the vault format id, the vault format version, and a cipher id, separated’.. Vault Payload Format 1: - a RFC2104 style HMAC -:
https://docs.ansible.com/ansible/2.7/user_guide/vault.html
2020-10-23T21:51:55
CC-MAIN-2020-45
1603107865665.7
[]
docs.ansible.com
You can send photos to clients and others through email. Because original files may be too large to send as attachments, Photo Mechanic offers options to resize and compress photos before they are sent. You can also add a watermark. Go to File > Send Photos via email... (or shortcut Shift-⌘-E) You can also right-click on an individual photo in a contact sheet and choose Send photos via email... You can type in an email address or use the button next to each address field to select from your default address book. Messages will be sent from your default email account, for example, Mail on macOS. You can change this by using the Send email via ______ dropdown. The Subject: and Body: fields can be customized by using variables. To save time, save a snapshot of your default email messages and take advantage of User/Client variables. If you send multiple photos in one email, only the variables from the first image in the series will be used. Create a separate email for each photo: Use this option if you are sending full-size photos, to avoid overloading your or your client's email servers with attachments that are too large. You may still need to resize or compress the photo to send it by email.
https://docs.camerabits.com/support/solutions/articles/48001141707-send-photos-by-email
2020-10-23T22:02:02
CC-MAIN-2020-45
1603107865665.7
[array(['https://s3.amazonaws.com/cdn.freshdesk.com/data/helpdesk/attachments/production/48040513059/original/sHvm0pQ6WpfG5vQ_N4dww4iRY7uFA7lVXA.png?1589842078', None], dtype=object) ]
docs.camerabits.com
ClustrixDB now provides Beta support for the JSON data type, including for SQL, many functions, and indexing. ClustrixDB supports the JSON data type, similar to the support included in MySQL 5.7. ClustrixDB stores JSON in a native JSON format, which allows for easy retrieval. ClustrixDB supports the following JSON functions:: When using JSON, the Query Results Cache (QRC) must be disabled. ClustrixDB does not support: Inserting JSON Querying JSON Due to some of the differences between ClustrixDB and MySQL, if you are replicating to MySQL using SBR, you may encounter errors.
https://docs.clustrix.com/display/CLXDOC/JSON
2020-10-23T21:50:40
CC-MAIN-2020-45
1603107865665.7
[]
docs.clustrix.com
Cached Queries The system automatically maintains a cache of prepared SQL statements (“queries”). This permits the re-execution of an SQL query without repeating the overhead of optimizing the query and developing a Query Plan. A cached query is created when certain SQL statements are prepared. Preparing a query occurs at runtime, not when the routine containing the SQL query code is compiled. Commonly, a prepare is immediately followed by the first execution of the SQL statement, though in Dynamic SQL it is possible to prepare a query without executing it. Subsequent executions ignore the prepare statement and instead access the cached query. To force a new prepare of an existing query it is necessary to purge the cached query. All invocations of SQL create cached queries, whether invoked in an ObjectScript routine or a class method. Dynamic SQL, ODBC, JDBC, and the $SYSTEM.SQL.DDLImport() method create a cached query when the query is prepared. The Management Portal execute SQL interface, the InterSystems SQL Shell, and the %SYSTEM.SQL.Execute() method use Dynamic SQL, and thus use a prepare operation to create cached queries. They are listed in the Management Portal general Cached Queries listing for the namespace (or specified schema), the Management Portal Catalog Details Cached Queries listings for each table being accessed, and the SQL Statements listings. Dynamic SQL follows the cached query naming conventions described in this chapter. Class Queries create a cached query upon prepare (%PrepareClassQuery() method) or first execution (CALL). They are listed in the Management Portal general Cached Queries listing for the namespace. If the class query is defined in a Persistent class, the cached query is also listed in the Catalog Details Cached Queries for that class. It is not listed in the Catalog Details for the table(s) being accessed. It is not listed in the SQL Statements listings. Class queries follow the cached query naming conventions described in this chapter. Embedded SQL creates a cached query upon first execution of the SQL code, or the initiation of code execution by invoking the OPEN command for a declared cursor. Embedded SQL cached queries are listed in the Management Portal SQL Statements listings. They are not listed in the Management Portal Cached Queries listings, nor do the follow the cached query naming conventions described in this chapter. Cached queries of all types are deleted by all purge cached queries operations. SQL query statements that generate a cached query are: SELECT: a SELECT cached query is shown in the Catalog Details for its table. If the query references more than one table, the same cached query is listed for each referenced table. Purging the cached query from any one of these tables purges it from all tables. From the table’s Catalog Details you can select a cached query name to display cached query details, including Execute and Show Plan options. A SELECT cached query created by the $SYSTEM.SQL.Schema.ImportDDL("IRIS") method does not provide Execute and Show Plan options. DECLARE name CURSOR FOR SELECT creates a cached query. However, cached query details do not include Execute and Show Plan options. CALL: creates a cached query shown in the Cached Queries list for its schema. INSERT, UPDATE, INSERT OR UPDATE, DELETE: create a cached query shown in the Catalog Details for its table. TRUNCATE TABLE: creates a cached query shown in the Catalog Details for its table. 
Note that $SYSTEM.SQL.Schema.ImportDDL("IRIS") does not support TRUNCATE TABLE. SET TRANSACTION, START TRANSACTION, %INTRANSACTION, COMMIT, ROLLBACK: create a cached query shown in the Cached Queries list for every schema in the namespace. A cached query is created when you Prepare the query. For this reason, it is important not to put a %Prepare() method in a loop structure. A subsequent %Prepare() of the same query (differing only in specified literal values) uses the existing cached query rather than creating a new cached query. Changing the SetMapSelectability() value for a table invalidates all existing cached queries that reference that table. A subsequent Prepare of an existing query creates a new cached query and removes the old cached query from the listing. A cache query is deleted when you purge cached queries. Modifying a table definition automatically purges any queries that reference that table. Issuing a Prepare or Purge automatically requests an exclusive system-wide lock while the query cache metadata is updated. The System Administrator can modify the timeout value for the cached query lock. The creation of a cached query is not part of a transaction. The creation of a cached query is not journaled. Cached Queries Improve Performance When you first prepare a query, the SQL Engine optimizes it and generates a program (a set of one or more InterSystems IRIS® data platform routines) that will execute the query. The optimized query text is then stored as a cached query class. If you subsequently attempt to execute the same (or a similar) query, the SQL Engine will find the cached query and directly execute the code for the query, bypassing the need to optimize and code generate. Cached queries provide the following benefits: Subsequent execution of frequently used queries is faster. More importantly, this performance boost is available automatically without having to code cumbersome stored procedures. Most relational database products recommend using only stored procedures for database access. This is not necessary with InterSystems IRIS. A single cached query is used for similar queries, queries that differ only in their literal values. For example, SELECT TOP 5 Name FROM Sample.Person WHERE Name %STARTSWITH 'A' and SELECT TOP 1000 Name FROM Sample.Person WHERE Name %STARTSWITH 'Mc' only differ in the literal values for TOP and the %STARTSWITH condition. The cached query prepared for the first query is automatically used for the second query. For other considerations that result in two “identical” queries resulting in separate cached queries, see below. The query cache is shared among all database users; if User 1 prepares a query, then User 1023 can take advantage of it. The Query Optimizer is free to use more time to find the best solution for a given query as this price only has to be paid the first time a query is prepared. InterSystems SQL stores all cached queries in a single location, the IRISLOCALDATA database. However, cached queries are namespace specific. Each cached query is identified with the namespace from which it was prepared (generated). You can only view or execute a cached query from within the namespace in which it was prepared. You can purge cached queries either for the current namespace or for all namespaces. A cached query does not include comments. However, it can include comment options following the query text, such as /*#OPTIONS {"optionName":value} */. 
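Since ODBC and JDBC statements land in the same query cache, this literal-substitution behaviour can be observed from any client that prepares parameterized statements. The following Python sketch uses pyodbc against an assumed ODBC data source; the DSN name and credentials are placeholders, the table is the Sample.Person example used throughout this section, and the point is simply that both executions reuse one cached query because only the parameter value differs.

    import pyodbc

    # Placeholder DSN and credentials for an InterSystems IRIS ODBC data source.
    conn = pyodbc.connect("DSN=IRIS_SAMPLES;UID=_SYSTEM;PWD=secret")
    cursor = conn.cursor()

    # Only the parameter value differs between the two executions, so both match the same
    # cached query; the engine also replaces the TOP literal with "?" in the cached query text.
    sql = "SELECT TOP 5 Name FROM Sample.Person WHERE Name %STARTSWITH ?"
    for prefix in ("A", "Mc"):
        cursor.execute(sql, prefix)
        rows = cursor.fetchall()
        print(prefix, len(rows))

    conn.close()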
Because a cached query uses an existing query plan, it provide continuity of operation for existing queries. Changes to the underlying tables such as adding indices or redefining the table optimization statistics have no effect on an existing cached query. For use of cached queries when changing a table definition, refer to the “SQL Statements and Frozen Plans” chapter in this manual. Creating a Cached Query When InterSystems IRIS Prepares a query it determines: If the query matches a query already in the query cache. If not, it assigns an increment count to the query. If the query prepares successfully. If not, it does not assign the increment count to a cached query name. Otherwise, the increment count is assigned to a cached query name and the query is cached. Cached Query Names The SQL Engine assigns a unique class name to each cached query, with the following format: %sqlcq.namespace.clsnnn Where namespace is the current namespace, in capital letters, and nnn is a sequential integer. For example, %sqlcq.USER.cls16. Cached queries are numbered sequentially on a per-namespace basis, starting with 1. The next available nnn sequential number depends on what numbers have been reserved or released: A number is reserved when you begin to prepare a query if that query does not match an existing cached query. A query matches an existing cached query if they differ only in their literal values — subject to certain additional considerations: suppressed literal substitution, different comment options, or the situations described in “Separate Cached Queries”. A number is reserved but not assigned if the query does not prepare successfully. Only queries that Prepare successfully are cached. A number is reserved and assigned to a cached query if the query prepares successfully. This cached query is listed for every table referred to in the query, regardless of whether any data is accessed from that table. If a query does not refer to any tables, a cached query is created but cannot be listed or purged by table. A number is released when a cached query is purged. This number becomes available as the next nnn sequential number. Purging individual cached queries associated with a table or purging all of the cached queries for a table releases the numbers assigned to those cached queries. Purging all cached queries in the namespace releases all of the numbers assigned to cached queries, including cached queries that do not reference a table, and numbers reserved but not assigned. Purging cached queries resets the nnn integer. Integers are reused, but remaining cached queries are not renumbered. For example, a partial purge of cached queries might leave cls1, cls3, cls4, and cls7. Subsequent cached queries would be numbered cls2, cls5, cls6, and cls8. A CALL statement may result in multiple cached queries. For example, the SQL statement CALL Sample.PersonSets('A','MA') results in the following cached queries: %sqlcq.USER.cls1: CALL Sample . PersonSets ( ? , ? ) %sqlcq.USER.cls2: SELECT name , dob , spouse FROM sample . person WHERE name %STARTSWITH ? ORDER BY 1 %sqlcq.USER.cls3: SELECT name , age , home_city , home_state FROM sample . person WHERE home_state = ? ORDER BY 4 , 1 In Dynamic SQL, after preparing an SQL query (using the %Prepare() or %PrepareClassQuery() instance method) you can return the cached query name using the %Display() instance method or the %GetImplementationDetails() instance method. See Results of a Successful Prepare. 
The cached query name is also a component of the result set OREF returned by the %Execute() instance method of the %SQL.Statement class (and the %CurrentResult property). Both of these methods of determining the cached query name are shown in the following example: SET randtop=$RANDOM(10)+1 SET randage=$RANDOM(40)+1 SET myquery = "SELECT TOP ? Name,Age FROM Sample.Person WHERE Age < ?" SET tStatement = ##class(%SQL.Statement).%New() SET qStatus = tStatement.%Prepare(myquery) IF qStatus'=1 {WRITE "%Prepare failed:" DO $System.Status.DisplayError(qStatus) QUIT} SET x = tStatement.%GetImplementationDetails(.class,.text,.args) IF x=1 { WRITE "cached query name is: ",class,! } SET rset = tStatement.%Execute(randtop,randage) WRITE "result set OREF: ",rset.%CurrentResult,! DO rset.%Display() WRITE !,"A sample of ",randtop," rows, with age < ",randage In this example, the number of rows selected (TOP clause) and the WHERE clause predicate value change with each query invocation, but the cached query name does not change. Separate Cached Queries Differences between two queries that shouldn’t affect query optimization nevertheless generate separate cached queries: Different syntactic forms of the same function generate separate cached queries. Thus ASCII('x') and {fn ASCII('x')} generate separate cached queries, and {fn CURDATE()} and {fn CURDATE} generate separate cached queries. A case-sensitive table alias or column alias value, and the presence or absence of the optional AS keyword generate separate cached queries. Thus ASCII('x'), ASCII('x') AChar, and ASCII('x') AS AChar generate separate cached queries. Using a different ORDER BY clause. Using TOP ALL instead of TOP with an integer value. Literal Substitution When the SQL Engine caches an SQL query, it performs literal substitution. The query in the query cache represents each literal with a “?” character, representing an input parameter. This means that queries that differ only in their literal values are represented by a single cached query. For example, the two queries: SELECT TOP 11 Name FROM Sample.Person WHERE Name %STARTSWITH 'A' SELECT TOP 5 Name FROM Sample.Person WHERE Name %STARTSWITH 'Mc' Are both represented by a single cached query: SELECT TOP ? Name FROM Sample.Person WHERE Name %STARTSWITH ? This minimizes the size of the query cache, and means that query optimization does not need to be performed on queries that differ only in their literal values. Literal values supplied using input host variables (for example, :myvar) and ? input parameters are also represented in the corresponding cached query with a “?” character. Therefore, the queries SELECT Name FROM t1 WHERE Name='Adam', SELECT Name FROM t1 WHERE Name=?, and SELECT Name FROM t1 WHERE Name=:namevar are all matching queries and generate a single cached query. You can use the %GetImplementationDetails() method to determine which of these entities is represented by each “?” character for a specific prepare. The following considerations apply to literal substitution: Plus and minus signs specified as part of a literal generate separate cached queries. Thus ABS(7), ABS(-7), and ABS(+7) each generate a separate cached query. Multiple signs also generate separate cached queries: ABS(+?) and ABS(++?). For this reason, it is preferable to use an unsigned variable ABS(?) or ABS(:num), for which signed or unsigned numbers can be supplied without generating a separate cached query. Precision and scale values usually do not take literal substitution. 
Thus ROUND(567.89,2) is cached as ROUND(?,2). However, the optional precision value in CURRENT_TIME(n), CURRENT_TIMESTAMP(n), GETDATE(n), and GETUTCDATE(n) does take literal substitution. A boolean flag does not take literal substitution. Thus ROUND(567.89,2,0) is cached as ROUND(?,2,0) and ROUND(567.89,2,1) is cached as ROUND(?,2,1). A literal used in an IS NULL or IS NOT NULL condition does not take literal substitution. Any literal used in an ORDER BY clause does not take literal substitution. This is because ORDER BY can use an integer to specify a column position. Changing this integer would result in a fundamentally different query. An alphabetic literal must be enclosed in single quotes. Some functions permit you to specify an alphabetic format code with or without quotes; only a quoted alphabetic format code takes literal substitution. Thus DATENAME(MONTH,64701) and DATENAME('MONTH',64701) are functionally identical, but the corresponding cached queries are DATENAME(MONTH,?) and DATENAME(?,?). Functions that take a variable number of arguments generate separate cached queries for each argument count. Thus COALESCE(1,2) and COALESCE(1,2,3) generate separate cached queries. DynamicSQLTypeList Comment Option When matching queries, a comment option is treated as part of the query text. Therefore, a query that differs from an existing cached query in its comment options does not match the existing cached query. A comment option may be user-specified as part of the query, or generated and inserted by the SQL preprocessor before preparing the query. If an SQL query contains literal values, the SQL preprocessor generates a DynamicSQLTypeList comment option, which it appends to the end of the cached query text. This comment option assigns a data type to each literal. Data types are listed in the order that the literals appear in the query. Only actual literals are listed, not input host variables or ? input parameters. The following is a typical example: SELECT TOP 2 Name,Age FROM Sample.MyTest WHERE Name %STARTSWITH 'B' AND Age > 21.5 generates the cached query text: SELECT TOP ? Name , Age FROM Sample . MyTest WHERE Name %STARTSWITH ? AND Age > ? /*#OPTIONS {"DynamicSQLTypeList":"10,1,11"} */ In this example, the literal 2 is listed as type 10 (integer), the literal “B” is listed as type 1 (string), and the literal 21.5 is listed as type 11 (numeric). Note that the data type assignment is based solely on the literal value itself, not the data type of the associated field. For instance, in the above example Age is defined as data type integer, but the literal value 21.5 is listed as numeric. Because InterSystems IRIS converts numbers to canonical form, a literal value of 21.0 would be listed as integer, not numeric. DynamicSQLTypeList returns the following data type values: Because the DynamicSQLTypeList comment option is part of the query text, changing a literal so that it results in a different data type results in creating a separate cached query. For example, increasing or decreasing the length of a literal string so that it falls into a different range. Literal Substitution and Performance The SQL Engine performs literal substitution for each value of an IN predicate. A large number of IN predicate values can have a negative effect on cached query performance. A variable number of IN predicate values can result in multiple cached queries. 
Converting an IN predicate to an %INLIST predicate results in a predicate with only one literal substitution, regardless of the number of listed values. %INLIST also provides an order-of-magnitude SIZE argument, which SQL uses to optimize performance. Suppressing Literal Substitution This literal substitution can be suppressed. There are circumstances where you may wish to optimize on a literal value, and create a separate cached query for queries with that literal value. To suppress literal substitution, enclose the literal value in double parentheses. This is shown in the following example: SELECT TOP 11 Name FROM Sample.Person WHERE Name %STARTSWITH (('A')) Specifying a different %STARTSWITH value would generate a separate cached query. Note that suppression of literal substitution is specified separately for each literal. In the above example, specifying a different TOP value would not generate a separate cached query. To suppress literal substitution of a signed number, specify syntax such as ABS(-((7))). Different numbers of enclosing parentheses may also suppress literal substitution in some circumstances. InterSystems recommends always using double parentheses as the clearest and most consistent syntax for this purpose. Cosharding Comment Option If an SQL query specifies multiple sharded tables, the SQL preprocessor generates a Cosharding comment option, which it appends to the end of the cached query text. This Cosharding option shows whether or not the specified tables are cosharded. In the following example, all three specified tables are cosharded: /*#OPTIONS {"Cosharding":[["T1","T2","T3"]]} */ In the following example, none of the three specified tables are cosharded: /*#OPTIONS {"Cosharding":[["T1"],["T2"],["T3"]]} */ In the following example, table T1 is not cosharded, but tables T2 and T3 are cosharded: /*#OPTIONS {"Cosharding":[["T1"],["T2","T3"]]} */ Runtime Plan Choice Runtime Plan Choice (RTPC) is a configuration option that allows the SQL optimizer to take advantage of outlier value information at run time (query execution time). Runtime Plan Choice is a system-wide SQL configuration option. When RTPC is activated, preparing the query includes detecting whether the query contains a condition on a field that has an outlier value. If the prepare detects one or more outlier field conditions, the query is not sent to the optimizer. Instead, SQL generates a Runtime Plan Choice stub. At execution time, the optimizer uses this stub to choose which query plan to execute: a standard query plan that ignores outlier status, or an alternative query plan that optimizes for outlier status. If there are multiple outlier value conditions, the optimizer can choose from multiple alternative run time query plans. When the query is prepared, SQL determines if it contains outlier field conditions. If so, it defers choosing a query plan until the query is executed. At prepare time it creates a standard SQL Statement and (for Dynamic SQL) a corresponding cached query, but defers the choice of whether to use this query plan or to create a different query plan until the query is executed. At prepare time, it creates what appear to be a standard SQL Statement, such as the following: DECLARE QRS CURSOR FOR SELECT Top ? Name,HaveContactInfo FROM Sample.MyTest WHERE HaveContactInfo=?, representing literal substitution variables with question marks. 
However, if you view the SQL Statement details, the Query Plan contains the statement “execution may cause creation of a different plan” At prepare time, a Dynamic SQL query also creates what appears to be a standard cached query; however, the cached query Show Plan option displays the Query Text with the SELECT %NORUNTIME keyword, indicating that this is a query plan that does not use RTPC. When the query is executed (OPEN in Embedded SQL), SQL creates a second SQL Statement and a corresponding cached query. The SQL Statement has a hash generated name and generates a RTPC stub, such as the following: DECLARE C CURSOR FOR %NORUNTIME SELECT Top :%CallArgs(1) Name,HaveContactInfo FROM Sample.MyTest WHERE HaveContactInfo=:%CallArgs(2). The optimizer then uses this to generate a corresponding cached query. If the optimizer determines that outlier information provides no performance advantage, it creates a cached query identical to the cached query created at prepare time, and executes this cached query. However if the optimizer determines that using outlier information provides a performance advantage, it creates a cached query that suppresses literal substitution of outlier fields in the cached query. For example, if the HaveContactInfo field is an outlier field (the vast majority of records have the value ‘Yes’), the query SELECT Name,HaveContactInfo FROM t1 WHERE HaveContactInfo=? would result in the cached query: SELECT Name,HaveContactInfo FROM t1 WHERE HaveContactInfo=(('Yes')). Note that RTPC query plan display differs based on the source of the SQL code: The Management Portal SQL interface Show Plan button may display an alternative run time query plan because this Show Plan takes its SQL code from the SQL interface text box. The SQL Statement, when selected, displays the Statement Details which includes the Query Plan. This Query Plan does not display an alternative run time query plan, but instead contains the text “execution may cause creation of a different plan” because it takes its SQL code from the statement index. If RTPC is not activated, or the query does not contain appropriate outlier field conditions, the optimizer creates a standard SQL Statement and a corresponding cached query. If an RTPC stub is frozen, all associated alternative run time query plans are also frozen. RTPC processing remains active for a frozen query even when the RTPC configuration option is turned off. You can manually suppress literal substitution when writing the query by specifying parentheses: SELECT Name,HaveContactInfo FROM t1 WHERE HaveContactInfo=(('Yes')). If you suppress literal substitution of the outlier field in a condition, RTPC is not applied to the query. The optimizer creates a standard cached query. Activating RTPC You can configure RTPC system-wide using either the Management Portal or a class method. Note that changing the RTPC configuration setting purges all cached queries. Using the Management Portal, configure the system-wide Optimize queries based on parameter values SQL setting. This option sets an appropriate combination of Runtime Plan Choice (RTPC) optimization and Bias Queries as Outlier (BQO) optimization.(). You can activate RTPC for all processes system-wide using the $SYSTEM.SQL.Util.SetOption() method, as follows: SET status=$SYSTEM.SQL.Util.SetOption("RTPC",flag,.oldval). The flag argument is a boolean used to set (1) or unset (0) RTPC. The oldvalue argument returns the prior RTPC setting as a boolean value. 
Application of RTPC The system applies RTPC to SELECT and CALL statements. It does not apply RTPC to INSERT, UPDATE, or DELETE statements. The system applies RTPC to any field that Tune Table has determined to have an outlier value, when that field is specified in the following query contexts. The outlier field is specified in a condition where it is compared to a literal. This comparison condition can be: A WHERE clause condition using an equality (=), non-equality (!=), IN, or %INLIST predicate. An ON clause join condition with an equality (=), non-equality (!=), IN, or %INLIST predicate. If RTPC is applied, the optimizer determines at run time whether to apply the standard query plan or an alternative query plan. RTPC is not applied if the query contains unresolved ? input parameters. RTPC is not applied if the query specifies the literal value surrounded by double parentheses, suppressing literal substitution. RTPC is not applied if the literal is supplied to the outlier field condition by a subquery. However, RTPC is applied if there is an outlier field condition within a subquery. Overriding RTPC You can override RTPC for a specific. Cached Query Result Set When you execute a cached query it creates a result set. A cached query result set is an Object instance. This means that the values you specify for literal substitution input parameters are stored as object properties. These object properties are referred to using i%PropName syntax. Listing Cached Queries You can count and list existing cached queries in the current namespace: Displaying cached queries using the InterSystems IRIS Management Portal Listing cached queries using the ^rINDEXSQL global Exporting cached queries to a file using the ExportSQL^%qarDDLExport utility Counting Cached Queries You can determine the current number of cached queries for a table by invoking the GetCachedQueryTableCount() method of the %Library.SQLCatalog class. This is shown in the following example: SET tbl="Sample.Person" SET num=##class(%Library.SQLCatalog).GetCachedQueryTableCount(tbl) IF num=0 {WRITE "There are no cached queries for ",tbl } ELSE {WRITE tbl," is associated with ",num," cached queries" } Note that a query that references more than one table creates a single cached query. However, each of these tables counts this cached query separately. Therefore, the number of cached queries counted by table may be larger than the number of actual cached queries. Displaying Cached Queries You can view (and manage) the contents of the query cache using the InterSystems IRIS Management Portal. From System Explorer, select SQL. Select a namespace with the Switch option at the top of the page; this displays the list of available namespaces. On the left side of the screen open the Cached Queries folder. Selecting one of these cached queries displays the details. The Query Type can be one of the following values: %SQL.Statement Dynamic SQL: a Dynamic SQL query using %SQL.Statement. ODBC/JDBC Statement: a dynamic query from either ODBC or JDBC. Embedded SQL cached queries are not listed in this display. When you successfully prepare an SQL statement, the system generates a new class that implements the statement. If you have set the Retain cached query source system-wide configuration option, the source code for this generated class is retained and can be opened for inspection using Studio. To do this, go to the InterSystems IRIS Management Portal. From System Administration, select Configuration, then SQL and Object Settings, then SQL. 
On this screen you can set the Retain cached query source option. If this option is not set (the default), the system generates and deploys the class and does not save the source code.

You can also set this system-wide option using the $SYSTEM.SQL.Util.SetOption() method, as follows: SET status=$SYSTEM.SQL.Util.SetOption("CachedQuerySaveSource",flag,.oldval). The flag argument is a boolean used to retain (1) or not retain (0) query source code after a cached query is compiled; the default is 0. To determine the current setting, call $SYSTEM.SQL.CurrentSettings().

Listing Cached Queries Using ^rINDEXSQL

You can use the ^rINDEXSQL global to list all of the cached queries and all of the SQL Statements for the current namespace: ZWRITE ^rINDEXSQL("sqlidx",2)

A typical global in this list looks like this: ^rINDEXSQL("sqlidx",2,"%sqlcq.USER.cls4.1","oRuYrsuQDz72Q6dBJHa8QtWT/rQ=")="". The third subscript is the location. For example, "%sqlcq.USER.cls4.1" is a cached query in the USER namespace; "Sample.MyTable.1" is an SQL Statement. The fourth subscript is the Statement hash.

Exporting Cached Queries to a File

The following utility lists all of the cached queries for the current namespace to a text file.

ExportSQL^%qarDDLExport(file,fileOpenParam,eos,cachedQueries,classQueries,classMethods,routines,display)

The following is an example of invoking this cached queries export utility:

DO ExportSQL^%qarDDLExport("C:\temp\test\qcache.txt","WNS","GO",1,1,1,1,1)

When executed from the Terminal command line with display=1, export progress is displayed on the terminal screen, as in the following example:

Export SQL Text for Cached Query: %sqlcq.USER.cls14.. Done
Export SQL Text for Cached Query: %sqlcq.USER.cls16.. Done
Export SQL Text for Cached Query: %sqlcq.USER.cls17.. Done
Export SQL Text for Cached Query: %sqlcq.USER.cls18.. Done
Export SQL Text for Cached Query: %sqlcq.USER.cls19.. Done
Export SQL statement for Class Query: Cinema.Film.TopCategory... Done
Export SQL statement for Class Query: Cinema.Film.TopFilms... Done
Export SQL statement for Class Query: Cinema.FilmCategory.CategoryName...Done
Export SQL statement for Class Query: Cinema.Show.ShowTimes... Done
Export SQL statement for Class Query: Cinema.TicketItem.ShowItem... Done
Export SQL statement from Class Method: Aviation.EventCube.Fact.%BuildAllFacts...Done
Export SQL statement from Class Method: Aviation.EventCube.Fact.%BuildTempFile...Done
Export SQL statement from Class Method: Aviation.EventCube.Fact.%Count...Done
Export SQL statement from Class Method: Aviation.EventCube.Fact.%DeleteFact...Done
Export SQL statement from Class Method: Aviation.EventCube.Fact.%ProcessFact...Done
Export SQL statement from Class Method: Aviation.EventCube.Fact.%UpdateFacts...Done
Export SQL statement from Class Method: Aviation.EventCube.Star1032357136.%Count...Done
Export SQL statement from Class Method: Aviation.EventCube.Star1032357136.%GetDimensionProperty...Done
Export SQL statement from Class Method: Aviation.EventCube.Star1035531339.%Count...Done
Export SQL statement from Class Method: Aviation.EventCube.Star1035531339.%GetDimensionProperty...Done
20 SQL statements exported to script file C:\temp\test\qcache.txt

The created export file contains entries such as the following:

-- SQL statement from Cached Query %sqlcq.USER.cls30
SELECT TOP ? Name , Home_State , Age , AVG ( Age ) AS AvgAge FROM Sample .
Person ORDER BY Home_State
GO

-- SQL statement from Class Query Cinema.Film.TopCategory
#import Cinema
SELECT TOP 3 ID, Description, Length, Rating, Title, Category->CategoryName FROM Film WHERE (PlayingNow = 1) AND (Category = :P1) ORDER BY TicketsSold DESC
GO

-- SQL statement(s) from Class Method Aviation.EventCube.Fact.%Count
#import Aviation.EventCube
SELECT COUNT(*) INTO :tCount FROM Aviation_EventCube.Fact
GO

This cached queries listing can be used as input to the Query Optimization Plans utility.

Executing Cached Queries

From Dynamic SQL: A %SQL.Statement Prepare operation (%Prepare(), %PrepareClassQuery(), or %ExecDirect()) creates a cached query. A Dynamic SQL %Execute() method using the same instance executes the most recently prepared cached query.

From the Terminal: You can directly execute a cached query using the ExecuteCachedQuery() method of the $SYSTEM.SQL class. This method allows you to specify input parameter values and to limit the number of rows to output. You can execute a Dynamic SQL %SQL.Statement cached query or an xDBC cached query from the Terminal command line. This method is primarily useful for testing an existing cached query on a limited subset of the data.

From the Management Portal SQL Interface: Follow the “Displaying Cached Queries” instructions above. From the selected cached query’s Catalog Details tab, click the Execute link.

Cached Query Lock

Issuing a Prepare or Purge statement automatically requests an exclusive system-wide lock while the cached query metadata is updated. SQL supports the system-wide CachedQueryLockTimeout option of the $SYSTEM.SQL.Util.SetOption() method. This option governs the lock timeout when attempting to acquire a lock on cached query metadata. The default is 120 seconds. This is significantly longer than the standard SQL lock timeout, which defaults to 10 seconds. A System Administrator may need to modify this cached query lock timeout on systems with large numbers of concurrent Prepare and Purge operations, especially on a system which performs bulk purges involving a large number (several thousand) of cached queries.

The SET status=$SYSTEM.SQL.Util.SetOption("CachedQueryLockTimeout",seconds,.oldval) method sets the timeout value system-wide:

SetCQTimeout
  SET status=$SYSTEM.SQL.Util.SetOption("CachedQueryLockTimeout",150,.oldval)
  WRITE oldval," initial value cached query seconds",!!
SetCQTimeoutAgain
  SET status=$SYSTEM.SQL.Util.SetOption("CachedQueryLockTimeout",180,.oldval2)
  WRITE oldval2," prior value cached query seconds",!!
ResetCQTimeoutToDefault
  SET status=$SYSTEM.SQL.Util.SetOption("CachedQueryLockTimeout",oldval,.oldval3)

CachedQueryLockTimeout sets the cached query lock timeout for all new processes system-wide. It does not change the cached query lock timeout for existing processes.

Purging Cached Queries

Whenever you modify (alter or delete) a table definition, any queries based on that table are automatically purged from the query cache on the local system. If you recompile a persistent class, any queries that use that class are automatically purged from the query cache on the local system.

You can explicitly purge cached queries via the Management Portal using one of the Purge Cached Queries options. You can purge cached queries using the SQL Shell PURGE command. You can use the $SYSTEM.SQL.Purge(n) method to explicitly purge cached queries that have not been recently used. Specifying n number of days purges all cached queries in the current namespace that have not been used (prepared) within the last n days.
Specifying an n value of 0 or "" purges all cached queries in the current namespace. For example, if you issue a $SYSTEM.SQL.Purge(30) method on May 11, 2018, it will purge only the cached queries that were last prepared before April 11, 2018. A cached query that was last prepared exactly 30 days ago (April 11, in this example) would not be purged.

You can also purge cached queries using the following methods:

$SYSTEM.SQL.PurgeCQClass() purges one or more cached queries by name in the current namespace. You can specify cached query names as a comma-separated list. Cached query names are case-sensitive; the namespace name must be specified in all-capital letters. The specified cached query name or list of cached query names must be enclosed with quotation marks.

$SYSTEM.SQL.PurgeForTable() purges all cached queries in the current namespace that reference the specified table. The schema and table name are not case-sensitive.

$SYSTEM.SQL.PurgeAllNamespaces() purges all cached queries in all namespaces on the current system. Note that when you delete a namespace, its associated cached queries are not purged. Executing PurgeAllNamespaces() checks if there are any cached queries associated with namespaces that no longer exist; if so, these cached queries are purged.

To purge all cached queries in the current namespace, use the Management Portal Purge ALL queries for this namespace option.

Purging a cached query also purges related query performance statistics. Purging a cached query also purges related SQL Statement list entries. SQL Statements listed in the Management Portal may not be immediately purged; you may have to press the Clean stale button to purge these entries from the SQL Statements list.

When you change the system-wide default schema name, the system automatically purges all cached queries in all namespaces on the system.

Remote Systems

Purging a cached query on a local system does not purge copies of that cached query on mirror systems. Copies of a purged cached query on a remote system must be manually purged.

When a persistent class is modified and recompiled, the local cached queries based on that class are automatically purged. InterSystems IRIS does not automatically purge copies of those cached queries on remote systems. This could mean that some cached queries on a remote system are “stale” (no longer valid). However, when a remote system attempts to use a cached query, the remote system checks whether any of the persistent classes that the query references have been recompiled. If a persistent class on the local system has been recompiled, the remote system automatically purges and recreates the stale cached query before attempting to use it.

SQL Commands That Are Not Cached

The following non-query SQL commands are not cached; they are purged immediately after use:

Data Definition Language (DDL): CREATE TABLE, ALTER TABLE, DROP TABLE, CREATE VIEW, ALTER VIEW, DROP VIEW, CREATE INDEX, DROP INDEX, CREATE FUNCTION, CREATE METHOD, CREATE PROCEDURE, CREATE QUERY, DROP FUNCTION, DROP METHOD, DROP PROCEDURE, DROP QUERY, CREATE TRIGGER, DROP TRIGGER, CREATE DATABASE, USE DATABASE, DROP DATABASE

User, Role, and Privilege: CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, GRANT, REVOKE, %CHECKPRIV

Locking: LOCK TABLE, UNLOCK TABLE

Miscellaneous: SAVEPOINT, SET OPTION

Note that if you issue one of these SQL commands from the Management Portal Execute Query interface, the Performance information includes text such as the following: Cached Query: %sqlcq.USER.cls16.
This appears to indicate that a cached query name was assigned. However, this cached query name is not a link. No cached query was created, and the incremental cached query number .cls16 was not set aside; InterSystems SQL assigns this cached query number to the next SQL command issued.
https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GSQLOPT_cachedqueries
2020-10-23T22:12:19
CC-MAIN-2020-45
1603107865665.7
[]
docs.intersystems.com
Big Picture

The documentation applies to: v0.8.0

Design Goal

LET Portal (LP) has been designed around one important question: how can a small team develop and deliver a change to colleagues quickly? To be honest, it took us a couple of months to answer this question. We wanted a system that can generate an application form, a data list, and a dashboard in a few minutes, without having to worry too much about software aspects such as High Performance, High Availability, etc. just to meet a basic requirement. We simply care about our colleagues' expectations: how they can use it, and how much time it saves them.

LET Portal Architecture

According to the architecture diagram above, LET Portal has six components and one 3rd-party component. These are:

- SPA Web: Angular 8
- Gateway: Ocelot - .NET Core 3.1
- Identity API: .NET Core 3.1, uses the .NET Identity library to provide JWT
- Portal API: .NET Core 3.1
- Service Management: .NET Core 3.1, a built-in service management for all LET Portal APIs
- Chat API: .NET Core 3.1 and SignalR for Web Socket
- Proxy Server: Nginx

Advantages and Disadvantages

In our opinion, there is no perfect architecture, and no single architecture suits most software, including trend technologies such as micro-services, Saga, etc. So we list some common advantages and disadvantages based on the High Level Design above. We hope they guide you in the right direction when you plan to deploy LET Portal.

Advantages

- Separation of concerns: we can replace four of the components, namely:
  - Proxy server: you can choose HAProxy instead of Nginx
  - Gateway Ocelot: you can choose Kong or Zookeeper instead of Ocelot
  - Identity API: you can choose Identity Server 4 instead of the built-in Microsoft Identity
  - Service Management: you can choose Consul or Eureka instead of the built-in service
- Supports scalability with fewer code changes -> reduces deployment preparation
- Easy for most developers to understand -> reduces training time
- Deployable on one VM or multiple VMs -> cost-efficient
- Ready to apply micro-services when needed -> adaptive to the trend

Disadvantages

- Many components are built-in, so their features can't compare with dedicated alternatives
- Security is acceptable, but not suitable for high-security requirements
- Availability is acceptable, but we need more time to deploy High Availability

Components

SPA Website

The front-end site of LET Portal. It is a Single Page Application which connects to two web services (Identity API, Portal API) and the Gateway (Ocelot).

Source code location: src/web-portal
Technology: Angular 8.2.1

Gateway Ocelot

Following the micro-services trend, we use the Ocelot gateway (an open-source project) to perform four functions by default in LET Portal:

- URL-based routing
- Adding a TraceId to the request header
- Authentication
- Offloading SSL (in case of HTTPS)

In the current version, the Gateway Ocelot only covers the Portal API and one full-path API of Service Management. We will discuss this in more detail later.

Source code location: src/web-apis/LetPortal.Gateway
Technologies: .NET Core 3.1, Ocelot 14.0.5

Identity API

This service provides the authentication/authorization mechanism between the SPA Web and Gateway Ocelot. Its main functions are:

- Login with JWT
- Register / Forgot Password
- Get roles and claims
- Track user activities

Source code location: src/web-apis/LetPortal.IdentityApis
Technologies: .NET Core 3.1, .NET Identity

Portal API

This service provides all data for the LET Portal SPA Web; without it, the portal can't work. This service has many functions, so we will discuss it in more detail later.
Source code location: src/web-apis/LetPortal.PortalApis
Technologies: .NET Core 3.1

Chat API

This service provides message exchange and a bridge for video calls. It helps construct chat rooms and caches messages to improve performance.

Source code location: src/web-apis/LetPortal.ChatApis
Technologies: .NET Core 3.1, SignalR

Service Management API

This service is built in for the micro-services architecture. It provides some basic Service Management functions, as in theory:

- Service Configuration
- Service Monitor
- Service Logging

These APIs use the HTTP protocol; gRPC support will come later.

Source code location: src/web-apis/LetPortal.ServiceManagementApis
Technologies: .NET Core 3.1

Proxy Server - 3rd party

This proxy server acts mainly as network routing and a WAF. We chose Nginx for this role; however, you can choose another proxy such as HAProxy. The main reason to use a proxy server is that the Kestrel server doesn't provide enough web server functionality. If you plan to run on Windows OS, you can replace Nginx with IIS.
https://docs.letportal.app/overview/big-picture/
2020-10-23T20:50:56
CC-MAIN-2020-45
1603107865665.7
[array(['../../assets/images/Software-Architecture.png', 'LET Portal Architecture'], dtype=object) ]
docs.letportal.app
Building Apps with the new Power BI APIs

Last month, Microsoft unveiled the new and improved Power BI, a cloud-based business analytics service for non-technical business users. The new Power BI is available for preview in the US. It has amazing new (HTML5) visuals, data sources, mobile applications, and developer APIs. This post will focus on the new Power BI APIs and how to use them to create and load data into Power BI datasets in the cloud. Microsoft is also working with strategic partners to add native data connectors to the Power BI service. If you have a great connector idea, you can submit it HERE. However, ANYONE can build applications that leverage the new APIs to send data into Power BI, so let’s get started!

Yammer Analytics Revisited

I’ve done a ton of research and development on using Power BI with Yammer data. In fact, last year I built a custom cloud service that exported Yammer data and loaded it into workbooks (with pre-built models). The process was wildly popular, but required several manual steps that were prone to user error. As such, I decided to use the Yammer use case for my Power BI API sample. Regardless of whether you are interested in Yammer data, you will find generic functions for interacting with Power BI.

Why are Power BI APIs significant?

Regardless of how easy Microsoft makes data modeling, end-users (the audience for Power BI) don’t care about modeling and would rather just answer questions with the data. Power BI APIs can automate modeling/loading and give end-users immediate access to answers. Secondly, some data sources might be proprietary, highly normalized, or overly complex to model. Again, Power BI APIs can solve this through automation. Finally, some data sources might have unique constraints that make them hard to query using normal connectors. For example, Yammer has REST end-points to query data. However, these end-points have unique rate limits that cause exceptions with normal OData connectors. Throttling is just one example of a unique constraint that can be addressed by owning the data export/query process in a 3rd party application that uses the Power BI APIs.

Common Consent Vision

My exploration of the Power BI APIs really emphasized Microsoft’s commitment to Azure AD and "Common Consent" applications. Common Consent refers to the ability of an application leveraging Azure AD to authenticate ONCE and get access to multiple Microsoft services such as SharePoint Online, Exchange Online, CRM Online, and (now) Power BI. All a developer needs to do is request appropriate permissions and (silently) get service-specific access tokens to communicate with the different services. Azure AD will light up with more services in the future, but I’m really excited to see how far Microsoft has come in one year and the types of applications they are enabling.

Power BI API Permissions

Power BI APIs use Azure Active Directory and OAuth 2.0 to authenticate users and authorize 3rd party applications. An application leveraging the Power BI APIs must first be registered as an Azure AD Application with permissions to Power BI. Currently, Azure AD supports three delegated permissions to Power BI from 3rd party applications: "View content properties", "Create content", and "Add data to a user’s dataset". "Delegated Permissions" means that the API calls are made on behalf of an authenticated user…not an elevated account as would be the case with "Application Permissions" ("Application Permissions" could be added in the future).
The permissions for an Azure AD App can be configured in the Azure Management Portal as seen below.

Access Tokens and API Calls

With an Azure AD App configured with Power BI permissions, the application can request resource-specific access tokens to Power BI (using the resource ID ""). The method below shows an asynchronous call to get a Power BI access token in a web project.

getAccessToken for Power BI APIs

The Power BI APIs offer REST endpoints to interact with datasets in Power BI. In order to call the REST end-points, a Power BI access token must be placed as a Bearer token in the Authorization header of all API calls. This can be accomplished server-side or client-side. In fact, the Power BI team has an API Explorer to see how most API calls can be performed in just about any language. I decided to wrap my API calls behind a Web API Controller as seen below. Take note of the Bearer token set in the Authorization header of each HttpClient call.

Web API Controller

Power BI Model Class

Here are a few examples of calling these Web API methods client-side.

Client-side Calls to Web API

My application can create new datasets in Power BI or update existing datasets. For existing datasets, it can append-to or purge old rows before loading. Once the processing is complete, the dataset can be explored immediately in Power BI.

Conclusion

The new Power BI is a game-changer for business analytics. The Power BI APIs offer amazing opportunities for ISVs/Developers. They can enable completely new data-driven scenarios and help take the modeling burden off the end-user. You can download the completed solution outlined in this post below (please note you will need to generate your own application IDs for Azure AD and Yammer).
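The code samples referenced above (the getAccessToken helper, the Web API controller, and the client-side calls) were embedded in the original post and are not reproduced here. As a rough, hedged sketch of the same server-side flow in Python rather than C#, the snippet below acquires an Azure AD token and pushes rows into a Power BI push dataset over the REST API. The tenant, client ID, secret, dataset ID, and table name are placeholders, the token call uses the client-credentials flow for brevity (the post itself used delegated permissions on behalf of a signed-in user), and the endpoints reflect the current public Power BI REST API rather than the beta endpoints the post was written against.

```python
# Hypothetical sketch: push rows into a Power BI dataset via the REST API.
# All IDs/secrets below are placeholders; verify endpoints for your tenant setup.
import requests

TENANT_ID = "<your-tenant-id>"
CLIENT_ID = "<your-app-client-id>"
CLIENT_SECRET = "<your-app-secret>"
PBI_SCOPE = "https://analysis.windows.net/powerbi/api/.default"
PBI_API = "https://api.powerbi.com/v1.0/myorg"

def get_access_token() -> str:
    """Acquire an Azure AD access token for the Power BI resource."""
    url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"
    resp = requests.post(url, data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": PBI_SCOPE,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def add_rows(token: str, dataset_id: str, table: str, rows: list) -> None:
    """POST rows into an existing push-dataset table; token goes in the Bearer header."""
    url = f"{PBI_API}/datasets/{dataset_id}/tables/{table}/rows"
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {token}"},
        json={"rows": rows},
    )
    resp.raise_for_status()

if __name__ == "__main__":
    token = get_access_token()
    add_rows(token, "<dataset-id>", "Messages", [
        {"Sender": "someone@example.com", "Body": "Hello Yammer", "Likes": 3},
    ])
```

The key point mirrors the post: every REST call carries the access token as a Bearer token in the Authorization header, regardless of which language or framework wraps the calls.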
https://docs.microsoft.com/en-us/archive/blogs/richard_dizeregas_blog/building-apps-with-the-new-power-bi-apis
2020-10-23T22:46:05
CC-MAIN-2020-45
1603107865665.7
[]
docs.microsoft.com
SecureNative Documentation

SecureNative provides a security monitoring and protection platform that defends modern applications from the OWASP TOP 10 security threats. Our platform offers multiple security modules to help you better handle the security of your application.

SecureNative monitors and protects your application at run-time through dynamic instrumentation of business logic and user behavior, and helps you handle the most common security threats, such as:

- Bot protection
- 3rd party package vulnerabilities
- SQL/NoSQL injections
- XSS attacks
- Massive security scans
- Rise of HTTP errors (40X, 50X)
- Anomalous usage
- Content scraping
- Adaptive authentication, preventing ATO (Account Takeover)

Unified Security Monitoring and Protection Platform

- Automated Protection - protect applications from common vulnerabilities
- Playbook Automation - create customizable flows that let you protect your application business logic

How SecureNative Works?

We use a micro-agent that is installed in your application as a simple dependency. The agent inspects HTTP requests to your application and analyzes the users' behavior while blocking malicious activity. Learn more about how SecureNative works!

Automated Protection Security Modules

Bot Protection

Protects against non-human behavior (aka bad bots); these bots are used for service disruption, data stealing, and fraudulent activities.

Account Takeover

Prevent account breaches and stop bad actors from gaining access once they have maliciously acquired authentic login credentials.

Shared Account Protection

Allows you to monitor user sessions and prevent users from sharing their accounts/subscriptions, which harms your business by causing lost profit. Without code changes you can get notifications and shared-session blocking.

User Monitoring

SecureNative detects when you or one of your users has their account accessed from somewhere uncommon or by a device they don't normally use. It gives your users the confidence that you're taking security seriously and doing everything you can to protect their accounts, whether the access is the result of a bot account takeover, phishing, or compromised passwords. Now you can take action to protect the account from those threats.

PII / PHI Data Leak

SecureNative allows you to discover and control all sensitive data, like customer PII, across your application APIs. This way you gain visibility into possible sensitive data leaks and get real-time protection.

Playbook Automation

Using the SecureNative platform you can connect multiple security components together using Security Flows and customize them to handle your security use-cases.

Getting Started

Once you've signed up for an account, getting started with SecureNative takes a few minutes. You'll use one of our Agents or SDK libraries to interact and integrate our security platform with your application or servers. You can then set up security workflows and take actions to alert you and/or the customer when unusual activity is detected on their account.

Agents and SDKs

We have native libraries for Node.js, Java, Ruby, PHP, and .NET. The API is a simple way to get started in whatever language you prefer. We also have example code snippets in other languages too, like Node, Python, Go, or straight-up command line with cURL!
https://docs.securenative.com/docs/intro/
2020-10-23T21:18:20
CC-MAIN-2020-45
1603107865665.7
[]
docs.securenative.com
Feature: #90298 - Improve user info in BE User module

See Issue #90298

Description

The Backend users module has been improved by showing more details of TYPO3 Administrators and Editors:

- All assigned groups, including subgroups, are now evaluated
- All data which can be set in the backend user or an assigned group is now shown, including:
  - Allowed page types
  - Read & write access to tables
- A new “detail view” for a TYPO3 Backend user has been added
https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/10.3/Feature-90298-ImproveUserInfoInBeuserModule.html
2020-10-23T22:24:39
CC-MAIN-2020-45
1603107865665.7
[]
docs.typo3.org
The Amazon Chime SDK identity, meetings, and messaging APIs are now published on the new Amazon Chime SDK API Reference. For more information, see the Amazon Chime SDK API Reference.

ListChannels

Lists all Channels created under a single Chime App as a paginated list. You can specify filters to narrow results.

Functionality & restrictions

Use privacy = PUBLIC to retrieve all public channels in the account.

Only an AppInstanceAdmin can set privacy = PRIVATE to list the private channels in an account.

The x-amz-chime-bearer request header is mandatory. Use the AppInstanceUserArn of the user that makes the API call as the value in the header.

Request Syntax

GET /channels?app-instance-arn=AppInstanceArn&max-results=MaxResults&next-token=NextToken&privacy=Privacy HTTP/1.1
x-amz-chime-bearer: ChimeBearer

URI Request Parameters

- AppInstanceArn: The ARN of the AppInstance.

- MaxResults: The maximum number of channels that you want to return. Valid Range: Minimum value of 1. Maximum value of 50.

- NextToken: The token passed by previous API calls until all requested channels are returned. Length Constraints: Minimum length of 0. Maximum length of 2048. Pattern:

- Privacy: The privacy setting. PUBLIC retrieves all the public channels. PRIVATE retrieves private channels. Only an AppInstanceAdmin can retrieve private channels. Valid Values: PUBLIC | PRIVATE

Request Body

The request does not have a request body.

Response Syntax

HTTP/1.1 200
Content-type: application/json

{
   "Channels": [
      {
         "ChannelArn": "string",
         "LastMessageTimestamp": number,
         "Metadata": "string",
         "Mode": "string",
         "Name": "string",
         "Privacy": "string"
      }
   ],
   "NextToken": "string"
}

Response Elements

If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service.

- Channels: The information about each channel. Type: Array of ChannelSummary objects

- NextToken: The token returned from previous API requests until the number of channels is reached.
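As a hedged illustration, the sketch below shows how this operation might be called from Python with boto3's Chime SDK messaging client instead of hand-signing the raw HTTP request. The ARNs are placeholders, and the client name and parameter names should be verified against the boto3 documentation for your SDK version; note that boto3 handles SigV4 signing and maps ChimeBearer to the x-amz-chime-bearer header described above.

```python
# Hypothetical sketch: list the public channels of an AppInstance via boto3.
# ARNs are placeholders; verify client/parameter names for your boto3 version.
import boto3

chime = boto3.client("chime-sdk-messaging", region_name="us-east-1")

app_instance_arn = "arn:aws:chime:us-east-1:111122223333:app-instance/EXAMPLE"
caller_arn = app_instance_arn + "/user/alice"  # AppInstanceUserArn making the call

channels = []
next_token = None
while True:
    kwargs = {
        "AppInstanceArn": app_instance_arn,
        "Privacy": "PUBLIC",            # PRIVATE requires an AppInstanceAdmin caller
        "MaxResults": 50,
        "ChimeBearer": caller_arn,      # becomes the x-amz-chime-bearer header
    }
    if next_token:
        kwargs["NextToken"] = next_token
    resp = chime.list_channels(**kwargs)
    channels.extend(resp.get("Channels", []))
    next_token = resp.get("NextToken")  # keep paging until no token is returned
    if not next_token:
        break

for ch in channels:
    print(ch["Name"], ch["ChannelArn"])
```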
https://docs.aws.amazon.com/chime/latest/APIReference/API_ListChannels.html
2022-05-16T12:52:39
CC-MAIN-2022-21
1652662510117.12
[]
docs.aws.amazon.com