Dataset columns: content (string, 0–557k chars), url (string, 16–1.78k chars), timestamp (timestamp[ms]), dump (string, 9–15 chars), segment (string, 13–17 chars), image_urls (string, 2–55.5k chars), netloc (string, 7–77 chars).
Your profile picture in Kiite is automatically populated. At the moment there is no way to manually assign a profile picture. If your workspace is integrated with Slack, your Slack profile picture will be used as your Kiite profile picture. Alternatively, if your workspace is not currently integrated with Slack, whatever you set up as your Gravatar avatar will automatically be used as your Kiite profile picture.
http://docs.kiite.ai/en/articles/3412135-how-do-i-change-my-profile-picture
2020-10-19T22:10:41
CC-MAIN-2020-45
1603107866404.1
[]
docs.kiite.ai
Install the Splunk Add-on for Oracle Database - Get the Splunk Add-on for Oracle Database and install it onto the Splunk platform.
https://docs.splunk.com/Documentation/AddOns/released/Oracle/Distributeddeployment
2020-10-19T22:05:54
CC-MAIN-2020-45
1603107866404.1
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Use the list to exclude objects from the Virtual Analyzer Suspicious Object List based on the file SHA-1, IP address, URL, or domain.
Find specific objects in the list: Use the drop-down list or search field to filter objects.
Add exceptions: Click Add to specify the exception details. You can also add exceptions from existing items on the Objects tab.
Import exceptions: Click Import to locate and upload a CSV file containing properly formatted objects. If you are importing a CSV list for the first time, click the Download sample CSV link and follow the instructions in the file to populate it with your objects.
Delete exceptions: Select exceptions and click Remove.
Export exceptions: Click Export to save the Virtual Analyzer Suspicious Object Exception List in a CSV file.
https://docs.trendmicro.com/en-us/smb/worry-free-business-security-services-67-server-help/policy-management/suspicious-object-li_001/virtual-analyzer-sus/virtual-analyzer-sus_001.aspx
2020-10-19T22:11:51
CC-MAIN-2020-45
1603107866404.1
[]
docs.trendmicro.com
How do I activate the Category Mega Menu? You just need to add a "category" item to the menu (in the Appearance menu) and you will have an "Enable category posts submenu" option that you can check for each menu item. This can only be done for the Main Menu.
https://docs.xplodedthemes.com/article/18-how-do-i-activate-the-category-mega-menu
2020-10-19T21:23:19
CC-MAIN-2020-45
1603107866404.1
[]
docs.xplodedthemes.com
Event Monitoring provides a more generic approach to protecting against unauthorized software and malware attacks. It monitors system areas for certain events, allowing administrators to regulate programs that trigger such events. Use Event Monitoring if you have specific system protection requirements that are above and beyond what is provided by Malware Behavior Blocking. The following table provides a list of monitored system events. When Event Monitoring detects a monitored system event, it performs the action configured for the event. The following table lists possible actions that administrators can take on monitored system events.
https://docs.trendmicro.com/en-us/smb/worry-free-business-security-services-67-server-help/policy-management/configuring-windows-/wfbssvc-configuring-/event-monitoring-wfb.aspx
2020-10-19T21:28:24
CC-MAIN-2020-45
1603107866404.1
[]
docs.trendmicro.com
Snappy Ubuntu Core is our recommended operating system. The App Store for drones and robots is a marketplace of apps and behaviors powered by Snappy Ubuntu Core, where people are encouraged to put their algorithms up for sale. Furthermore, we’ve put special effort into providing ROS support, which means that you can create applications out of your favorite robotics framework. If you wish to use a Snappy Ubuntu Core distribution on Erle-Brain, download the image we have uploaded. The image includes official ROS Indigo support with daemons launched at init, and a mavros APM bridge to create telemetry bridges for wifi, USB and 433/915 MHz traditional telemetry radios. It also includes a trusty chroot for development purposes. In order to connect via ssh, use user: ubuntu, password: ubuntu, e.g. ssh [email protected] when using the miniUSB connection. This image includes the following elements: the ardupilot folder, the APM folder, and a catkin_ws folder for developing ROS packages, with an example (ros_erle_takeoff_land). If you wish to learn about Snappy Ubuntu Core and the App Store, click on the link.
http://docs.erlerobotics.com/brains/discontinued/erle_brain/software/operating_system/snappy
2018-03-17T06:11:31
CC-MAIN-2018-13
1521257644701.7
[]
docs.erlerobotics.com
glGetAttachedShaders — return the handles of the shader objects attached to a program object.
program: Specifies the program object to be queried.
maxCount: Specifies the size of the array for storing the returned object names.
count: Returns the number of names actually returned in shaders.
shaders: Specifies an array that is used to return the names of attached shader objects.
If the number of returned names is not required (for example, because it was previously obtained by calling glGetProgramiv), a value of NULL may be passed for count. If no shader objects are attached to program, a value of 0 will be returned in count. The actual number of attached shaders can be obtained by calling glGetProgramiv with the value GL_ATTACHED_SHADERS. GL_INVALID_VALUE is generated if program is not a value generated by OpenGL. GL_INVALID_OPERATION is generated if program is not a program object. GL_INVALID_VALUE is generated if maxCount is less than 0. See also glGetProgramiv.
http://docs.gl/es2/glGetAttachedShaders
2018-03-17T06:17:07
CC-MAIN-2018-13
1521257644701.7
[]
docs.gl
Worker
Worker processes tasks from a queue.
Usage
$ kuyruk --app <path.to.kuyruk.instance> --queue <queue_name>
If queue_name is not given, the default queue name (“kuyruk”) is used. Example:
$ kuyruk --app tasks.kuyruk --queue download_file
OS Signals
Description of how worker processes react to OS signals.
- SIGINT: Worker exits after completing the current task. This is the signal sent when you press CTRL-C on your keyboard.
- SIGTERM: Worker exits after completing the current task.
- SIGQUIT: Worker quits immediately. This is an unclean shutdown. If the worker is running a task, it will be requeued by RabbitMQ. This is the signal sent when you press CTRL-\ on your keyboard.
- SIGUSR1: Prints a stacktrace. Useful for debugging stuck tasks or seeing what the worker is doing.
- SIGUSR2: Discards the current task and proceeds to the next one. The discarded task will not be requeued by RabbitMQ.
- SIGHUP: Used internally to fail the task when the connection to RabbitMQ is lost during the execution of a long-running task. Do not use it.
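The --app flag points at a Kuyruk application object inside an importable Python module. As a rough, hedged sketch only (not code from this page), a tasks.py module matching the example command above might look like the following; the queue keyword argument on the task decorator and the download_file task itself are assumptions made for illustration.

```python
# tasks.py -- hypothetical module matching "kuyruk --app tasks.kuyruk --queue download_file"
from kuyruk import Kuyruk

kuyruk = Kuyruk()  # the object referenced by --app tasks.kuyruk

# Assumption: the task decorator accepts a queue name; otherwise the worker's
# --queue flag alone determines which queue is consumed.
@kuyruk.task(queue="download_file")
def download_file(url):
    print("downloading", url)

# Callers enqueue work simply by calling the task; a worker started with the
# command above then picks it up from RabbitMQ:
#   download_file("http://example.com/file.zip")
```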
http://kuyruk.readthedocs.io/en/latest/worker.html
2018-03-17T06:32:17
CC-MAIN-2018-13
1521257644701.7
[]
kuyruk.readthedocs.io
Glossary Archetypes are encapsulated boilerplates for centralizing your project configurations, workflows, and dependencies. An archetype is an npm module template, which is a “superclass” of a module (think inheritance for npm modules), but not one that is used to generate code files and then discarded. Caching is a process of storing data locally in order to speed up subsequent retrievals. Child component is any component that is contained in a parent component. Container based technology. Hashing is the transformation of a string of characters into a usually shorter fixed-length value or key that represents the original string. Linting is the process of running a program that will analyze code for potential errors. Local scope is a CSS Modules feature that keeps classes local to the specified file, and does not pollute the global namespace. Markup is a notation used to annotate a document's content to give information regarding the structure of the text or instructions for how it is to be displayed. Metadata extractor is a type of tooling that retrieves metadata information from various packages. Module tree is a directory-tree-like structure of all the package dependencies of a particular npm/node module. Multi instance is a type of architecture where multiple customers run their own separate instance of an application and operating system running on a separate virtual machine, all on a common hardware platform. Platform agnostic describes software that runs on any combination of operating system and underlying processor architecture. Predictable state container is an object that stores the state of the entire app where the only way to change the state tree is to emit an action. Also known as Redux. Profiling is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Promise is an object used for asynchronous computations. It represents a value that may be available now, in the future, or never. React Data Id is a custom attribute used so that React can uniquely identify its components within the Document Object Model (DOM). Rendering Engine is a program that renders marked up content. Route Handler is a method or function that is executed when a certain route is requested. It usually handles the request and returns the necessary HTML to the client. Routing is the process of selecting the best paths in a network. Scaffolding tool is a tool used to generate a set of files, folders and configurations that follow the most common best practices to start a new project or component. Server Side Rendering is a process where the initial request loads the page, layout, CSS, JavaScript and content. For subsequent updates to the page, the client-side rendering approach repeats the steps it used to get the initial content. Stub is a piece of code used to stand in for some other programming functionality. Transform means to change in composition or structure. Transpile is a type of compilation process that takes the source code of a program written in one programming language as its input and produces the equivalent source code in another programming language.
https://docs.electrode.io/resources/glossary.html
2018-03-17T06:20:42
CC-MAIN-2018-13
1521257644701.7
[]
docs.electrode.io
Applies to: Advanced Threat Analytics version 1.8 Configure Port Mirroring Note This article is relevant only if you deploy ATA Gateways instead of ATA Lightweight Gateways. To determine if you need to use ATA Gateways, see Choosing the right gateways for your deployment. The main data source used by ATA is deep packet inspection of the network traffic to and from your domain controllers. For ATA to see the network traffic, you must either configure port mirroring, or use a Network TAP. For port mirroring, configure port mirroring for each domain controller to be monitored, as the source of the network traffic. Typically, you need to work with the networking or virtualization team to configure port mirroring. For more information, see your vendor's documentation. Your domain controllers and ATA Gateways can be either physical or virtual. The following are common methods for port mirroring and some considerations. For more information, see your switch or virtualization server product documentation. Your switch manufacturer might use different terminology. Switched Port Analyzer (SPAN) – Copies network traffic from one or more switch ports to another switch port on the same switch. Both the ATA Gateway and domain controllers must be connected to the same physical switch. Remote Switch Port Analyzer (RSPAN) – Allows you to monitor network traffic from source ports distributed over multiple physical switches. RSPAN copies the source traffic into a special RSPAN configured VLAN. This VLAN needs to be trunked to the other switches involved. RSPAN works at Layer 2. Encapsulated Remote Switch Port Analyzer (ERSPAN) – Is a Cisco proprietary technology working at Layer 3. ERSPAN allows you to monitor traffic across switches without the need for VLAN trunks. ERSPAN uses generic routing encapsulation (GRE) to copy monitored network traffic. ATA currently cannot directly receive ERSPAN traffic; the traffic must first be decapsulated and then forwarded to the ATA Gateway using either SPAN or RSPAN. Note If the domain controller being port mirrored is connected over a WAN link, make sure the WAN link can handle the additional load of the ERSPAN traffic. ATA only supports traffic monitoring when the traffic reaches the NIC and the domain controller in the same manner. ATA does not support traffic monitoring when the traffic is broken out to different ports. Supported port mirroring options: * ERSPAN is only supported when decapsulation is performed before the traffic is analyzed by ATA. Note Make sure that domain controllers and the ATA Gateways to which they connect have time synchronized to within five minutes of each other. If you are working with virtualization clusters: - For each domain controller running on the virtualization cluster in a virtual machine with the ATA Gateway, configure affinity between the domain controller and the ATA Gateway. This way, when the domain controller moves to another host in the cluster, the ATA Gateway follows it. This works well when there are a few domain controllers. Note: If your environment supports virtual-to-virtual port mirroring on different hosts (RSPAN), you do not need to worry about affinity. - To make sure the ATA Gateways are properly sized to handle monitoring all of the DCs by themselves, try this option: install a virtual machine on each virtualization host and install an ATA Gateway on each host. Configure each ATA Gateway to monitor all of the domain controllers that run on the cluster. This way, any host the domain controllers run on is monitored.
After configuring port mirroring, validate that port mirroring is working before installing the ATA Gateway.
https://docs.microsoft.com/en-us/advanced-threat-analytics/configure-port-mirroring
2018-03-17T06:40:46
CC-MAIN-2018-13
1521257644701.7
[]
docs.microsoft.com
Connector Reference This document provides reference information about creating connectors. Prerequisites In order to develop connectors (and Mule extensions, in general) using Mule DevKit, the following are required:
- Java JDK 1.6+
- Maven 3.x
The @Connector Annotation The class-level @Connector annotation indicates that a Java class needs to be processed by DevKit’s Annotation Processor and must be considered a connector. The @Connector annotation defines the following annotation type element declarations: Mule Version Verification At runtime, DevKit compares the minMuleVersion annotation parameter value against the version of Mule where it is deployed. This check is done during the Initialisation phase. Where the version of the Mule instance is older than minMuleVersion, the extension fails to initialize and DevKit logs a proper error message. The goal of this verification is to prevent possible runtime errors when the extension is executed in an older Mule version than the one for which the extension was originally developed. Restrictions The following restrictions apply to types annotated with @Connector:
- cannot be applied to interfaces
- cannot be applied to final classes
- cannot be applied to parametrized classes
- cannot be applied to non-public classes
- must contain exactly one method annotated with @Connect
- must contain exactly one method annotated with @Disconnect
https://docs.mulesoft.com/anypoint-connector-devkit/v/3.3/connector-reference
2017-02-19T16:32:20
CC-MAIN-2017-09
1487501170186.50
[]
docs.mulesoft.com
Asynchronous Availability Collector If you read BZ 536173, you can get a sense of the problem that needs to be solved. Typically, availability checks are very fast (sub-second). However, the plugin container puts a time limit on how long it will wait for a plugin's resource component to return availability status from calls to AvailabilityFacet#getAvailability(). This time limit is typically on the order of several seconds (5s at the time of this writing). The purpose of this time limit is to avoid having a rogue or misbehaving plugin cause delays in availability reporting for the rest of the resources being managed within the system. The Asynchronous Availability Collector provides an implementation to help resource components that can't guarantee how fast their availability checks will be. Some managed resources simply can't respond to availability checks fast enough. In this case, this class will provide an asynchronous method that will collect availability without a timeout being involved (in other words, availability will be retrieved by waiting as long as it takes). In order to tell the plugin container what the managed resource's current availability is, the Asynchronous Availability Collector will provide a fast method to return the last known availability of the resource. In other words, it will be able to return the last known availability that was retrieved by the asynchronous task - this retrieval of the last known availability will be very fast. The class that plugin developers need to know about is AvailabilityCollectorRunnable. You integrate this class into your ResourceComponent implementation in order to be able to perform availability checks asynchronously. This is how you do it:
https://docs.jboss.org/author/display/RHQ/Design-Asynchronous+Availability+Collector
2017-02-19T16:53:38
CC-MAIN-2017-09
1487501170186.50
[]
docs.jboss.org
How to Create a Portfolio Page Learn how to create Portfolio Posts, assign them to Categories and create a Portfolio Page with Filtering. Learn how to create a Portfolio Page using this video tutorial. Install Plugin Ensure the Portfolio Plugin is installed – to install the Portfolio plugin, go to WordPress Admin > Appearance > Install Plugins > Locate Portfolio Post Type > Click Install. Adding Content Next, we need to add content to our Portfolio Post. We’ll use the Visual Composer to do this. You can use the many Elements to add content to the post – at this stage we have a few choices – see diagram for reference. Use Demo Portfolio If you would like to use the format used on the Classic Demo and have not installed the Demo Content (recommended), you can use the following steps to paste the HTML from the demo into the post. Once you have pasted the HTML into the Editor field, follow these steps: Load a Template If you wish to load a template you’ve created or one of the default templates, create a new Post and follow these steps:
http://docs.acoda.com/you/tag/filtering/
2017-02-19T16:36:15
CC-MAIN-2017-09
1487501170186.50
[]
docs.acoda.com
That is a hard question to answer as GeoTools is a general purpose geospatial library. Here is a sample of some of the great features in the library today: GeoTools supports additional formats through the use of plug-ins. You can control the formats supported by your application by only including the plug-ins you require. Perhaps one of the unsupported modules or plugins may have what you need. These modules are supplied by the community and do not yet meet the quality expected by the library. There are also some “unsupported” formats that are either popular or under development. The current authoritative list of plugins is of course the source code. To build against a snapshot release, add the snapshot repository to your project: <repository> <id>opengeo</id> <name>OpenGeo Maven Repository</name> <url></url> <snapshots> <enabled>true</enabled> </snapshots> </repository> You can now build your project against a snapshot release by setting it as your version property, as shown here: <properties> <geotools.version>8-SNAPSHOT</geotools.version> </properties> You can clarify any questions you have by sending them to the user mailing list.
http://docs.geotools.org/stable/userguide/welcome/faq.html
2017-02-19T16:39:28
CC-MAIN-2017-09
1487501170186.50
[]
docs.geotools.org
Using AWS Lambda with Amazon Kinesis You can create an Amazon Kinesis stream to continuously capture and store terabytes of data per hour from hundreds of thousands of sources such as website click streams, financial transactions, social media feeds, IT logs, and location-tracking events. For more information, see Amazon Kinesis. You can subscribe Lambda functions to automatically read batches of records off your Amazon Kinesis stream and process them if records are detected on the stream. AWS Lambda then polls the stream periodically (multiple times per second) for new records. Note the following about how the Amazon Kinesis and AWS Lambda integration works: Stream-based model – This is a model (see Event Source Mapping), where AWS Lambda polls the stream and, when it detects new records, invokes your Lambda function by passing the new records as a parameter. In a stream-based model, you maintain event source mapping in AWS Lambda. The event source mapping describes which stream maps to which Lambda function. AWS Lambda provides an API (CreateEventSourceMapping) that you can use to create the mapping. You can also use the AWS Lambda console to create event source mappings. Synchronous invocation – AWS Lambda invokes a Lambda function using the RequestResponseinvocation type (synchronous invocation) by polling the Kinesis Stream. For more information about invocation types, see Invocation Types. Event structure – The event your Lambda function receives is a collection of records AWS Lambda reads from your stream. When you configure event source mapping, the batch size you specify is the maximum number of records that you want your Lambda function to receive per invocation. Regardless of what invokes a Lambda function, AWS Lambda always executes a Lambda function on your behalf. If your Lambda function needs to access any AWS resources, you need to grant the relevant permissions to access those resources. You also need to grant AWS Lambda permissions to poll your Amazon Kinesis stream. You grant all of these permissions to an IAM role (execution role) that AWS Lambda can assume to poll the stream and execute the Lambda function on your behalf. You create this role first and then enable it at the time you create the Lambda function. For more information, see Manage Permissions: Using an IAM Role (Execution Role). The following diagram illustrates the application flow: Custom app writes records to the stream. AWS Lambda polls the stream and, when it detects new records in the stream, invokes your Lambda function. AWS Lambda executes the Lambda function by assuming the execution role you specified at the time you created the Lambda function. For a tutorial that walks you through an example setup, see Tutorial: Using AWS Lambda with Amazon Kinesis.
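As a hedged illustration of the event structure described above (not code from this page), a Python Lambda handler that walks a batch of Kinesis records might look like the sketch below; Kinesis payloads arrive base64-encoded under each record's kinesis.data field, and the JSON decoding step is an assumption about what the producer writes.

```python
# Sketch of a Lambda handler for a Kinesis event source mapping.
# Each record's payload arrives base64-encoded under record["kinesis"]["data"].
import base64
import json

def lambda_handler(event, context):
    records = event.get("Records", [])
    for record in records:
        payload = base64.b64decode(record["kinesis"]["data"])
        # Assumption for this sketch: producers write JSON; fall back to raw bytes otherwise.
        try:
            data = json.loads(payload)
        except ValueError:
            data = payload
        print("sequence:", record["kinesis"]["sequenceNumber"], "data:", data)
    # Returning normally signals that the whole batch was processed successfully.
    return {"records_processed": len(records)}
```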
http://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html
2016-09-25T03:42:22
CC-MAIN-2016-40
1474738659833.43
[array(['images/kinesis-pull-10.png', None], dtype=object)]
docs.aws.amazon.com
If you want to keep "rice.edu" in your email address after you graduate: - Sign up for your personal alumni account with the Association of Rice Alumni at alumni.rice.edu. (If you already have an alumni account, please proceed to the next step.) - Log into your alumni account at alumni.rice.edu. - To access the sign-up form, go to the “Connect” menu at the top of the homepage and click on “Online Services.” Click on “Email Forwarding.” Here’s a direct link to the form. - Complete the simple form (only two steps) and press submit. Why? Student Rice NetID accounts are set to expire several months after graduation, on September 1. After the accounts expire, no forwarding occurs; mail sent to an expired Rice.edu address is returned to the sender. Setting up and using a Rice Alumni email address prior to graduation will help prevent missed messages after graduation. JGS Exchange Users The Jones Graduate School of Business provides an Exchange server for JGS email; please contact the JGS Help Desk for assistance saving your Rice email.
https://docs.rice.edu/confluence/display/ITDIY/Alumni+email+accounts;jsessionid=AFCD95B4A5BBA4E103972BC10F82D2AE
2016-09-25T03:40:52
CC-MAIN-2016-40
1474738659833.43
[]
docs.rice.edu
Establish a baseline by measuring performance at various times and under different load conditions. As you monitor Amazon RDS, you should consider storing historical monitoring data. This stored data will give you a baseline to compare against current performance data, identify normal performance patterns and performance anomalies, and devise methods to address issues. For example, with Amazon RDS, you can monitor network throughput, I/O for read, write, and/or metadata operations, client connections, and burst credit balances for your DB instances. When performance falls outside your established baseline, you might need to change the instance class of your DB instance or the number of DB instances and Read Replicas that are available for clients in order to optimize your database availability for your workload. The performance a DB instance can sustain will vary based on your instance class and the complexity of the operations being performed. Determine an acceptable number of database connections for your workload, and make sure your typical working set will fit into memory to minimize read and write operations. Monitoring Tools AWS provides various tools that you can use to monitor Amazon RDS. You can configure some of these tools to do the monitoring for you, while some of the tools require manual intervention. We recommend that you automate monitoring tasks as much as possible. Automated Monitoring Tools You can use the following automated monitoring tools to watch Amazon RDS and report when something is wrong. Amazon RDS Enhanced Monitoring provides metrics in real time for the operating system that your DB instance or DB cluster runs on. For more information, see Enhanced Monitoring. Amazon CloudWatch Events – Match events and route them to one or more target functions or streams to make changes, capture state information, and take corrective action. For more information, see Using Events in the Amazon CloudWatch Developer Guide. For information on using AWS CloudTrail Log Monitoring with Amazon RDS, see Logging Amazon RDS API Calls Using AWS CloudTrail. Amazon RDS Events – Subscribe to Amazon RDS events to be notified when changes occur with a DB instance, DB cluster, DB snapshot, DB cluster snapshot, DB parameter group, or DB security group. For more information, see Using Amazon RDS Event Notification. Database log files – View, download, or watch database log files using the Amazon RDS console or Amazon RDS APIs. You can also query some database log files that are loaded into database tables. For more information, see Amazon RDS Database Log Files. Manual Monitoring Tools Another important part of monitoring Amazon RDS involves manually monitoring those items that the CloudWatch alarms don't cover. The Amazon RDS, CloudWatch, and other AWS console dashboards provide an at-a-glance view of the state of your AWS environment. We recommend that you also check the log files on your DB instance. From the Amazon RDS console, here are some of the items that you can monitor for your resources:
- The number of connections to a DB instance
- The amount of read and write operations to a DB instance
- The amount of storage that a DB instance is currently utilizing
- The amount of memory and CPU being utilized for a DB instance
- The amount of network traffic to and from a DB instance
Monitoring with Amazon CloudWatch You can monitor DB instances using CloudWatch, which collects and processes raw data from Amazon RDS into readable, near real-time metrics. These statistics are recorded for a period of two weeks, so that you can access historical information and gain a better perspective on how your web application or service is performing. By default, Amazon RDS metric data is automatically sent to Amazon CloudWatch in 1-minute periods. For more information about Amazon CloudWatch, see What Are Amazon CloudWatch, Amazon CloudWatch Events, and Amazon CloudWatch Logs? in the Amazon CloudWatch Developer Guide. To view metrics using the CloudWatch console: From the navigation bar, select the region where your AWS resources reside. For more information, see Regions and Endpoints. In the navigation pane, click Metrics. In the CloudWatch Metrics by Category pane, under the metrics category for Amazon RDS, select a metrics category, and then in the upper pane, scroll down to view the full list of metrics. To view metrics using the AWS CLI At a command prompt, use the following command: aws cloudwatch list-metrics --namespace "AWS/RDS" Amazon RDS Metrics The following metrics are available from Amazon Relational Database Service. CloudWatch alarms invoke actions for sustained state changes only; an alarm will not invoke actions simply because it is in a particular state, the state must have changed and been maintained for a specified number of periods. The following procedure outlines how to create alarms for Amazon RDS. To set alarms using the CloudWatch console Sign in to the AWS Management Console and open the CloudWatch console. Choose Alarms and then choose Create Alarm. This launches the Create Alarm Wizard. Choose RDS Metrics and scroll through the Amazon RDS metrics to locate the metric you want to place an alarm on. To display just the Amazon RDS metrics in this dialog box, search for the identifier of your resource. Select the metric to create an alarm on and then complete the wizard. To set an alarm using the AWS CLI, see the AWS Command Line Interface Reference. To set an alarm using the CloudWatch API, see the Amazon CloudWatch API Reference. Viewing DB Instance Metrics Amazon RDS provides metrics so that you can monitor the health of your DB instances and DB clusters. You can monitor both DB instance metrics and operating system (OS) metrics. This section provides details on how you can view metrics for your DB instance using the RDS console and CloudWatch. For information on monitoring metrics for the operating system of your DB instance in real time using CloudWatch Logs, see Enhanced Monitoring. Viewing Metrics by Using the Console To view DB and OS metrics for a DB instance Sign in to the AWS Management Console and open the Amazon RDS console. In the navigation pane, choose DB Instances. Select the check box to the left of the DB cluster you need information about. For Show Monitoring, choose the option for how you want to view your metrics. Full Monitoring View includes an option for full-screen viewing. Enhanced Monitoring – Shows a summary of OS metrics available for a DB instance with Enhanced Monitoring enabled. Each metric includes a graph showing the metric monitored over a specific time span. Tip To select the time range of the metrics represented by the graphs, use Time Range. You can choose any graph to bring up a more detailed view of the graph, in which you can apply metric-specific filters to the metric data. Time Range is not available for the Enhanced Monitoring Dashboard. DB Instance Metrics Amazon RDS integrates with CloudWatch metrics to provide a variety of DB instance metrics. You can view CloudWatch metrics using the RDS console, CLI, or API. For a complete list of Amazon RDS metrics, go to Amazon RDS Dimensions and Metrics in the Amazon CloudWatch Developer Guide. Viewing DB Metrics by Using the CloudWatch CLI
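To show the same idea programmatically, here is a small, hedged boto3 sketch (not from the original page): it lists RDS metrics and pulls recent CPUUtilization statistics. The DB instance identifier "mydbinstance" is a placeholder, and the sketch assumes AWS credentials are already configured; the CloudWatch namespace for RDS metrics is AWS/RDS.

```python
# Sketch: list RDS metrics and fetch recent CPUUtilization for one DB instance.
# "mydbinstance" is a placeholder identifier; credentials come from the environment.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

# Roughly equivalent to: aws cloudwatch list-metrics --namespace "AWS/RDS"
for metric in cloudwatch.list_metrics(Namespace="AWS/RDS")["Metrics"][:10]:
    print(metric["MetricName"], metric["Dimensions"])

# Average CPU over the last hour, in 5-minute periods.
now = datetime.datetime.utcnow()
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydbinstance"}],
    StartTime=now - datetime.timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2), "%")
```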
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Monitoring.html
2016-09-25T03:42:04
CC-MAIN-2016-40
1474738659833.43
[]
docs.aws.amazon.com
A Fedora ISO file can be turned into either CD or DVD discs. Turn Fedora Live ISO files into bootable USB media, as well as a CD or DVD. To learn how to turn ISO images into CD or DVD media, refer to. To make bootable USB media, use a Fedora Live image. Use either a Windows or Linux system to make the bootable USB media. To begin, make sure there is sufficient free space available on the USB media. There is no need to repartition or reformat your media. It is always a good idea to back up important data before performing sensitive disk operations. Download a Live ISO file as explained in Section 3.3, “Which Files Do I Download?”. Download the Windows liveusb-creator program at. Follow the instructions given at the site and in the liveusb-creator program to create the bootable USB media.. Download a Live ISO file as shown in Section 3.3, “Which Files Do I Download?”. Install the livecd-tools package on your system. For Fedora systems, use.
http://docs.fedoraproject.org/install-guide/f10/en_US/sn-making-media.html
2009-07-04T07:12:36
crawl-002
crawl-002-028
[]
docs.fedoraproject.org
Fedora 9 features the Upstart initialization system. All System V init scripts should run fine in compatibility mode. However, users who have made customizations to their /etc/inittab file will need to port those modifications to upstart. For information on how upstart works, see the init(8) and initctl(8) man pages. For information on writing upstart scripts, see the events(5) man page, and also the Upstart Getting Started Guide. Due to the change of init systems, it is recommended that users who do an upgrade on a live file system to Fedora 9 reboot soon afterwards. Fedora 9 features NetworkManager. NetworkManager 0.7 provides improved mobile broadband support, including GSM and CDMA devices, and now supports multiple devices, ad-hoc networking for sharing connections, and the use of system-wide network configuration. It is now enabled by default on all installations. When using NetworkManager, be aware of the following: NetworkManager does not currently support all virtual device types. Users who use bridging, bonding, or VLANs may need to switch to the old network service after configuration of those interfaces. NetworkManager starts the network asynchronously. Users who have applications that require the network to be fully initialized during boot should set the NETWORKWAIT variable in /etc/sysconfig/network. Please file bugs about cases where this is necessary, so we can fix the applications in question. Autofs is no longer installed by default. Users who wish to use Autofs can choose it from the appropriate package group in the installer, or with the package installation tools.
http://docs.fedoraproject.org/release-notes/f9/en_US/sn-System-Services.html
2009-07-04T07:13:56
crawl-002
crawl-002-028
[]
docs.fedoraproject.org
Deposit Amount On Wallet Using Auto Flow Auto Flow VS Create Transfer Prior to introducing Auto-Flow method in V2 of the API, only operation you could use to initiate a transfer on users behalf was Create Transfer. However it required you to implement validations and processing logic required for each bank and multiple banks had different requirements and processing logic. For example: - Some of the banks required you added receiver as a beneficiary first, before you could make a deposit - Other banks required cool down period when adding new beneficiary... It was up to you to ensure all these requirements were met on your side when using Create Transfer endpoint. If needed, you can read more about processing using this endpoint on Deposit Amount On Wallet Using Create Transfer What Is Auto Flow Since the process was a bit complex, Dapi introduced Auto-Flow endpoint starting with V2 of the API. It basically does whatever Create Transfer did, but abstracts all the validations required by the banks. In other words, no more need to: - check cool down period - check banks beneficiary requirements - check banks first transaction requirements.... This is all managed on the API when you send Auto Flow request. You can check out example of Auto-Flow below { "appSecret": "00bae841ad979345fca2e2585c000da7eac420504d189cf63315e7a6234d45c68dbd6fff749167292cd1475622805dce7a2b979db3c16e25a2897158ee63845b1043930ff603e19deb1d2d54ad9afc3d52df241d3c4e7286244a2f98a10212e38b2e9f8b0e3a7592702fa4358fb9103b93a26dd6bb92c2be0327ac054f14becc", "userSecret": "DSv56dS/PB7QGJI/IGX4qKDhGVhIvQQhWo4zxTDT0gn079JlHnUSSq8NAtavX4fpHj7PGQ74BzVXBO9pFHXdSeLCMnayKLTLD0+zmMu7wfGzy+ZhkYTBe040CXWQ+AYaPhGTzfVWu3Lz6oM2QnqM9X56BbvpC80tN8Zg72VJHWC6YjazdQQ2NK0pl9+ePbmqn7PNjFKLhipgpTl7Hw3kvnLrSIC9AcXzVYQeSWYAv3LAEbECB1aNLXC0glMG2W7L2iLTMwy54wHbXXfSQlK9S6X7wmnZ0tn28H0MwMqWdLLtxvcFyYlMr3E0hqYnK4a5sU0IvF1yJAMHMBCbjw2Trnx2VMuX5IWjdxScfh+8IxWGvKl6RypksJTyNg100H+Q+j0vfKW/bOijFolZgHJtAxUowPlewK9JwoWahkbX2KTGoqQbSCh6KSzaCxdbg7ykNI5n+m6vdoWzGfZGFYjfgMX6aMInAM3b32ZAp9DlfFxRkg3oeoLBuTGTz73E51bZj9mGgD0FxkIXFPIWGx0WyxoYpMEesJeT8phNy0G82Bd7qzWCPGP4gK70jGpqfCWsvj1XZKMMMjReCdUrhtXKB4spQIFi+63WcGV7vDyWQdUTINOhmR8QfZOJoVm+VZgFiqCLI3Aa8AnoYw3UIPiheVjE5lxMulPNIP0QGMR31VY=", "amount": 1, "senderID": "ntV7rbYoexYaGDRfLCAo8vw1xXgu2VaXXtqvNoMU0sfy6aNErfUEGMD+P6lAlkzu/GKxPeoef7d7eNoxlHKyRw==", "beneficiary": { "name": "John Doe", "address": { "line1": "Baker Street", "line2": "Abu Dhabi", "line3": "United Arab Emirates" }, "country": "AE", "branchAddress": "Deira", "branchName": "Main Branch", "swiftCode": "FGBMAEAA", "iban": "AE770351001004432453627", "accountNumber": "1001004437564656", "bankName": "ENBD" } } Updated 11 months ago Please find the links below on how to use Auto Flow to automate payment process
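As a rough, hedged sketch only (not from the Dapi docs themselves): sending the Auto Flow payload shown above is an ordinary authenticated HTTPS POST. The endpoint URL and the Authorization header in this snippet are placeholders and assumptions; consult Dapi's API reference for the exact path and required headers.

```python
# Sketch: POST the Auto Flow request body shown above.
# AUTO_FLOW_URL and the Authorization header are placeholders, not documented values.
import json
import requests

AUTO_FLOW_URL = "https://<dapi-api-host>/v2/payment/transfer/autoflow"  # placeholder path

def create_auto_flow_transfer(payload: dict, access_token: str) -> dict:
    response = requests.post(
        AUTO_FLOW_URL,
        headers={
            "Authorization": f"Bearer {access_token}",  # assumption: bearer-token auth
            "Content-Type": "application/json",
        },
        data=json.dumps(payload),
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Usage: load the JSON body from the example above into `payload`, then call
# create_auto_flow_transfer(payload, access_token="<user access token>").
```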
https://docs.dapi.com/docs/deposit-amount-on-wallet-using-auto-flow
2022-08-07T23:17:39
CC-MAIN-2022-33
1659882570730.59
[]
docs.dapi.com
SciML Style Guide for Julia The SciML Style Guide is a style guide for the Julia programming language. It is used by the SciML Open Source Scientific Machine Learning Organization. As such, it is open to discussion with the community. Please file an issue or open a PR to discuss changes to the style guide. Table of Contents - Code Style Badge - Overarching Dogmas of the SciML Style - Consistency vs Adherence - Community Contribution Guidelines - Open source contributions are allowed to start small and grow over time - Generic code is preferred unless code is known to be specific - Internal types should match the types used by users when possible - Trait definition and adherence to generic interface is preferred when possible - Macros should be limited and only be used for syntactic sugar - Errors should be caught as high as possible, and error messages should be contextualized for newcommers - Subpackaging and interface packages is preferred over conditional modules via Requires.jl - Functions should either attempt to be non-allocating and reuse caches, or treat inputs as immutable - Out-Of-Place and Immutability is preferred when sufficient performant - Tests should attempt to cover a wide gamut of input types - When in doubt, a submodule should become a subpackage or separate package - Globals should be avoided whenever possible - Type-stable and Type-grounded code is preferred wherever possible - Closures should be avoided whenever possible - Numerical functionality should use the appropriate generic numerical interfaces - Functions should capture one underlying principle - Internal choices should be exposed as options whenever possible - Prefer code reuse over rewrites whenever possible - Prefer to not shadow functions - Specific Rules - High Level Rules - General Naming Principles - Modules - Functions - Function Argument Precedence - Tests and Continuous Integration - Whitespace - NamedTuples - Numbers - Ternary Operator - For loops - Function Type Annotations - Struct Type Annotations - Types and Type Annotations - Package version specifications - Documentation - Error Handling - Arrays - VS-Code Settings - JuliaFormatter Code Style Badge Let contributors know your project is following the SciML Style Guide by adding the badge to your README.md. []() Overarching Dogmas of the SciML Style Consistency vs Adherence According to PEP8:! Some code within the SciML organization is old, on life support, donated by researchers to be maintained. Consistency is the number one goal, so updating to match the style guide should happen on a repo-by-repo basis, i.e. do not update one file to match the style guide (leaving all other files behind). Community Contribution Guidelines For a comprehensive set of community contribution guidelines, refer to ColPrac. A relevant point to highlight PRs should do one thing. In the context of style, this means that PRs which update the style of a package's code should not be mixed with fundamental code contributions. This separation makes it easier to ensure that large style improvement are isolated from substantive (and potentially breaking) code changes. Open source contributions are allowed to start small and grow over time If the standard for code contributions is that every PR needs to support every possible input type that anyone can think of, the barrier would be too high for newcomers. Instead, the principle is to be as correct as possible to begin with, and grow the generic support over time. 
All recommended functionality should be tested, any known generality issues should be documented in an issue (and with a @test_broken test when possible). However, a function which is known to not be GPU-compatible is not grounds to block merging, rather its an encouragement for a follow-up PR to improve the general type support! Generic code is preferred unless code is known to be specific For example, the code: function f(A, B) for i in 1:length(A) A[i] = A[i] + B[i] end end would not be preferred for two reasons. One is that it assumes A uses one-based indexing, which would fail in cases like OffsetArrays and FFTViews. Another issue is that it requires indexing, while not all array types support indexing (for example, CuArrays). A more generic compatible implementation of this function would be to use broadcast, for example: function f(A, B) @. A = A + B end which would allow support for a wider variety of array types. Internal types should match the types used by users when possible If f(A) takes the input of some collections and computes an output from those collections, then it should be expected that if the user gives A as an Array, the computation should be done via Arrays. If A was a CuArray, then it should be expected that the computation should be internally done using a CuArray (or appropriately error if not supported). For these reasons, constructing arrays via generic methods, like similar(A), is preferred when writing f instead of using non-generic constructors like Array(undef,size(A)) unless the function is documented as being non-generic. Trait definition and adherence to generic interface is preferred when possible Julia provides many different interfaces, for example: Those interfaces should be followed when possible. For example, when defining broadcast overloads, one should implement a BroadcastStyle as suggested by the documentation instead of simply attempting to bypass the broadcast system via copyto! overloads. When interface functions are missing, these should be added to Base Julia or an interface package, like ArrayInterface.jl. Such traits should be declared and used when appropriate. For example, if a line of code requires mutation, the trait ArrayInterface.ismutable(A) should be checked before attempting to mutate, and informative error messages should be written to capture the immutable case (or, an alternative code which does not mutate should be given). One example of this principle is demonstrated in the generation of Jacobian matrices. In many scientific applications, one may wish to generate a Jacobian cache from the user's input u0. A naive way to generate this Jacobian is J = similar(u0,length(u0),length(u0)). However, this will generate a Jacobian J such that J isa Matrix. Macros should be limited and only be used for syntactic sugar Macros define new syntax, and for this reason they tend to be less composable than other coding styles and require prior familiarity to be easily understood. One principle to keep in mind is, "can the person reading the code easily picture what code is being generated?". For example, a user of Soss.jl may not know what code is being generated by: @model (x, α) begin σ ~ Exponential() β ~ Normal() y ~ For(x) do xj Normal(α + β * xj, σ) end return y end and thus using such a macro as the interface is not preferred when possible. 
However, a macro like @muladd is trivial to picture on a code (it recursively transforms a*b + c to muladd(a,b,c) for more accuracy and efficiency), so using such a macro for example: julia> @macroexpand(@muladd k3 = f(t + c3 * dt, @. uprev + dt * (a031 * k1 + a032 * k2))) :(k3 = f((muladd)(c3, dt, t), (muladd).(dt, (muladd).(a032, k2, (*).(a031, k1)), uprev))) is recommended. Some macros in this category are: Some performance macros, like @simd, @threads, or @turbo from LoopVectorization.jl, make an exception in that their generated code may be foreign to many users. However, they still are classified as appropriate uses as they are syntactic sugar since they do (or should) not change the behavior of the program in measurable ways other than performance. Errors should be caught as high as possible, and error messages should be contextualized for newcomers Whenever possible, defensive programming should be used to check for potential errors before they are encountered deeper within a package. For example, if one knows that f(u0,p) will error unless u0 is the size of p, this should be caught at the start of the function to throw a domain specific error, for example "parameters and initial condition should be the same size". Subpackaging and interface packages is preferred over conditional modules via Requires.jl Requires.jl should be avoided at all costs. If an interface package exists, such as ChainRulesCore.jl for defining automatic differentiation rules without requiring a dependency on the whole ChainRules.jl system, or RecipesBase.jl which allows for defining Plots.jl plot recipes without a dependency on Plots.jl, a direct dependency on these interface packages is preferred. Otherwise, instead of resorting to a conditional dependency using Requires.jl, it is preferred one creates subpackages, i.e. smaller independent packages kept within the same Github repository with independent versioning and package management. An example of this is seen in Optimization.jl which has subpackages like OptimizationBBO.jl for BlackBoxOptim.jl support. Some important interface packages to know about are: Functions should either attempt to be non-allocating and reuse caches, or treat inputs as immutable Mutating codes and non-mutating codes fall into different worlds. When a code is fully immutable, the compiler can better reason about dependencies, optimize the code, and check for correctness. However, many times a code making the fullest use of mutation can outperform even what the best compilers of today can generate. That said, the worst of all worlds is when code mixes mutation with non-mutating code. Not only is this a mishmash of coding styles, it has the potential non-locality and compiler proof issues of mutating code while not fully benefiting from the mutation. Out-Of-Place and Immutability is preferred when sufficient performant Mutation is used to get more performance by decreasing the amount of heap allocations. However, if it's not helpful for heap allocations in a given spot, do not use mutation. Mutation is scary and should be avoided unless it gives an immediate benefit. For example, if matrices are sufficiently large, then A*B is as fast as mul!(C,A,B), and thus writing A*B is preferred (unless the rest of the function is being careful about being fully non-allocating, in which case this should be mul! for consistency). Similarly, when defining types, using struct is preferred to mutable struct unless mutating the struct is a common occurrence. 
Even if mutating the struct is a common occurrence, see whether using SetField.jl is sufficient. The compiler will optimize the construction of immutable structs, and thus this can be more efficient if it's not too much of a code hassle. Tests should attempt to cover a wide gamut of input types Code coverage numbers are meaningless if one does not consider the input types. For example, one can hit all of the code with Array, but that does not test whether CuArray is compatible! Thus it's always good to think of coverage not in terms of lines of code but in terms of type coverage. A good list of number types to think about are: Array types to think about testing are: Array OffsetArray CuArray When in doubt, a submodule should become a subpackage or separate package Keep packages to one core idea. If there's something separate enough to be a submodule, could it instead be a separate well-tested and documented package to be used by other packages? Most likely yes. Globals should be avoided whenever possible Global variables should be avoided whenever possible. When required, global variables should be consts and have an all uppercase name separated with underscores (e.g. MY_CONSTANT). They should be defined at the top of the file, immediately after imports and exports but before an __init__ function. If you truly want mutable global style behaviour you may want to look into mutable containers. Type-stable and Type-grounded code is preferred wherever possible Type-stable and type-grounded code helps the compiler create not only more optimized code, but also faster to compile code. Always keep containers well-typed, functions specializing on the appropriate arguments, and types concrete. Closures should be avoided whenever possible Closures can cause accidental type instabilities that are difficult to track down and debug; in the long run it saves time to always program defensively and avoid writing closures in the first place, even when a particular closure would not have been problematic. A similar argument applies to reading code with closures; if someone is looking for type instabilities, this is faster to do when code does not contain closures. Furthermore, if you want to update variables in an outer scope, do so explicitly with Refs or self defined structs. For example, map(Base.Fix2(getindex, i), vector_of_vectors) is preferred over map(v -> v[i], vector_of_vectors) or [v[i] for v in vector_of_vectors] Numerical functionality should use the appropriate generic numerical interfaces While you can use A\b to do a linear solve inside of a package, that does not mean that you should. This interface is only sufficient for performing factorizations, and so that limits the scaling choices, the types of A that can be supported, etc. Instead, linear solves within packages should use LinearSolve.jl. Similarly, nonlinear solves should use NonlinearSolve.jl. Optimization should use Optimization.jl. Etc. This allows the full generic choice to be given to the user without depending on every solver package (effectively recreating the generic interfaces within each package). Functions should capture one underlying principle Functions mean one thing. Every dispatch of + should be "the meaning of addition on these types". While in theory you could add dispatches to + that mean something different, that will fail in generic code for which + means addition. Thus for generic code to work, code needs to adhere to one meaning for each function. Every dispatch should be an instantiation of that meaning. 
Internal choices should be exposed as options whenever possible Whenever possible, numerical values and choices within scripts should be exposed as options to the user. This promotes code reusability beyond the few cases the author may have expected. Prefer code reuse over rewrites whenever possible If a package has a function you need, use the package. Add a dependency if you need to. If the function is missing a feature, prefer to add that feature to said package and then add it as a dependency. If the dependency is potentially troublesome, for example because it has a high load time, prefer to spend time helping said package fix these issues and add the dependency. Only when it does not seem possible to make the package "good enough" should using the package be abandoned. If it is abandoned, consider building a new package for this functionality as you need it, and then make it a dependency. Prefer to not shadow functions Two functions can have the same name in Julia by having different namespaces. For example, X.f and Y.f can be two different functions, with different dispatches, but the same name. This should be avoided whenever possible. Instead of creating MyPackage.sort, consider adding dispatches to Base.sort for your types if these new dispatches match the underlying principle of the function. If they don't, prefer to use a different name. While using MyPackage.sort is not conflicting, it is going to be confusing for most people unfamiliar with your code, so MyPackage.special_sort would be more helpful to newcomers reading the code. Specific Rules High Level Rules - Use 4 spaces per indentation level, no tabs. - Try to adhere to a 92 character line length limit. General Naming Principles - All type names should be CamelCase. - All struct names should be CamelCase. - All module names should be CamelCase. - All function names should be snake_case (all lowercase). - All variable names should be snake_case (all lowercase). - All constant names should be SNAKE_CASE (all uppercase). - All abstract type names should begin with Abstract. - All type variable names should be a single capital letter, preferably related to the value being typed. - Whole words are usually better than abbreviations or single letters. - Variables meant to be internal or private to a package should be denoted by prepending two underscores, i.e. __. - Single letters can be okay when naming a mathematical entity, i.e. an entity whose purpose or non-mathematical "meaning" is likely only known by downstream callers. For example, a and b would be appropriate names when implementing *(a::AbstractMatrix, b::AbstractMatrix), since the "meaning" of those arguments (beyond their mathematical meaning as matrices, which is already described by the type) is only known by the caller. - Unicode is fine within code where it increases legibility, but in no case should Unicode be used in public APIs. This is to allow support for terminals which cannot use Unicode: if a keyword argument must be η, then it can be exclusionary to uses on clusters which do not support Unicode inputs. - Use TODO to mark todo comments and XXX to mark comments about currently broken code. - Quote code in comments using backticks (e.g. `variable_name`). - When possible, code should be changed to incorporate information that would have been in a comment. For example, instead of commenting # fx applies the effects to a tree, simply change the function and variable names apply_effects(tree).
- Comments referring to Github issues and PRs should add the URL in the comments. Only use inline comments if they fit within the line length limit. If your comment cannot be fitted inline then place the comment above the content to which it refers: # Yes: # Number of nodes to predict. Again, an issue with the workflow order. Should be updated # after data is fetched. p = 1 # No: p = 1 # Number of nodes to predict. Again, an issue with the workflow order. Should be # updated after data is fetched. - In general, comments above a line of code or function are preferred to inline comments. Modules - Module imports should occur at the top of a file or right after a moduledeclaration. - Module imports in packages should either use importor explicitly declare the imported functionality, for example using Dates: Year, Month, Week, Day, Hour, Minute, Second, Millisecond. - Import and using statements should be separated, and should be divided by a blank line. # Yes: import A: a import C using B using D: d # No: import A: a using B import C using D: d - Exported variables should be considered as part of the public API, and changing their interface constitutes a breaking change. - Any exported variables should be sufficiently unique. I.e., do not export fas that is very likely to clash with something else. - A file that includes the definition of a module, should not include any other code that runs outside that module. i.e. the module should be declared at the top of the file with the modulekeyword and endat the bottom of the file. No other code before, or after (except for module docstring before). In this case the code with in the module block should not be indented. - Sometimes, e.g. for tests, or for namespacing an enumeration, it is desirable to declare a submodule midway through a file. In this case the code within the submodule should be indented. Functions - Only use short-form function definitions when they fit on a single line: # Yes: foo(x::Int64) = abs(x) + 3 # No: foobar(array_data::AbstractArray{T}, item::T) where {T <: Int64} = T[ abs(x) * abs(item) + 3 for x in array_data ] - Inputs should be required unless a default is historically expected or likely to be applicable to >95% of use cases. For example, the tolerance of a differential equation solver was set to a default of abstol=1e-6,reltol=1e-3as a generally correct plot in most cases, and is an expectation from back in the 90's. In that case, using the historically expected and most often useful default tolerances is justified. However, if one implements GradientDescent, the learning rate needs to be adjusted for each application (based on the size of the gradient), and thus a default of GradientDescent(learning_rate = 1)is not recommended. - Arguments which do not have defaults should be preferrably made into positional arguments. The newer syntax of required keyword arguments can be useful but should not be abused. Notable exceptions are cases where "either or" arguments are accepted, for example of defining gor dgduis sufficient, then making them both keyword arguments with = nothingand checking that either is not nothing(and throwing an appropriate error) is recommended if distinct dispatches with different types is not possible. - When calling a function always separate your keyword arguments from your positional arguments with a semicolon. This avoids mistakes in ambiguous cases (such as splatting a Dict). 
- When writing a function that sends a lot of keyword arguments to another function, say sending keyword arguments to a differential equation solver, use a named tuple keyword argument instead of splatting the keyword arguments. For example, use `diffeq_solver_kwargs = (; abstol=1e-6, reltol=1e-6,)` as the API and use `solve(prob, alg; diffeq_solver_kwargs...)` instead of splatting all keyword arguments.
- Functions which mutate arguments should be appended with `!`.
- Avoid type piracy. I.e., do not add methods to functions you don't own on types you don't own. Either own the types or the function.
- Functions should prefer instances instead of types for arguments. For example, for a solver type Tsit5, the interface should use `solve(prob, Tsit5())`, not `solve(prob, Tsit5)`. The reason for this is multifold. For one, passing a type has different specialization rules, so functionality can be slower unless `::Type{Tsit5}` is written in the dispatches which use it. Secondly, this allows for default and keyword arguments to extend the choices, which may become useful for some types down the line. Using this form allows adding more options in a non-breaking manner.
- If the number of arguments is too large to fit into a 92 character line, then use as many arguments as possible within a line and start each new row with the same indentation, preferably at the same column as the opening parenthesis (but this can be moved left if the function name is very long). For example:

# Yes
function my_large_function(argument1, argument2,
                           argument3, argument4,
                           argument5, x, y, z)

# No
function my_large_function(argument1,
    argument2, argument3, argument4,
    argument5, x, y, z)

Function Argument Precedence

Tests and Continuous Integration

- The high level runtests.jl file should only be used to shuttle to other test files.
- Every set of tests should be included into a @safetestset. A standard @testset does not fully enclose all defined values, such as functions defined in a @testset, and thus can "leak".
- Test includes should be written in one line, for example:

@time @safetestset "Jacobian Tests" begin include("interface/jacobian_tests.jl") end

- Every test script should be fully reproducible in isolation. I.e., one should be able to copy-paste that script and reproduce the results.
- Test scripts should be grouped based on categories, for example tests of the interface vs tests for numerical convergence. Grouped tests should be kept in the same folder.
- A GROUP environment variable should be used to specify test groups for parallel testing in continuous integration. A fallback group All should be used to specify all of the tests that should be run when a developer runs ]test Package locally. As an example, see the OrdinaryDiffEq.jl test structure.
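A sketch of such a GROUP-based runtests.jl; the group names and file paths are hypothetical, and the @safetestset macro is assumed to come from SafeTestsets.jl:

```julia
# test/runtests.jl (illustrative sketch)
using SafeTestsets

const GROUP = get(ENV, "GROUP", "All")

if GROUP == "All" || GROUP == "Interface"
    @time @safetestset "Jacobian Tests" begin include("interface/jacobian_tests.jl") end
end

if GROUP == "All" || GROUP == "Downstream"
    @time @safetestset "Downstream Tests" begin include("downstream/diffeq_tests.jl") end
end
```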
- Tests should include downstream tests to major packages which use the functionality, to ensure continued support. Any update which breaks the downstream tests should follow with a notification to the downstream package of why the support was broken (preferably in the form of a PR which fixes support), and the package should be given a major version bump in the next release if the changed functionality was part of the public API.
- CI scripts should use the default settings unless required.
- CI scripts should test the Long-Term Support (LTS) release and the current stable release. Nightly tests are only necessary for packages which have a heavy reliance on specific compiler details.
- Any package supporting GPUs should include continuous integration for GPUs.
- Doctests should be enabled, except for examples which are computationally prohibitive to have as part of continuous integration.

Whitespace

Avoid extraneous whitespace immediately inside parentheses, square brackets or braces.

```julia
# Yes:
spam(ham[1], [eggs])

# No:
spam( ham[ 1 ], [ eggs ] )
```

Avoid extraneous whitespace immediately before a comma or semicolon:

```julia
# Yes:
if x == 4 @show(x, y); x, y = y, x end

# No:
if x == 4 @show(x , y) ; x , y = y , x end
```

Avoid whitespace around `:` in ranges. Use brackets to clarify expressions on either side.

```julia
# Yes:
ham[1:9]
ham[9:-3:0]
ham[1:step:end]
ham[lower:upper-1]
ham[lower:upper - 1]
ham[lower:(upper + offset)]
ham[(lower + offset):(upper + offset)]

# No:
ham[1: 9]
ham[9 : -3: 1]
ham[lower : upper - 1]
ham[lower + offset:upper + offset]  # Avoid as it is easy to read as ham[lower + (offset:upper) + offset]
```

Avoid using more than one space around an assignment (or other) operator to align it with another:

```julia
# Yes:
x = 1
y = 2
long_variable = 3

# No:
x             = 1
y             = 2
long_variable = 3
```

Surround most binary operators with a single space on either side: assignment (`=`), updating operators (`+=`, `-=`, etc.), numeric comparison operators (`==`, `<`, `>`, `!=`, etc.), and the lambda operator (`->`). Binary operators that may be excluded from this guideline include: the range operator (`:`), rational operator (`//`), exponentiation operator (`^`), and optional arguments/keywords (e.g. `f(x = 1; y = 2)`).

```julia
# Yes:
i = j + 1
submitted += 1
x^2 < y

# No:
i=j+1
submitted +=1
x^2<y
```

Avoid using whitespace between unary operands and the expression:

```julia
# Yes:
-1
[1 0 -1]

# No:
- 1
[1 0 - 1]  # Note: evaluates to [1 -1]
```

Avoid extraneous empty lines. Avoid empty lines between single line method definitions and otherwise separate functions with one empty line, plus a comment if required:

```julia
# Yes:
# Note: an empty line before the first long-form `domaths` method is optional.
domaths(x::Number) = x + 5
domaths(x::Int) = x + 10

function domaths(x::String)
    return "A string is a one-dimensional extended object postulated in string theory."
end

dophilosophy() = "Why?"

# No:
domath(x::Number) = x + 5

domath(x::Int) = x + 10
function domath(x::String)
    return "A string is a one-dimensional extended object postulated in string theory."
end
dophilosophy() = "Why?"
```

Function calls which cannot fit on a single line within the line limit should be broken up such that the lines containing the opening and closing brackets are indented to the same level while the parameters of the function are indented one level further. In most cases the arguments and/or keywords should each be placed on separate lines. Note that this rule conflicts with the typical Julia convention of indenting the next line to align with the open bracket in which the parameter is contained. If working in a package with a different convention follow the convention used in the package over using this guideline.

```julia
# Yes:
f(a, b)

constraint = conic_form!(
    SOCElemConstraint(temp2 + temp3, temp2 - temp3, 2 * temp1),
    unique_conic_forms,
)

# No:
# Note: `f` call is short enough to be on a single line
f(
    a,
    b,
)

constraint = conic_form!(SOCElemConstraint(temp2 + temp3, temp2 - temp3, 2 * temp1),
                         unique_conic_forms)
```

Group similar one line statements together.

```julia
# Yes:
foo = 1
bar = 2
baz = 3

# No:
foo = 1

bar = 2

baz = 3
```

Use blank lines to separate different multi-line blocks.
```julia
# Yes:
if foo
    println("Hi")
end

for i in 1:10
    println(i)
end

# No:
if foo
    println("Hi")
end
for i in 1:10
    println(i)
end
```

After a function definition, and before an end statement, do not include a blank line.

```julia
# Yes:
function foo(bar::Int64, baz::Int64)
    return bar + baz
end

# No:
function foo(bar::Int64, baz::Int64)

    return bar + baz
end

# No:
function foo(bar::Int64, baz::Int64)
    return bar + baz

end
```

Use line breaks between control flow statements and returns.

```julia
# Yes:
function foo(bar; verbose = false)
    if verbose
        println("baz")
    end

    return bar
end

# Ok:
function foo(bar; verbose = false)
    if verbose
        println("baz")
    end
    return bar
end
```

NamedTuples

The `=` character in NamedTuples should be spaced as in keyword arguments: space should be put between the name and its value. The empty NamedTuple should be written `NamedTuple()`, not `(;)`.

# Yes:
xy = (x = 1, y = 2)
x = (x = 1,)  # Trailing comma required for correctness.
x = (; kwargs...)  # Semicolon required to splat correctly.

# No:
xy = (x=1, y=2)
xy = (;x=1,y=2)

Numbers

- Floating-point numbers should always include a leading and/or trailing zero:

# Yes:
0.1
2.0
3.0f0

# No:
.1
2.
3.f0

- Always prefer the type `Int` to `Int32` or `Int64` unless one has a specific reason to choose the bit size.

Ternary Operator

Ternary operators (`?:`) should generally only consume a single line. Do not chain multiple ternary operators. If chaining many conditions, consider using an if-elseif-else conditional, dispatch, or a dictionary.

# Yes:
foobar = foo == 2 ? bar : baz

# No:
foobar = foo == 2 ?
    bar :
    baz
foobar = foo == 2 ? bar : foo == 3 ? qux : baz

As an alternative, you can use an if-elseif-else expression:

# Yes:
foobar = if foo == 2
    bar
else
    baz
end

foobar = if foo == 2
    bar
elseif foo == 3
    qux
else
    baz
end

For loops

For loops should always use `in`, never `=` or `∈`. This also applies to list and generator comprehensions.

# Yes
for i in 1:10
    #...
end

[foo(x) for x in xs]

# No:
for i = 1:10
    #...
end

[foo(x) for x ∈ xs]

Function Type Annotations

Annotations for function definitions should be as general as possible.

# Yes:
splicer(arr::AbstractArray, step::Integer) = arr[begin:step:end]

# No:
splicer(arr::Array{Int}, step::Int) = arr[begin:step:end]

Using types as generic as possible allows for a variety of inputs and allows your code to be more general:

julia> splicer(1:10, 2)
1:2:9

julia> splicer([3.0, 5, 7, 9], 2)
2-element Array{Float64,1}:
 3.0
 7.0

Struct Type Annotations

Annotations on type fields need to be given a little more thought since field access is not concrete unless the compiler can infer the type (see type-dispatch design for details). Since well-inferred code is preferred, abstract type annotations, i.e.

mutable struct MySubString <: AbstractString
    string::AbstractString
    offset::Integer
    endof::Integer
end

are not recommended. Instead a concretely-typed struct:

mutable struct MySubString <: AbstractString
    string::String
    offset::Int
    endof::Int
end

is preferred. If generality is required, then parametric typing is preferred, i.e.:

mutable struct MySubString{T<:Integer} <: AbstractString
    string::String
    offset::T
    endof::T
end

Untyped fields should be explicitly typed `Any`, i.e.:

struct StructA
    a::Any
end

Macros

- Do not add spaces between assignments when there are multiple assignments.

# Yes:
@parameters a = b
@parameters a=b c=d

# No:
@parameters a = b c = d

Types and Type Annotations

- Avoid elaborate union types. `Vector{Union{Int,AbstractString,Tuple,Array}}` should probably be `Vector{Any}`.
This will reduce the amount of extra strain on compilation checking many branches.

- Unions should be kept to two or three types only for branch splitting. Unions of three types should be kept to a minimum for compile times.
- Do not use `===` to compare types. Use `isa` or `<:` instead.

Package version specifications

- Use Semantic Versioning
- For simplicity, avoid including the default caret specifier when specifying package version requirements.

# Yes:
DataFrames = "0.17"

# No:
DataFrames = "^0.17"

- For accuracy, do not use constructs like `>=` to avoid upper bounds.
- Every dependency should have a bound.
- All packages should use CompatHelper and attempt to stay up to date with the dependencies.
- The lower bound on dependencies should be the last tested version.

Documentation

- Documentation should always attempt to be at the highest level possible. I.e., documentation of an interface that all methods follow is preferred to documenting every method, and documenting the interface of an abstract type is preferred to documenting all of the subtypes individually. All instances should then refer to the higher level documentation.
- Documentation should use Documenter.jl.
- Tutorials should come before reference materials.
- Every package should have a starting tutorial that covers "the 90% use case", i.e. the ways that most people will want to use the package.
- The tutorial should show a complete workflow and be opinionated in said workflow. For example, when writing a tutorial about a simulator, pick a plotting package and show how to plot it.
- Variable names in tutorials are important. If you use u0, then all other code will copy that naming scheme. Show potential users the right way to use your code with the right naming.
- When applicable, tutorials on how to use the "high performance advanced features" should be separated from the beginning tutorial.
- All documentation should summarize contents before going into specifics of API docstrings.
- Most modules, types and functions should have docstrings.
- Prefer documenting accessor functions instead of fields when possible. Documented fields are part of the public API and changing their contents/name constitutes a breaking change.
- Only exported functions are required to be documented.
- Avoid documenting common method overloads such as `==`.
- Try to document a function and not individual methods where possible, as typically all methods will have similar docstrings.
- If you are adding a method to a function which already has a docstring, only add a docstring if the behaviour of your function deviates from the existing docstring.
- Docstrings are written in Markdown and should be concise.
- Docstring lines should be wrapped at 92 characters.

"""
    bar(x[, y])

Compute the Bar index between `x` and `y`.

If `y` is missing, compute the Bar index between all pairs of columns of `x`.
"""
function bar(x, y) ...

- It is recommended that you have a blank line between the headings and the content when the content is of sufficient length.
- Try to be consistent within a docstring whether you use this additional whitespace.
- Follow one of the following templates for types and functions when possible:

Type Template (should be skipped if it is redundant with the constructor(s) docstring):

"""
    MyArray{T, N}

My super awesome array wrapper!

# Fields
- `data::AbstractArray{T, N}`: stores the array being wrapped
- `metadata::Dict`: stores metadata about the array
"""
struct MyArray{T, N} <: AbstractArray{T, N}
    data::AbstractArray{T, N}
    metadata::Dict
end

Function Template (only required for exported functions):

"""
    mysearch(array::MyArray{T}, val::T; verbose = true) where {T} -> Int

Searches the `array` for the `val`. For some reason we don't want to use Julia's builtin search :)

# Arguments
- `array::MyArray{T}`: the array to search
- `val::T`: the value to search for

# Keywords
- `verbose::Bool = true`: print out progress details

# Returns
- `Int`: the index where `val` is located in the `array`

# Throws
- `NotFoundError`: I guess we could throw an error if `val` isn't found.
"""
function mysearch(array::AbstractArray{T}, val::T) where {T}
    ...
end

- The `@doc doc""" """` formulation from the Markdown standard library should be used whenever there is LaTeX.
- Only public fields of types must be documented. Undocumented fields are considered non-public internals.
- If your method contains lots of arguments or keywords you may want to exclude them from the method signature on the first line and instead use `args...` and/or `kwargs...`.

"""
    Manager(args...; kwargs...) -> Manager

A cluster manager which spawns workers.

# Arguments
- `min_workers::Integer`: The minimum number of workers to spawn or an exception is thrown
- `max_workers::Integer`: The requested number of workers to spawn

# Keywords
- `definition::AbstractString`: Name of the job definition to use. Defaults to the
  definition used within the current instance.
- `name::AbstractString`: ...
- `queue::AbstractString`: ...
"""
function Manager(...)
    ...
end

- Feel free to document multiple methods for a function within the same docstring. Be careful to only do this for functions you have defined.

"""
    Manager(max_workers; kwargs...)
    Manager(min_workers:max_workers; kwargs...)
    Manager(min_workers, max_workers; kwargs...)

A cluster manager which spawns workers.

# Arguments
- `min_workers::Int`: The minimum number of workers to spawn or an exception is thrown
- `max_workers::Int`: The requested number of workers to spawn

# Keywords
- `definition::AbstractString`: Name of the job definition to use. Defaults to the
  definition used within the current instance.
- `name::AbstractString`: ...
- `queue::AbstractString`: ...
"""
function Manager end

- If the documentation for a bullet point exceeds 92 characters, the line should be wrapped and slightly indented. Avoid aligning the text to the `:`.

"""
...

# Keywords
- `definition::AbstractString`: Name of the job definition to use. Defaults to the
  definition used within the current instance.
"""

Error Handling

- `error("string")` should be avoided. Defining and throwing exception types is preferred. See the manual on exceptions for more details.
- Try to avoid try/catch. Use it as minimally as possible. Attempt to catch potential issues before running code, not after.
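A minimal sketch of the preferred pattern, using a custom exception type instead of error; the type name and message are illustrative:

```julia
# Define a specific exception type rather than calling `error("not found")`.
struct NotFoundError <: Exception
    key::String
end

Base.showerror(io::IO, e::NotFoundError) = print(io, "key not found: ", e.key)

function lookup(dict::AbstractDict, key::String)
    haskey(dict, key) || throw(NotFoundError(key))
    return dict[key]
end

lookup(Dict("a" => 1), "a")    # returns 1
# lookup(Dict("a" => 1), "b")  # throws NotFoundError("b")
```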
Arrays

- Avoid splatting (`...`) whenever possible. Prefer iterators such as `collect`, `vcat`, `hcat`, etc. instead.

Line Endings

Always use Unix style `\n` line endings.

VS-Code Settings

If you are a user of VS Code we recommend that you have the following options in your Julia syntax specific settings. To modify these settings open your VS Code Settings with <kbd>CMD</kbd>+<kbd>,</kbd> (Mac OS) or <kbd>CTRL</kbd>+<kbd>,</kbd> (other OS), and add to your settings.json:

{
    "[julia]": {
        "editor.detectIndentation": false,
        "editor.insertSpaces": true,
        "editor.tabSize": 4,
        "files.insertFinalNewline": true,
        "files.trimFinalNewlines": true,
        "files.trimTrailingWhitespace": true,
        "editor.rulers": [92],
        "files.eol": "\n"
    },
}

Additionally, you may find the Julia VS-Code plugin useful.

JuliaFormatter

Note: the sciml style is only available in JuliaFormatter v1.0 or later.

One can add .JuliaFormatter.toml with the content

style = "sciml"

in the root of a repository, and run

using JuliaFormatter, SomePackage
format(joinpath(dirname(pathof(SomePackage)), ".."))

to format the package automatically.

Add FormatCheck.yml to enable the formatting CI. The CI will fail if the repository needs additional formatting. Thus, one should run `format` before committing.

References

Many of these style choices were derived from the Julia style guide, the YASGuide, and the Blue style guide.
https://docs.sciml.ai/stable/modules/SciMLStyle/
2022-08-07T23:01:07
CC-MAIN-2022-33
1659882570730.59
[array(['https://img.shields.io/static/v1?label=code%20style&message=SciML&color=9558b2&labelColor=389826', 'SciML Code Style'], dtype=object) ]
docs.sciml.ai
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Deletes the specified Application Auto Scaling scaling policy. Deleting a policy deletes the underlying alarm action, but does not delete the CloudWatch alarm associated with the scaling policy, even if it no longer has an associated action. To create a scaling policy or update an existing one, see PutScalingPolicy. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to DeleteScalingPolicyAsync. Namespace: Amazon.ApplicationAutoScaling Assembly: AWSSDK.ApplicationAutoScaling.dll Version: 3.x.y.z Container for the necessary parameters to execute the DeleteScalingPolicy service method. This example deletes a scaling policy for the Amazon ECS service called web-app, which is running in the default cluster. var response = client.DeleteScalingPolicy(new DeleteScalingPolicyRequest { PolicyName = "web-app-cpu-lt-25", ResourceId = "service/default/web-app", ScalableDimension = "ecs:service:DesiredCount", ServiceNamespace = "ecs" }); .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/ApplicationAutoScaling/MApplicationAutoScalingDeleteScalingPolicyDeleteScalingPolicyRequest.html
2018-03-17T06:07:07
CC-MAIN-2018-13
1521257644701.7
[]
docs.aws.amazon.com
Configure a static IP address through the front panel
Complete the following steps to manually configure an IP address through the front panel LCD controls.
[Figure: Connected to Discover and Command Appliance]
https://docs.extrahop.com/6.2/deploy-eta/
2018-03-17T06:34:29
CC-MAIN-2018-13
1521257644701.7
[array(['/images/6.2/back_of_ETA.png', None], dtype=object) array(['/images/6.2/eda-eta-diagram.png', None], dtype=object) array(['/images/6.2/eda-eta-eca-diagram.png', None], dtype=object)]
docs.extrahop.com
VPN Reconnect Applies To: Windows Server 2008 R2 DirectAccess can replace the VPN as the preferred remote access method for many organizations. However, some organizations will continue to use VPNs side-by-side with DirectAccess. Therefore, Microsoft is improving VPN usability in Windows 7 with VPN Reconnect. VPN Reconnect uses IKEv2 technology to provide seamless and consistent VPN connectivity, automatically re-establishing a VPN when users temporarily lose their Internet connections. Users who connect using wireless mobile broadband will benefit most from this capability. For example, consider a user traveling to work on a train. To make the most out of her time, she uses a wireless mobile broadband card to connect to the Internet and then establishes a VPN connection to her company’s network. As the train passes through a tunnel, she loses her Internet connection. Once outside of the tunnel, the wireless mobile broadband card automatically reconnects to the Internet. However, with earlier versions of Windows, the VPN does not reconnect, and she needs to repeat the multi-step process of connecting to the VPN. This can quickly become time consuming for mobile users with intermittent connectivity. With VPN Reconnect, Windows 7 automatically re-establishes active VPN connections when Internet connectivity re-establishes. While the re-connection might take several seconds, it is completely transparent to users, who are more likely to stay connected to a VPN and get more use out of internal network resources. For more information about VPN Reconnect, see Remote Access Step-by-Step Guide: Deploying Remote Access with VPN Reconnect.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-7/dd637830(v=ws.10)
2018-03-17T07:26:55
CC-MAIN-2018-13
1521257644701.7
[]
docs.microsoft.com
Windows Virtual PC Help Applies To: Windows 7 Windows® Virtual PC is an optional component of the Windows 7 operating system that lets you run more than one operating system at the same time on one computer. One of the key benefits is that you can use Windows Virtual PC to migrate to Windows 7 while continuing to use applications that run on older versions of Windows, such as Windows XP or Windows Vista®. If Windows Virtual PC is not located in All Programs on the Start menu, you can download it free of charge from the Windows Virtual PC home page (). Note Hardware requirements for Windows Virtual PC differ slightly from Windows 7. For more information about these requirements, see the Windows Virtual PC Evaluation Guide (). We recommend that you review the requirements before you try to install Windows Virtual PC because the computer must be configured correctly to install the component. Windows Virtual PC basics Windows Virtual PC enables you to run more than one operating system at a time on one computer by providing a virtualization environment. This environment uses virtual machines, each of which is like a separate physical computer. Each virtual machine emulates a physical computer and can run one 32-bit operating system, which is called a guest operating system. The physical computer and the Windows 7 operating system that runs directly on the computer (instead of in a virtual machine) together are called the host. This instance of Windows 7 is sometimes called the host operating system. Windows Virtual PC offers two basic ways for you to interact with this virtual environment. You can: Use applications directly from the host when those applications are actually installed in the guest operating system. This is useful when you just want to run these older applications and you do not need to interact with the guest operating system. When you use this option, the virtual machine environment is essentially hidden. The applications are called virtual applications and are available from the Start menu of the host. This option is supported for specific versions of Windows (which require an update) and is suited for business applications. For more information, see Publish and use virtual applications. Interact directly with the guest operating system and the virtual machine. You can view the desktop of the guest operating system from a window, called the virtual machine window—or in full desktop mode, which is similar to a Remote Desktop client session. This is useful if you want to interact with the guest operating system. For example, you might do this to reproduce a scenario for troubleshooting or support purposes. In either case, you will need a virtual machine. Your options are: Set up Windows XP Mode. This option creates a preconfigured virtual machine with Windows XP Service Pack 3 (SP3) already installed. After an easy set up, this environment is ready for you to customize with your own applications. Windows XP Mode is available for the 32-bit and 64-bit editions of Windows 7 Professional, Windows 7 Enterprise, and Windows 7 Ultimate. For more information, see Configuring and using Windows XP Mode. Set up your own virtual machine. To do this, you create a virtual machine and install the guest operating system yourself. This option is useful if you do not want to run Windows XP SP3, or if you are using a version of Windows 7 for which Windows XP Mode is not available. For instructions, see Create a virtual machine and install a guest operating system. 
For information about the operating systems that are supported for use with Windows Virtual PC, see the Windows Virtual PC home page (). Note Windows XP Mode is the faster method because it requires fewer steps. One of the benefits of using a virtual machine is that you can change the hardware faster and more easily than on a physical computer. For example, to add or remove a network adapter or change the amount of memory, you shut down the virtual machine and modify a setting. There is no need to open a computer case. For more information, see Configuring a virtual machine. Windows Virtual PC includes the Integration Components package, which provides features that improve the integration between the virtual environment and the physical computer. For more information, see About integration features. See Also Concepts Using a virtual machine About virtual hard disks Modify a virtual hard disk Troubleshooting Windows Virtual PC Resources for Windows Virtual PC Remove Windows XP Mode, virtual machines, or Windows Virtual PC
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-7/ee449411(v=ws.10)
2018-03-17T06:09:31
CC-MAIN-2018-13
1521257644701.7
[]
docs.microsoft.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. The QualificationRequirement data structure describes a Qualification that a Worker must have before the Worker is allowed to accept a HIT. A requirement may optionally state that a Worker must have the Qualification in order to preview the HIT. Namespace: Amazon.MTurk.Model Assembly: AWSSDK.MTurk.dll Version: 3.x.y.z The QualificationRequirement
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/MTurk/TQualificationRequirement.html
2018-03-17T06:30:03
CC-MAIN-2018-13
1521257644701.7
[]
docs.aws.amazon.com
Parameters may seem obvious to some but not for everyone, so:
- $data can be, as the description says, raw or base64. If no $option is set (that is, if a value of 0 is passed in this parameter), data will be assumed to be base64 encoded. If the OPENSSL_RAW_DATA option is set, it will be understood as raw data.
- $password (key) is a String of [pseudo] bytes such as those generated by the function openssl_random_pseudo_bytes().
- $options has (as of 2016) two possible values, OPENSSL_RAW_DATA and OPENSSL_ZERO_PADDING. Setting both can be done with OPENSSL_RAW_DATA|OPENSSL_ZERO_PADDING. If OPENSSL_ZERO_PADDING is not specified, default PKCS#7 padding will be applied, as has been observed in [openssl at mailismagic dot com]'s comment on openssl_encrypt().
- $iv is, as in the case of $password, a String of bytes. Its length depends on the algorithm used. Maybe the best way to generate an $iv is by:
<?php
$iv = openssl_random_pseudo_bytes(openssl_cipher_iv_length('your algorithm')); // for example your algorithm = 'AES-256-CTR'
?>
http://docs.php.net/manual/it/function.openssl-decrypt.php
2018-03-17T06:29:52
CC-MAIN-2018-13
1521257644701.7
[]
docs.php.net
There's currently an issue in the SQLite3 PHP binding (not SQLite3 itself) that causes all queries to be executed twice. It has apparently existed for quite a while. See here for more info: Before discovering the above, I posted this: (includes copy-paste bug demo) Workaround: I strongly recommend wrapping any code that might run fetchArray() on non-SELECT query results inside a numColumns() check, like this: <?php $op = $db->prepare(...); $r = $op->execute(); // query #1 if ($r->numColumns()) { // returns column count, here being used as true/false test while ($row = $r->fetchArray(SQLITE3_ASSOC)) { // query #2 // your code here } } ?> To clarify: - Query #1 is where the SQLite3 query is executed the first time, query #2 is where the query is executed again. Yes, *everything* is executed twice; this is the bug. - If your code will only read from and not alter the database (so, a SELECT that won't cause database-altering triggers to run, for example), you're fine. Your query runs twice but it doesn't alter the result. - If your code will write to the database - for example an INSERT - you MUST not run fetchArray() (and execute the query again) if the number of columns is zero. - It's not documented in the manual but over here - - user 'bohwaz' mentions that there's also a SQLite3Stmt::readOnly() function since PHP 5.3.11 which will tell you if you just wrote to the DB. This is currently undocumented but might be a more appropriate alternative to numColumns() (I'm not sure what it does, it might be the same). You might prefer PDO for higher-volume work with SQLite3. This binding is ironically lighter-weight and provides direct access to some SQLite3-specific primitives and behavior... but it runs all queries twice. [[Note to moderators (this section may be deleted once it has been read; I'm also fine with feedback on the following): - Please don't consider this comment a bug report - I just want others to be aware of this issue so they don't have to bumble around for hours scratching their heads. :P - As of the submission date of this comment, there's a unapproved diff for this page stuck in DocBook so I can't add something like "due to bug #64531, you are recommended to wrap fetchArray() inside numColumns()...", which I think would carry more weight than this comment until this bug is fixed.]]
http://docs.php.net/manual/it/sqlite3stmt.execute.php
2018-03-17T06:30:05
CC-MAIN-2018-13
1521257644701.7
[]
docs.php.net
Change management for Office 365 clients The client applications that are included with Office 365 are released regularly with updates that provide new features and functionality together with security and other updates. Windows 10 has also adopted this servicing model and is also releasing new functionality regularly. As an IT Professional, you need to understand this servicing model and how you can manage the releases while your organization takes advantage of the new functionality. This article gives you an overview of this servicing model, and helps you understand the release channels and cadence, and how to effectively manage releases of the Office 365 client applications for your organization. In this article: Download this information as a model poster in Visio or PDF format. A Servicing Model for Updates Both Windows 10 and Office 365 have adopted the servicing model for client updates. This means that new features, non-security updates, and security updates are released regularly, so your users can have the latest functionality and improvements. The servicing model also includes time for enterprise organizations to test and validate releases before adopting them. What is the Servicing Model? In a traditional development model it can take months of planning, development, and testing before a large release is ready. Traditional deployments take enterprises years to plan, evaluate test, pilot, and deploy and then maintain the entire environment. The following illustration shows a traditional release model: The following illustration shows a traditional deployment model: In a servicing model, new features and innovations can be developed and released in a quicker cadence, so that customers are always seeing improvements. Because development has changed, so does the deployment process for an enterprise organization. Quicker releases of features means that you can evaluate, pilot, and deploy different sets of features at the same time. The following illustration shows releases in a servicing world: The following illustration shows deployment in a servicing world: What's in it for Enterprise Organizations? You want up-to-date features, but you also want the control and support you need to run your business. With the variety of release programs for Windows 10 and Office 365, you can evaluate new features, pilot them with specific groups in your organization, and then deploy them more broadly to your organization. You can also keep specialized devices on a long-term build if you need to limit changes. Use the update option that's right for your devices and your business needs. We recommend the following: Focus on your business, not managing the software. The following table explains ways you can focus on your business, not managing the software, with the Office 365 client applications. Release Options You need to understand the different release options for Windows 10 and the Office 365 client applications so you can choose the right options for your business. You decide what works and choose the combinations of releases to support for your organization. This section provides an overview and helps you choose. Summary of Release Options Use the following table to choose the right release option for your business needs: Default channels for client applications Office 365 ProPlus includes the following Office applications: Word, Excel, PowerPoint, Outlook, OneNote, Access, Skype for Business, and Publisher. 
Project Online Desktop Client and Visio Pro for Office 365 also follow this release model. The following illustration shows these sets of client applications. By default, the client applications for Office 365 are set to these channel releases: Office 365 ProPlus is set to use Semi-Annual Channel. Project Online Desktop Client and Visio Pro for Office 365 are set to use Monthly Channel. But you can determine which channel is used for your client applications, according to your business needs. For more information about the channels for Office 365 client applications, see Overview of update channels. Which Release Channel? This table shows the release channels for Windows 10 and Office 365 clients. The release channels at the top of the table provide the freshest features, those at the bottom provide the most administrative control. Freshness indicates rapid access to new features and the ability to provide feedback to Microsoft. You'll sacrifice features for extra control - use the most controlled options for specialized devices, such as devices used on factory floors, for air traffic control, or in emergency rooms. Semi-Annual Channel for Office 365 offers a balance between new features and control. Release Cadences It is important to keep both Windows and your Office clients up to date. Office 365 and Windows 10 have aligned release cadences to make this easier. Both will have regular security updates and will have new features releasing twice per year. For specialized devices running Windows 10, there is a long-term servicing channel available which provides regular security updates but only provides new features every few years. Windows 10 Release Cadence Windows 10 has three types of releases: Windows Insider Preview Join the Windows Insider program to evaluate and provide feedback on pre-release features, and perform application compatibility validation testing. New features are released frequently. Semi-Annual Channel After new features have been tested through the Windows Insider program, they are released along with bug fixes and security patches as the Semi-Annual Channel. A new Semi-Annual Channel is released twice a year, around March and September, and is supported for 18 months. Start by deploying to a targeted group of devices that represent the broader organization. Approximately 4 months after a Semi-Annual Channel release, Microsoft will announce broad deployment readiness, indicating that Microsoft, independent software vendors (ISVs), partners, and customers believe that the release is ready for broad deployment. But, organizations can begin broad deployment whenever they're ready. Long-Term Servicing Channel This channel provides a supported OS platform for 10 years. Enterprises can choose to move to a new release or stay where they are at and skip to the next one. The Long-Term Servicing Channel is intended for specialized devices, such as PCs that control medical equipment, point-of-sale systems, or ATMs. If Office is needed, use Office Professional Plus 2016 with these specialized devices. The following illustration shows the relationships between these releases for Windows 10. Read Overview of Windows as a service for more information. Office 365 Client Release Cadence Office 365 clients have the following types of releases: Monthly Channel New features, security updates, and fixes are released to Monthly Channel approximately every month. Semi-Annual Channel (Targeted) You can validate this release for four months before it becomes a Semi-Annual Channel release. 
New features are included only at the beginning of a release, in March and September. This channel is refreshed with non-security updates and security updates every month. Semi-Annual Channel The Semi-Annual Channel (Targeted) is rolled up and released as the Semi-Annual Channel every 6 months, in January and July. No new features are added until the next Semi-Annual Channel, although security updates will continue to be released. Each Semi-Annual Channel feature release is supported for an additional 14 months. This is the default channel for Office 365 ProPlus clients. Read Overview of update channels for more information. The following illustration shows the relationships between these releases for Office 365 ProPlus. Office Mobile Apps Office Mobile Apps for iOS and Android have regular releases available through their respective app stores. Office Mobile Apps for Windows have regular releases available through the Microsoft Store. Office Professional Plus 2016 client installs Security updates are made available for the Office clients that you install by using .MSI files as part of the Office Volume Licensing program. New features are not delivered outside of full product releases. Use, if Office is needed, for specialized devices on the Long-Term Servicing Channel of Windows 10. Deployment tools The following table lists the deployment tools that can be used for Windows 10 and the Office 365 client applications: Types of Changes There are several types of changes that are made to Office 365 on a regular basis. The communication channels for those changes, and the actions that you might have to take for them will vary, depending on the type of change. This section explains the types of changes you can expect, when to expect changes, and what you need to do to be prepared for changes in Office 365. Types of Changes for the Office 365 Service and Client Applications Not all changes have the same impact on your users or require action. Some are planned and some unplanned by their nature (non-security updates and security updates aren't usually planned in advance). Depending on the type of change, the communication channel also varies. The following table lists the types of changes you can expect for the Office 365 service and client applications. Guidelines for managing change when using Office Add-ins: We recommend that customers use Monthly Channel to get the latest feature updates. If you have Office customizations or Add-ins deployed, you can use Semi-Annual Channel, which allows you to wait for feature updates to Office until you have had the chance to test and fix your customizations. To test and fix your customizations before those features updates are applied to Semi-Annual Channel, use Semi-Annual Channel (Targeted). Use the Office Telemetry Dashboard to check Add-ins for compatibility. For more information, see Compatibility and telemetry in Office. If your developers built the Office Add-in, we suggest they update the code and redeploy the custom Office Add-in. If you built your customization using VBA, VSTO, or COM, consider rebuilding your customization as an Office Add-in, or check the Office Store to see if there is a 3rd-party Add-in that provides similar functionality. Consider decommissioning Office Add-in that are no longer used or have low utilization. More information about Office Add-ins. Tips for Testing For functionality changes, you should test against your add-ins and other customizations to see if you need to update them. 
Use these tips for testing: Don't wait - have a pilot team use Monthly Channel to start evaluating new features. Use Semi-Annual Channel (Targeted) if you need a longer lead time for testing. Use an Azure virtual environment to test against your customizations or processes. Align your work with the release schedule - schedule testing passes monthly. Roles and Responsibilities Responsibility for managing change is shared between Microsoft and you as the admin of your Office 365 tenancy. The balance of responsibility is different for an online service than it is for an on-premises server or client. Understand the roles both Microsoft and you need to play before, during, and after a change occurs to the service. Review what's included in each release on the Office 365 client update channel releases page on TechNet. Balance of Responsibility In a service offering, the balance of responsibility for things such as hardware maintenance and security updates shifts to the service provider (Microsoft) instead of the customer (you). However, you still need to ensure that custom software continues to function as expected when updates are rolled out. For on-premises products, your organization takes on most of the responsibility for managing change. Your responsibility for change management is based on the type of service. The following table summarizes the balance of responsibility for both Microsoft and the customer for online services and on-premises software. Microsoft's Role and Your Role Microsoft and you both play a role in managing change for Office 365 before, during, and after a change. Before a change Microsoft's role Set expectations for service changes. Notify customers 30 days in advance for changes that require administrator action. Publish majority of new features and updates on the Office 365 Roadmap. Customer's role Understand what to expect for changes and communications. Read Message Center, Office 365 Roadmap and Office Blog regularly. Set up pilot teams to preview new functionality using Monthly Channel. Review and update internal change management processes. Understand the Office 365 system requirements and check compliance. During a change Microsoft's role Roll change out to customers. Specifically for Office 365 clients: release a new Monthly Channel approximately each month and new security and non-security updates, if needed, for Semi-Annual Channel. Monitor telemetry and support escalations for any unexpected issues. Customer's role Check Message Center and review the additional information link. Take any action required (if applicable) and test any add-ins. If using an internal share for updates, download the latest builds and upload to your share. If a break/fix scenario is experienced, create a Support Request. After a change Microsoft's role Listen to customer feedback to improve rollout of future changes. Listen to feedback on the Office 365 space on the Microsoft Tech Community and in the admin feedback tool. Update Office 365 Roadmap statuses and add new features. Customer's role Work with people in your organization to adopt the change (get help on Microsoft FastTrack). Review change management processes and bottlenecks for opportunities to streamline, and use more Microsoft resources. Provide general feedback on the Office 365 space on the Microsoft Tech Community and specific feedback in the admin feedback tool. Train users to provide app specific feedback using the Smile button in Office apps. 
Manage Update Deployments You choose when and how updates are deployed to your organization by configuring: Which channel to use. This controls how often updates are available. Which update method to use (automatic or manual). This controls how your client computers get the updates. How to Apply Updates You can decide how updates are deployed to your users' computers. You can allow the client computers to automatically receive updates over the Internet or from an on-premises location. Alternatively, you might want to have more control by packaging the updates yourself and manually delivering them to client computers over your network. Methods for applying Office updates to client computers The following table explains three methods you can use to apply Office updates to client computers. The following illustration shows these methods: Do you need to control the delivery of updates? To decide which method to use, consider the following scenarios: Customizations Allow automatic updates for...Users or computers that are primarily for productivity and don't use customizations or integrated solutions. Recommended Channel: Monthly Channel Use manual delivery of updates for...Users or computers that rely on customizations or integrated solutions that work with the Office 365 clients. Recommended Channel: Semi-Annual Channel. Validate by using Semi-Annual Channel (Targeted). Managed or un-managed computers? Allow automatic updates from the Internet for...Consumers and small businesses without an IT department. Deploy from an on-premises location when...You want to control when updates are pushed out to your organization's computers. How to configure channels and update methods Use the following methods to configure which channels are used by which client computers and how those clients computers are updated: Office Deployment Tool Group Policy - for centralized administration of domain-joined computers. Related Topics Overview of update channels for Office 365 ProPlus Overview of Windows as a service Microsoft cloud IT architecture resources Office 365 client update channel releases
https://docs.microsoft.com/en-us/deployoffice/change-management-for-office-365-clients
2018-03-17T06:42:41
CC-MAIN-2018-13
1521257644701.7
[array(['images/f0f92823-29ae-443e-af36-d913c6ba6486.png', 'Servicing model poster'], dtype=object) array(['images/37437b6c-5325-4739-ac6a-0dca404329ca.png', 'Traditional release model'], dtype=object) array(['images/45ce18de-b51b-4bda-8189-c593883ddd80.png', 'Traditional deployment model'], dtype=object) array(['images/64f9e1cf-db58-43bc-945a-9dc643eb6596.png', 'Releases in a servicing world'], dtype=object) array(['images/03839cb7-6e68-4697-9eb4-194c7edba3c4.png', 'Deployment in a servicing world'], dtype=object) array(['images/bc32bed2-b7be-4652-93d0-4f0f90b97e8e.png', 'Office 365 client applications'], dtype=object) array(['images/42471221-f1a5-4bd6-9d7e-5f24a3049330.png', 'Windows 10 release cadence'], dtype=object) array(['images/99a17880-4029-44e9-b478-be4058c30f92.png', 'Office 365 release cadence'], dtype=object) array(['images/c8774868-5fc5-4f07-890a-8ae777d5d44d.png', 'Apply Office updates to clients'], dtype=object) ]
docs.microsoft.com
Walkthrough: Implementing IEnumerable(Of T) in Visual Basic The IEnumerable<T> interface is implemented by classes that can return a sequence of values one item at a time. The advantage of returning data one item at a time is that you do not have to load the complete set of data into memory to work with it. You only have to use sufficient memory to load a single item from the data. Classes that implement the IEnumerable(T) interface can be used with For Each loops or LINQ queries. For example, consider an application that must read a large text file and return each line from the file that matches particular search criteria. The application uses a LINQ query to return lines from the file that match the specified criteria. To query the contents of the file by using a LINQ query, the application could load the contents of the file into an array or a collection. However, loading the whole file into an array or collection would consume far more memory than is required. The LINQ query could instead query the file contents by using an enumerable class, returning only values that match the search criteria. Queries that return only a few matching values would consume far less memory. You can create a class that implements the IEnumerable<T> interface to expose source data as enumerable data. Your class that implements the IEnumerable(T) interface will require another class that implements the IEnumerator<T> interface to iterate through the source data. These two classes enable you to return items of data sequentially as a specific type. In this walkthrough, you will create a class that implements the IEnumerable(Of String) interface and a class that implements the IEnumerator(Of String) interface to read a text file one line at a time. Note Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE. Creating the Enumerable Class The first class in this project is the enumerable class and will implement the IEnumerable(Of String) interface. This generic interface implements the IEnumerable interface and guarantees that consumers of this class can access values typed as String. Using the Sample Iterator You can use an enumerable class in your code together with control structures that require an object that implements IEnumerable, such as a For Next loop or a LINQ query. The following example shows the StreamReaderEnumerable in a LINQ query. Dim adminRequests = From line In New StreamReaderEnumerable("..\..\log.txt") Where line.Contains("admin.aspx 401") Dim results = adminRequests.ToList() See Also Introduction to LINQ in Visual Basic Control Flow Loop Structures For Each...Next Statement
https://docs.microsoft.com/en-us/dotnet/visual-basic/programming-guide/language-features/control-flow/walkthrough-implementing-ienumerable-of-t
2018-03-17T06:42:39
CC-MAIN-2018-13
1521257644701.7
[]
docs.microsoft.com
Assigning Nodes to Environments Included in Puppet Enterprise 2016.2. A newer version is available; see the version menu above for details. By default, all nodes are assigned to a default environment named production. There are two ways to assign nodes to a different environment: - Via your ENC or node terminus - Via each agent node's puppet.conf Note: If you have Puppet Enterprise (PE), you can use the PE console to set the environment for your nodes. The value from the ENC is authoritative, if it exists. If the ENC doesn't specify an environment, the node's config value is used. Assigning environments via an ENC The interface to set the environment for a node will be different for each ENC. Some ENCs cannot manage environments. When writing an ENC, simply ensure that the environment: key is set in the YAML output that the ENC returns. See the documentation on writing ENCs for details. If the environment key isn't set in the ENC's YAML output, the Puppet master will just use the environment requested by the agent. Assigning environments via the agent's config file In puppet.conf on each agent node, you can set the environment setting in either the agent or main config section, as sketched in the example below. When that node requests a catalog from the Puppet master, it will request that environment. If you are using an ENC and it specifies an environment for that node, it will override whatever is in the config file. Non-existent environments Nodes can't be assigned to unconfigured environments. If a node is assigned to an environment which doesn't exist — that is, there is no directory of that name in any of the environmentpath directories — the Puppet master will fail compilation of its catalog. The one exception is if the default production environment doesn't exist. In this case, the agent will successfully retrieve an empty catalog.
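For reference, a minimal sketch of that agent-side assignment in puppet.conf; the environment name is just an example, and the file path can vary by platform and install method:

```
# /etc/puppetlabs/puppet/puppet.conf on the agent node
[agent]
    environment = testing
```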
https://docs.puppet.com/puppet/4.5/environments_assigning.html
2018-03-17T06:16:53
CC-MAIN-2018-13
1521257644701.7
[]
docs.puppet.com
Language: Data types: Default Included in Puppet Enterprise 2016.2. A newer version is available; see the version menu above for details. Puppet’s special default value usually acts like a keyword in a few limited corners of the language. Less commonly, it can also be used as a value in other places. Syntax The only value in the default data type is the bare word default. Usage The special default value is used in a few places: Cases and selectors In case statements and selector expressions, you can use default as a case, where it causes special behavior. Puppet will only try to match a default case last, after it has tried to match against every other case. Per-block resource defaults You can use default as the title in a resource declaration to invoke special behavior. (For details, see Resources (Advanced).) Instead of creating a resource and adding it to the catalog, the special default resource sets fallback attributes that can be used by any other resource in the same resource expression. That is: file { default: mode => '0600', owner => 'root', group => 'root', ensure => file, ; '/etc/ssh_host_dsa_key': ; '/etc/ssh_host_key': ; '/etc/ssh_host_dsa_key.pub': mode => '0644', ; '/etc/ssh_host_key.pub': mode => '0644', ; } All of the resources in the block above will inherit attributes from default unless they specifically override them. max length parameter is infinity, which can’t be represented in the Puppet language. These parameters will often let you provide a value of default to say you want the otherwise-unwieldy default value. Anywhere else You can also use the value default anywhere you aren’t prohibited from using it. In these cases, it generally won’t have any special meaning. There are a few reasons you might want to do this. The main one would be if you were writing a class or defined resource type and wanted to give users the option to specifically request a parameter’s default value. Some people have used undef to do this, but that’s tricky when dealing with parameters where undef would, itself, be a meaningful value. Others have used some gibberish value, like the string "UNSET", but this can be messy. In other words, using default would let you distinguish between: - A chosen “real” value - A chosen value of undef - Explicitly declining to choose a value, represented by default In other other words, default can be useful when you need a truly meaningless value. The Default data type The data type of default is Default. It matches only the value default, and takes no parameters. Example Variant[String, Default, Undef]— matches undef, default, or any string.
https://docs.puppet.com/puppet/4.5/lang_data_default.html
2018-03-17T06:23:34
CC-MAIN-2018-13
1521257644701.7
[]
docs.puppet.com
Upgrading Report Server To upgrade an existing Report Server installation, you can run the Telerik Report Server Installer. Telerik Report Server installer replaces existing files in the selected installation folder. The installer will preserve existing data storage and configuration files - ReportServerAdmin.config and ServiceAgent.config. The installer will not preserve any other config files such as Web.config and Telerik.ReportServer.ServiceAgent.exe.config. Thus, it is recommended to perform a file backup before upgrading. On first launch of the Report Server you can select to re-use the same data storage folder from the previous installation. The Report Server installer will not run if it detects an already installed Report Server of the same or greater version.
https://docs.telerik.com/report-server/implementer-guide/setup/upgrade
2018-03-17T06:06:48
CC-MAIN-2018-13
1521257644701.7
[]
docs.telerik.com
About Administrating Adobe Campaign As a cloud-based solution, Adobe Campaign offers administrators different ways to configure the application. Though the infrastructure configuration is performed by Adobe, functional administrators can: - Invite users to access the application and manage groups of users as well as their rights and roles. - Configure external accounts, which are used to connect Adobe Campaign to external servers. - Adjust and configure routing parameters for all communication channels. - Monitor the platform by accessing technical workflows. - Import and export packages as well as extend the data model to add new fields or resources. The different menus available are: - Users & Security: This menu allows you to manage access to the platform (users, roles, security groups, units). - Application settings: This menu allows you to configure different application elements (external accounts, options, technical workflows). - Development: This menu allows you to manage your custom resources and access diagnostic tools. - Instance settings: This menu is where you define your different brands and configure their settings (logo, tracking management, URL domain to access the landing pages, etc.). - Deployment: This menu groups the package import and export options. - Customer metrics: Adobe Campaign provides a report that displays the number of active profiles. This report is only informative; it doesn't have a direct impact on billing. - Privacy Tools: This menu allows you to create GDPR access and delete requests and track their progress.
https://docs.adobe.com/content/help/en/campaign-standard/using/administrating/about-administrating-adobe-campaign.html
2020-02-17T08:44:08
CC-MAIN-2020-10
1581875141749.3
[array(['/content/dam/help/campaign-standard.en/help/administration/using/assets/admin_overview.png', None], dtype=object) array(['/content/dam/help/campaign-standard.en/help/administration/using/assets/admin_overview2.png', None], dtype=object) ]
docs.adobe.com
Top 18 trends in the embedded market Windows for Devices just published an interesting article about research from VDC on the embedded market trends. You can find the more detailed article here. I am curious to know what you think of this.
https://docs.microsoft.com/en-us/archive/blogs/obloch/top-18-trends-in-the-embedded-market
2020-02-17T08:11:27
CC-MAIN-2020-10
1581875141749.3
[]
docs.microsoft.com
Failed to delete Storage account 'xyz07bgys83j7hk'. I was playing around with Windows Azure Virtual Machines the last couple of days, and after deleting a bunch of them I noticed I had some storage containers that were obviously related but no longer necessary. The most obvious thing to do at this stage is to delete them, right? That's too simple (but actually still correct). If you are reading this post, it's because you most likely tried the obvious way to delete the storage items and ran into the "Failed to delete Storage account" error above.
https://docs.microsoft.com/en-us/archive/blogs/karldb/failed-to-delete-storage-account-xyz07bgys83j7hk
2020-02-17T08:21:29
CC-MAIN-2020-10
1581875141749.3
[]
docs.microsoft.com
Introduction¶ The Workbench Engine is a render engine optimized for fast rendering during modeling and animation preview. It is not intended to be a render engine that will render final images for a project. Its primary task is to display a scene in the 3D Viewport while it is being worked on. Note While it's not intended for final renders, the Workbench render engine can be selected as the Render Engine in the Render properties. By default the 3D Viewport uses Workbench to shade and light objects. Shading settings can be tweaked in the 3D Viewport's Shading popover. Workbench supports assigning random colors to objects to make each visually distinct. Other coloring mechanisms also exist, including materials, vertex colors, and textures. Workbench also has an X-ray mode to see through objects, along with cavity and shadow shading to help display details in objects. Workbench supports several lighting mechanisms, including studio lighting and MatCaps. The image below is an excellent example of the Workbench engine's capabilities, using random coloring and shadows to show the details of the model.
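For scripted workflows, the same choices can be made from Python. The snippet below is a small sketch using Blender's bpy API; it assumes it runs inside Blender (for example from the Text Editor or the --python command-line option) and that the default Workbench shading options are a reasonable starting point.

```python
import bpy

# Select the Workbench engine for renders of the current scene.
scene = bpy.context.scene
scene.render.engine = 'BLENDER_WORKBENCH'

# Tweak the shading used by Workbench (these mirror the Shading popover options).
shading = scene.display.shading
shading.color_type = 'RANDOM'   # assign random colors per object
shading.show_cavity = True      # emphasize surface detail
shading.show_shadows = True     # simple shadowing

# Render a single frame with these settings.
bpy.ops.render.render(write_still=False)
```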
https://docs.blender.org/manual/ru/dev/render/workbench/introduction.html
2020-02-17T08:02:23
CC-MAIN-2020-10
1581875141749.3
[]
docs.blender.org
Cancellation Token. None Property Definition Returns an empty CancellationToken value. public: static property System::Threading::CancellationToken None { System::Threading::CancellationToken get(); }; public static System.Threading.CancellationToken None { get; } member this.None : System.Threading.CancellationToken Public Shared ReadOnly Property None As CancellationToken Property Value An empty cancellation token. Remarks The cancellation token returned by this property cannot be canceled; that is, its CanBeCanceled property is false. You can also use the C# default(CancellationToken) statement to create an empty cancellation token. Two empty cancellation tokens are always equal.
https://docs.microsoft.com/en-us/dotnet/api/system.threading.cancellationtoken.none?view=netframework-4.7.2
2020-02-17T07:20:48
CC-MAIN-2020-10
1581875141749.3
[]
docs.microsoft.com
New in this release The new version of the add-on also includes the following features: - Added the ability to collapse attachment categories, with each user's collapse settings saved. - Added a link to the comment next to the attachment. Fixed in this release - Fixed an issue that made it impossible to move attachments across categories in Chrome.
https://docs.stiltsoft.com/display/CATAT/Smart+Attachments+1.1.0
2020-02-17T08:08:49
CC-MAIN-2020-10
1581875141749.3
[]
docs.stiltsoft.com
Managing packages¶ On the Navigator Environments tab, the packages table in the right column lists the packages included in the environment selected in the left column. Note Packages are managed separately for each environment. Changes you make to packages only apply to the active environment. Tip Click a column heading in the table to sort the table by package name, description, or version. Tip The Update Index button updates the packages table with all packages that are available in any of the enabled channels. Filtering the packages table¶ By default, only installed packages are shown in the packages table. To filter the table to show different packages, click the arrow next to Installed, then select which packages to display: Installed, Not installed, Updatable, Selected, or All. Note Selecting the Updatable filter lists packages that are installed and have updates available. Finding a package¶ In the Search Packages box, type the name of the package. Installing a package¶ Select the Not Installed filter to list all packages that are available in the environment’s channels but are not installed. Note Only packages that are compatible with your current environment are listed. Select the name of the package you want to install. Click the Apply button. Review the Install Packages information. You can filter your packages to be installed by Name, Unlink, Link, and Channel. Unlink indicates what is being removed. Link is what is being installed in place of the package that is unlinked. Channel shows from where the package is being installed. Packages are in a cache and they rely on other packages using hard links, which essentially point to a package instead of copying it to the environment. Unlink removes the hard link to that package. If the package you are trying to install is a dependency of other packages, the Link column will show the hard link to the package version that is being created in order to install your selected package. Tip If after installing a new package it doesn’t appear in the packages table, select the Home tab, then click the Refresh button to reload the packages table. Updating a package¶ Select the Updatable filter to list all installed packages that have updates available. Click the checkbox next to the package you want to update, then in the menu that appears select Mark for Update. OR In the Version column, click the blue up arrow that indicates there is a newer version available. Click the Apply button. Installing a different package version¶ Click the checkbox next to the package whose version you want to change. In the menu that appears, select Mark for specific version installation. If other versions are available for this package, they are displayed in a list. Click the package version you want to install. Click the Apply button. Removing a package¶ Click the checkbox next to the package you want to remove. In the menu that appears, select Mark for removal. Click the Apply button. Advanced package management¶ Navigator provides a convenient graphical interface for managing conda environments, channels, and packages. But if you’re comfortable working with Anaconda Prompt (terminal on Linux or macOS), you can access additional, advanced management features. To learn more, see Managing packages with conda.
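If you prefer to drive the same package operations from a script rather than from Navigator or an interactive prompt, the conda command line can be invoked from Python. This is only an illustrative sketch; the environment name and package names are placeholders, and it assumes conda is on your PATH.

```python
import json
import subprocess

ENV_NAME = "myenv"  # placeholder environment name

def conda(*args: str) -> str:
    """Run a conda command, requesting JSON output, and return its stdout."""
    result = subprocess.run(["conda", *args, "--json"],
                            check=True, capture_output=True, text=True)
    return result.stdout

# Install, update, and remove packages in a named environment (non-interactive).
conda("install", "--name", ENV_NAME, "--yes", "numpy", "pandas")
conda("update", "--name", ENV_NAME, "--yes", "numpy")
conda("remove", "--name", ENV_NAME, "--yes", "pandas")

# List what is currently installed.
for pkg in json.loads(conda("list", "--name", ENV_NAME)):
    print(pkg["name"], pkg["version"])
```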
https://docs.anaconda.com/anaconda/navigator/tutorials/manage-packages/
2020-02-17T08:16:13
CC-MAIN-2020-10
1581875141749.3
[]
docs.anaconda.com
How to analyze my past successes? With Goalify you can analyze your past success in great detail. Open your goal, open the options menu on the lower right and choose the analysis option. The following features are available: - Choose your preferred timeframe - Show or hide targets - Switch between achievement level (a percentage value) and recorded values. This feature is available to subscribers of the Unlimited-Edition. Users of the free edition can try it out multiple times.
https://docs.goalifyapp.com/how_to_analyze_my_past_successes.en.html
2020-02-17T07:47:47
CC-MAIN-2020-10
1581875141749.3
[]
docs.goalifyapp.com
Projects¶ Introduction¶ For a full reference of the Project API, please see the Python API. A signac project is a conceptual entity consisting of three components: - a data space, - scripts and routines that operate on that space, and - the project’s documentation. This division corresponds largely to the definition of a computational project outlined by Wilson et al. The primary function of signac is to provide a single interface between component (2), the scripts encapsulating the project logic, and component (1), the underlying data generated and manipulated by these operations. By maintaining a clearly defined data space that can be easily indexed, signac can provide a consistent, homogeneous data access mechanism. In the process, signac’s maintenance of the data space also effectively functions as an implicit part of component (3), the project’s documentation. Project Initialization¶ In order to use signac to manage a project’s data, the project must be initialized as a signac project. After a project has been initialized in signac, all shell and Python scripts executed within or below the project’s root directory have access to signac’s central facility, the signac project interface. The project interface provides simple and consistent access to the project’s underlying data space. 1 - 1 You can access a project interface from other locations by explicitly specifying the root directory. To initialize a project, simply execute $ signac init <project-name> on the command line inside the desired project directory (create a new project directory if needed). For example, to initialize a signac project named MyProject in a directory called my_project, execute: $ mkdir my_project $ cd my_project $ signac init MyProject You can alternatively initialize your project within Python with the init_project() function: >>> project = signac.init_project('MyProject') This will create a configuration file which contains the name of the project. The directory that contains this configuration file is the project’s root directory. The Data Space¶ The project data space is stored in the workspace directory. By default this is a sub-directory within the project’s root directory named workspace. Once a project has been initialized, any data inserted into the data space will be stored within this directory. This association is not permanent; a project can be reassociated with a new workspace at any time, and it may at times be beneficial to maintain multiple separate workspaces for a single project. You can access your signac Project and the associated data space from within your project’s root directory or any subdirectory from the command line: $ signac project MyProject Or with the get_project() function: >>> import signac >>> project = signac.get_project() >>> print(project) MyProject Jobs¶ The central assumption of the signac data model is that the data space is divisible into individual data points, consisting of data and metadata, which are uniquely addressable in some manner. Specifically, the workspace is divided into sub-directories, where each directory corresponds to exactly one Job. Each job has a unique address, which is referred to as a state point. A job can consist of any type of data, ranging from a single value to multiple terabytes of simulation data; signac’s only requirement is that this data can be encoded in a file. A job is essentially just a directory on the file system, which is part of a project workspace. 
That directory is called the job workspace and contains all data associated with that particular job. You access a job by providing a state point, which is a unique key-value mapping describing your data. All data associated with your job should be a unique function of the state point, e.g., the parameters that go into your physics or machine learning model. For example, to store data associated with particular temperature or pressure of a simulation, you would first initialize a project, and then open a job like this: project = get_project('path/to/my_project') job = project.open_job({'temperature': 20, 'pressure': 1.0}) job.init() with open(job.fn('results.txt')) as file: ... Tip You only need to call the Job.init() function the first time that you are accessing a job. Furthermore, the Job.init() function returns itself, so you can abbreviate like this: job = project.open_job({'temperature': 20, 'pressure': 1.0}).init() The job state point represents a unique address of your data within one project. There can never be two jobs that share the same state point within the same project. Any other kind of data and metadata that describe your job, but do not represent a unique address should be stored within the Job.doc, which has the exact same interface like the Job.sp, but does not represent a unique address of the job. Tip The Job interface and the various methods of storing data are described in detail in the Jobs section. In addition to obtaining a job handle via the project open_job() function, you can also access it directly with the signac.get_job() function. For example, you can get a handle on a job by switching into the workspace directory and then calling signac.get_job(): >>> import signac >>> job = signac.get_job() >>> print(job) 42b7b4f2921788ea14dac5566e6f06d0 Finding jobs¶ In general, you can iterate over all initialized jobs using the following idiom: for job in project: pass This notation is shorthand for the following snippet of code using the Project.find_jobs() method: for job in project.find_jobs(): pass However, the find_jobs() interface is much more powerful in that it allows filtering for subsets of jobs. For example, to iterate over all jobs that have a state point parameter b=0, execute: for job in project.find_jobs({'b': 0}): pass For more information on how to search for specific jobs in Python and on the command line, please see the Query API chapter. Grouping¶ Grouping operations can be performed on the complete project data space or the results of search queries, enabling aggregated analysis of multiple jobs and state points. The return value of the find_jobs() method is a cursor that we can use to iterate over all jobs (or all jobs matching an optional filter if one is specified). This cursor is an instance of JobsCursor and allows us to group these jobs by state point parameters, the job document values, or even arbitrary functions. Note The groupby() method is very similar to Python’s built-in itertools.groupby() function. Basic Grouping by Key¶ Grouping can be quickly performed using a statepoint or job document key. If a was a state point variable in a project’s parameter space, we can quickly enumerate the groups corresponding to each value of a like this: for a, group in project.groupby('a'): print(a, list(group)) Similarly, we can group by values in the job document as well. 
Here, we group all jobs in the project by a job document key b: for b, group in project.groupbydoc('b'): print(b, list(group)) Grouping by Multiple Keys¶ Grouping by multiple state point parameters or job document values is possible by passing an iterable of fields that should be used for grouping. For example, we can group jobs by state point parameters c and d: for (c, d), group in project.groupby(('c', 'd')): print(c, d, list(group)) Searching and Grouping¶ We can group a data subspace by combining a search with a group-by function. As an example, we can first select all jobs where the state point key e is equal to 1 and then group them by the state point parameter f: for f, group in project.find_jobs({'e': 1}).groupby('f'): print(f, list(group)) Custom Grouping Functions¶ We can group jobs by essentially arbitrary functions. For this, we define a function that expects one argument and then pass it into the groupby() method. Here is an example using an anonymous lambda function as the grouping function: for (d, count), group in project.groupby(lambda job: (job.sp['d'], job.document['count'])): print(d, count, list(group)) Moving, Copying and Removal¶ In some cases it may be desirable to divide or merge a project data space. To move a job to a different project, use the move() method: other_project = get_project(root='/path/to/other_project') for job in jobs_to_move: job.move(other_project) Copy a job from a different project with the clone() method: project = get_project() for job in jobs_to_copy: project.clone(job) Trying to move or copy a job to a project which already has an initialized job with the same state point will trigger a DestinationExistsError. Warning While moving is a cheap renaming operation, copying may be much more expensive since all of the job's data will be copied from one workspace into the other. To clear all data associated with a specific job, call the clear() method. Note that this function will do nothing if the job is uninitialized; the reset() method will also clear all data associated with a job, but it will also automatically initialize the job if it was not originally initialized. To permanently delete a job and its contents, use the remove() method: job = project.open_job(statepoint) job.remove() assert job not in project Centralized Project Data¶ To support the centralization of project-level data, signac offers simple facilities for placing data at the project level instead of associating it with a specific job. For one, signac provides a project document and project data analogous to the job document and job data. The project document is stored in JSON format in the project root directory and can be used to store similar types of data to the job document. >>> project = signac.get_project() >>> project.doc['hello'] = 'world' >>> print(project.doc().get('hello')) 'world' >>> print(project.doc.hello) 'world' The project data is stored in HDF5 format in a file named signac_data.h5 in the project root directory. Although it can be used to store similar types of data as the job document, it is meant for storage of large, array-like or dictionary-like information. >>> project = signac.get_project() >>> project.data['x'] = np.ones([10, 3, 4]) Data may be accessed as an attribute, key, or through a functional interface: To access data as an attribute: >>> with project.data: ... x = project.data.x[:] To access data as a key: >>> with project.data: ... x = project.data['x'][:] To access data through a functional interface: >>> with project.data: ... 
x = project.data.get('x')[:] In addition, signac also provides the signac.Project.fn() method, which is analogous to the Job.fn() method described above: >>> print(project.root_directory()) '/home/johndoe/my_project/' >>> print(project.fn('foo.bar')) '/home/johndoe/my_project/foo.bar' Warning Be careful when accessing the project-level data concurrently from different running jobs, as the underlying HDF5 file is locked by default, even when data is only being read from. When trying to read concurrently you may get the following exception: OSError: Unable to open file (unable to lock file, errno = 11, error message = 'Resource temporarily unavailable'). If data will only be read concurrently, the environment variable HDF5_USE_FILE_LOCKING can safely be set to FALSE to avoid this behavior. For concurrent writing and reading, try using one of the following approaches: Schema Detection¶ While signac does not require you to specify an explicit state point schema, it is always possible to deduce an implicit semi-structured schema from a project’s data space. This schema is comprised of the set of all keys present in all state points, as well as the range of values that these keys are associated with. Assuming that we initialize our data space with two state point keys, a and b, where a is associated with some set of numbers and b contains a boolean value: for a in range(3): for b in (True, False): project.open_job({'a': a, 'b': b}).init() Then we can use the detect_schema() method to get a basic summary of keys within the project’s data space and their respective range: >>> print(project.detect_schema()) { 'a': 'int([0, 1, 2], 3)', 'b': 'bool([False, True], 2)', } This functionality is also available directly from the command line: $ signac schema { 'a': 'int([0, 1, 2], 3)', 'b': 'bool([False, True], 2)', } Importing and Exporting Data¶ Data archival is important to preserving the integrity, utility, and shareability of a project. To this end, signac provides interfaces for importing workspaces from and exporting workspaces to directories, zip-files, and tarballs. The exported project archives are useful for publishing data, e.g., for researchers wishing to make an original data set available alongside a publication. Exporting a Workspace¶ Exporting a project could be as simple as zipping the project files and workspace paths ( $ zip -r project_archive.zip /data/my_project/). The functionality provided by signac export is a bit more fine-grained and allows the use of a custom path structure or the export of a subset of the jobs based on state point or document filters or by job id. For example, suppose we have a project stored locally in the path /data/my_project and want to export it to a directory /data/my_project_archive. The project’s jobs are assumed to have state point keys a and b with integer values. We would first change into the root directory of the project that we want to export and then call signac export with the target path: $ cd /data/my_project $ signac export /data/my_project_archive This would copy data from the source project to the export directory with the following directory structure: /data/my_project_archive/a/0/b/0/ /data/my_project_archive/a/0/b/1/ /data/my_project_archive/a/0/b/2/ # etc. 
The default path function is based on the implicit schema of all exported jobs, but we can also optionally specify a specific export path, for example like this: $ signac export /data/my_project_archive "a_{a}/b_{b}" It is possible to directly export to a zip-file or tarball by simply providing the path to the archive-file as target (e.g. $ signac export /data/my_project_archive.zip). For more details on how to use signac export, type $ signac export --help or see the documentation for the export_to() method. Importing a Data Space¶ Importing a data space into a signac workspace means mapping all directories that are part of an arbitrary directory structure to signac job state points. That is easiest when one imports a previously exported workspace, which will still contain all state point files. For example, we could first export our workspace in ~/my_project to ~/data/ with ~/my_project $ signac export ~/data/ and then import the exported data into a second project: ~/my_new_project $ signac import ~/data/ Since the imported data space was previously exported with signac, all state point metadata is automatically determined from the state point manifest files. In the case that we want to import a data space that was not previously exported with signac, we need to provide a schema-function. In the simplest case, that is just a function based on the data space paths, e.g., $ signac import /data/non_signac_archive "a_{a:int}/b_{b:int}" The command above will copy all data from the /data/non_signac_archive directory and use the paths of sub-directories to identify the associated state points. For example, the path a_0/b_1 will be interpreted as {'a': 0, 'b': 1}. The type specification – here int for both a and b – is optional and means that these values are converted to type int; the default type is str. Importing from zip-files and tarballs works similarly, by specifying that path as the origin. For more details on how to use signac import, type $ signac import --help or see the documentation for import_from(). Linked Views¶ Data space organization by job id is both efficient and flexible, but the obfuscation introduced by the job id makes inspecting the workspace on the command line or via a file browser much harder. A linked view is a directory hierarchy with human-interpretable names that link to the actual job workspace directories. Unlike the default mode for data export, no data is copied for the generation of linked views. See create_linked_view() for the Python API. To create views from the command line, use the $ signac view command. Important When the project data space is changed by adding or removing jobs, simply update the view by executing create_linked_view() or $ signac view for the same view directory again. You can limit the linked view to a specific data subset by providing a set of job ids to the create_linked_view() method. This works similarly for $ signac view on the command line, but here you can also specify a filter directly: $ signac view -f a 0 will create a linked view for all jobs where a=0. Synchronization¶ In some cases it may be necessary to store a project at more than one location, perhaps for backup purposes or for remote execution of data space operations. In this case there will be a regular need to synchronize these data spaces.
Synchronization of two projects can be accomplished by either using rsync to directly synchronize the respective workspace directories, or by using signac sync, a tool designed for more fine-grained synchronization of project data spaces. Users who are familiar with rsync will recognize that most of the core functionality and API of rsync is replicated in signac sync. As an example, let’s assume that we have a project stored locally in the path /data/my_project and want to synchronize it with /remote/my_project. We would first change into the root directory of the project that we want to synchronize data into. Then we would call signac sync with the path of the project that we want to synchronize with: $ cd /data/my_project $ signac sync /remote/my_project This would copy data from the remote project to the local project. For more details on how to use signac sync, type $ signac sync --help. Projects can also be synchronized using the Python API: project.sync('/remote/my_project')
https://docs.signac.io/en/latest/projects.html
2021-02-25T07:48:47
CC-MAIN-2021-10
1614178350846.9
[array(['_images/signac_data_space.png', '_images/signac_data_space.png'], dtype=object) ]
docs.signac.io
Basic WinForms & ASP.NET Tutorial (Simple Project Manager Demo) - 3 minutes to read This: - a user can view, search, filter, print, create, update, and delete employees, projects and task data; - a user can view, search, filter, print, create, update, and delete customers and comments about a product to organize and provide data for a promotional website. In this tutorial, you write the platform-agnostic business code which affects application aspects that are not specific to WinForms and ASP.NET Web Forms. XAF automatically generates UI screens and specifies database access.'s Lessons - Create an XAF Application - create a WinForms and ASP.NET XAF application and specify their connection strings. - Define the Logical Data Model and Relationships - define the business model that serves as a base for the application's CRUD UI. - Customize the Application UI and Behavior - customize the auto-generated UI's structure and metadata and implement custom user interaction. - Reuse Implemented Functionality - add modules to your applications to enable additional functionality. To see the result, run the SimpleProjectManager demo installed in %PUBLIC%\Documents\DevExpress Demos 20.2\Components.NET Core Desktop Libraries\eXpressApp Framework\SimpleProjectManager or use the online version at. Learn More - Try XAF demos. - Visit the XAF home page. - Follow the In-Depth Tutorial. NOTE If you need assistance with your XAF application, submit a new ticket to the Support Center.
https://docs.devexpress.com/eXpressAppFramework/113496/getting-started/basic-tutorial-winforms-aspnet?p=netcore
2021-02-25T08:33:04
CC-MAIN-2021-10
1614178350846.9
[]
docs.devexpress.com
Loading. Emacs can also load compiled dynamic modules: shared libraries that provide additional functionality for use in Emacs Lisp programs, just like a package written in Emacs Lisp would. When a dynamic module is loaded, Emacs calls a specially-named initialization function which the module needs to implement, and which exposes the additional functions and variables to Emacs Lisp programs. For on-demand loading of external libraries which are known in advance to be required by certain Emacs primitives, see Dynamic Libraries. Licensed under the GNU GPL license.
https://docs.w3cub.com/elisp/loading
2021-02-25T08:22:46
CC-MAIN-2021-10
1614178350846.9
[]
docs.w3cub.com
There may be times when you need to place files onto your space on your Web server. There are a number of scenarios when this might be necessary: - You're working with an application that allows you to install plugins/extensions, but the files need to be manually moved to the server in order to add them. (Note: This is NOT required with WordPress, which allows you to install plugins through the backend in your browser.) - You've developed a custom site/pages using a Web design program, and you need to upload the files you created to the server - You're installing an application that isn't part of the applications in Installatron. One way to upload files is by using the File Manager that is part of cPanel. However, sometimes you'll find it easier/necessary to use FTP, or File Transfer Protocol, to move files to the server, but you should be able to generalize these instructions to use in any FTP client. Get Information about Your FTP Account If you're FTPing to your own space on the Web server, or if you're setting up an FTP account for someone else to use to FTP to your space, you'll need to start by getting information about the FTP credentials from cPanel: - Log in to plymouthcreate.net. - You'll have the option to create a new FTP account, or you can scroll down the page to find the credentials for the default FTP account. If you want to create an account, fill out the Add FTP Account form with a username and password. By default, the new account is limited to a specific directory of your space on the Web server. - For whichever account you need credentials for, click the Configure FTP Client link. Configure FTP in Your FTP Client Below are links to tutorials for setting up both FileZilla and CyberDuck to connect to your FTP account.
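Once the credentials from cPanel are in hand, uploads can also be scripted instead of going through a graphical FTP client. Below is a small illustration using Python's standard ftplib; the host name, account name, password, and file paths are placeholders for the values shown under the Configure FTP Client link in cPanel.

```python
from ftplib import FTP  # use ftplib.FTP_TLS instead if your host supports FTPS

HOST = "ftp.example-create-site.net"   # placeholder: your server's FTP host
USER = "[email protected]"   # placeholder: FTP account from cPanel
PASSWORD = "your-ftp-password"         # placeholder

LOCAL_FILE = "index.html"
REMOTE_DIR = "public_html"             # typical web root; adjust for your account

with FTP(HOST) as ftp:
    ftp.login(USER, PASSWORD)
    ftp.cwd(REMOTE_DIR)
    with open(LOCAL_FILE, "rb") as fh:
        ftp.storbinary(f"STOR {LOCAL_FILE}", fh)
    print(ftp.nlst())  # list the remote directory to confirm the upload
```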
http://docs.uofsccreate.org/uncategorized/setting-up-ftp/
2021-02-25T07:18:33
CC-MAIN-2021-10
1614178350846.9
[]
docs.uofsccreate.org
Notch technology is the connecting layer in the ecosystem of motion analysis apps and products developed by Notch Interfaces and our partners. Notch technology includes Notch devices, SDKs and APIs made available to all participants in Notch Pioneer developer program. Notch Pioneer is our program for people who are interested in using our technology to develop new apps and products for healthcare, wellness, sports, and entertainment. Over 4000 people have joined the program and started using Notch devices and software since the public launch of Notch Pioneer in 2017. “Powered by Notch” is the program that allows companies to partner with us to bring to market new products developed using our technology and manufacturing capabilities. An example of such a product is 4D Motion Sports - a platform for golf and baseball analysis, used by professional and amateur athletes and their coaches around the world. n+Notch is the line of products developed by Notch Interfaces internally to showcase our technology and contribute to the advancement of the understanding of human motion throughout consumer markets. The first of such products is Yoganotch - sensor-powered AI yoga assistant. Please note - we are perfectly fine having “powered by Notch” and n+Notch products exist in the same markets. As a platform company, we do not see our internal initiatives as competitive to the products provided by our partners. So, if you have a yoga product in mind and would be interested in using Notch as a platform for its launch - by all means, reach out to powereby[a]wearnotch.com and we will be happy to share our findings and success with you.
http://docs.wearnotch.com/docs/poweredby_ecosystem/
2021-02-25T07:47:45
CC-MAIN-2021-10
1614178350846.9
[]
docs.wearnotch.com
Initializing the cluster. A new cluster is set up through the Couchbase Web Console: the welcome screen, accepting the terms and conditions, configuring a new cluster or joining an existing one, selecting services, and setting memory quotas. The Query Service requires no RAM-allocation, while the Index Service hosts Global Secondary Indexes (GSIs). The same provisioning can be performed with the CLI or the REST API, where the administration port is passed as -d port=[desired-rest-api-port|SAME].
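Because only fragments of the command examples survive above, the sketch below illustrates the general pattern of provisioning a node over the REST interface from Python rather than the shell. The host, credentials, and quota values are placeholders; the /settings/web call (where port=SAME keeps the current REST API port, mirroring the -d port parameter above) and the /pools/default quota call are the usual provisioning endpoints, but verify them against the REST reference for your Couchbase Server version.

```python
import requests

HOST = "http://127.0.0.1:8091"                 # placeholder node address
ADMIN, PASSWORD = "Administrator", "password"  # placeholder credentials

# Set the administrator credentials; port=SAME keeps the current REST API port.
resp = requests.post(
    f"{HOST}/settings/web",
    data={"username": ADMIN, "password": PASSWORD, "port": "SAME"},
)
resp.raise_for_status()

# Example follow-up call: set the data-service memory quota for the cluster.
resp = requests.post(
    f"{HOST}/pools/default",
    auth=(ADMIN, PASSWORD),
    data={"memoryQuota": 1024},  # MB; adjust to your hardware
)
resp.raise_for_status()
print("Node provisioned")
```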
https://docs.couchbase.com/server/5.1/install/init-setup.html
2021-02-25T08:07:46
CC-MAIN-2021-10
1614178350846.9
[array(['_images/admin/welcome.png', 'welcome'], dtype=object) array(['_images/admin/setUpNewCluster01.png', 'setUpNewCluster01'], dtype=object) array(['_images/admin/TsAndCs01.png', 'TsAndCs01'], dtype=object) array(['_images/admin/registerForUpdates01.png', 'registerForUpdates01'], dtype=object) array(['_images/admin/configureNewCluster01.png', 'configureNewCluster01'], dtype=object) array(['_images/admin/dashboard01.png', 'dashboard01'], dtype=object) array(['_images/admin/joinClusterInitial.png', 'joinClusterInitial'], dtype=object) array(['_images/admin/joinWithCustomConfig.png', 'joinWithCustomConfig'], dtype=object) array(['_images/admin/joinClusterServiceCheckboxes.png', 'joinClusterServiceCheckboxes'], dtype=object) array(['_images/admin/joinExistingNewServiceSettings.png', 'joinExistingNewServiceSettings'], dtype=object) array(['_images/admin/joinClusterMemoryQuotaSaved.png', 'joinClusterMemoryQuotaSaved'], dtype=object) ]
docs.couchbase.com
Introduction Service Directory Mapping allows organizations to use their LDAP Directory for authentication and authorization in Kong Enterprise. After starting Kong Enterprise with the desired configuration, you can create new Admins whose usernames match those in your LDAP directory. Those users will then be able to accept invitations to join Kong Manager and log in with their LDAP credentials. How Service Directory Mapping works in Kong: - Roles are created in Kong Enterprise using the Admin API or Kong Manager. - Groups are created and roles are associated with the groups. - When users log in to Kong Manager, they get permissions based on the group(s) they belong to. For example, if a User's Group changes in the Service Directory, their Kong Admin account's associated Role also changes in Kong Enterprise the next time they log in to Kong Manager. The mapping removes the task of manually managing access in Kong Enterprise, as it makes the Service Directory the system of record. Prerequisites - Kong Enterprise installed and configured - Kong Manager access - A local LDAP directory Configure Service Directory Mapping Configure Service Directory Mapping to use your LDAP Directory for authentication and authorization. Step 1: Start Kong Enterprise From a terminal window, enter: $ kong start [-c /path/to/kong/conf] Step 2: Enable LDAP Authentication and enforce RBAC To enable LDAP Authentication and enforce RBAC for Kong Manager, configure Kong with the following properties: admin_gui_auth = ldap-auth-advanced enforce_rbac = on Note: When enabling LDAP Authentication in this step, you are enabling and configuring the LDAP Authentication Advanced Plugin for Kong Manager. No other configuration for the plugin is needed. Step 3: Configure the Sessions plugin Configure the Sessions Plugin for Kong Manager: admin_gui_session_conf = { "secret":"set-your-string-here" } Note: The Sessions Plugin requires a secret and is configured securely by default: - Under all circumstances, the secret must be manually set to a string. - If using HTTP instead of HTTPS, cookie_secure must be manually set to false. - If using different domains for the Admin API and Kong Manager, cookie_samesite must be set to off. Learn more about these properties in Session Security in Kong Manager, and see example configurations. Step 4: Configure LDAP Authentication for Kong Manager Configure LDAP Authentication for Kong Manager by adding the group-mapping attributes to the admin_gui_auth_conf property, alongside your directory connection settings. Only the group-related attributes are shown here; they are defined below: "group_base_dn":"<ENTER_YOUR_GROUP_BASE_DN_HERE>", "group_name_attribute":"<ENTER_YOUR_GROUP_NAME_ATTRIBUTE_HERE>", "group_member_attribute":"<ENTER_YOUR_GROUP_MEMBER_ATTRIBUTE_HERE>" - Important: As with any configuration property, sensitive information may be set as an environment variable instead of being written directly in the configuration file. group_base_dn: <ENTER_YOUR_BASE_DN_HERE>: Sets a distinguished name for the entry where LDAP searches for groups begin. The default is the value from conf.base_dn. group_name_attribute: <ENTER_YOUR_GROUP_NAME_ATTRIBUTE_HERE>: Sets the attribute holding the name of a group, typically called name (in Active Directory) or cn (in OpenLDAP). The default is the value from conf.attribute. group_member_attribute: <ENTER_YOUR_GROUP_MEMBER_ATTRIBUTE_HERE>: Sets the attribute holding the members of the LDAP group. The default is memberOf.
Define Roles with Permissions Define Roles with Permissions in Kong Enterprise, using the Admin API's RBAC endpoints or using Kong Manager's Teams > Admins tab. You must manually define which Kong Roles correspond to each of the Service Directory's Groups using either of the following: In Kong Manager's Directory Mapping section. Go to Teams > Groups tab. With the Admin API's Directory Mapping endpoints. Kong Enterprise will not write to the Service Directory; for example, a Kong Enterprise Admin cannot create Users or Groups in the directory. You must create Users and Groups independently before mapping them to Kong Enterprise. User-Admin Mapping To map a Service Directory User to a Kong Admin, you must configure the Admin's username as the value of the User's name from their LDAP Distinguished Name (DN) corresponding to the attribute configured in admin_gui_auth_conf. Create the Admin account in Kong Manager or with the Admin API. For instructions on how to pair the bootstrapped Super Admin with a Directory User, see How to Set Up a Service Directory User as the First Super Admin. If you already have Admins with assigned Roles and want to use Group mapping instead, it is necessary to first remove all of their Roles. The Service Directory will serve as the system of record for User privileges. Assigned Roles will affect a user's privileges in addition to any roles mapped from Groups. Group-Role Assignment With Service Directory Mapping, Groups are mapped to Roles. When a user logs in, they are identified with their Admin username and then authenticated with the matching User credentials in the Service Directory. The Groups in the Service Directory are then automatically matched to the associated Roles that the organization has defined. Example - Wayne Enterprises maps the Service Directory Group, T1-Mgmt, to the Kong Role super-admin. - Wayne Enterprises maps a Service Directory User, named bruce-wayne, to a Kong Admin account with the same name, bruce-wayne. - The User, bruce-wayne, is assigned to the Group T1-Mgmt in the LDAP Directory. When bruce-wayne logs in as an Admin to Kong Manager, they will automatically have the Role of super-admin as a result of the mapping. If Wayne Enterprises decides to revoke bruce-wayne's privileges by removing their assignment to T1-Mgmt, they will no longer have the super-admin Role when they attempt to log in. Set Up a Directory User as the First Super Admin Important: Setting up a Directory User as the first Super Admin is recommended by Kong. The following is an example of setting up a Directory User as the first Super Admin. The example assumes the attribute is configured with a unique identifier (UID), and the Directory User you want to make the Super Admin has a distinguished name (DN) entry of UID=bruce-wayne: HTTPie $ http PATCH :8001/admins/kong_admin username="bruce-wayne" Kong-Admin-Token:<RBAC_TOKEN> cURL $ curl --request 'PATCH' --header 'Kong-Admin-Token: <RBAC_TOKEN>' --header 'Content-Type: application/json' --data '{"username":"bruce-wayne"}' 'localhost:8001/admins/kong_admin' This User will be able to log in, but until you map a Group belonging to bruce-wayne to a Role, the User will only use the Directory for authentication. Once you map the super-admin Role to a Group that bruce-wayne is in, then you can delete the super-admin Role from the bruce-wayne Admin.
Note the group you pick needs to be “super” in your directory, otherwise as other admins log in with a generic group, for example the “employee” group, they will also become super-admins. Important: If you delete the super-admin Role from your only Admin, and have not yet mapped the super-admin Role to a Group that Admin belongs to, then you will not be able to log in to Kong Manager. Alternatives: - Start Kong with RBAC turned off, map a Group to the super-admin Role, and then create an Admin to correspond to a User belonging to that Group. Doing so ensures that the Super Admin’s privileges are entirely tied to the Directory Group, whereas bootstrapping a Super Admin only uses the Directory for authentication. Create all Admin accounts for matching Directory Users and ensure that their existing Groups map to appropriate Roles before enforcing RBAC.
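If you are scripting the bootstrap step rather than using HTTPie or cURL, the same PATCH call to the Admin API can be issued from Python. This is just a sketch of the request shown above; the Admin API address and RBAC token are placeholders for your own deployment.

```python
import requests

ADMIN_API = "http://localhost:8001"   # placeholder Admin API address
RBAC_TOKEN = "<RBAC_TOKEN>"           # placeholder Super Admin RBAC token

# Rename the bootstrapped Super Admin so it matches the Directory User's name attribute.
resp = requests.patch(
    f"{ADMIN_API}/admins/kong_admin",
    headers={"Kong-Admin-Token": RBAC_TOKEN},
    json={"username": "bruce-wayne"},
)
resp.raise_for_status()
print(resp.json().get("username"))  # expect: bruce-wayne
```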
https://docs.konghq.com/enterprise/2.3.x/kong-manager/service-directory-mapping/
2021-02-25T08:24:35
CC-MAIN-2021-10
1614178350846.9
[]
docs.konghq.com
Embedding video and audio¶ Like images, it is best practice to add new video and audio files using the File > Filelist module, which is covered in the Getting Started Tutorial. This method means files are centrally stored and any information or metadata you add to the file is used wherever that media file is used on the site. When you use the Select & upload files button, the media file is attached to the page, and any information or metadata is stored only for use on this page. Add a video to a page¶ On the Media tab, click the Add media file button, then follow the same process as you would to :ref:<add-image-to-page>. Alternatively, you can click the Add media by URL button to paste a link to a video or audio file from the web. Configure the video¶ Use the Autoplay setting to specify whether the video should start playing as soon as the page loads. You can configure various settings for media files (for example, adding a border, setting page position and behavior) just as you would to configure an image.
https://docs.typo3.org/m/typo3/tutorial-editors/master/en-us/ContentElements/Media/Index.html
2021-02-25T08:00:36
CC-MAIN-2021-10
1614178350846.9
[array(['../../_images/EditContentMediaTab.png', 'The Media tab for a content element'], dtype=object)]
docs.typo3.org
Google Cloud Dataproc Sink Connector for Confluent Platform¶ Note If you are using Confluent Cloud, see Google Cloud Dataproc Sink Connector for Confluent Cloud for the cloud Quick Start. The Kafka Connect Google Cloud Dataproc Sink Connector integrates Apache Kafka® with managed HDFS instances in Google Cloud Dataproc. The connector periodically polls data from Kafka and writes this data to HDFS. The data from each Kafka topic is partitioned by the provided partitioner and divided into chunks. Each chunk of data is represented as an HDFS file. The HDFS filename is a combination of the topic and Kafka partition, with the start and end offsets of the data chunk. If no partitioner is specified in the configuration, the default partitioner that preserves the Kafka partitioning is used. The size of each data chunk is determined by the number of records written to HDFS, the time the data was written to HDFS, and schema compatibility. For the full list of options, see the Google Cloud Dataproc Sink Connector Configuration Properties. Quick Start¶ The quick start configures the connector to write data in Parquet format with hourly partitioning: format.class=io.confluent.connect.gcp.dataproc.hdfs.parquet.ParquetFormat partitioner.class=io.confluent.connect.gcp.dataproc.hdfs.partitioner.HourlyPartitioner Note If you want to integrate with Hive, configure the connector with the URI of your Hive metastore (for example, thrift://<namenode>:9083). Also, to support schema evolution, set the schema.compatibility to be BACKWARD, FORWARD, or FULL. This ensures that Hive can query the data written to HDFS with different schemas using the latest Hive table schema. For more information on schema compatibility, see Schema Evolution. To write to a secure Dataproc cluster, configure the connector to authenticate using Kerberos and distribute the keytab file to all hosts running the connector, ensuring that only the connector user has read access to the keytab file. Schema Evolution BACKWARD Compatibility: If a schema is evolved in a backward-compatible manner, you can always use the latest schema to query all the data uniformly. For example, removing fields is a backward-compatible change to a schema, since when the connector encounters records written with a schema of an earlier version that contains these fields, the connector projects the data record to the latest schema before writing to the same set of files in HDFS. FORWARD Compatibility: If a schema is evolved in a forward-compatible manner, you can use the oldest schema to query all the data uniformly.
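Connectors of this kind are usually created by POSTing a JSON configuration to the Kafka Connect REST API. The sketch below shows that pattern from Python; the Connect worker URL, topic name, and Dataproc project/region/cluster values are placeholders, only the format and partitioner classes are taken from the properties above, and the remaining property names should be checked against the connector's configuration reference before use.

```python
import requests

CONNECT_URL = "http://localhost:8083"   # placeholder Kafka Connect worker

connector = {
    "name": "dataproc-sink-example",
    "config": {
        "connector.class": "io.confluent.connect.gcp.dataproc.DataprocSinkConnector",
        "topics": "example-topic",  # placeholder topic
        "format.class": "io.confluent.connect.gcp.dataproc.hdfs.parquet.ParquetFormat",
        "partitioner.class": "io.confluent.connect.gcp.dataproc.hdfs.partitioner.HourlyPartitioner",
        "flush.size": "3",
        # Placeholder Dataproc settings; verify the exact key names in the docs.
        "gcp.dataproc.projectId": "my-project",
        "gcp.dataproc.region": "us-central1",
        "gcp.dataproc.cluster": "my-cluster",
    },
}

# Register the connector with the Connect worker.
resp = requests.post(f"{CONNECT_URL}/connectors", json=connector)
resp.raise_for_status()
print(resp.json()["name"])
```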
https://docs.confluent.io/5.5.0/connect/kafka-connect-gcp-dataproc/sink/index.html
2021-02-25T08:42:18
CC-MAIN-2021-10
1614178350846.9
[]
docs.confluent.io
Keywords are just a space-delimited list of terms that publishers use to influence their asset's results in the Asset Store's search. There is a limit of 255 characters. Keywords are very simple and easy to add. They are very beneficial for your assets, as they make it easier for potential customers to locate them within the store when using the search bar. Step 1: Log into your Publisher Admin Area. Step 2: Navigate to the "Packages" tab. Step 3: Next you will need to create a draft of the asset you wish to add keywords to in order to prepare it for submission and review. The blue arrow shows an asset that already has a draft. We will create a draft for the asset indicated by the green arrow. Step 4: Click the name of the asset to bring up the page below and click the "Create New Draft" button. Step 5: Now that we have a draft we can edit the package. Click the Metadata edit button. Step 6: At the bottom of this page you will be able to enter your Keywords. Before you, a publisher, add keywords for your asset, think about what kind of search terms you expect developers to use in order to find your asset. Think about how users search. Picking the right keywords, in combination with your description, gives you the ability to manage your SEO. If your asset is a 3D Model, maybe using 3D is a waste of a keyword. Note: Keywords are separated by whitespace and there is a limit of 255 characters. Step 7: Once you have entered all your keywords and are happy with your choices, you can click the "Save" button. Step 8: When you feel that you are ready to submit, return to your draft and submit the package for approval. Please add a submission comment (the popup immediately after clicking "Submit package for approval") for our Asset Store Specialists describing why you chose your keywords. This will be vital information for them to use to make a decision and expedite your update. Our Asset Store Specialists will vet keywords to ensure that they are relevant to your asset. As there are a lot of publishers, expect around 5 business days for any updates to pass through our process.
https://docs.unity3d.com/2017.2/Documentation/Manual/AssetStoreMassLabeler.html
2021-02-25T08:24:02
CC-MAIN-2021-10
1614178350846.9
[]
docs.unity3d.com
Gaussian Blur¶ Applies a gaussian blur filter. Applies median value to central pixel within a kernel size (ksize x ksize). The function is a wrapper for the OpenCV function gaussian blur. plantcv.gaussian_blur(img, ksize, sigma_x=0, sigma_y=None) returns blurred image - Parameters: - img - RGB or grayscale image data - ksize - Tuple of kernel dimensions, e.g. (5, 5). Must be odd integers. - sigma_x - standard deviation in X direction; if 0 (default), calculated from kernel size - sigma_y - standard deviation in Y direction; if sigma_Y is None (default), sigma_Y is taken to equal sigma_X - Context: - Used to reduce image noise Original image from plantcv import plantcv as pcv # Set global debug behavior to None (default), "print" (to file), # or "plot" (Jupyter Notebooks or X11) pcv.params.debug = "print" # Apply gaussian blur to a binary image that has been previously thresholded. gaussian_img = pcv.gaussian_blur(img=img1, ksize=(51, 51), sigma_x=0, sigma_y=None) Gaussian blur (ksize = (51,51)) # Apply gaussian blur to a binary image that has been previously thresholded. gaussian_img = pcv.gaussian_blur(img=img1, ksize=(101, 101), sigma_x=0, sigma_y=None) Gaussian blur (ksize = (101,101))
https://plantcv.readthedocs.io/en/stable/gaussian_blur/
2021-02-25T07:55:41
CC-MAIN-2021-10
1614178350846.9
[array(['../img/documentation_images/gaussian_blur/original_image.jpg', 'Screenshot'], dtype=object) array(['../img/documentation_images/gaussian_blur/gaussian_blur51.jpg', 'Screenshot'], dtype=object) array(['../img/documentation_images/gaussian_blur/gaussian_blur101.jpg', 'Screenshot'], dtype=object) ]
plantcv.readthedocs.io
Performing the silent installation of an App Visibility agent for Java Install the App Visibility agent for Java on the computer with the application server you want to monitor. After you install the agent, connect it to the application server instance. You can perform a silent installation as described in this topic or use the installation wizard. This topic contains the following sections: Before you begin Before you install the App Visibility agent for Java, ensure that your environment meets the following requirements: - Your system meets all requirements for App Visibility Manager. - You completed all tasks to prepare the environment for agent installation. To install the agent for Java in silent mode Log on to the computer with the user that runs the application server. On Windows computers, you must run the installation with a user that has administrator privileges. - Copy and extract the installation files to a temporary directory on the target computer. - Open the silent options file javaagent-silent-options.txt in a text editor. - Enter the destination directory where the agent for Java will be installed. The default value is: - (Windows) C:\Program Files (x86)\BMC Software\App Visibility\Agent for Java - (Linux, AIX, Solaris) /opt/bmc/App_Visibility/Agent_for_Java - If the agent connects through a proxy server, enter the proxy settings in the options file: ProxyType=, HTTPProxyServerHost=, HTTPProxyServerPort= - Save the file, and then run the installer in silent mode, specifying the options file: (Windows) setup.exe -i silent -DOPTIONS_FILE="<Full-File-Path>\<Silent Options Filename>" For example: setup.exe -i silent -DOPTIONS_FILE="C:\JavaAgentInstaller\Disk1\javaagent-silent-options.txt" (Linux, AIX, Solaris) ./setup.bin -i silent -DOPTIONS_FILE="<Full-File-Path>/<Silent Options Filename>" For example: ./setup.bin -i silent -DOPTIONS_FILE="/home/Admin/JavaagentInstaller/Disk1/javaagent-silent-options.txt" Make sure to specify the full path and filename for the silent options file. To configure Java options After you install the agent, you need to add the agent to the Java command line. The javaagent option is required to connect the App Visibility agent to the JVM process. Enter the information that is relevant to your installation. Add the following agent JVM options to the Java command line of your application server. Replace <installationDirectory> with the destination directory in which the App Visibility agent was installed. The procedure to update Java options is different for each application server type. Windows -javaagent:<installationDirectory>\ADOPsInstall\adops-agent.jar Linux, AIX, Solaris -javaagent:<installationDirectory>/ADOPsInstall/adops-agent.jar - If you have a single App Visibility agent installation directory that is used by multiple JVM processes, you can assign meaningful names to distinguish each instance. For information, see To distinguish multiple JVM processes using the same agent installation. - If your application server is using Java 2 security, set Java 2 security options for application servers that have Java 2 security enabled. For information, see Granting Java 2 permissions to the App Visibility agent. - Restart the application server or JVM process. Where to go from here After you install the App Visibility agent for Java, perform the following tasks: - Verify the installation of App Visibility agent for Java. - Configure the App Visibility agents for Java after installation. - Access the TrueSight console. Related topics Installing an App Visibility agent for .NET Changing settings of the App Visibility agent for Java
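The silent install lends itself to automation. The sketch below writes a minimal silent options file and launches the installer with subprocess; the option keys shown (the destination directory key and the proxy fields) and the extraction path are placeholders based on the fragments above, so compare them against the javaagent-silent-options.txt shipped with your installer before using it.

```python
import subprocess
from pathlib import Path

EXTRACT_DIR = Path("/tmp/JavaagentInstaller/Disk1")       # placeholder extraction directory
OPTIONS_FILE = EXTRACT_DIR / "javaagent-silent-options.txt"

# Placeholder option keys; verify them against the file shipped with the installer.
options = {
    "installLocation": "/opt/bmc/App_Visibility/Agent_for_Java",
    "ProxyType": "",
    "HTTPProxyServerHost": "",
    "HTTPProxyServerPort": "",
}

OPTIONS_FILE.write_text("".join(f"{key}={value}\n" for key, value in options.items()))

# Run the silent installation (Linux/AIX/Solaris form shown in this topic).
subprocess.run(
    [str(EXTRACT_DIR / "setup.bin"), "-i", "silent", f"-DOPTIONS_FILE={OPTIONS_FILE}"],
    check=True,
)
```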
https://docs.bmc.com/docs/TSOperations/110/performing-the-silent-installation-of-an-app-visibility-agent-for-java-722060340.html
2021-02-25T07:15:36
CC-MAIN-2021-10
1614178350846.9
[]
docs.bmc.com
Adding laptop memory Joe Unser is an employee at Calbro Services. To improve the performance of his laptop, he needs an additional 3 GB of memory added. He submits a request to install memory for his laptop. The Calbro Services business process has predefined that this type of change request does not require the standard Review and Business Approval processes. Mary Mann is the change coordinator at Calbro Services. Mary schedules and plans the change request. Ian Plyment, who is part of Mary's Front Office Support team, implements the change request. The following table describes the typical steps involved in this user scenario. Adding laptop memory
https://docs.bmc.com/docs/change1908/adding-laptop-memory-877691064.html
2021-02-25T08:13:05
CC-MAIN-2021-10
1614178350846.9
[]
docs.bmc.com
Skeletal Controls. Common Pins and Properties While the properties available will largely be based on the node itself, some pins and properties are common to all SkeletalControls, which are outlined below. SkeletalControls operate on Component Space poses; see Convert Spaces Nodes for more information on the space conversion nodes. Skeletal Control Nodes Below are links to additional pages with information about each of the Skeletal Control Nodes within the AnimGraph.
https://docs.unrealengine.com/en-US/AnimatingObjects/SkeletalMeshAnimation/NodeReference/SkeletalControls/index.html
2021-02-25T07:25:48
CC-MAIN-2021-10
1614178350846.9
[array(['./../../../../../Images/AnimatingObjects/SkeletalMeshAnimation/NodeReference/SkeletalControls/perf.jpg', 'perf.png'], dtype=object) ]
docs.unrealengine.com
Accessibility Assistive Technologies Keyboard These settings are here to help people with certain disabilities get the most out of their keyboards. Sticky Keys When Sticky Keys are active, you can press key combinations in sequence instead of at the same time. For instance, Ctrl followed by C (instead of Ctrl-C) will copy selected text. Slow Keys When Slow Keys are active, the keyboard won't register a keystroke until you hold the key down for a certain length of time. The default time is 500 milliseconds, or half a second. You can move the slider with the mouse to adjust the length of time. Bounce Keys When Bounce Keys are active, the keyboard won't accept a keystroke until there has been a certain interval of time since the previous keystroke. The default interval is 500 milliseconds. You can move the slider with the mouse to adjust the length of time. Mouse Mouse Emulation lets you move the mouse pointer with the arrow keys on the numeric keypad. As seen in the above screenshot, the values that can be set are: - Acceleration delay (in milliseconds) - Repeat interval (in milliseconds) - Acceleration time (in milliseconds) - Maximum speed (in pixels per second) - Acceleration profile .
https://docs.xfce.org/xfce/xfce4-settings/4.12/accessibility
2021-02-25T09:12:21
CC-MAIN-2021-10
1614178350846.9
[]
docs.xfce.org
The Lazy class has a .clear() method. When called, the reference held in the Lazy Reference is removed and only the ID is kept so that the instance can be reloaded when needed. Important background knowledge: However, such a clear does not mean that the referenced instance immediately disappears from memory. That's the job of the garbage collector of the JVM. The reference is even registered in another place, namely in a global directory (Swizzle Registry), in which each known instance is registered with its ObjectId in a bijective manner. This means: if you clear such a reference, but shortly thereafter the Lazy Reference is queried again, probably nothing has to be loaded from the database, but simply the reference from the Swizzle Registry is restored. Nevertheless, the Swizzle Registry is not a memory leak, because it references the instances only via WeakReference. In short, if an instance is only referenced as "weak," the JVM GC will still clean it up. So that Lazy References do not have to be managed manually, the following mechanism handles this automatically: Each Lazy instance has a lastTouched timestamp. Each .get() call sets it to the current time. This will tell you how long a Lazy Reference has not been used, i.e. if it is needed at all. The LazyReferenceManager audits this. It is enabled by default, with a timeout of 1,000,000 milliseconds, which is about 15 minutes. A custom manager can be set easily, which should happen before a storage is started.

LazyReferenceManager.set(LazyReferenceManager.New(
    Lazy.Checker(
        Duration.ofMinutes(30).toMillis(), // timeout of lazy access
        0.75                               // memory quota
    )
));

The timeout of lazy references is set to 30 minutes, meaning references which haven't been touched for this time are cleared. In combination with the memory quota of 0.75, references are also cleared earlier when memory usage reaches roughly that share of the available memory.
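For orientation, this is what declaring and clearing such a reference typically looks like in application code. The sketch below is illustrative and not taken from the page above; the class, field, and entity names are made up, and the Lazy import path may differ between MicroStream versions:

import one.microstream.reference.Lazy;
import java.util.ArrayList;
import java.util.List;

public class Orders
{
    // The heavyweight list is only loaded from storage when get() is called.
    private final Lazy<List<String>> history = Lazy.Reference(new ArrayList<>());

    public List<String> history()
    {
        return this.history.get(); // loads on demand and updates lastTouched
    }

    public void unload()
    {
        this.history.clear(); // drops the instance, keeps only the ObjectId
    }
}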
https://manual.docs.microstream.one/data-store/loading-data/lazy-loading/clearing-lazy-references
2021-02-25T07:51:54
CC-MAIN-2021-10
1614178350846.9
[]
manual.docs.microstream.one
According to Mongo, this (i.e., MongoDB\Driver\Manager) is an "entry point" for the extension: "This class serves as an entry point for the MongoDB PHP Library. It is the preferred class for connecting to a MongoDB server or cluster of servers and acts as a gateway for accessing individual databases and collections. MongoDB\Client is analogous to the driver’s MongoDB\Driver\Manager class, which it composes." copied from here: However, any comparison of the "mongodb" docs here on php.net versus the "mongodb driver" docs on mongo's site shows dramatic and ever-changing differences.
http://docs.php.net/manual/zh/class.mongodb-driver-manager.php
2021-02-25T08:36:01
CC-MAIN-2021-10
1614178350846.9
[]
docs.php.net
Embedded control libraries. Various core libraries useful for embedded control systems. Shared abstraction of the scheduling priorities. Defines a priority level for priority scheduling of a thread/process. They are effectively ranked as indicated, however their implementation will be different for different systems. Definition at line 32 of file priority_common.hpp.
http://docs.ros.org/en/kinetic/api/ecl_threads/html/namespaceecl.html
2021-02-25T08:23:00
CC-MAIN-2021-10
1614178350846.9
[]
docs.ros.org
Motion Controller Key Deprecation Motion Controller keys have been deprecated in 4.24 in favor of keys specific to a set of common XR controllers which are defined in the OpenXR specification. This change makes it easier to customize the input bindings for each controller model and gets rid of the ambiguity around the mapping of Motion Controller keys to physical controller buttons. By using the new keys, projects will improve support for SteamVR Input and OpenXR which are built around action systems. These new XR input systems are designed around providing cross-device compatibility by emulating the specific controller that the project is targeting. During the deprecation existing inputs using Motion Controller keys are still supported, however it's no longer possible to add new inputs using Motion Controller keys. Any new input has to use the new XR keys, however the upgrade of old inputs can be done piecemeal since it's possible to use both the old and new keys interchangeably. What Changes for Existing Plugins? SteamVR In UE4.24 the legacy input has been removed and replaced by the SteamVR Input plugin. This plugin was developed by Valve and has been available on the marketplace for earlier engine versions. SteamVR Input is a native action system meaning that at the lowest level all input is handled through actions. This means that it's not possible to maintain backwards compatibility with Blueprints that do not use the action system. However, if your project was already using the action system then backwards compatibility with Motion Controller keys is maintained. Oculus The Oculus plugin fully supports the deprecated Motion Controller keys even if they are being used directly in Blueprints. With the new XR keys the Oculus Go and Oculus Touch controllers now each have their own set of keys making it possible to have different bindings for each of them. If you prefer to have one set of keys again it's possible to map the Oculus Go to the Touch keys by adding bGoKeysMappedToTouch=1 to the [OculusTouch.Settings] section in BaseEngine.ini. However it is recommended to simply add two keys to every action instead. Windows Mixed Reality Full backwards compatibility is maintained for this plugin, the only change is the addition of new XR keys for Windows Mixed Reality Motion Controllers. Migrating from 4.23 or Earlier The biggest change is that the new XR keys can't be used directly in Blueprints. All input now has to go through the action system. If your project isn't using the action system yet, now is the right time to upgrade to it. This will ensure your project will work with SteamVR and OpenXR when you package for those platforms. Upgrade your project to use the action system if it isn't already. A tutorial on how to use the action system can be found here . The Blueprint compiler will warn you if your Blueprint uses a deprecated key, such as: InputKey Event specifies FKey 'MotionController_Right_Shoulder'which has been deprecated for MotionController (R) Shoulder Upgrade existing actions to use the new XR keys. Simply remove the existing MotionController key and replace it with one of the new XR keys. Add an XR key to the action from each controller you support. You should not add keys from controllers you don't currently actively support. You can simply leave it up to the SteamVR or OpenXR runtime to emulate one of the controllers you do support. 
For compatibility with SteamVR ensure all Thumbstick and Trackpad axes are suffixed with _Xand _Ycorresponding to the horizontal and vertical axes.
https://docs.unrealengine.com/ko/SharingAndReleasing/XRDevelopment/VR/DevelopVR/MotionControllerKeyDeprecation/index.html
2021-02-25T08:44:56
CC-MAIN-2021-10
1614178350846.9
[]
docs.unrealengine.com
Another method to integrate Matestack in your Rails application is by reusing your partials with components. The Matestack rails_view component offers the possibility to render a view or partial by passing its name and required params to it. You can either replace your views step by step, refactoring them with components which reuse partials and keeping the migration of these partials for later, or you can reuse a complete view with a single component rendering this view. Imagine the partial app/views/products/_teaser.html.erb containing the following content:

<%= link_to product_path(product), class: 'product-teaser' do %>
  <div>
    <h2><%= product.name %></h2>
    <p><%= product.description %></p>
    <b><%= product.price %></b>
  </div>
<% end %>

class Components::Products::Trending < Matestack::Ui::Component
  def prepare
    @products = Product.where(trending: true)
  end

  def response
    heading text: 'Trending products'
    @products.each do |product|
      rails_view partial: 'products/teaser', product: product
    end
  end
end

As you see, we used the rails_view component here to render our products teaser partial. Given the string, Rails searches for a partial in app/views/products/_teaser.html.erb. As our product teaser partial uses a product, we pass in a product. All params except those for controlling the rendering, like :partial or :view, get passed to the partial or view as locals. Therefore the partial teaser can access the product like it does. rails_view works with ERB, Haml and Slim templates. ERB and Haml are supported out of the box. In order to use Slim templates, the slim gem needs to be installed. As mentioned above, the rails_view component can not only render partials but also views. The following Rails view can be reused within a Matestack component:

app/views/static/index.html.erb

<main>
  <%= render partial: 'products/teaser', collection: products, as: :product %>
</main>
<div>
  <%= link_to 'All products', products_path %>
</div>

class Components::Products::Index < Matestack::Ui::Component
  def response
    rails_view view: 'static/index', products: products
  end
end
https://docs.matestack.io/ui-components/reusing-views-or-partials
2021-02-25T08:04:58
CC-MAIN-2021-10
1614178350846.9
[]
docs.matestack.io
Django-refinery is a generic, reusable application to alleviate some of the more mundane bits of view code. Specifically, it allows users to filter down a queryset based on a model's fields, and displays the form to let them do this. Contents: Installing django-refinery
https://django-refinery.readthedocs.io/en/v0.1/
2019-03-18T16:39:24
CC-MAIN-2019-13
1552912201455.20
[]
django-refinery.readthedocs.io
Configuration The views in the Configuration module provide access to your personal information, shared transport arrangements, and the schedule exception totals. Click Configuration to reveal the drop-down menu that lists the views in this module: Viewing my settings The information in this view is displayed in two sections: Personal Information and Settings. The Personal Information section includes: your name, employee ID, contract, hire date, site (business unit), team, and date/time of your last login. You cannot change any of this information. If your supervisor changes something, WFM updates this information accordingly. The Settings section includes: - Time zone—The Site time zone (default) or your current one (depending on whether or not you have changed it). - Name order—Your name order, whichever order of the three you chose (First name first [default], Last name first, or Last name first, separated from first name with a comma). - On Startup—The view that you prefer to see at startup. The choices are: Open My Schedule view (default) or Continue where you left off. If you select a time zone other than the default (Site) option, the Schedule, Trading, Preferences, and Time Off modules display information, based on your selection.
https://docs.genesys.com/Documentation/WM/latest/AArkHelp/CfgO
2019-03-18T15:27:34
CC-MAIN-2019-13
1552912201455.20
[]
docs.genesys.com
This section will first cover what you need to know to build your own auto-configuration, and then we will move on to the typical steps required to create a custom starter.
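For orientation, an auto-configuration class of the kind this section describes looks roughly like the sketch below. This is an illustration rather than text from the original page; MyService, MyServiceProperties, the package name, and the property prefix are made-up placeholders:

package com.example.autoconfigure;

import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@ConditionalOnClass(MyService.class)                       // only active when MyService is on the classpath
@EnableConfigurationProperties(MyServiceProperties.class)
public class MyServiceAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean                              // backs off if the application declares its own bean
    public MyService myService(MyServiceProperties properties) {
        return new MyService(properties.getEndpoint());
    }
}

// Hypothetical service and properties types, included only to keep the sketch self-contained.
class MyService {
    private final String endpoint;
    MyService(String endpoint) { this.endpoint = endpoint; }
    String endpoint() { return endpoint; }
}

@ConfigurationProperties(prefix = "my.service")
class MyServiceProperties {
    private String endpoint = "http://localhost:8080";
    public String getEndpoint() { return endpoint; }
    public void setEndpoint(String endpoint) { this.endpoint = endpoint; }
}

To be picked up by Spring Boot, such a class is then listed under the org.springframework.boot.autoconfigure.EnableAutoConfiguration key in a META-INF/spring.factories file.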
https://docs.spring.io/spring-boot/docs/1.5.x/reference/html/boot-features-developing-auto-configuration.html
2019-03-18T16:22:04
CC-MAIN-2019-13
1552912201455.20
[]
docs.spring.io
Asclepias Broker The Asclepias Broker is a web service that enables building and flexibly querying graphs of links between research outputs. It aims to address a couple of problems in the world of scholarly link communication, with a focus on Software citation: - Governance of the scholarly links data and metadata - Storage and curation of scholarly links is a problem that cannot be easily solved in a centralized fashion. In the same manner that specialized repositories exist to facilitate research output of different scientific fields, scholarly link tracking is a task performed best by a service that specializes in a specific scientific field. - Meaningful counting of software citations - Software projects (and other types of research) evolve over time, and these changes are tracked via the concept of versioning. The issue that arises is that citations to software projects end up being "diluted" throughout their versions, leading to inaccurate citation counting for the entire software project. Rolling up these citations is critical to assess the impact a software project has in a scientific field. - Sharing of scholarly links across interested parties - Keeping track of the incoming scholarly links for a research artifact is a difficult task that repositories usually have to tackle individually by tapping into a multitude of external services that expose their data in different ways. Receiving "live" notifications and having a consistent format and source for these events is crucial in order to reduce complexity and provide a comprehensive view. These problems are addressed by providing an easy-to-set-up service that: - Can receive and store scholarly links through a REST API - Exposes these scholarly links through a versatile REST API - Can connect to a network of similar services and exchange links with them The code in this repository was funded by a grant from the Alfred P. Sloan Foundation to the American Astronomical Society (2016). User's Guide This part of the documentation will show you how to get started using the Asclepias Broker. Architecture This section describes the design principles of the Asclepias Broker. REST API This section documents the REST APIs exposed by the service. API Reference If you are looking for information on a specific function, class or method, this part of the documentation is for you. Additional Notes Notes on how to contribute, legal information and changes are here for the interested.
https://asclepias-broker.readthedocs.io/en/latest/
2019-03-18T15:52:43
CC-MAIN-2019-13
1552912201455.20
[]
asclepias-broker.readthedocs.io
Contents - Viewing Impact Charts - Impact Charts for Buildings, Rooms, and Racks - Topology Charts for Devices - Topology Chart Options - Impact Lists - Service Dependencies Reports - Dependency Charts (Graphs) - How are dependencies created in Device42? Viewing Impact Charts Impact charts enable you to see, at a glance, the impacts of an outage scenario, locate potential performance issues, and identify relevant security issues around data center objects. Impact charts are available from the “view” page for any building, room, rack, device, or application component. Simply select the “…” menu button and choose “Impact Chart”: Device42’s powerful, agentless auto-discovery uses native WMI and SSH in combination with other platform-dependent technologies to identify the details around running services, listening ports, and the relationships between those services and ports or executables and ports. This provides a clear picture of exactly what services/executables are listening on what ports on that machine. Device42 also goes on to capture a point in time snapshot of the IP addresses that are connected to each listening port. Should these communicating IP addresses already exist in Device42 and be mapped to a device, the system automatically shows the device when drawing the dynamic impact charts. Impact Charts for Buildings, Rooms, and Racks Impact charts are a great way to quickly visualize deployments and understand dependency chains. The following is a sample Building Impact Chart: Notice that at the top of the chart is the building or room we selected (the “Building A” in the example above, “Corner Room” below). The Building A impact chart shows there is only one room in the building (“Room A”), which has 3 racks (Racks ‘A’, ‘B’, & ‘C’). The following is a sample Room Impact Chart for the room called “Corner Room”: Looking at the impact chart for the “Corner Room”, from left to right we see the orange “Corner Room” itself, the list of racks (“CHI-DC1-13” is selected), and then all servers that live in selected rack “CHI-DC1-13”: “USOXIS-P0022” and eight other servers. You can view a legend via the “legend” button above: Now, let’s go ahead and get some more information about one of the servers! We can do this easily by hovering over it, or any item in this chart, as such: Hovering over any object will present a quick overview and relevant options. Hovering over server “USOXIS-P0034”, we can view the individual server’s “Topology’, or by clicking the “Device Page” button, head straight to the Device Details page for “USOXIS-P0034”: We now know that we are looking at an HP Proliant DL560GB, which was added all the way back in Feb 2014! (its useful life might be up were it not a lab machine!) Notice we can also get right to the “Topology” screen (our other option when we hovered) from the details screen, as well! Topology Charts for Devices Topology Charts for Devices have more detail than for other objects like buildings, rooms, and racks. In particular, device topology charts show Services, Executables, and Ports. A device topology chart displays information in three categories: - See what Services, Executables, and Application Components are running on a given device. Both services and their respective executables are detected automatically. Information about Application Components will be entered by you. Application Components are explained in more detail below. 
- See what ports are in use, including details about which services and executables are providing information over those ports. You can also see detailed information about which services and executables on remote devices are accessing data from each in-use port. - Get a full picture of exactly what would be impacted were a performance or security issue to exist on a given device, which can help you determine if you need to remove a given device from service, either temporarily or permanently. Topology charts provide a full picture of all the services, executables, and applications that could be affected both on the device itself, and more importantly, you will be able to see all services, executables, and application components on other related devices that depend on this device. As an example, if the device is a blade chassis or a virtual host, all the blades and/or virtual machines would be dependent on this device, and you would see those dependencies. Similarly, if Device42 discovers that a remote device is connecting to a port on the device, you will also see the remote device (and its services, executables, and application components) in the topology chart. You can also define which Application Components depend on which other Application Component manually (see below), and those custom dependencies will display in the chart as well. Below is the topology chart for a device. (Don’t try to read the details. We’ll zoom in below.) Topology Element Overview: Global View “Global View” is a simplified view of Device to Device directionality in relationship, no details of the relationship is viewable here except what hostnames have interactions to each other. Any type of Device can be visible here, if there is any relationship of services, applications, or hypervisor/virtual it will be represented in the “Global View”. Local View “Local View” is a complete view of the details for the relationships shown in the “Global View”. Each “Device” will have a grouping that contains the device itself, any nested device’s (VM’s/Containers), services, and Application Components. The “Local View” will represent the communication directionality of any Services and Application Components that exist to represent the dependency/impact of each configuration item depending on the discovered listener/client service connections. Device Topology Legend The Device topology chart has its own legend: Legend Definitions Elements: - Device – device objects that have been discovered or added with relational service/application data. - Service/Executable – Discovered or added services, typically all listening services running at point of discovery. Often associated to an Application Component, “Application” in the above legend. - Application – Discovered common applications or Application Component that has been added manually and related to any in view device or service. Groups: - Target Device – The point of origin for the “Topology”, this color highlights the device that the “Topology” button was selected from. - Server Device – Device object that is running a Service as a Listener with clients connecting to them. - Client Device – Device object that is running a Service as a Client connecting to a remote listener. - App Device – A grouping for only device objects that have been related directly to an Application Component, no services in this case and typically defined in an Application Component manually. 
Topology Chart Options Display options Display Options are used to manipulate current in-view objects based on the below criteria: “Hide services without connections – (Default value is Checked) Will show any service objects that have been discovered on an in-view and associated Device, but with no connections yet discovered/added. Hide client IP addresses with no device – (Default value is Checked) Will show IP Addresses for any remote connections that were found in the netstat table of the in-view discovered devices, these are not yet “Device” objects in the database and are Client/Remote Connections part of the Service objects. Display hidden services – Displays any services that have been toggled as “hidden”. Display only starred services – Displays any services that have been toggled as “starred”. Display starred and related services – Displays any services that have been toggled as “starred”, and services that are client/listener of a “starred” service. Service Dependencies Report – Will generate an xls file for all listener services with connections for any devices currently in-view of the current Topology. This will include raw data for listener/client device and service details with port and connection statistic information. Filter – Provides a list of “Show Top #” of services, to select key services that are desired to show or hide in the currently topology. Selecting any services will calculate any new topology considering any of the services will selected for show/hide. See image above. “Pause” button – This button will allow you to stop Topology calculation, this can be beneficial if the Topology selected has a large number of relations and will allow you to stop calculation at levels to look at current data and continue calculation if desired. Nested Context Menus Activated context menus by mousing over and hovering on an in-view object within a topology chart. Service Object – Summary details of the related Service. Star – will set the service as “starred” status allowing control with Display Options and a parameter available for queries/reports. Hide – will set the service as “hidden” status allowing control with Display Options and a parameter available for queries/reports. Hidden services will not be in view by default when Topology loads. Service Page – will navigate you to the object details view for the related service. Device Object – Summary details of the related Device. Expand/Collapse – will set open the Device to show all related services to expand the potential impact/dependency in view of the current Topology based on the level of the “Target Device”. Topology – will open a Topology for the related Device setting as the “Target Device” for the Topology to be loaded. Device Page – will navigate you to the object details view for the related device. Application Component Object – Summary details of the related Application Component. Impact Chart – A “downstream” view of any Application Components that rely on the select Application Component. View is simplified to only Application Component Objects. Dependency Chart – A “upstream” view of any Application Components that the selected Application Component is dependent upon. View is simplified to only Application Component Objects. App Page – will navigate you to the object details view for the related Application Component. Details – will open a pop-out window for a blob of the configuration file for the related Application Components added by discovery for any common applications. 
Downloading images / Service dependency reports Most Impact and Topology screens have a "Create Image" button that allows you to download an image in your chosen format of the graph: The create image button allows you to choose from two layout options; you may choose the Global or the Local view pane, and can also choose either PNG format or SVG (vector) image format. Simply click the "Download" button to choose your save location, saving the file wherever is convenient for you. Service Dependencies Report Download The topology chart view screen offers users a "Service Dependency Report" download as well. Service dependency reports are generated in real-time, as soon as the button is clicked, and delivered as an Excel file containing a list of all source machines, listening ports, services, and any remote machines that are connected to those services. Users may also download previously requested service dependency reports by visiting the Reports menu → Excel Reports Status: For a sample report & field explanations, scroll down to the "service dependencies report" section on this page below. Impact Lists Clicking the impact list navigation button will bring you to a hierarchical and contextual text-based view of the Topology. All of the same objects and behaviors are included in the Impact List, and it allows you to expand a configuration item for a view of any related and nested items. An Impact List is simply a list version of an Impact Graph. Impact lists are typically available for view on most devices. The following is an example of the entire impact list for the device "webserver.dev": The full impact list for webserver.dev. Sometimes, it is useful to hide services without connections, thus significantly reducing clutter by hiding services you might not be concerned with (many services that fit these criteria are standard operating-system components). See the example following the full list for more details: Impact list display options Display options allow hiding of services without connections, forcing the display of hidden services, or even showing only services you've starred:
Field Definitions: Listener name: listener hostname Listener IP: listening IP address All listener device IPs: Discovered IP addresses Listener service: listener service found Listener port: port listener found on Listener OS: listener operating system Listener is Virtual: yes/no – is listener a VM Client Listener Hardware: listener hardware type Client name: client hostname Client Service: service name Client OS: client operating system Client is virtual: yes/no – is client a VM Client Hardware: client hardware type Client Stat Type: netstat/netflow – which was discovered Client Connection First Found: date/time Client Connection Last Found: date/time Total Client Connections Detected: running total count — since first found Detected Average Minutes Between Client Connections: time, in minutes, since last connection(s) found Average # of Connections from the client: running average of connections found since first found Latest detected # of Connections from the client: integer count of # of connections found as of last check Latest contiguous stats – Client connection First Found: date/time when this stat was first found if the connection is different [different ports connected than connected from last discovery] Latest contiguous stats – Client connection Last Found: date/time when this stat was last rediscovered [if the connection is different, or different ports connected than were connected when last discovered] Latest contiguous stats – Total Client connections Detected: count; only includes connections found during latest [different ports connected from last discovery] Latest contiguous stats – Detected Average Minutes Between client connections: time in minutes; Only goes from latest, [different ports connected from last discovery] Dependency Charts (Graphs) A Dependency Chart (previously a ‘dependency graph’) can also be generated for any Application Component, and will show all the devices, services, executables, and application components that the application component requires to function. A Dependency Chart for the “MySQL” application is shown below: How are dependencies created in Device42? One question we get in almost every demo is, “How are all these dependencies created?” Nearly all the dependencies you saw in the above charts were automatically created by autodiscovery in combination with internal Device42 correlation processes. It should be fairly obvious how all the physical dependencies are created: buildings have rooms, which have racks, which have devices. The blades in a chassis are dependent on their blade host, while virtual machines (VMs) are dependent on their virtual host(s). Software and Services that are discovered on a virtual or physical machine are dependent on that machine. Many of the service-to-service dependencies and/or software are auto-discovered. Only some Application Components need to be manually entered. If a service is defined to be application component, then the application component dependencies are all known. You may, however, want to define application components that are not tied to a service. Or, you may want to define an application component that is composed of multiple services. These application components and their dependencies can be defined through form, spreadsheets imports, and/or API calls. The Device42 main appliance in conjunction with the WDS (Windows discovery service) performs auto-discoveries. 
You can exclude servers, remote IP addresses, and even service ports to reduce the noise by limiting discovery to only things you care to see. Example exclusions might be: - Windows listening ports: 3389 is excluded by default - Windows remote ports: add any remote ports you want to exclude - Linux listening ports: Port 22 (SSH) is excluded by default - Linux remote ports: any remote ports you want to exclude - Remote IP Addresses: Remote IPs to exclude. Exclude things like your monitoring server IPs
https://docs.device42.com/software/impact-charts/
2019-03-18T16:38:13
CC-MAIN-2019-13
1552912201455.20
[array(['https://docs.device42.com/wp-content/uploads/2018/09/view_impact_chart_BUTTON-from-room-HL.png', 'View impact chart menu button'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/09/building_impact_chart-sample.png', 'Impact Chart Sample, bldg A'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/09/sample_room_impact_chart-201809.png', 'Room impact chart - example'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/09/room_impact_LEGEND.png', 'Impact chart legend'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/09/impact_HOVER_for_details.png', 'Hover over server for details demonstration'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/09/view_device_USOXIS-P0034.png', 'USOXISP0034 Device Details'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/10/device_toplogy_chart.png', 'device topology chart'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/10/device_topology_legend.png', 'device topology legend'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/10/display_options.png', 'display options'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/10/topology_filter.png', 'topology filter'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/10/topology_pause.png', 'topology pause'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/10/apache2_nested_context.png', 'service object context'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/10/webserver.dev_nested_context.png', 'device object context menu'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/10/oracle_nested_context.png', 'Application componenet context menu'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/09/Create_image_button-HL.png', 'Create imact chart image'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/09/excel_reports_status_download.png', 'Excel Reports Status and Download page'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/09/view_device_impact_list-HL.png', 'view device impact list'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/09/impact_list_webserver.dev_.png', 'Device Impact List Full'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/09/impact_list-display-options-webserver-HL.png', 'Impact list example with all services wo connections hidden.'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/10/service_deps_report.png', 'service dependencies report sample'], dtype=object) array(['https://docs.device42.com/wp-content/uploads/2018/09/dependency_graph.png', 'Dependency Chart Sample'], dtype=object) ]
docs.device42.com
Add authentication to your Xamarin.iOS app This topic shows you how to authenticate users of an App Service Mobile App from your client application. In this tutorial, you add authentication to the Xamarin.iOS quickstart project using an identity provider that is supported by App Service. After being successfully authenticated and authorized by your Mobile App, the user ID value is displayed and you will be able to access restricted table data. You must first complete the tutorial Create a Xamarin.iOS app. If you do not use the downloaded quick start server project, you must add the authentication extension package to your project. For more information about server extension packages, see Work with the .NET backend server SDK for Azure Mobile Apps. Register your app for authentication and configure App Services. Add your app to the Allowed External Redirect URLs Secure authentication requires that you define a new URL scheme for your app. This allows the authentication system to redirect back to your app once the authentication process is complete. In this tutorial, we use the URL scheme appname throughout. However, you can use any URL scheme you choose. It should be unique to your mobile application. To enable the redirection on the server side: In the Azure portal, select your App Service. Click the Authentication / Authorization menu option. In the Allowed External Redirect URLs, enter url_scheme_of_your_app://easyauth.callback. The url_scheme_of_your_app in this string is the URL Scheme for your mobile application. It should follow normal URL specification for a protocol (use letters and numbers only, and start with a letter). You should make a note of the string that you choose as you will need to adjust your mobile application code with the URL Scheme in several places. Click OK. Click Save.. In Visual Studio or Xamarin Studio, run the client project on a device or emulator. Verify that an unhandled exception with a status code of 401 (Unauthorized) is raised after the app starts. The failure is logged to the console of the debugger. So in Visual Studio, you should see the failure in the output window. This unauthorized failure happens because the app attempts to access your Mobile App backend as an unauthenticated user. The TodoItem table now requires authentication. Next, you will update the client app to request resources from the Mobile App backend with an authenticated user. Add authentication to the app In this section, you will modify the app to display a login screen before displaying data. When the app starts, it will not connect to your App Service and will not display any data. After the first time that the user performs the refresh gesture, the login screen will appear; after successful login the list of todo items will be displayed. 
In the client project, open the file QSTodoService.cs and add the following using statement and MobileServiceUserwith accessor to the QSTodoService class: using UIKit; // Logged in user private MobileServiceUser user; public MobileServiceUser User { get { return user; } } Add new method named Authenticate to QSTodoService with the following definition: public async Task Authenticate(UIViewController view) { try { AppDelegate.ResumeWithURL = url => url.Scheme == "{url_scheme_of_your_app}" && client.ResumeWithURL(url); user = await client.LoginAsync(view, MobileServiceAuthenticationProvider.Facebook, "{url_scheme_of_your_app}"); } catch (Exception ex) { Console.Error.WriteLine (@"ERROR - AUTHENTICATION FAILED {0}", ex.Message); } } Note If you are using an identity provider other than a Facebook, change the value passed to LoginAsync above to one of the following: MicrosoftAccount, Twitter, Google, or WindowsAzureActiveDirectory. Open QSTodoListViewController.cs. Modify the method definition of ViewDidLoad removing the call to RefreshAsync() near the end: public override async void ViewDidLoad () { base.ViewDidLoad (); todoService = QSTodoService.DefaultService; await todoService.InitializeStoreAsync(); RefreshControl.ValueChanged += async (sender, e) => { await RefreshAsync(); } // Comment out the call to RefreshAsync // await RefreshAsync(); } Modify the method RefreshAsync to authenticate if the User property is null. Add the following code at the top of the method definition: // start of RefreshAsync method if (todoService.User == null) { await QSTodoService.DefaultService.Authenticate(this); if (todoService.User == null) { Console.WriteLine("couldn't login!!"); return; } } // rest of RefreshAsync method Open AppDelegate.cs, add the following method: public static Func<NSUrl, bool> ResumeWithURL; public override bool OpenUrl(UIApplication app, NSUrl url, NSDictionary options) { return ResumeWithURL != null && ResumeWithURL(url); } Open Info.plist file, navigate to URL Types in the Advanced section. Now configure the Identifier and the URL Schemes of your URL Type and click Add URL Type. URL Schemes should be the same as your {url_scheme_of_your_app}. In Visual Studio, connected to your Mac Host or Visual Studio for Mac, run the client project targeting a device or emulator. Verify that the app displays no data. Perform the refresh gesture by pulling down the list of items, which will cause the login screen to appear. Once you have successfully entered valid credentials, the app will display the list of todo items, and you can make updates to the data. Feedback We'd love to hear your thoughts. Choose the type you'd like to provide: Our feedback system is built on GitHub Issues. Read more on our blog.
https://docs.microsoft.com/en-us/azure/app-service-mobile/app-service-mobile-xamarin-ios-get-started-users
2019-03-18T15:52:58
CC-MAIN-2019-13
1552912201455.20
[]
docs.microsoft.com
$meta (aggregation) Definition: $meta returns the metadata associated with a document during the aggregation operation, for example the "textScore" assigned to a document that matches a $text query. The following aggregation operation performs a text search and uses the $meta operator to group by the text search score. For more examples, see Text Search in the Aggregation Pipeline.
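As an illustration of the kind of operation described above, here is a hedged sketch using the MongoDB Java driver rather than the original example; the connection string, database, collection, and search term are assumptions, and the collection needs a text index for $text to work:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import java.util.Arrays;

public class MetaTextScoreExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> articles =
                client.getDatabase("test").getCollection("articles");

            // Group matching documents by their text search score and count them.
            articles.aggregate(Arrays.asList(
                new Document("$match",
                    new Document("$text", new Document("$search", "cake"))),
                new Document("$group",
                    new Document("_id", new Document("$meta", "textScore"))
                        .append("count", new Document("$sum", 1)))
            )).forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}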
https://docs.mongodb.com/master/reference/operator/aggregation/meta/
2019-03-18T16:54:49
CC-MAIN-2019-13
1552912201455.20
[]
docs.mongodb.com
Progress NativeScript UI is a suite of UI components targeting the NativeScript platform. The controls are based on the familiar Progress Telerik UI for Android and Progress Telerik UI for iOS suites and expose a common API for utilizing these suites in Android and iOS cross-platform development. Progress NativeScript UI is a set of components that enable implementing rich-UI applications for iOS and Android by using NativeScript. Progress NativeScript UI is built on top of natively implemented components targeting iOS and Android. For more information on how to use Progress NativeScript UI, please visit the documentation website. Progress NativeScript UI is distributed via npm. You may download the package that contains the component that you want to use from npm. You can use the Progress NativeScript UI getting started application, which is publicly available on GitHub. This application contains various examples of the usage of the components in the suite. More information about how to run the application is available on its GitHub page. You can use the Progress NativeScript UI getting started application for Angular, which is publicly available on GitHub. This application contains various examples of the usage of the components in the suite. More information about how to run the application is available on its GitHub page. Your feedback will be highly appreciated and will directly influence the development of Progress NativeScript UI. You can submit issues and feedback at the dedicated feedback GitHub repository.
https://docs.nativescript.org/ns-ui-api-reference/index
2019-03-18T15:56:03
CC-MAIN-2019-13
1552912201455.20
[]
docs.nativescript.org
This annotation is used in conjunction with @WithReadLock to provide read and write synchronization on a method. A hidden java.util.concurrent.locks.ReentrantReadWriteLock field named $reentrantlock is added to the class, and method access is protected by the lock. If the method is static then the field is static and named $REENTRANTLOCK. The annotation takes an optional parameter for the name of the field. This field must exist on the class and must be of type ReentrantReadWriteLock. To understand how this annotation works, it is convenient to think in terms of the source code it replaces. The following is a typical usage of this annotation from Groovy:

import groovy.transform.*;

public class ResourceProvider {

    private final Map<String, String> data = new HashMap<String, String>();

    @WithReadLock
    public String getResource(String key) throws Exception {
        return data.get(key);
    }

    @WithWriteLock
    public void refresh() throws Exception {
        //reload the resources into memory
    }
}

As part of the Groovy compiler, code resembling this is produced:

import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.concurrent.locks.ReadWriteLock;

public class ResourceProvider {

    private final ReadWriteLock $reentrantlock = new ReentrantReadWriteLock();
    private final Map<String, String> data = new HashMap<String, String>();

    public String getResource(String key) throws Exception {
        $reentrantlock.readLock().lock();
        try {
            return data.get(key);
        } finally {
            $reentrantlock.readLock().unlock();
        }
    }

    public void refresh() throws Exception {
        $reentrantlock.writeLock().lock();
        try {
            //reload the resources into memory
        } finally {
            $reentrantlock.writeLock().unlock();
        }
    }
}

Element detail: value — public abstract String value (the optional name of the lock field)
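A brief illustration of that optional field-name parameter, not taken from the original page — the class, field, and method names below are made up:

import groovy.transform.WithReadLock
import groovy.transform.WithWriteLock
import java.util.concurrent.locks.ReentrantReadWriteLock

class Counters {
    // When a name is passed to the annotation, a field of this name and type must already exist.
    private final ReentrantReadWriteLock countersLock = new ReentrantReadWriteLock()
    private final Map<String, Integer> counts = [:]

    @WithReadLock('countersLock')
    int count(String key) {
        counts.getOrDefault(key, 0)
    }

    @WithWriteLock('countersLock')
    void increment(String key) {
        counts[key] = counts.getOrDefault(key, 0) + 1
    }
}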
http://docs.groovy-lang.org/latest/html/api/groovy/transform/WithWriteLock.html
2015-03-26T22:31:17
CC-MAIN-2015-14
1427131292683.3
[]
docs.groovy-lang.org
Annotation-based (@AspectJ) support and schema-based AOP support are covered in the AOP chapter (see Section 6.3, "Schema-based AOP support"); the transaction elements are discussed in the chapter on transaction management. The '-javaagent' is a Java 5+ flag for specifying and enabling agents to instrument programs running on the JVM, for example: java -javaagent:path/to/spring-agent.jar foo.Main. The Spring Framework ships with such an agent, the InstrumentationSavingAgent, which is packaged in spring-agent.jar (version 2.5 or later). Using the AspectJ support also requires aspectjrt.jar (version 1.5 or later) and aspectjweaver.jar (version 1.5 or later). If you are using the Spring-provided agent to enable instrumentation, you will also need spring-agent.jar (see Section 12.6.1.3.1).
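For context, an @AspectJ-style aspect of the kind configured by this support looks roughly like the following. This is a hedged sketch, not taken from the original page; the package name in the pointcut is a placeholder:

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class TracingAspect {

    // Runs before every public method of every type in the (hypothetical) service package.
    @Before("execution(public * com.example.service..*.*(..))")
    public void traceEntry(JoinPoint joinPoint) {
        System.out.println("Entering: " + joinPoint.getSignature());
    }
}

Declared as a regular bean, such an aspect is picked up when @AspectJ auto-proxying (<aop:aspectj-autoproxy/>) or load-time weaving is enabled.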
http://docs.spring.io/spring/docs/2.5.6/reference/aop.html
2015-03-26T22:31:41
CC-MAIN-2015-14
1427131292683.3
[]
docs.spring.io
Development Working on front-end To start development on the front-end part of django-filer, simply install all the packages with npm: npm install To compile and watch scss, run javascript unit-tests, jshint and jscs watchers: gulp To compile scss to css: gulp sass To run the sass watcher: gulp sass:watch To run javascript linting and code styling analysis: gulp lint To run the javascript linting and code styling analysis watcher: gulp lint:watch To run javascript linting: gulp jshint To run javascript code style analysis: gulp jscs To fix javascript code style errors: gulp jscs:fix To run javascript unit-tests: gulp tests:unit Contributing Claiming Issues Since GitHub issues does not support assigning an issue to a non-collaborator (yet), please just add a comment on the issue to claim it. Code Guidelines The code should be PEP8 compliant, with the exception that the line width is not limited to 80 but to 120 characters. The flake8 command can be very helpful (we run it as a separate env through Tox on Travis). If you want to check your changes for code style: $ flake8 This runs the checks without line widths and other minor checks; it also ignores source files in the migrations and tests and some other folders. This is the last command to run before submitting a PR (it will run tests in all tox environments): $ tox Another useful tool is reindent. It fixes whitespace and indentation stuff: $ reindent -n filer/models/filemodels.py Workflow Fork -> Code -> Pull request django-filer uses the excellent branching model from nvie. It is highly recommended to use the git flow extension that makes working with this branching model very easy. fork django-filer on github clone your fork: git clone [email protected]:username/django-filer.git cd django-filer initialize git flow: git flow init (choose all the defaults) git flow feature start my_feature_name creates a new branch called feature/my_feature_name based on master …code… …code… ..commit.. ..commit.. git flow feature publish creates a new branch remotely and pushes your changes navigate to the feature branch on github and create a pull request to the master branch on divio/django-filer after review, the changes may be merged into master for the release. If the feature branch is long running, it is good practice to merge the current state of the master branch into the feature branch sometimes. This keeps the feature branch up to date and reduces the likelihood of merge conflicts once it is merged back into master.
https://django-filer.readthedocs.io/en/latest/development.html
2022-06-25T03:49:09
CC-MAIN-2022-27
1656103034170.1
[]
django-filer.readthedocs.io
ansible.windows.win_environment module – Modify environment variables on Windows hosts. Synopsis Uses .net Environment to set or remove environment variables and can set at User, Machine or Process level. User level environment variables will be set, but not available until the user has logged off and on again. Parameters Notes Note This module is best-suited for setting the entire value of an environment variable. For safe element-based management of path-like environment vars, use the ansible.windows.win_path module. This module does not broadcast change events. This means that the minority of windows applications which can have their environment changed without restarting will not be notified and therefore will need restarting to pick up new environment settings. User level environment variables will require the user to log out and in again before they become available. In the return, before_value and value will be set to the last values when using variables. It's best to use values in that case if you need to find a specific variable's before and after values. See Also - ansible.windows.win_path The official documentation on the ansible.windows.win_path module. Examples

- name: Set an environment variable for all users
  ansible.windows.win_environment:
    state: present
    name: TestVariable
    value: Test value
    level: machine

- name: Remove an environment variable for the current user
  ansible.windows.win_environment:
    state: absent
    name: TestVariable
    level: user

- name: Set several variables at once
  ansible.windows.win_environment:
    level: machine
    variables:
      TestVariable: Test value
      CUSTOM_APP_VAR: 'Very important value'
      ANOTHER_VAR: '{{ my_ansible_var }}'

- name: Set and remove multiple variables at once
  ansible.windows.win_environment:
    level: user
    variables:
      TestVariable: Test value
      CUSTOM_APP_VAR: 'Very important value'
      ANOTHER_VAR: '{{ my_ansible_var }}'
      UNWANTED_VAR: ''  # < this will be removed

Return Values Common return values are documented here, the following are the fields unique to this module: Collection links Issue Tracker Repository (Sources)
https://docs.ansible.com/ansible/latest/collections/ansible/windows/win_environment_module.html
2022-06-25T05:26:29
CC-MAIN-2022-27
1656103034170.1
[]
docs.ansible.com
Apps Firewall is a product of Cisco Cloudlock that monitors the OAuth grants of third-party applications by users in your environment. The features of the product enable administrators to verify the potential risk that installed apps in the environment can have, depending on what information the scopes request. Cloudlock Apps Firewall is available for Google and O365 platforms. Google Apps Cloudlock monitors Google apps installed by the following methods: - GSuite (Marketplace apps) - Google web apps - Android OAuth apps - Chrome extensions (via OAuth) Apps Firewall can monitor apps that are installed domain-wide by Google admins but Cloudlock cannot revoke these apps. To revoke these apps a Google admin would need to remove that app domain-wide. O365 Apps Cloudlock monitors apps installed through the Azure AD API by the following methods: - Admin-approved apps installed from the Azure AD portal - Admin-approved and user-installed apps installed from the Office 365 Store - Apps installed and authorized via OAuth single sign-on (web or mobile) Cloudlock does not support apps that are not admin-approved and installed by users directly from the Office 365 store. This includes add-ins and add-ons.
https://docs.umbrella.com/cloudlock-documentation/docs/introduction
2022-06-25T04:59:32
CC-MAIN-2022-27
1656103034170.1
[]
docs.umbrella.com
A centralized setting is one that can be applied to multiple organizations at the same time, including new customers as they come on board. Centralized settings are powerful and easy to use, helping you reduce your total cost of ownership while increasing your free time. The MSSP console divides centralized settings into the following areas: - Overview Page - Destination Lists - Block Pages - Content Settings - Security Settings - Custom Integrations - Advanced Settings Centralized Settings can also be applied to customers from the Customer Management section of the MSSP console, although you cannot create centralized settings through Customer Management. Access Centralized Settings and the Overview Page - Navigate to Centralized Settings > Overview. The Overview page opens, which allows you to view all of your customers, the number of policies for each, and the applied settings for the customer's default policy. You can also make changes to settings for individual customer policies from the Overview page. Note: Customers are added through Customer Management. - Click the customer's name to expand the customer and see details about the customer's settings. Bolded settings are those settings which are not a part of your Centralized Settings but instead are uniquely configured in that customer's Umbrella dashboard. You can change both kinds of settings (centralized and unique to the customer) from within the Overview Page. If you wish to change settings, instead of logging into the individual customer's dashboard, simply choose a setting from the appropriate drop-down list. This can be either a setting that is unique to the customer or one that is available for all customers. Each drop-down list organizes Centralized Settings and individual Customer Settings into separate sub-lists.
https://docs.umbrella.com/mssp-deployment/docs/new-centralized-settings
2022-06-25T04:14:05
CC-MAIN-2022-27
1656103034170.1
[]
docs.umbrella.com
Setup steps for SSH connections to AWS CodeCommit repositories on Linux, macOS, or Unix Before you can connect to CodeCommit for the first time, you must complete some initial configuration steps. After you set up your computer and AWS profile, you can connect to CodeCommit. To get started, create an Amazon Web Services account, create an IAM user, and configure access to CodeCommit. To create and configure an IAM user for accessing CodeCommit Create an Amazon Web Services account by going to the AWS website and choosing Sign Up. Create an IAM user, or use an existing one, in your Amazon Web Services account. Make sure you have an access key ID and a secret access key associated with that IAM user. For more information, see Creating an IAM User in Your Amazon Web Services Account. Attach the AWSCodeCommitPowerUser policy or another managed policy for CodeCommit access to the IAM user. For more information, see AWS managed policies for CodeCommit. After you have selected the policy you want to attach, choose Next: Review to review the list of policies to attach to the IAM user. If the list is correct, choose Add permissions. For more information about CodeCommit managed policies and sharing access to repositories with other groups and users, see Share a repository and Authentication and access control for AWS CodeCommit. Git version 2.28 supports configuring the branch name for initial commits. We recommend using a recent version of Git. To install Git, we recommend websites such as Git Downloads. Git is an evolving, regularly updated platform. Occasionally, a feature change might affect the way it works with CodeCommit. If you encounter issues with a specific version of Git and CodeCommit, review the information in Troubleshooting. Step 3: Configure credentials on Linux, macOS, or Unix SSH and Linux, macOS, or Unix: Set up the public and private keys for Git and CodeCommit. Generate a key pair with ssh-keygen, copy the contents of the generated .pub file, which is the public key file, upload it as an SSH public key for your IAM user, and reference the private key and the SSH key ID in your ~/.ssh/config file. Tip By default, ssh-keygen generates a 2048 bit key. You can use the -t and -b parameters to specify the type and length of the key. If you want a 4096 bit key in the rsa format, you would specify this by running the command with the following parameters: ssh-keygen -t rsa -b 4096 For more information about the formats and lengths required for SSH keys, see Using IAM with CodeCommit. You can set up SSH access to repositories in multiple Amazon Web Services accounts; for more information, see Troubleshooting SSH connections to AWS CodeCommit. To test your SSH configuration, run, for example: ssh -v git-codecommit.us-east-2.amazonaws.com For information to help you troubleshoot connection problems, see Troubleshooting SSH connections to AWS CodeCommit. Find the repository you want to connect to from the list and choose it. Choose Clone URL, and then choose the protocol you want to use when cloning or connecting to the repository. This copies the clone URL. Copy the HTTPS URL if you are using either Git credentials with your IAM user or the credential helper included with the AWS CLI. Copy the HTTPS (GRC) URL if you are using the git-remote-codecommit command on your local computer. Copy the SSH URL if you are using an SSH public/private key pair with your IAM user. Note If you see a Welcome page instead of a list of repositories, there are no repositories associated with your AWS account in the AWS Region where you are signed in. To create a repository, see Create an AWS CodeCommit repository or follow the steps in the Getting started with Git and CodeCommit tutorial. Open a terminal. From the /tmp directory, run the git clone command with the SSH URL you copied to clone the repository. Then see Getting started with CodeCommit to start using CodeCommit.
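A typical ~/.ssh/config entry for CodeCommit is sketched below; the User value and the key file name are placeholders — use the SSH key ID that IAM shows for your uploaded public key and the private key file you generated:

Host git-codecommit.*.amazonaws.com
  User <SSH-key-ID-from-IAM>
  IdentityFile ~/.ssh/codecommit_rsa

After saving the file, restrict its permissions with chmod 600 ~/.ssh/config before testing the connection with the ssh command shown above.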
https://docs.aws.amazon.com/en_en/codecommit/latest/userguide/setting-up-ssh-unixes.html
2022-06-25T05:52:59
CC-MAIN-2022-27
1656103034170.1
[]
docs.aws.amazon.com
SecureAuth IdP integrates with your company's applications to provide Single sign-on (SSO) access via a Security Assertion Markup Language (SAML) assertion to all applications the authorized end-user is allowed to access. Each SAML application integration configured on the SecureAuth IdP Web Admin results in the creation of an XML metadata file to be uploaded to your application (service provider). This metadata file contains information to identify and assert the end-user during the authentication login process in which digitally-signed XML documents are exchanged between SecureAuth IdP and the application over a secure connection. You create and manage SAML application integrations using the app onboarding tool on the New Experience user interface. Select a SAML application template from the library, then use common components to customize each new application integration you create. Provide a name for the app, associate a data store with it, and specify which group(s) can access the app. Upload a logo to quickly find the completed app in the Application Manager list. Define how the connection will be initiated – by service provider (SP-initiated) or by SecureAuth IdP (IdP-initiated) – and configure user ID mapping criteria, user attributes, and information about the SAML assertion. The SP-initiated SAML application integration starts the login process at the service provider / application, then redirects the end-user to SecureAuth IdP for authentication, and finally asserts the end-user back to the application once successfully authenticated. The IdP-initiated SAML application integration starts the login process at SecureAuth IdP and asserts the end-user to the application once successfully authenticated. Use the Classic Experience user interface to configure the end-user's authentication Workflow, and enable Two-Factor Authentication methods and Adaptive Authentication modules. Return to the New Experience user interface to make any modifications to data stores associated with the app. NOTE: An application integration created on the New Experience user interface is stored in the cloud as well as in the web.config file on the SecureAuth IdP appliance, making many configured elements of the application accessible on the Classic Experience user interface. On-premises Active Directory / SQL Server (membership directory / profile directory) integrated with SecureAuth IdP which can be used in the application integration. SAML Application integration Salesforce app integration See Application template library master list for the current list of available application templates
https://docs.classic.secureauth.com/plugins/viewsource/viewpagesrc.action?pageId=47230527
2022-06-25T05:26:01
CC-MAIN-2022-27
1656103034170.1
[]
docs.classic.secureauth.com
Multiple Resources per Object Can I select more than one resource for the same item? Yes! You don't need to do anything to get multiple resources to work in Google calendar. DayBack essentially lets you add new fields to Google calendar and Resource is one example of that. DayBack currently supports multiple resources (and a single status) per Google event. Note: You can drag an event on the Resource view from one resource column to another; when you do you'll be swapping out the new resource for the resource you dragged from. Other resources for the event remain intact. Multiple Resources in Salesforce Do Salesforce records support more than one resource per item? Yes, provided you've mapped your resource field to something other than Owner.Name because Salesforce only permits an item to have a single owner. But if you've mapped to a custom field set up as either a long text field or a multi-select picklist then you'll be able to select more than one resource by shift-clicking resources in DayBack's resource drawer (screenshot below). For more on field mapping, check this out: Field & Object Mapping Can I treat both people and rooms as resources? Absolutely-- this is one of the most common setups for DayBack. How you set this up will depend on where the resources are recorded in your Salesforce object. Some of my objects use "owner" for a person, and some use another field for "room" (My objects each have one resource field) This setup requires no special configuration, though you'll likely want to create a resource folder for people and another for rooms. If the field you're using for "room" is a long text field or a multi-select picklist you'll be able to associate multiple resources with the same item. Within the same object, one resource is the activity owner, the other is a custom field for "room" (One of my objects has two resource fields) In this scenario, you'll create two calendar sources mapped to the same table: one will use the owner as the resource field and the other source will use the room. You can then turn on the first source (let's say you've named it "Activities by Technician") to view your assignments by person, and show just the second source ("Activities by Room No.") to view them by room. Again, you'll likely want to create a resource folder for people and another for rooms. My resource is a long text field or a multi-select picklist that can support multiple values (My object has one resource field that will contain more than one entry) This is by far the simplest setup and the easiest for users when they are working with the schedule. In the event popover, users can simply shift-click when selecting resources for an item to associate more than one resource with it. In your resources field, these multiple resources will be written as a semicolon-separated list in your Salesforce object's resource field just as if they were entered in your picklist. You can drag an event on the Resource view from one resource column to another; when you do you'll be swapping out the new resource for the resource you dragged from. Other resources for the event remain intact. FileMaker Specific Simply shift-click when selecting resources for an event to associate more than one resource with the event. In FileMaker, these multiple resources will be written as a return-separated list in your FileMaker table's resource field. resources' IDs based on the names entered in the field you've mapped to "Resource". Learn more here: Mapping the Resource Field in FileMaker.
https://docs.dayback.com/article/73-multiple-resources-per-object
2022-06-25T05:26:26
CC-MAIN-2022-27
1656103034170.1
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/568d5975c69791436155c1b3/images/56e4600190336026d8717918/file-2bNx273uZX.png', None], dtype=object) ]
docs.dayback.com
to be connecting to the license server, once the Licensing wizard appears.. Launch Internet Information Services (IIS) Manager. This can be found by launching the Control Panel (from the Windows Start Button), selecting System and Security, then select Administrative Tools and then double clicking Internet Information Services (IIS) Manager. If the module is not shown, follow the steps below: ASPNetCore Module Download The Integration Theme consumes the Integration Theme RESTful API, it is best practice to set this up on its own endpoint and secure with HTTPS.. This name identifies the site in the IIS Manager only, it does not appear externally, although it is good practice to use the same name as your site (for instance api.driveworkslive.com). This will usually be the following location: Integration Theme - %ProgramData%\DriveWorks\[version number]\Live\Themes\Integration For testing purposes, the built in self-signed certificate can be used. For production, we strongly recommend a SSL certificate from a trusted certificate authority. From the IIS Manager, select Application Pools from the Connections panel. A new application pool has been added for the new website previously created. From the Application Pools list, right click the site that was added above and choose Advanced Settings. From the Advanced Settings dialog: From the .NET CLR Version drop down select No Managed Code: Ensure Anonymous Authentication is enabled and other forms of Authentication are Disabled. There are many ways to secure the API endpoint, Anonymous allows DriveWorks to manage the security of the endpoint. Other types of Authentication can be enabled to add an extra layers of protection. For example, Basic Authentication would require the client to send a basic authorization header with any API requests to the endpoint. The group not found message should be returned. Please ensure no SSL certificate warnings are received at this point. Please follow the information in the topic Integration Theme Settings to configure further settings. This is only required if your Projects use the Upload Control. Visit Public Integration Theme Demo sites for more information on sites hosted in the Integration Theme. This collection of example sites is another great way to get started. Each example is freely customizable and ready to use with your existing DriveWorks Projects. These are enabled by selecting "Copy Client SDK Examples to this folder" during the Theme Configuration setup process. The following is a instruction set in getting started with the Integration theme quickly. Please ensure that an appropriate code editor is installed, as this will be needed to make changes to the site. Visual Studio code is a free application that can be downloaded from Microsoft. These sites are designed to be copied out and hosted on another server acting as the Integration Theme client (not hosted by DriveWorks Live). The Integration Theme landing page should not be used to host a production website - only to preview the funcionality demonstrated. See Also:
https://docs.driveworkspro.com/Topic/ConfiguringIntegrationThemeForIIS
2022-06-25T05:23:58
CC-MAIN-2022-27
1656103034170.1
[]
docs.driveworkspro.com
Returns a pipe bar (|) delimited list of all of the values in a specific picklist within Salesforce.

SFGetPicklist([Object Name],[Picklist Name])

Where:

Object Name is the name of the object in Salesforce (e.g. Account, Contact).

Picklist Name is the name of the specific picklist for this object type in Salesforce (e.g. Type for Account).

For more information, see the Salesforce SOAP API Developer Guide.
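For example, assuming a connected Salesforce org that uses the standard Account object, a rule such as the following would return the entries of the Type picklist as a pipe-delimited list:

SFGetPicklist("Account", "Type")

Depending on the picklist values defined in the org, the result might look like Prospect|Customer - Direct|Customer - Channel (hypothetical values; your org's configuration determines the actual list).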
https://docs.driveworkspro.com/Topic/SFGetPicklist
2022-06-25T04:47:23
CC-MAIN-2022-27
1656103034170.1
[]
docs.driveworkspro.com
As well as co-branding the MSSP console, you can also co-brand the Login page. Create a Canonical Name (CNAME) record and point it to msp-login.opendns.com. Once it's created, add the domain name of the CNAME you've created to the MSSP console and your logo appears when the Login page is accessed through your CNAME.

Prerequisites

Create a CNAME record with your authoritative DNS provider that points your domain to msp-login.opendns.com.

Procedure

- Navigate to MSSP Settings > Dashboard Co-branding.
- In the Branded Login area, click Add.
- Add your CNAME and click Save.
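For reference, a CNAME record of this kind in a BIND-style zone file could look like the line below. The subdomain login.example-msp.com is a placeholder; use whichever hostname you control:

login.example-msp.com.   3600   IN   CNAME   msp-login.opendns.com.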
https://docs.umbrella.com/mssp-deployment/docs/create-a-branded-login-page
2022-06-25T04:41:28
CC-MAIN-2022-27
1656103034170.1
[]
docs.umbrella.com
Submit a Hive Warehouse Connector Python app

You can submit a Python app based on the HiveWarehouseConnector library by following the steps to submit a Scala or Java application, and then adding a Python package.

- Locate the hive-warehouse-connector-assembly jar in /usr/hdp/current/hive_warehouse_connector/.
- Add the connector jar to the app submission using --jars. For example:
  spark-shell --jars /usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-<version>.jar
- Locate the pyspark_hwc zip package in /usr/hdp/current/hive_warehouse_connector/.
- Add the Python package for the connector to the app submission using --py-files. For example:
  pyspark --jars /usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-<version>.jar --py-files /usr/hdp/current/hive_warehouse_connector/pyspark_hwc-<version>.zip
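Once the pyspark shell starts with the connector on the classpath, the Python API can be used roughly as follows. This is a minimal sketch assuming an HDP 3.x cluster and a table named default.my_table (a placeholder); check the Hive Warehouse Connector documentation for the exact API in your release:

from pyspark_llap import HiveWarehouseSession

# Build an HWC session from the existing SparkSession (spark in the pyspark shell).
hive = HiveWarehouseSession.session(spark).build()

# Run Hive queries through the connector and work with the results as DataFrames.
hive.showDatabases().show()
df = hive.executeQuery("SELECT * FROM default.my_table LIMIT 10")  # my_table is a placeholder
df.show()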
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/integrating-hive/content/hive_submit_a_hivewarehouseconnector_python.html
2022-06-25T05:21:22
CC-MAIN-2022-27
1656103034170.1
[]
docs.cloudera.com
The most important aspect of testing mobile devices is testing the individual application. Developing a mobile device application is of utmost importance for enterprises and requires substantial development cycles. Equally important is testing it for the overall experience and for defects. Continuous Testing provides end-to-end support for testing an application across a large set of devices:

- Installing the application on a device
- Launching the application on a device
- Manual and automated testing

To begin, open a device from the device screen. You can then start interacting with the application as if you were a real user. Proceed by installing the application on the device. If the app is not on the list, upload the app package file. Then launch the application; in this example, the "eribank" application is launched. Once the application has launched, you can start interacting with it.
https://docs.experitest.com/pages/viewpage.action?pageId=52599329&spaceKey=LT
2022-06-25T04:42:02
CC-MAIN-2022-27
1656103034170.1
[]
docs.experitest.com
ThoughtSpot Software Documentation ThoughtSpot Software is our original offering that you deploy and manage yourself. For details on all deployment options, see ThoughtSpot Software Deployment. Find topics for the common types of ThoughtSpot users below. Analyst What’s new in ThoughtSpot Software 8.4.0.sw June 2022 Key Performance Indicator (KPI) chart type You can now create visualizations of your data’s Key Performance Indicators (KPIs). When you search for a measure with a time-related keyword (for example, Sales weekly), you can create sparkline visualizations of your data’s Key Performance Indicators (KPIs). ThoughtSpot also supports conditional formatting to add visual cues for KPIs or threshold metrics to easily show where you are falling short or exceeding targets. For more information, see KPI charts. Auto-select search data source for new users When a new user uses Search Data, ThoughtSpot intelligently selects a data source for them to search on. ThoughtSpot chooses the most popular data source in the cluster that the user has access to. This allows users to begin to search data easily, without looking through all the existing data sources on their cluster. Geo map support for France postcode We now support more detailed geographic maps for France. You can now create maps based on postal codes, as well as region and city name. For more information, see Geo map reference. New answer experience The new answer experience contains new features and enhancements, including an in-product undo, redo, and reset button, HTML for answer titles and descriptions, and improvements to conditional formatting for charts, tables, and pivot tables. To try it out, navigate to your profile, scroll down to Experience, select Edit, and toggle the Answer experience to New experience. See New answer experience. Conditional and number formatting for downloaded tables When you download a table in XLSX format, the downloaded table now shows the same conditional and number formatting as the table in ThoughtSpot. See Download a search. Liveboard schedule ThoughtSpot now combines the Liveboard follow and Liveboard Schedule features into a single action called "Schedule." To create a new schedule to receive an email containing a pdf of your Liveboard, users now select the Schedule button to the left of the more options menu . ThoughtSpot will migrate any existing Liveboard follows to Liveboard schedules. For users who followed a Liveboard in November Cloud or earlier, those Liveboard schedules appear in the list of Liveboard schedules as Migrated from follow (your display name). New SpotIQ experience SpotIQ has a new reorganized and more intuitive UI. The functionality remains the same. To try it out, navigate to your profile, scroll down to Experience, select Edit, and toggle the SpotIQ experience to New experience. See SpotIQ. Other features and enhancements Date interval functions We introduced new functions for computing time intervals between two dates. In addition to the existing diff_days and diff_time functions, you can now use diff_years, diff_quarters, diff_months diff_weeks, diff_hours, and diff_minutes to calculate time intervals. If your organization uses a custom calendar for your fiscal year, use the optional custom calendar argument with these functions to calculate the difference between the two dates. See Formula function reference. Streamlined analyst setup We simplified the steps to set up an analyst account on ThoughtSpot. 
Now, you can create a connection, create a worksheet to model your business use cases, immediately search your data, and automatically create Search visualizations. See Analyst Onboarding for further details. This feature is specific to clusters based on connections to external data warehouses, not imported data (Falcon). To enable this feature for your cluster, contact ThoughtSpot Support.

Data Workspace Beta

The redesigned Data Workspace provides new features including SQL-based views, SpotApps, and a more intuitive user experience. To see it, select Data in the top navigation bar.

SQL-based views Beta

With SQL-based views, you can now create views based on custom SQL queries, and then use them as data sources.

OAuth for Databricks

Databricks connections now support OAuth. See Configure OAuth for a Databricks connection.

ThoughtSpot Everywhere

Starting from the 8.4.0-sw release (Limited Availability), customers licensed to embed ThoughtSpot can use ThoughtSpot Everywhere features and the Visual Embed SDK. To enable ThoughtSpot Everywhere on your cluster, contact ThoughtSpot Support. For new features and enhancements introduced in this release for ThoughtSpot Everywhere, see ThoughtSpot Developer Documentation.
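As a quick illustration of the date interval functions mentioned above, a worksheet formula along these lines would return the number of whole months between two date columns (the column names here are hypothetical, and the exact argument order plus the optional custom calendar argument are described in the Formula function reference):

diff_months(ship_date, order_date)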
https://docs.thoughtspot.com/software/8.4.0.sw/
2022-06-25T04:24:05
CC-MAIN-2022-27
1656103034170.1
[array(['_images/persona-analyst.png', None], dtype=object) array(['_images/kpi-viz-sparkline.png', 'KPI visualization example 2'], dtype=object) array(['_images/new-answer-experience.gif', 'New answer experience gif'], dtype=object) array(['_images/liveboard-schedule.png', 'Liveboard schedule button'], dtype=object) array(['_images/spotiq-v2-ui.png', 'New SpotIQ experience'], dtype=object) array(['_images/data-workspace-image.png', 'New data workspace'], dtype=object) array(['_images/sql-bsd-view.png', 'sql-based-views'], dtype=object) array(['_images/dbt-integration.png', 'dbt integration'], dtype=object)]
docs.thoughtspot.com
DeleteProject

Deletes an Amazon Rekognition Custom Labels project. To delete a project you must first delete all models associated with the project. To delete a model, see DeleteProjectVersion.

DeleteProject is an asynchronous operation. To check if the project is deleted, call DescribeProjects. The project is deleted when the project no longer appears in the response.

This operation requires permissions to perform the rekognition:DeleteProject action.

Request Syntax

{ "ProjectArn": "string" }

Request Parameters

For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format.

- ProjectArn: The Amazon Resource Name (ARN) of the project that you want to delete.
  Type: String
  Length Constraints: Minimum length of 20. Maximum length of 2048.
  Pattern: (^arn:[a-z\d-]+:rekognition:[a-z\d-]+:\d{12}:project\/[a-zA-Z0-9_.\-]{1,255}\/[0-9]+$)
  Required: Yes

Response Syntax

{ "Status": "string" }
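For instance, calling the operation from the AWS SDK for Python (boto3) could look like the sketch below. The project ARN is a placeholder, and the returned Status reflects the asynchronous deletion:

import boto3

client = boto3.client("rekognition")

# The ARN below is a placeholder for an existing Custom Labels project.
response = client.delete_project(
    ProjectArn="arn:aws:rekognition:us-east-1:123456789012:project/my-project/1234567890123"
)
print(response["Status"])  # e.g. "DELETING" while the asynchronous delete runs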
https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DeleteProject.html
2022-06-25T05:59:46
CC-MAIN-2022-27
1656103034170.1
[]
docs.aws.amazon.com
Multiresolution Modifier The Multiresolution modifier (often shortened to “Multires”) gives you the ability to subdivide a mesh similarly to the Subdivision Surface modifier, but also allows you to edit the new subdivision levels in Sculpt Mode. Note Multiresolution is the only modifier that cannot be repositioned in the stack after any modifier that will change geometry or other object data (i.e. all Generate, some Modify and some Simulate modifiers cannot come before the Multiresolution one). Options The Multiresolution modifier. - Levels Viewport Set the level of subdivisions to show in Object Mode. - Sculpt Set the level of subdivisions to use in Sculpt Mode. - Render Set the level of subdivisions to show when rendering. - Sculpt Base Mesh Makes sculpt-mode tools deform the base mesh instead of the displaced mesh, while previewing the displacement of higher subdivision levels. This allows you to see the propagation of strokes in real-time, which enables to use complex tools like Cloth or Pose in much higher resolutions without surface noise and artifacts. - Optimal Display When rendering the wireframe of this object, the wires of the new subdivided edges will be skipped (only displays the edges of the original geometry). Subdivisions - Subdivide Creates a new level of subdivision using the type specified by Subdivision Type (see below). - Simple Creates a new level of subdivision using a simple interpolation by subdividing edges without any smoothing. - Linear Creates a new level of subdivision using linear interpolation of the current sculpted displacement. - Unsubdivide Rebuild a lower subdivision level of the current base mesh. - Delete Higher Deletes all subdivision levels that are higher than the current one. Shape -. Generate - Rebuild Subdivisions Rebuilds all possible subdivisions levels to generate a lower resolution base mesh. This is used to create an optimized multiresolution version of a pre-existing sculpt. This option is only available when no subdivision level have been created through the modifier. - Save External Saves displacements to an external .btxfile. Advanced - Quality How precisely the vertices are positioned (relatively to their theoretical position), can be lowered to get a better performance when working on high-poly meshes. - UV Smooth How to handle UVs during subdivision. - None UVs remain unchanged. - Keep Corners UV islands are smoothed, but their boundary remain unchanged. - Keep Corners, Junctions UVs are smoothed, corners on discontinuous boundary and junctions of three or more regions are kept sharp. - Keep Corners, Junctions, Concave UVs are smoothed, corners on discontinuous boundary, junctions of three or more regions and darts and concave corners are kept sharp. - Keep Boundaries UVs are smoothed, boundaries are kept sharp. - All UVs and their boundaries are smoothed. - Boundary Smooth Controls how open boundaries (and corners) are smoothed. - All Smooth boundaries, including corners. - Keep Corners Smooth boundaries, but corners are kept sharp. - Use Creases Use the Weighted Edge Creases values stored in edges to control how smooth they are made. - Use Custom Normals Interpolates existing Custom Split Normals of the resulting mesh.
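Outside of the UI, the same modifier can be added and subdivided from Blender's Python API. The following is a minimal sketch; the object name and level values are arbitrary:

import bpy

obj = bpy.data.objects["Cube"]          # placeholder object name
mod = obj.modifiers.new(name="Multires", type='MULTIRES')

# Add two subdivision levels (Catmull-Clark), then set the viewport/sculpt/render levels.
bpy.context.view_layer.objects.active = obj
bpy.ops.object.multires_subdivide(modifier=mod.name, mode='CATMULL_CLARK')
bpy.ops.object.multires_subdivide(modifier=mod.name, mode='CATMULL_CLARK')
mod.levels = 1          # Levels Viewport
mod.sculpt_levels = 2   # Sculpt
mod.render_levels = 2   # Render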
https://docs.blender.org/manual/en/3.0/modeling/modifiers/generate/multiresolution.html
2022-06-25T05:34:01
CC-MAIN-2022-27
1656103034170.1
[array(['../../../_images/modeling_modifiers_generate_multiresolution_panel.png', '../../../_images/modeling_modifiers_generate_multiresolution_panel.png'], dtype=object) ]
docs.blender.org
bokeh.core.property.descriptor_factory¶ Provide a Base class for all Bokeh properties. Bokeh properties work by contributing Python descriptor objects to HasProps classes. These descriptors then delegate attribute access back to the Bokeh property class, which handles validation, serialization, and documentation needs. The PropertyDescriptorFactory class provides the make_descriptors method that is used by the metaclass MetaHasProps during class creation to install the descriptors corresponding to the declared properties. This machinery helps to make Bokeh much more user friendly. For example, the DataSpec properties mediate between fixed values and references to column data source columns. A user can use a very simple syntax, and the property will correctly serialize and validate automatically: from bokeh.models import Circle c = Circle() c.x = 10 # serializes to {'value': 10} c.x = 'foo' # serializes to {'field': 'foo'} c.x = [1,2,3] # raises a ValueError validation exception There are many other examples like this throughout Bokeh. In this way users may operate simply and naturally, and not be concerned with the low-level details around validation, serialization, and documentation. Note These classes form part of the very low-level machinery that implements the Bokeh model and property system. It is unlikely that any of these classes or their methods will be applicable to any standard usage or to anyone who is not directly developing on Bokeh’s own infrastructure. - class PropertyDescriptorFactory[source]¶ Base class for all Bokeh properties. A Bokeh property really consist of two parts: the familiar “property” portion, such as Int, String, etc., as well as an associated Python descriptor that delegates attribute access (e.g. range.start) to the property instance. Consider the following class definition: from bokeh.model import Model from bokeh.core.properties import Int class SomeModel(Model): foo = Int(default=10) Then we can observe the following: >>> m = SomeModel() # The class itself has had a descriptor for 'foo' installed >>> getattr(SomeModel, 'foo') <bokeh.core.property.descriptors.PropertyDescriptor at 0x1065ffb38> # which is used when 'foo' is accessed on instances >>> m.foo 10 - make_descriptors(name: str) List[PropertyDescriptor[T]] [source]¶ Return a list of PropertyDescriptorinstances to install on a class, in order to delegate attribute access to this property. - Parameters name (str) – the name of the property these descriptors are for - Returns list[PropertyDescriptor] The descriptors returned are collected by the MetaHasPropsmetaclass and added to HasPropssubclasses during class creation. Subclasses of PropertyDescriptorFactoryare responsible for implementing this function to return descriptors specific to their needs.
https://docs.bokeh.org/en/latest/docs/reference/core/property/descriptor_factory.html
2022-06-25T05:29:30
CC-MAIN-2022-27
1656103034170.1
[]
docs.bokeh.org
How do I directly link to a specific section of an article? By updated 10 months ago Use anchor links to help your visitors navigate to a specific section of your help content without scrolling. - In your Gist workspace, navigate to knowledge base section. - Hover over the knowledge base article where you want to add an anchor, then click Edit. - On the rich text toolbar, click the code view icon. It looks like this: </> - Scroll down to the section of the article, where you want your link to jump to. - Add an ID to the element, as shown (example: id="order-delivery-process" ). ID must begin with a letter and may only contain letters, numbers, hyphens, underscores, colons and periods. If you want to use more than one word for your ID, separate each word with dashes (-). - Next, create the link that sends the visitor to the section of the page where the anchor was inserted. - Highlight the text you want to hyperlink in the article body. - In the rich text toolbar, click the link icon. - If the anchor you're linking to is on the same page as your link, enter the # symbol followed by the ID of the anchor in the URL field. In the example above, #order-delivery-process is entered in the URL field. - If the anchor you are linking to is on a different page as the link, include the full URL of the page followed by the hashtag symbol # followed by the ID of the anchor. For example,. - Click Insert - If a visitor clicks this anchor link, they'll be redirected to the section of the article where the ID was placed. Sample code: For example, you want to create direct links to two parts of a lengthy help article. You'd need to add IDs to both the sections this way: <p id="part-one">I am part one</p> <p id="part-two">I am part two</p> Once done, you can add the URL in links of another articles' content: <a href="">Link to part one</a> <a href="">Link to part two</a> Need Help? If you have any further questions, please start a Live Chat. Just "Click" on the Chat Icon in the lower right corner to talk with our support team.
https://docs.getgist.com/article/257-how-do-i-directly-link-to-a-specific-section-of-an-article
2022-06-25T04:45:03
CC-MAIN-2022-27
1656103034170.1
[array(['https://d258lu9myqkejp.cloudfront.net/users_profiles/3/medium/jittarao.jpg?1588261838', 'Avatar'], dtype=object) ]
docs.getgist.com
The create-service command creates a Grails service class and associated unit test for the given base name.

Examples:

grails create-service
grails create-service book
grails create-service org.bookstore.Book

Creates a service for the given base name. The argument is optional, but if you don't include it the command will ask you for the name of the service. A service encapsulates business logic and is delegated to by controllers to perform the core logic of a Grails application.

The name of the service can include a Java package, such as org.bookstore in the final example above, but if one is not provided a default is used. So the second example will create the file grails-app/services/<appname>/BookService.groovy whereas the last one will create grails-app/services/org/bookstore/BookService.groovy. Note that the first letter of the service name is always upper-cased when determining the class name.

If you want the command to default to a different package for services, provide a value for grails.project.groupId in the runtime configuration.

Note that this command is just for convenience and you can also create services in your favorite text editor or IDE if you choose.

Usage: grails create-service <<name>>
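The generated artefact is an ordinary Groovy class. For the org.bookstore.Book example, the service produced by the command would look roughly like the following; the exact template varies between Grails versions:

package org.bookstore

import grails.gorm.transactions.Transactional

@Transactional
class BookService {

    def serviceMethod() {
        // business logic delegated to by controllers goes here
    }
}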
https://docs.grails.org/latest/ref/Command%20Line/create-service.html
2022-06-25T04:37:25
CC-MAIN-2022-27
1656103034170.1
[]
docs.grails.org
ContributingSource: CONTRIBUTING.md This contributing guide has been derived from the tidyverse boilerplate. Where it seems over the top, common sense is appreciated, and every contribution is appreciated. Non-technical contributions to ruODK Feel free to report issues: - Bug reports are for unplanned malfunctions. - Feature requests are for ideas and new features. - Account requests are for getting access to the ODK Central instances run by DBCA (DBCA campaigns only) or the CI server (contributors, to run tests). Technical contributions to ruODK If you would like to contribute to the code base, follow the process below. - Prerequisites - PR Process - Fork, clone, branch - Check - Style - Document - Test - NEWS - Re-check - Commit - Push and pull - Review, revise, repeat - Resources - Code of Conduct This explains how to propose a change to ruODK via a pull request using Git and GitHub. For more general info about contributing to ruODK, see the Resources at the end of this document. Prerequisites To test the package, you will need valid credentials for the ODK Central instance used as a test server. Create an account request issue. Before you do a pull request, you should always file an issue and make sure the maintainers agree that it is a problem, and is happy with your basic proposal for fixing it. If you have found a bug, follow the issue template to create a minimal reprex. Checklists Some changes have intricate internal and external dependencies, which are easy to miss and break. These checklists aim to avoid these pitfalls. Test and update reverse dependencies (wastdr, urODK, etlTurtleNesting, etc.). Adding a dependency - Update DESCRIPTION - Update GH Actions install workflows - do R package deps have system deps? Can GHA install them in all environments? - Update Dockerfile - Update urODK binder install.R - Update installation instructions Renaming a vignette - Search-replace all links to the vignette throughout - ruODK, - urODK, - ODK Central “OData” modal - ODK Central docs Adding or updating a test form - Update tests - Update examples - Update packaged data if test form submissions are included - Add new cassette to vcr cache for each test using the test form Adding or updating package data - Update tests using the package data - Update examples - Update README if showing package data PR process Fork, clone, branch The first thing you’ll need to do is to fork the ruODK GitHub repo, and then clone it locally. We recommend that you create a branch for each PR. Check Before changing anything, make sure the package still passes the below listed flavours of R CMD check locally for you. Style Match the existing code style. This means you should follow the tidyverse style guide. Use the styler package to apply the style guide automatically. Be careful to only make style changes to the code you are contributing. If you find that there is a lot of code that doesn’t meet the style guide, it would be better to file an issue or a separate PR to fix that first. Document We use roxygen2, specifically with the Markdown syntax, to create NAMESPACE and all .Rd files. All edits to documentation should be done in roxygen comments above the associated function or object. Then, run devtools::document() to rebuild the NAMESPACE and .Rd files. See the RoxygenNote in DESCRIPTION for the version of roxygen2 being used. 
spelling::spell_check_package() spelling::spell_check_files("README.Rmd", lang = "en_AU") spelling::update_wordlist() codemetar::write_codemeta("ruODK") if (fs::file_info("README.md")$modification_time < fs::file_info("README.Rmd")$modification_time) { rmarkdown::render("README.Rmd", encoding = "UTF-8", clean = TRUE) if (fs::file_exists("README.html")) fs::file_delete("README.html") } Test We use testthat. Contributions with test cases are easier to review and verify. To run tests and build the vignettes, you’ll need access to the ruODK test server. If you haven’t got an account yet, create an accont request issue to request access to this ODK Central instance. The tests require the following additions to your .Renviron: # Required for testing ODKC_TEST_SVC="" ODKC_TEST_URL="" ODKC_TEST_PID=2 ODKC_TEST_PID_ENC=3 ODKC_TEST_PP="ThePassphrase" ODKC_TEST_FID="Flora-Quadrat-04" ODKC_TEST_FID_ZIP="Spotlighting-06" ODKC_TEST_FID_ATT="Flora-Quadrat-04-att" ODKC_TEST_FID_GAP="Flora-Quadrat-04-gap" ODKC_TEST_FID_WKT="Locations" ODKC_TEST_FID_I8N0="I8n_no_lang" ODKC_TEST_FID_I8N1="I8n_label_lng" ODKC_TEST_FID_I8N2="I8n_label_choices" ODKC_TEST_FID_I8N3="I8n_no_lang_choicefilter" ODKC_TEST_FID_I8N4="I8n_lang_choicefilter" ODKC_TEST_FID_ENC="Locations" ODKC_TEST_VERSION=1.0 RU_VERBOSE=TRUE RU_TIMEZONE="Australia/Perth" RU_RETRIES=3 ODKC_TEST_UN="..." ODKC_TEST_PW="..." # Your ruODK default settings for everyday use ODKC_URL="..." ODKC_PID=1 ODKC_FID="..." ODKC_UN="..." ODKC_PW="..." Keep in mind that ruODK defaults to use ODKC_{URL,UN,PW}, so for everyday use outside of contributing, you will want to use your own ODKC_{URL,UN,PW} account credentials. devtools::test() devtools::test_coverage() NEWS For user-facing changes, add a bullet to NEWS.md that concisely describes the change. Small tweaks to the documentation do not need a bullet. The format should include your GitHub username, and links to relevant issue(s)/PR(s), as seen below. * `function_name()` followed by brief description of change (#issue-num, @your-github-user-name) Re-check Before submitting your changes, make sure that the package either still passes R CMD check, or that the warnings and/or notes have not changed as a result of your edits. devtools::check() goodpractice::goodpractice(quiet = FALSE) Commit When you’ve made your changes, write a clear commit message describing what you’ve done. If you’ve fixed or closed an issue, make sure to include keywords (e.g. fixes #101) at the end of your commit message (not in its title) to automatically close the issue when the PR is merged. Push and pull Once you’ve pushed your commit(s) to a branch in your fork, you’re ready to make the pull request. Pull requests should have descriptive titles to remind reviewers/maintainers what the PR is about. You can easily view what exact changes you are proposing using either the Git diff view in RStudio, or the branch comparison view you’ll be taken to when you go to create a new PR. If the PR is related to an issue, provide the issue number and slug in the description using auto-linking syntax (e.g. #15). Check the docs Double check the output of the rOpenSci documentation CI for any breakages or error messages. Resources - Happy Git and GitHub for the useR by Jenny Bryan. - Contribute to the tidyverse covers several ways to contribute that don’t involve writing code. - Contributing Code to the Tidyverse by Jim Hester. - R packages by Hadley Wickham. - dplyr’s NEWS.mdis a good source of examples for both content and styling. 
- Closing issues using keywords on GitHub. - Autolinked references and URLs on GitHub. - GitHub Guides: Forking Projects. Code of Conduct Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. Maintaining ruODK The steps to prepare a new ruODK release are in data-raw/make_release.R. It is not necessary to run them as a contributor, but immensely convenient for the maintainer to have them there in one place. Package maintenance The code steps run by the package maintainer to prepare a release live at data-raw/make_release.R. Being an R file, rather than a Markdown file like this document, makes it easier to execute individual lines. Pushing the Docker image requires privileged access to the Docker repository.
https://docs.ropensci.org/ruODK/CONTRIBUTING.html
2022-06-25T05:17:20
CC-MAIN-2022-27
1656103034170.1
[array(['logo.png', None], dtype=object)]
docs.ropensci.org
Assigning a Vertex Group

Creating Vertex Groups

Empty Vertex Groups panel. Vertex groups are maintained within the Object Data tab (1) in the Properties. Once a new vertex group has been added, the new group appears in the Vertex Groups panel. There you find three clickable elements:

- Group Name: The group name can be changed by double-clicking LMB on the name itself. Then you can edit the name as you like.
- Filter (arrow icon): When the little arrow icon in the lower left corner of the panel is clicked, filtering and sorting options for the group list are shown.
- Lock (padlock icon): Right after creation of a vertex group, an open padlock icon shows up on the right side of the group's entry in the list, indicating that the group can still be edited.

Assigning Vertices to a Group

Assign weights to active group. You add vertices to a group as follows: Select the group from the group list, thus making it the active group (1). From the 3D Viewport, select (Shift-LMB) all the vertices you want to add to the group, then click Assign in the Vertex Groups panel.

Note: Assign is additive. The Assign button only adds the currently selected vertices to the active group. Vertices already assigned to the group are not removed from the group. Also keep in mind that a vertex can be assigned to multiple groups.

Checking Assignments

To check which vertices are assigned to a group, make it the active group and use the Select button below the group list; the group's vertices become selected in the 3D Viewport.

Note: Selecting/Deselecting is additive. If you already have vertices selected in the 3D Viewport, the newly selected group vertices are added to the current selection.

Finding Ungrouped Vertices

You can find ungrouped vertices as follows: Press Alt-A to deselect all vertices. In the header of the 3D Viewport, navigate to Select ‣ All by Trait ‣ Ungrouped Vertices.
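The same assignment can be scripted with Blender's Python API. A small sketch, assuming an object named "Cube" and assigning its first three vertices at full weight (the object name and vertex indices are placeholders):

import bpy

obj = bpy.data.objects["Cube"]                 # placeholder object name
vg = obj.vertex_groups.new(name="Group")       # create a new vertex group

# Assign vertices 0, 1 and 2 to the group with a weight of 1.0.
vg.add([0, 1, 2], 1.0, 'ADD')

# List which groups vertex 0 belongs to, with its weights.
for g in obj.data.vertices[0].groups:
    print(obj.vertex_groups[g.group].name, g.weight)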
https://docs.blender.org/manual/es/dev/modeling/meshes/properties/vertex_groups/assigning_vertex_group.html
2022-06-25T04:52:03
CC-MAIN-2022-27
1656103034170.1
[array(['../../../../_images/modeling_meshes_properties_vertex-groups_assigning-vertex-group_empty.png', '../../../../_images/modeling_meshes_properties_vertex-groups_assigning-vertex-group_empty.png'], dtype=object) array(['../../../../_images/modeling_meshes_properties_vertex-groups_vertex-groups_panel-edit.png', '../../../../_images/modeling_meshes_properties_vertex-groups_vertex-groups_panel-edit.png'], dtype=object) array(['../../../../_images/modeling_meshes_properties_vertex-groups_assigning-vertex-group_delete.png', '../../../../_images/modeling_meshes_properties_vertex-groups_assigning-vertex-group_delete.png'], dtype=object) array(['../../../../_images/modeling_meshes_properties_vertex-groups_assigning-vertex-group_lock.png', '../../../../_images/modeling_meshes_properties_vertex-groups_assigning-vertex-group_lock.png'], dtype=object) array(['../../../../_images/modeling_meshes_properties_vertex-groups_assigning-vertex-group_assign.png', '../../../../_images/modeling_meshes_properties_vertex-groups_assigning-vertex-group_assign.png'], dtype=object) ]
docs.blender.org
The Scavenging Resource Records page allows you to configure scheduled jobs for scavenging different resource record types. The Auto Scavenge service that runs on one of the Data Nodes will run the configured scheduled jobs if the Auto Scavenge service is configured and running. By default, enabling the Auto Scavenge setting within the Configuration page automatically creates a scavenging settings for the ANY resource record type. Configuring a new schedule scavenging job - Click New Schedule. - Under Record Type, select the resource record type that will be scavenged. - Under Start date Time (UTC), select a custom date-time. By default, the current date and time is selected in UTC. You must enter a value that is the current date or a future date. If you enter a start time that is before the current time, the start date-time is the value plus the Scavenging Interval value. If you select a start time that is after the current time, the start date-time will remain the entered value. - Under Scavenging Interval, select the interval at which the scheduled scavenging job is performed. - Under Status, toggle the slider to enable or disable the scheduled scavenging job. By default, the job is enabled. - Click Save. Running a scheduled scavenging job - Click Run Now within the row of a configured scheduled scavenging job.Note: If you have configured the scheduled scavenging job without saving the settings, you can still select the Run Now button. The Run Now button is disabled if no resource record type is selected. - In the confirmation window, click OK to confirm that the scavenging job will be run. Once the service completes the scavenging job, a window appears displaying the results of the scavenging job. Deleting a scheduled scavenging job - Click Delete within the row of a configured scheduled scavenging job. - In the confirmation window, click OK to confirm that the scavenging job will be deleted.
https://docs.bluecatnetworks.com/r/BlueCat-Distributed-DDNS-Administration-Guide/Scavenging-Resource-Records/22.1
2022-06-25T04:37:58
CC-MAIN-2022-27
1656103034170.1
[]
docs.bluecatnetworks.com
Public restrooms are not the very best places to meet for affairs. If you want to prevent the embarrassment of your spouse finding out that you are having an affair, you must pick a place where you can dedicate some quality time together without having to worry about being found. A hotel room may be the perfect place to meet to get an affair, and this typically includes a restaurant and bar. When your lover truly does not like the restaurant, you can rent a room at a lodge. Movie theaters will be convenient areas for affair associates to fulfill. They can connect with in privacy without their particular partners realizing. Though a movie theatre can be a great place to meet and get along, this environment is certainly not the best place to have an affair. These days, the field of social media and the internet has turned it much simpler to carry out an affair, specifically if you use a messaging app. You’ll want to know the ideal places in order to meet for affairs to ensure the health and safety of your spouse and yours. Hotels and resorts also are great locations to meet intended for affairs. The restaurants and bars at hotels and resorts are generally full of persons, including people next door and holidaymakers. You can rent rooms just for the night and necessarily worry about simply being caught. Having an affair will certainly not be good for your relationship, which suggests you should really be sure to keep a clean conscience. You can even try going to a resort with a restaurant and bar close by. A good resort should be near the other person’s home. For many who are seriously interested in having an affair, a superb affair subreddit will help you connect with people with equivalent pursuits. These sites can be full of sadly married people. They’ll contain similar pursuits, and you may actually get a spark after you connect with them. You can even post an marketing and see if anyone responds. You are able to spend some quality time with this person. It may even be the main one you’ve been looking for! If you want to have an affair in a very discreet setting, a hotel is the best place for you to meet the cheating partner. Hotels and resorts wonderful options as they are usually safe, with pubs and cusine services readily available. But if you don’t want your lover to know, you should think of achieving in a conventional hotel or a lodge. This way, you are able to stay quietly and not get found. Ashley Madison is one of the most popular affair sites web based. The site possesses over thirty four million users, meaning that your chances of meeting someone are higher. Unlike on the traditional internet dating site, users of Ashley Madison are normally looking for a great affair or maybe to add a brand new person to their romantic relationship. You can find rich men buying a rich man on this site. But you need to take no chances and the actual rules. Usually, they have just a spend of your time.
https://docs.jagoanhosting.com/greatest-places-to-meet-for-affairs/
2022-06-25T04:52:47
CC-MAIN-2022-27
1656103034170.1
[]
docs.jagoanhosting.com
Ecosystem Participants The Pegasys ecosystem is primarily comprised of three types of users: liquidity providers, traders, and developers. Liquidity providers are incentivized to contribute ERC-20 tokens to common liquidity pools. Traders can swap these tokens for one another for a fixed 0.25% fee (which goes to liquidity providers). Developers can integrate directly with Pegasys smart contracts to power new and exciting interactions with tokens, trading interfaces, retail experiences, and more. In total, interactions between these classes create a positive feedback loop, fueling digital economies by defining a common language through which tokens can be pooled, traded and used. #Liquidity Providers Pegasys. Finally, some DeFi pioneers are exploring complex liquidity provision interactions like incentivized liquidity, liquidity as collateral, and other experimental strategies. Pegasys is the perfect protocol for projects to experiment with these kinds of ideas. #Traders There are a several categories of traders in the protocol ecosystem: Speculators use a variety of community built tools and products to swap tokens using liquidity pulled from the Pegasys protocol. Arbitrage bots seek profits by comparing prices across different platforms to find an edge. (Though it might seem extractive, these bots actually help equalize prices across broader Syscoin markets and keep things fair.) DAPP users buy tokens on Pegasys for use in other applications on Syscoin.. #Developers/Projects There are far too many ways Pegasys is used in the wider Syscoin ecosystem to count, but some examples include: The open-source, accessible nature of Pegasys means there are countless UX experiments and front-ends built to offer access to Pegasys functionality. You can find Pegasys. Pegasys is the biggest single decentralized liquidity source for these projects. Smart contract developers use the suite of functions available to invent new DeFi tools and other various experimental ideas. See projects like Unisocks or Zora, among many, many others. #Pegasys Team and Community The Pegasys team along with the broader Pegasys community drives development of the protocol and ecosystem.
https://docs.pegasys.finance/concepts/protocol-overview/02-ecosystem-participants
2022-06-25T04:37:08
CC-MAIN-2022-27
1656103034170.1
[array(['/assets/images/participants-3b12301061347445adaf904d10430112.jpg', None], dtype=object) ]
docs.pegasys.finance
For Kartuku payment method there aren’t any test data available, but you can see how it works with the payment flow given below. Kartuku Payment Flow The customer enters his email address, name and phone number. The customer receives an email with the reference number and the virtual account which he uses to make the payment at an ATM. Upon completion of the payment flow, the customer is redirected back to your ReturnURL.
https://docs.smart2pay.com/s2p_testdata_1055/
2022-06-25T04:25:47
CC-MAIN-2022-27
1656103034170.1
[]
docs.smart2pay.com
Interior

In between each neighboring vertex of a mesh, you typically create edges to connect them. Imagine each edge as a spring. Any mechanical spring is able to stretch under tension, and to squeeze under pressure. All springs have an ideal length, and a stiffness that limits how far you can stretch or squeeze the spring. In Blender's case, the ideal length is the original edge length which you designed as a part of your mesh, even before you enable the Soft Body system. Until you add the Soft Body physics, all springs are assumed to be perfectly stiff: no stretch and no squeeze.

You can adjust the stiffness of all those edge springs, allowing your mesh to sag, to bend, to flutter in the breeze, or to puddle up on the ground.

To create a connection between the vertices of a soft body object there have to be forces that hold the vertices together. These forces are effective along the edges in a mesh, the connections between the vertices. The forces act like a spring. Fig. Vertices and forces along their connection edges illustrates how a 3×3 grid of vertices (a mesh plane in Blender) is connected in a soft body simulation.

But two vertices could freely rotate if you do not create additional edges between them. The logical method to keep a body from collapsing would be to create additional edges between the vertices. This works pretty well, but would change your mesh topology drastically. Luckily, Blender allows you to define additional virtual connections. On one hand you can define virtual connections between the diagonal edges of a quad face (Stiff Quads, Fig. Additional forces with Stiff Quads enabled), on the other hand you can define virtual connections between a vertex and any vertices connected to its neighbors ("bending stiffness"). In other words, bending stiffness limits the amount of bend that is allowed between a vertex and any other vertex that is separated by two edge connections.

Settings

The characteristics of edges are set with the Springs and Stiff Quads properties in the Soft Body Edges panel. See the Soft Body Edges settings for details.

Tips: Preventing Collapse

Stiff Quads

To show the effect of the different edge settings we will use two cubes (blue: only quads, red: only tris) and let them fall without any goal onto a plane (how to set up collision is shown on the page Collisions). See the example blend-file.

In Fig. Without Stiff Quads, the default settings are used (without Stiff Quads). The "quad only" cube will collapse completely, while the cube composed of tris keeps its shape, though it will deform temporarily because of the forces created during collision.

In Fig. With Stiff Quads, Stiff Quads is activated (for both cubes). Both cubes keep their shape; there is no difference for the red cube, because it has no quads anyway.

Bending Stiffness

The second method to stop an object from collapsing is to change its Bending stiffness. This includes the diagonal edges (damping also applies to these connections).

In Fig. Bending Stiffness, Bending is activated with a strength setting of 1. Now both cubes are more rigid.

Bending stiffness can also be used if you want to make a subdivided plane more plank-like. Without Bending, the faces can freely rotate against each other like hinges (Fig. No bending stiffness). There would be no change in the simulation if you activated Stiff Quads, because the faces are not deformed at all in this example. Bending stiffness is the strength needed for the plane to be deformed.
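These settings are also exposed through Blender's Python API. A minimal sketch of enabling Stiff Quads and bending stiffness on an object (the object name and values are arbitrary):

import bpy

obj = bpy.data.objects["Cube"]                        # placeholder object name
mod = obj.modifiers.new(name="Softbody", type='SOFT_BODY')

sb = mod.settings                                     # SoftBodySettings
sb.use_stiff_quads = True   # add virtual diagonal springs to quad faces
sb.bend = 1.0               # bending stiffness between second neighbors
sb.pull = 0.5               # spring stiffness when stretched
sb.push = 0.5               # spring stiffness when squeezed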
https://docs.blender.org/manual/pt/dev/physics/soft_body/forces/interior.html
2022-06-25T04:24:32
CC-MAIN-2022-27
1656103034170.1
[]
docs.blender.org
- title is for the main title of your website.
- theme sets up the used theme. If your theme is located in the my-project/themes/theme-name folder, then the value for this parameter is theme-name.
- languageCode defines your global site language. For more information, see Official Hugo Docs.
- googleAnalytics adds your Google Analytics ID to enable analytics on all pages. # example: UA-123-45. For more info, read the article. If you want another third-party analytics service, you can contact us for custom service.
- logo_width defines the width of the logo in pixels. It doesn't work with .svg files.
- logo_text will only appear if the logo parameter is missing.
- mainSections defines the section names that you want to show on your website. It's an array, so you can add more sections to show. For more information, see Official docs.
- contact_info has some fields (like phone, address) to show your contact information in the footer and contact page.
- social is a loop item for your website's social icons. You can add a loop item by following the existing loop. We are using the Font Awesome icon pack for this theme. You can choose more icons from here.
- search is active by default in this template; you can search any content, tags, or categories from here. If you don't need search, you can set it to false.
- copyright is for the copyright text at the bottom of the page.
- subscription is for user subscription: give your own subscription form action URL in the mailchimp_form_action field, and your form name in the mailchimp_form_name field. You can get your action URL and form name from here (after login or signup).
- widgets: All sidebar widgets are customizable. Here are the available widgets provided with the theme: about, categories, recent-post, and newsletter.
- cookies: you can turn the cookie consent message on and set the expiry days from here.
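To make these parameters concrete, a site configuration might contain entries along the following lines. This is an illustrative sketch, not the theme's exact example config; check the theme's exampleSite folder for the authoritative parameter names and structure:

title = "My Site"
theme = "theme-name"
languageCode = "en-us"
googleAnalytics = "UA-123-45"        # placeholder tracking ID

[params]
  logo = "images/logo.png"           # hypothetical path
  logo_text = "My Site"
  mainSections = ["blog", "news"]    # sections shown on the site
  copyright = "Copyright 2022 My Site"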
https://docs.gethugothemes.com/geeky/basic-configuration/
2022-06-25T04:54:49
CC-MAIN-2022-27
1656103034170.1
[]
docs.gethugothemes.com
This class represents high-level API for segmentation models.

#include <opencv2/dnn/dnn.hpp>

This class represents high-level API for segmentation models. SegmentationModel allows setting parameters for preprocessing an input image. SegmentationModel creates a net from a file with trained weights and config, sets preprocessing input, runs a forward pass and returns the class prediction for each pixel.

Constructors: create a segmentation model from a network represented in one of the supported formats (the order of the model and config arguments does not matter), or create a model from a deep learning network.

segment: given the input frame, creates the input blob, runs the net and returns the per-pixel class mask.
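A short usage sketch in C++ follows. The model file names, input size and normalization values are placeholders; they depend on the particular segmentation network you load:

#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    // Load a segmentation network (file names are placeholders).
    cv::dnn::SegmentationModel model("frozen_graph.pb", "config.pbtxt");

    // Preprocessing: scale, input size, mean subtraction, BGR->RGB swap.
    model.setInputParams(1.0 / 255.0, cv::Size(512, 512), cv::Scalar(), true);

    cv::Mat frame = cv::imread("image.jpg");
    cv::Mat mask;                 // receives the class id of each pixel
    model.segment(frame, mask);

    return 0;
}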
https://docs.opencv.org/4.5.1/da/dce/classcv_1_1dnn_1_1SegmentationModel.html
2022-06-25T04:49:44
CC-MAIN-2022-27
1656103034170.1
[]
docs.opencv.org
Revise the enforcements used by the identity manager framework in Splunk Enterprise Security

Every five minutes when the identity manager runs, it automatically enforces configuration file settings used by the framework, including inputs.conf, props.conf, macros.conf, transforms.conf, and identityLookup.conf (deprecated). With these enforcements enabled, if there are accidental changes made to your conf files, the settings are reverted back to the way they were. If you're doing manual testing or making changes on purpose to your conf files and you do not want the settings checked or reverted back, you can disable these enforcements.

Prerequisites

Perform the following prerequisite tasks before starting on these settings:

- Collect and extract asset and identity data in Splunk Enterprise Security.
- Format the asset or identity list as a lookup in Splunk Enterprise Security.
- Configure a new asset or identity list in Splunk Enterprise Security.

Enable or disable enforcements

Use the global settings to enable or disable enforcements as follows. For the majority of users who configure settings through the Splunk Web UI, there is no need to disable these settings:

- From the menu bar, select Configure > Data Enrichment > Asset and Identity Management.
- Click the Global Settings tab.
- Scroll to the Enforcements panel.
- Use the toggle to enable or disable.

Example

Using the example of Enforce props, you experience the following by default. If you add a custom field in Identity Settings, the field is automatically added to the props.conf file because the settings check occurs to sync and reload props to be consistent with the identity manager.

Using the example of Enforce props, you experience the following by disabling it. If you add a custom field in Identity Settings, then you have to add that custom field to the props.conf file manually because the settings check no longer occurs. With enforce props disabled, any manual identity settings changes made without using the Splunk Web UI are also ignored.

After upgrading to Enterprise Security 6.2.0, you need to enable the Enforce props setting.

This documentation applies to the following versions of Splunk® Enterprise Security: 7.0.1
https://docs.splunk.com/Documentation/ES/7.0.1/Admin/Enforcements
2022-06-25T05:17:19
CC-MAIN-2022-27
1656103034170.1
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Giving - Create a campaign - View transactions - Download giving reports - Change a recurring donation - Cancel a recurring donation - Update donor's credit card info - Use Tilma with other giving platforms - Settlements and Batches (Canada) - Settlements and Batches (US) - Refunds (Canada) - Refunds (US) - View Declined Charges (US) - View Declined Charges (Canada) - View Processing Fees (US) - View Processing Fees (Canada) - Giving Form Settings - Add a offline donation - Insights - Create a fund - Display Other Ways to Give
https://docs.tilmaplatform.com/category/142-giving
2022-06-25T04:02:42
CC-MAIN-2022-27
1656103034170.1
[]
docs.tilmaplatform.com