Configuration Loaders¶
Loaders are in charge of loading configuration from various sources, like
.ini files or environment variables. Loaders are meant to be chained, so that
prettyconf checks them one by one for a given configuration variable.
Prettyconf comes with some loaders already included in
prettyconf.loaders.
See also
Some loaders include a
var_format callable argument; see
Naming conventions for variables to read more about its purpose.
Environment¶
The
Environment loader gets configuration from
os.environ. Since it
is a common pattern to write env variables in caps, the loader accepts a
var_format function to pre-format the variable name before the lookup
occurs. By default it is
str.upper().
from prettyconf import config
from prettyconf.loaders import Environment

config.loaders = [Environment(var_format=str.upper)]
config('debug')  # will look for a `DEBUG` variable
EnvFile¶
The
EnvFile loader gets configuration from
.env file. If the file
doesn’t exist, this loader will be skipped without raising any errors.
# .env file
DEBUG=1
from prettyconf import config
from prettyconf.loaders import EnvFile

config.loaders = [EnvFile(file='.env', required=True, var_format=str.upper)]
config('debug')  # will look for a `DEBUG` variable
IniFile¶
The
IniFile loader gets configuration from
.ini or
.cfg files. If
the file doesn’t exist, this loader will be skipped without raising any errors.
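For illustration, wiring up the IniFile loader follows the same pattern as the loaders above; the filename below is only an example and should point at your own file:

from prettyconf import config
from prettyconf.loaders import IniFile

config.loaders = [IniFile('settings.ini')]  # 'settings.ini' is an example filename
config('debug')  # will look for a 'debug' option in the .ini/.cfg file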
CommandLine¶
This loader lets you extract configuration variables from parsed CLI arguments. By default it works with argparse parsers.
from prettyconf import Configuration, NOT_SET
from prettyconf.loaders import CommandLine
import argparse

parser = argparse.ArgumentParser(description='Does something useful.')
parser.add_argument('--debug', '-d', dest='debug', default=NOT_SET, help='set debug mode')

config = Configuration(loaders=[CommandLine(parser=parser)])
print(config('debug', default=False, cast=config.boolean))
Something to notice here is the
NOT_SET value. CLI parsers often force you
to put a default value so that they don’t fail. In that case, to play nice with
prettyconf, you must set one. But that would break the discoverability chain
that prettyconf encourages. So by setting this special default value, you will
allow prettyconf to keep the lookup going.
The
get_args function converts the
argparse parser’s values to a dict that ignores
NOT_SET values.
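As an illustration of that behavior, the following is a conceptual sketch only, not prettyconf's actual implementation:

import argparse
from prettyconf import NOT_SET

def get_args_sketch(parser: argparse.ArgumentParser) -> dict:
    # Parse the CLI arguments and drop anything still set to NOT_SET,
    # so the lookup can fall through to the next loader in the chain.
    parsed, _ = parser.parse_known_args()
    return {key: value for key, value in vars(parsed).items() if value is not NOT_SET}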
RecursiveSearch¶
This loader tries to find
.env or
*.ini|*.cfg files and load them with
the
EnvFile and
IniFile loaders respectively. It will
start at the
starting_path directory to look for configuration files.
Warning
It is important to note that this loader uses the glob module internally to
discover
.env and
*.ini|*.cfg files. This could be problematic if
the project includes many files that are unrelated, like a
pytest.ini
file alongside a
settings.ini. An unexpected file could be found
and be considered as the configuration to use.
Consider the following file structure:
project/
  settings.ini
  app/
    settings.py
When instantiating your
RecursiveSearch, if you pass
/absolute/path/to/project/app/ as
starting_path the loader will start
looking for configuration files at
project/app.
# Code example in project/app/settings.py
import os
from prettyconf import config
from prettyconf.loaders import RecursiveSearch

app_path = os.path.dirname(__file__)
config.loaders = [RecursiveSearch(starting_path=app_path)]
By default, the loader will try to look for configuration files until it finds
valid configuration files or it reaches
root_path. The
root_path is
set to the root directory
/ initially.
Consider the following file structure:
/projects/
  any_settings.ini
  project/
    app/
      settings.py
You can change this behaviour by setting any parent directory of the
starting_path as the
root_path when instantiating
RecursiveSearch:
# Code example in project/app/settings.py
import os
from prettyconf import Configuration
from prettyconf.loaders import RecursiveSearch

app_path = os.path.dirname(__file__)
project_path = os.path.realpath(os.path.join(app_path, '..'))
rs = RecursiveSearch(starting_path=app_path, root_path=project_path)
config = Configuration(loaders=[rs])
The example above will start looking for files at
project/app/ and will stop looking
for configuration files at
project/, so it never looks at
any_settings.ini,
and no configuration is loaded at all.
The
root_path must be a parent directory of
starting_path:
# Code example in project/app/settings.py
from prettyconf.loaders import RecursiveSearch

# /baz is not a parent of /foo/bar, so this raises an InvalidPath exception here
rs = RecursiveSearch(starting_path="/foo/bar", root_path="/baz")
AwsParameterStore¶
The
AwsParameterStore loader gets configuration from the AWS Parameter Store,
part of AWS Systems Manager. The loader will be skipped if the parameter store is
unreachable (connectivity, unavailability, access permissions).
The loader respects parameter hierarchies, performing non-recursive discoveries.
The loader accepts AWS access secrets and region when instantiated, otherwise, it
will use system-wide defaults (if available).
The AWS parameter store supports three parameter types:
String,
StringList
and
SecureString. All types are read as strings, however, decryption of
SecureStrings is not handled by the loader.
from prettyconf import config
from prettyconf.loaders import AwsParameterStore

config.loaders = [AwsParameterStore(path='/api')]
config('debug')  # will look for a parameter named "/api/debug" in the store
class FlashArea – access to built-in flash storage¶
Uses Zephyr flash map API.
This class allows access to device flash partition data. Flash area structs consist of a globally unique ID number, the name of the flash device the partition is in, the start offset (expressed in relation to the flash memory beginning address per partition), and the size of the partition that the device represents. For fixed flash partitions, data from the device tree is used; however, fixed flash partitioning is not enforced in MicroPython because MCUBoot is not enabled.
Constructors¶
- class zephyr.FlashArea(id, block_size)¶
Gets an object for accessing flash memory at the partition specified by
id and with a block size of
block_size.
id values are integers correlating to fixed flash partitions defined in the devicetree. A commonly used partition is the designated flash storage area defined as
FlashArea.STORAGE if
FLASH_AREA_LABEL_EXISTS(storage) returns true at boot. Zephyr devicetree fixed flash partitions are
boot_partition,
slot0_partition,
slot1_partition, and
scratch_partition. Because MCUBoot is not enabled by default for MicroPython, these fixed partitions can be accessed by ID integer values 1, 2, 3, and 4, respectively.
Methods¶
FlashArea.ioctl(cmd, arg)¶
These methods implement the simple and extended block protocol defined by
uos.AbstractBlockDev.
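As a hedged illustration of how such a block device is typically used with a filesystem — the partition constant and the 4096-byte block size are assumptions for this sketch, so check your board's devicetree before relying on them:

import zephyr
import uos

# Assumes the designated storage partition exists and 4096 is a suitable block size.
bdev = zephyr.FlashArea(zephyr.FlashArea.STORAGE, 4096)

# Format the area with littlefs and mount it (MicroPython's generic block-device pattern).
uos.VfsLfs2.mkfs(bdev)
uos.mount(uos.VfsLfs2(bdev), '/flash')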
Suspend-SPEnterpriseSearchServiceApplication
Suspends a search service application, pausing all crawls and search operations, to perform a task such as system maintenance.
Syntax
Suspend-SPEnterpriseSearchServiceApplication [-Identity] <SearchServiceApplicationPipeBind> [-AssignmentCollection <SPAssignmentCollection>] [-Confirm] [-WhatIf] [<CommonParameters>]
Description
This cmdlet reads the SearchServiceApplication object and moves it from Paused for: External Request status to Suspend status.
For permissions and the most current information about Windows PowerShell for SharePoint Products, see the online documentation at SharePoint Server Cmdlets.
Examples
------------------EXAMPLE------------------
$ssa = Get-SPEnterpriseSearchServiceApplication -Identity MySSA
$ssa | Suspend-SPEnterpriseSearchServiceApplication
This example obtains a reference to a search service application named MySSA and pauses it, stopping all crawls and other search components such as content processing components, analytics processing components, and indexing components.
Namespace ToSic.Sxc.Services
Interfaces
IConvertService
Conversion helper for things which are very common in web-code like Razor and WebAPIs. It's mainly a safe conversion from anything to a target-type.
Some special things it does:
- Strings like "4.2" reliably get converted to int 4 which would otherwise return 0
- Numbers like 42 reliably converts to bool true which would otherwise return false
- Numbers like 42.5 reliably convert to strings "42.5" instead of "42,5" in certain cultures
IFeaturesService
Features lets your code find out what system features are currently enabled/disabled in the environment. It's important to detect if the admin must activate certain features to let your code do it's work.
IImageService
Service to help create responsive
img and
picture tags the best possible way.
IJsonService
Service to serialize/restore JSON. Get it using GetService<T>()
It works for 2sxc/EAV data but can be used for any data which can be serialized/deserialized. Since it's a data-operation, we keep it in this namespace, even if most other things in this namespace are 2sxc-data objects.
Important This is simple object-string conversion. It doesn't change entity objects to be serializable. For that you should use the IConvertToEavLight which returns an object that can then be serialized.
ILogService
Service to add messages to the global log in any platform Dnn/Oqtane
IMailService
Service to send mail messages cross-platform.
Get this service in Razor or WebApi using GetService
IPageService
Make changes to the page - usually from Razor.
IRenderService
Block-Rendering system. It's responsible for taking a Block and delivering HTML for the output.
It's used for InnerContent, so that Razor-Code can easily render additional content blocks.
See also Inner Content (Content Within Other Content)
ISecureDataService
Helper to work with secure / encrypted data.
IToolbarService
Special helper to generate edit toolbars in the front-end.
It is used in combination with
@Edit.Toolbar(...).
It's especially useful for complex rules like Metadata-buttons which are more complex to create.
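As a brief, hedged illustration of the GetService<T>() pattern referenced above (Razor-style C#; the methods each service exposes are documented on the respective interface and are deliberately not spelled out here):

@using ToSic.Sxc.Services
@{
  // Acquire services from the environment with GetService<T>()
  var json = GetService<IJsonService>();
  var features = GetService<IFeaturesService>();
  // Then call the methods documented on each interface (serialization, feature checks, ...)
}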
Links - January 25, 2010
Authoring in a Content Publishing System using Word
This blog is inactive.
New blog: EricWhite.com/blog
Blog TOC
I recently posted on Gray Knowlton's blog with my thoughts on how I would design the authoring experience in a content publishing system. The gist of my approach is that you use styles to mark content and control the document transformation content. One problem is that sometimes your business logic requires that several paragraphs be grouped together, and processed as a whole. This can lead to idiosyncratic experiences for the author. A good solution is to use content controls to delineate a range of paragraphs.
Associating Arbitrary Data with Content Controls
Sometimes developers want to associate some user-editable data with content controls, but in and of themselves, there is no capability for either storing auxiliary information with content controls, nor for editing that information in Word. A reasonable solution is to build a managed add-in that creates a custom task pane. The word-processor user edits that data in a custom task pane. The data is stored in a custom XML part, and is associated with each content control using the content control's unique id. I co-wrote with Anil Kumar a guest post for Gray's blog that details how to do this.
PHP Library for Dynamic Generation of Microsoft Word Documents
Eduardo Ramos wrote an article for OpenXmlDeveloper.org that introduces PHPDOCX, a library that makes it easy to generate DOCX files using PHP. He was working on a project where, for technical reasons, they needed to develop on a LAMP (Linux Apache MySQL PHP) platform. PHPDOCX is offered for free under the LGPL license. There is also a PRO version eligible for support.
SpreadsheetML Made Easy Using C#
Tim Coulter wrote an article for OpenXmlDeveloper.org that introduces ExtremeML, a new open source library that adds functionality to the Open XML SDK. In the article, he walks through a typical use-case for ExtremeML: the programmatic creation of a fully styled and formatted Excel workbook, populated with live data from a database and enhanced by the addition of a pivot table and a chart.
A filter represents a set of run configurations that enable custom views of the data stored in DTP (see DTP Concepts). Filters show a subset of data that is associated with projects in the database. Many DTP REST API services require the filter ID to be specified when calling an endpoint.
In this section:
A filter is automatically created when a new project is added to DTP, but you can create additional filters to facilitate custom views of the data.
Obtaining Ether
Obtaining Ether¶
To obtain Ether you have two options:
Mine it yourself.
Obtain Ether from another party.
Support Settings
Dremio Support provides settings that can be used for diagnostic purposes. These settings are enabled (or disabled) via the Dremio UI: Admin > Cluster > Support
Support Access
Support access provides multiple capabilities for communication with Dremio support depending on your role (user or administrator).
Internal Support Email
Internal support email allows you to set up default email addresses for communication with Dremio Support. The email is used to send usage data back to Dremio for diagnostic purposes.
Support Keys
Support keys allow you to enable advanced settings so that diagnostic data can be gathered to help Dremio Support resolve any issues that you may be encountering.
To enable a key:
- Navigate to Admin > Cluster > Support > Support Keys.
- Add your support key to show the toggle.
- Enable (or disable) the displayed toggle.
- Click Save.
The following are pages for content in Ignition that has been deprecated in the current version. This means the content is still available inside Ignition for backwards compatibility, but is otherwise hidden.
If you are upgrading your project to a newer version, all old functionality will continue to work, but we recommend you use the new versions for any future development.
For example: The old system.alert functions no longer show up in Ignition. They still exist and work exactly the same, but have been replaced with newer system.alarm versions for the new alarming system that was created in Ignition version 7.6. We recommend any time you are creating new alarms in Ignition that you use the new system (and functions) instead of the old ones.
This page has been removed from the Main navigation tree of our documentation, but is still searchable. This is to provide you with any information you might need without confusing new users.
Persistent Storage¶
This guide is to help users debug any general storage issues when deploying charts in this repository.
Ceph¶
Ceph Deployment Status¶
First, we want to validate that Ceph is working correctly. This can be done with the following Ceph command:
admin@kubenode01:~$ MON_POD=$(kubectl get --no-headers pods -n=ceph -l="application=ceph,component=mon" | awk '{ print $1; exit }')
admin@kubenode01:~$ kubectl exec -n ceph ${MON_POD} -- ceph -s
  cluster:
    id:     06a191c7-81bd-43f3-b5dd-3d6c6666af71
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum att.port.direct
    mgr: att.port.direct(active)
    mds: cephfs-1/1/1 up {0=mds-ceph-mds-68c9c76d59-zqc55=up:active}
    osd: 1 osds: 1 up, 1 in
    rgw: 1 daemon active

  data:
    pools:   11 pools, 208 pgs
    objects: 352 objects, 464 MB
    usage:   62467 MB used, 112 GB / 173 GB avail
    pgs:     208 active+clean

  io:
    client:   253 B/s rd, 39502 B/s wr, 1 op/s rd, 8 op/s wr

admin@kubenode01:~$
Use one of your Ceph Monitors to check the status of the cluster. A couple of things to note above: our health is HEALTH_OK, the monitors have established a quorum, and we can see that all of our OSDs are up and in the OSD map.
PVC Preliminary Validation¶
Before proceeding, it is important to ensure that you have deployed a
client key in the namespace you wish to fulfill
PersistentVolumeClaims.
To verify that your deployment namespace has a client key:
admin@kubenode01: $ kubectl get secret -n openstack pvc-ceph-client-key
NAME                  TYPE                DATA      AGE
pvc-ceph-client-key   kubernetes.io/rbd   1         8h
Without this, your RBD-backed PVCs will never reach the
Bound state. For
more information, see how to activate namespace for ceph.
Note: This step is not relevant for PVCs within the same namespace Ceph was deployed.
Ceph Validating PVC Operation¶
To validate persistent volume claim (PVC) creation, we’ve placed a test manifest here. Deploy this manifest and verify the job completes successfully.
Ceph Validating StorageClass¶
Next we can look at the storage class, to make sure that it was created correctly:
admin@kubenode01:~$ kubectl describe storageclass/general
Name:            general
IsDefaultClass:  No
Annotations:     <none>
Provisioner:     ceph.com/rbd
Parameters:      adminId=admin,adminSecretName=pvc-ceph-conf-combined-storageclass,adminSecretNamespace=ceph,imageFeatures=layering,imageFormat=2,monitors=ceph-mon.ceph.svc.cluster.local:6789,pool=rbd,userId=admin,userSecretName=pvc-ceph-client-key
ReclaimPolicy:   Delete
Events:          <none>
admin@kubenode01:~$
The parameters are what we’re looking for here. If we see parameters
passed to the StorageClass correctly, we will see the
ceph-mon.ceph.svc.cluster.local:6789 hostname/port, things like
userid,
and appropriate secrets used for volume claims.
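For reference, a minimal PersistentVolumeClaim that requests storage from this class might look like the following; the claim name, namespace and size are placeholder values for illustration:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
  namespace: openstack
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: general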
TOPICS×
Adobe Experience Platform Auditor Overview
Adobe Experience Platform Auditor is a service of the Adobe Experience Platform that was co-developed with ObservePoint, experts in validating digital implementations. This guide contains technical documentation and self-help for Auditor.
With Auditor, Adobe Experience Cloud users receive a report that grades their Adobe Experience Cloud implementation and gives pointers on how to improve it. Auditor helps you get more value from your Adobe products, individually and collectively.
With Auditor, you can:
- Scan up to 500 pages at a time to check that your implementation aligns with best practices, so you get the full value of your Adobe investment. Auditor goes beyond broad-stroke recommendations and tells you exactly what's wrong with a specific implementation, the webpage where the issue was found, and then gives guidance on how to fix it.
For your website to effectively drive more business and deliver great experiences, it needs to be implemented properly. If not, the software will either deliver a fraction of its potential value, or nothing at all.
But maintaining intricate implementations on websites in a constant state of flux is a heavy burden. Auditor transforms this burden into an opportunity that increases the return on your Adobe investment.
Auditor Community Forum
If you need help getting started, have questions, or would like to suggest or vote on future enhancements, please visit the Auditor Community Forum to connect with experts and Adobe.
Release information
For information about the most recent release, see the Auditor Release Notes.
Cloudera Manager Agents
The Cloudera Manager Agent is a Cloudera Manager component that works with the Cloudera Manager Server to manage the processes that map to role instances...
Reply.io email tracking
This tutorial tells you how to get setup for LeadBoxer email tracking of recipients originating from your Reply.io emails.
LeadBoxer will show you the email of the person who clicked through the email, simply by a) adding a special link in the emails sent and b) adding code to the landing page.
Technical requirements:
- LeadBoxer account
- Reply.io Reply.io Reply.io.
You will need to add this email tracking pixel to the source of your email or templates, place it at the bottom of the source.
<img src="{{yourDatasetId}}&campaign={{YourCampaignName}}&email={Email}&firstName={firstName}&lastName={lastName}&companyName={company}">
Do not forget to replace the yourDatasetId and YourCampaignName with your own values.
This documentation is for a previous release of Cloud Manager. Go to the docs for the latest release.
Licensing
Each ONTAP Cloud BYOL system must have a license installed with an active subscription. If an active license is not installed, the ONTAP Cloud system shuts itself down after 30 days. Cloud Manager simplifies the process by managing licenses for you and by notifying you before they expire.
License management for a new system
A tenant must be linked to a NetApp Support Site account so Cloud Manager can obtain licenses for ONTAP Cloud BYOL systems. If the credentials are not present, Cloud Manager prompts you to enter them when you create a new ONTAP Cloud BYOL working environment.
Why you should link a tenant to your NetApp Support Site account
Each time you launch an ONTAP Cloud BYOL system, Cloud Manager automatically downloads the license from NetApp and installs it on the ONTAP Cloud system.
If Cloud Manager cannot access the license file over the secure Internet connection, you can obtain the file yourself and then manually upload the file to Cloud Manager.
License expiration
Cloud Manager warns you 30 days before a license is due to expire and again when the license expires. The following image shows a 30-day expiration warning:
You can select the working environment to review the message.
If you do not renew the license in time, the ONTAP Cloud system shuts itself down. If you restart it, it shuts itself down again.
License renewal
When you renew a BYOL subscription by contacting a NetApp representative, Cloud Manager automatically obtains the new license from NetApp and installs it on the ONTAP Cloud system.
If Cloud Manager cannot access the license file over the secure internet connection, you can obtain the file yourself and then manually upload the file to Cloud Manager. For instructions, see Installing license files on ONTAP Cloud BYOL systems.
Pass-through Messaging
Ballerina is an open-source programming language that empowers developers to integrate their system easily with the support of connectors.
There are different messaging methods in SOA (Service Oriented Architecture). In this guide, we are focusing on pass-through messaging between services using an example scenario.
What you’ll build¶
There are different ways of messaging between services such as pass-through messaging, content-based routing of messages, header-based routing of messages, and scatter-gather messaging. There is a delay in messaging while processing an incoming message in all messaging methods excluding pass-through messaging. When routing the message without processing/inspecting the message payload, the most efficient way is the pass-through messaging. Here are some differences between conventional message processing vs pass-through messaging.
Conventional message processing methods include a message processor for processing messages, but pass-through messaging skipped the message processor. It thereby saves the processing time and power and is more efficient when compared to other types.
Now let's understand the scenario described here. The owner needs to expand the business, so he creates an online shop that is connected to the local shop. When a user connects to the online shop, the request is automatically redirected to the local shop without any latency. To do so, the pass-through messaging method is used.
The two shops are implemented as two separate services named 'OnlineShopping' and 'LocalShop'. When a user makes a call to 'OnlineShopping' using an HTTP request, the request is redirected to the 'LocalShop' service without processing the incoming request. Also, the response from 'LocalShop' is not processed in 'OnlineShopping'. If it processed the incoming request or the response from 'LocalShop', it would no longer be a pass-through messaging method.
So, messaging between the 'OnlineShopping' and 'LocalShop' services acts as pass-through messaging. The 'LocalShop' service processes the incoming request method, such as 'GET' or 'POST'. Then it calls the back-end service, which returns the "Welcome to Local Shop! Please put your order here....." message. So messaging in the 'LocalShop' service is not pass-through messaging.
If you are well aware of the implementation, you can directly clone the GitHub repository to your own device. Using that, you can skip the "Implementation" section and move straight to the "Testing" section.
Create a project.
$ ballerina new pass-through-messaging
Navigate into the project directory and add a new module.
$ ballerina add pass_through_messaging
Add .bal files with meaningful names as shown in the project structure given below.
pass-through-messaging ├── Ballerina.toml └── src └── pass_through_messaging ├── resources ├── Module.md ├── pass_through.bal └── tests ├── resources └── pass_through_test.bal
Developing the service¶
To implement the scenario, let's start by implementing the pass_through.bal file, which is the main file in the implementation.
pass_through.bal file¶
import ballerina/http;
import ballerina/log;

listener http:Listener OnlineShoppingEP = new(9090);
listener http:Listener LocalShopEP = new(9091);

//Defines a client endpoint for the local shop with online shop link.
http:Client clientEP = new("");

//This is a passthrough service.
service OnlineShopping on OnlineShoppingEP {
    //This resource allows all HTTP methods.
    @http:ResourceConfig {
        path: "/"
    }
    resource function passthrough(http:Caller caller, http:Request req) {
        log:printInfo("Request will be forwarded to Local Shop .......");
        //'forward()' sends the incoming request unaltered to the backend. Forward function
        //uses the same HTTP method as in the incoming request.
        var clientResponse = clientEP->forward("/", req);
        if (clientResponse is http:Response) {
            //Sends the client response to the caller.
            var result = caller->respond(clientResponse);
            handleError(result);
        } else {
            //Sends the error response to the caller.
            http:Response res = new;
            res.statusCode = http:STATUS_INTERNAL_SERVER_ERROR;
            res.setPayload(<string>clientResponse.detail()?.message);
            var result = caller->respond(res);
            handleError(result);
        }
    }
}

//Sample Local Shop service.
service LocalShop on LocalShopEP {
    //The LocalShop only accepts requests made using the specified HTTP methods.
    @http:ResourceConfig {
        methods: ["POST", "GET"],
        path: "/"
    }
    resource function helloResource(http:Caller caller, http:Request req) {
        log:printInfo("You have been successfully connected to local shop .......");
        // Make the response for the request.
        http:Response res = new;
        res.setPayload("Welcome to Local Shop! Please put your order here.....");
        //Sends the response to the caller.
        var result = caller->respond(res);
        handleError(result);
    }
}

function handleError(error? result) {
    if (result is error) {
        log:printError(result.reason(), err = result);
    }
}
forward() function seen below sends the incoming request unaltered to the backend. It uses the same HTTP method as in the incoming request.
var clientResponse = clientEP->forward("/", req);
Testing¶
Invoking the service¶
Let’s build the module. Navigate to the project directory and execute the following command.
$ ballerina build pass_through_messaging
The build command would create an executable .jar file. Now run the .jar file created in the above step using the following command.
$ java -jar target/bin/pass_through_messaging.jar
Send a request to the online shopping service
Output
$ curl -v
< HTTP/1.1 200 OK < content-type: text/plain < date: Sat, 23 Jun 2018 05:45:17 +0530 < server: ballerina/0.982.0 < content-length: 54 < * Connection #0 to host localhost left intact Welcome to Local Shop! Please put your order here.....
To identify the message flow inside the services, there will be INFO in the notification channel.
2018-06-23 05:45:27,849 INFO [pass_through_messaging] - Request will be forwarded to Local Shop .......
2018-06-23 05:45:27,864 INFO [pass_through_messaging] - You have been successfully connected to local shop .......
'alt text'], dtype=object)
array(['../../../../../assets/img/pass-through-messaging-2.png',
'alt text'], dtype=object) ] | ei.docs.wso2.com |
Different time zone for central admin
I was working on a customer issue yesterday. As you can see from the screenshot, although the server time was 10/22/2015 9:48 A.M., timer jobs were running around 10/21/2015 8:48 P.M. Through a settings URL that is hidden by default, I realized that just the central admin considered itself to be in the GMT-10 Hawaii time zone.
When I updated the time zone to UTC+2, my problem was solved.
Another way to fix this issue is updating the central admin default time zone through Central Admin -> Application Management -> Manage Web Applications -> SharePoint Central Administration v4 (The Central Admin Web Application) -> General Settings -> Default Time Zone.
Every time a poll under your account gets rendered on someone's browser this counts as toward your request limit. By default you are given 2,000 requests on the free plan.
Request counts are reset every month. If you exceed your limit, the account owner will be sent a notification email.
All the polls under this account will stop being rendered for your users. At the end of the month, the limit will be reset and users will be able to see your polls again. If you increase your plan, your users will instantly see the poll again until the new limit is reached, without having to wait until the start of the next month.
To balance the need for logs while considering disk usage, each .NET agent will limit disk usage to 250MB using log rolling. The agent will first log to the file
newrelic_agent_UNIQUENAME.log and create the file if it doesn't exist. Once that file reaches 50MB in size, the agent will:
- Create a new log file.
- Roll each existing log file to a new, sequentially numbered name (up to four archived files).
- Delete the fourth archive.
To roll the log files, the old
newrelic_agent_UNIQUENAME.log becomes the new
newrelic_agent_UNIQUENAME.log(1). Then, the old
newrelic_agent_UNIQUENAME.log(1) becomes the new
newrelic_agent_UNIQUENAME.log(2), and so on. The old
newrelic_agent_UNIQUENAME.log(4) is deleted. | https://docs.newrelic.com/docs/agents/net-agent/other-features/limit-log-disk-space-log-rolling | 2020-01-18T04:50:12 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.newrelic.com |
The Slice Using Guides command slices up the
current image, based on the image's guides. It cuts the image along each
guide, similar to slicing documents in an office with a guillotine (paper
cutter) and creates new images out of the pieces. For further
information on guides, see
the "Guides" section.
You can access this command from the image menubar through
Image → Slice Using Guides.
TOPICS×
Syntax for identifiers
Metric, dimension, and filter expressions can use identifiers to refer to named metrics, dimensions, and filters.
These identifiers are case sensitive and must be typed exactly as they are defined.
A valid identifier can contain one or more of the following:
- Underscores (_). Underscores in an identifier represent spaces in the metric, dimension, or filter name. For example, the Session Referrer dimension would be referred to as Session_Referrer in an expression.
- Percent signs (%)
- Upper case letters (A-Z)
- Lower case letters (a-z)
- Dollar signs ($)
- Numbers (0-9), except as the first character in an identifier.
- Non-ASCII characters.
The figure below shows an example print layout including all the types of layout items. When exporting, QGIS applies a number of settings in order to produce the most appropriate output. These configurations include, for example, whether a world file (.tfw for TIFF, .pnw for PNG, .jgw for JPEG, …) will be created when
exporting.
This option can also be checked by default in the layout panel.
Note:
The Crop to content dialog also lets you add margins around the cropped bounds.
Image Export Options, output is resized to items extent.
Note
When supported by the format (e.g.
PNG) and the
underlying Qt library, the exported image may include project
metadata (author, title, date, description…)
To export a layout as SVG, you can enable geometry simplification to reduce the output file size (e.g. when exporting at
300 dpi, vertices that are less than
1/600 inch apart will be removed).
SVG Export Options
Note
Currently, the SVG output is very basic. This is not a QGIS problem, but a problem with the underlying Qt library. This will hopefully be sorted out in future versions.
To export a layout as PDF:
NEW in 3.10: Generate a georeferenced PDF file (requires GDAL version 3 or later).
Geometry simplification can also be applied here (e.g. when exporting at 300 dpi, vertices that are less than
1/600 inch apart will be removed). This can reduce the size and complexity of the export file (very large files can fail to load in other applications).
PDF Export Options
Note
Since QGIS 3.10, with GDAL 3, GeoPDF export is supported, and a number of GeoPDF specific options are available:
Note
Exporting a print layout to formats that supports georeferencing
(e.g.
TIFF) creates a georeferenced output by default.:
An atlas can also be generated from the layout: for each feature of the coverage layer for which the filter expression evaluates to True, an output is produced. The atlas settings are configured in the Atlas Panel, and the result can be previewed page by page with the Atlas Preview toolbar.
This rock set comes with 13 different rock surfaces, each including “wet” and “aged” variations. The wet options make it easy to build rain soaked scenes while the aged options are great for post apocalyptic settings with plenty of overgrowth.
Every surface comes with 8 different textures, including a 32-bit EXR displacement map for accurate height information without “stair stepping” artifacts. The other textures for each include Diffuse, Aged Diffuse, Aged Normal, and more.
Measures distances and angles between objects in the model, including edges, faces, and key points.
With the Tools > Measure command, you can:
Measure the actual 3D linear distance between two points.
Measure the delta E- (X), N- (Y), and EL- (Z) distance using the last active coordinate system defined in the PinPoint or Measure commands.
Measure distance along an object, like the Point Along command, or the entire length of an object.
Measure minimum distance between two objects, using the outside surface and not just the axis.
Measure the minimum distance between two objects as projected to a selected plane.
Measure and display hole radius and diameter as well as measure and display fillet radius.
Measure the actual angle defined by three points.
Measure angle between lines, using cylinder axes or nozzle axes as reference lines.
Find SmartSketch points when the software prompts you to locate a start or end point to measure.
Copy measurement values from the ribbon. The Measure command also sums repeated measurements and displays the cumulative results on the ribbon.
When you move the pointer over a key point, the distance between the current location of the pointer and the last point that you clicked displays next to the pointer in text and on the ribbon along with the delta values. The delta values are the distances, as measured along the E- (X), N- (Y), and EL- (Z) axes.
You can change the displayed units of measure for distance or angle by using the Tools > Options command. You can use the Measure command to set the active coordinate system, which is a temporary coordinate system with a new origin and axis directions different from those of the global coordinate system. The active coordinate system affects certain calculations, such as weight and CG.
What do you want to do?
Migrations
Menus
Last Edited: Jul 02, 2019
To build a new Menu, head over to Menu Builder within Modules in the left-hand menu of your Admin and click the "Add Menu" button.
You'll see two columns when building your Menu. On the left, you have a list of existing pages on your site. Click on one of these to add it to your Menu and you'll see it appears in the right-hand column. You can "drag and drop" items in the right-hand column to reorder and create-sub menu-items (we'll be improving the usability of this soon).
You can fully edit any items in your Menu by clicking on the link icon. From the edit modal, you can edit the Name and URL of the item as all as add custom classes and define a target type if required.
Menu builder does not yet display existing WebApp items, we'll add that soon! For now, you can add any page to your menu, and customise its name and URL to point to any webapp item you'd like. The same can be done for external links.
BC used three Layouts for Menus:
container.html,
group.html and
childitem.html.
Siteglide uses just two layouts:
wrapper.liquid and
item.liquid.
wrapper.liquid should contain the code from your BC container.html layout, with your group.html layout pasted inside it.
item.liquid should contain the code from your BC
childitem.html Layout.
Check out our doc on Menu Builder for more information on customising Layouts.
Example of BC Layouts:
Container:
Group:
Childitem:
Example of Siteglide Layouts:
Wrapper:
Item:
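The original code samples above were shown as screenshots and are not reproduced here. As a rough sketch of the idea only — the Liquid variable and placeholder names below are assumptions, so check the Menu Builder doc for the exact tags — a minimal pair of layouts could look something like this:

<!-- wrapper.liquid (illustrative only): outer structure of the menu -->
<ul class="menu">
  {{item_layout}} <!-- assumed placeholder where Siteglide injects the rendered items -->
</ul>

<!-- item.liquid (illustrative only): markup for a single menu item -->
<li class="menu-item">
  <a href="{{item.url}}">{{item.name}}</a>
</li>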
If you have any questions while migrating your Menu Layouts, get in touch via our Intercom in the bottom right-hand corner of your Admin and we'll be happy to help.
More tag
Using the More tag
- Go to→.
For the full info on this tag, visit the article on WordPress.com.
MemoryStream.CopyTo Method
[ This article is for Windows Phone 8 developers. If you’re developing for Windows 10, see the latest documentation. ]
Copies the contents of the current stream into another stream.
This member is overloaded. For complete information about this member, including syntax, usage, and examples, click a name in the overload list.
Overload List
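A short usage sketch of CopyTo with MemoryStream (the stream contents are arbitrary example bytes):

using System.IO;

class CopyToExample
{
    static void Main()
    {
        using (var source = new MemoryStream(new byte[] { 1, 2, 3, 4 }))
        using (var destination = new MemoryStream())
        {
            // Copies from the source's current position to its end into destination.
            source.CopyTo(destination);
            // destination now holds the same four bytes.
        }
    }
}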
These are questions that just don't fit into other categories
10 articles
Medical, allergens, supplements and more
8 articles
What you need to know about our products
5 articles
Policies, terms, rules, disclaimers, and legal mumbo jumbo
10 articles
Extra details about our recurring product plans
2 articles
5 articles
Filters¶
The default filter set (if you don’t specify anything in the config) is:
[SpamFilter] [ClassifyingFilter] [KillThreadsFilter] [ListMailsFilter] [ArchiveSentMailsFilter] [InboxFilter]
The standard filter Configuration can be applied to these filters as well. Though note that most of the filters below set their own value for message, query and/or tags, and some ignore some of the standard settings.
SpamFilter¶
The settings you can use are:
- spam_tag = <tag>
- Add <tag> to all mails recognized as spam.
- The default is ‘spam’.
- You may use it to tag your spam as ‘junk’, ‘scum’ or whatever suits your mood. Note that only a single tag is supported here.
Email will be considered spam if the header X-Spam-Flag is present.
ClassifyingFilter¶
This filter will tag messages based on what it has learnt from seeing how you’ve tagged messages in the past. See Classification for more details.
KillThreadsFilter¶
If the new message has been added to a thread that has already been tagged killed then add the killed tag to this message. This allows for ignoring all replies to a particular thread.
ListMailsFilter¶
This filter looks for the List-Id header, and if it finds it, adds a tag lists and a tag named lists/<list-id>.
SentMailsFilter¶
The settings you can use are:
- sent_tag = <tag>
-
Add <tag> to all mails sent from one of your configured mail addresses.
-
The default is to add no tag, so you need to specify something.
-
You may e.g. use it to tag all mails sent by you as ‘sent’. This may make special sense in conjunction with a mail client that is able to not only search for threads but individual mails as well.
More accurately, it looks for emails that are from one of your addresses and not to any of your addresses.
- to_transforms = <transformation rules>
- Transform To/Cc/Bcc e-mail addresses to tags according to the specified rules. <transformation rules> is a space separated list consisting of 'user_part@domain_part:tags’ style pairs. The colon separates the e-mail address to be transformed from tags it is to be transformed into. ‘:tags’ is optional and if empty, ‘user_part’ is used as tag. ‘tags’ can be a single tag or semi-colon separated list of tags.
- It can be used for example to easily tag posts sent to mailing lists which at this stage don’t have List-Id field.
ArchiveSentMailsFilter¶
It extends SentMailsFilter with the following feature:
- Emails filtered by this filter have the new tag removed, so will not have the inbox tag added by the InboxFilter.
InboxFilter¶
This removes the new tag, and adds the inbox tag, to any message that isn’t killed or spam. (The new tags are set in your notmuch config, and default to just new.)
HeaderMatchingFilter¶
This filter adds tags to a message if the named header matches the regular expression given. The tags can be set, or based on the match. The settings you can use are:
- header = <header_name>
- pattern = <regex_pattern>
- tags = <tag_list>
If you surround a tag with {} then it will be replaced with the named match.
Some examples are:
[HeaderMatchingFilter.1]
header = X-Spam-Flag
pattern = YES
tags = +spam

[HeaderMatchingFilter.2]
header = List-Id
pattern = <(?P<list_id>.*)>
tags = +lists;+{list_id}

[HeaderMatchingFilter.3]
header = X-Redmine-Project
pattern = (?P<project>.*)
tags = +redmine;+{project}
SpamFilter and ListMailsFilter are implemented using HeaderMatchingFilter, and are only slightly more complicated than the above examples.
FolderNameFilter¶
This looks at which folder each email is in and uses that name as a tag for the email. So if you have a procmail or sieve set up that puts emails in folders for you, this might be useful.
- folder_explicit_list = <folder list>
- Tag mails with tag in <folder list> only. <folder list> is a space separated list, not enclosed in quotes or any other way.
- Empty list means all folders (of course blacklist still applies).
- The default is empty list.
- You may use it e.g. to set tags only for specific folders like ‘Sent’.
- folder_blacklist = <folder list>
- Never tag mails with tag in <folder list>. <folder list> is a space separated list, not enclosed in quotes or any other way.
- The default is to blacklist no folders.
- You may use it e.g. to avoid mails being tagged as ‘INBOX’ when there is the more standard ‘inbox’ tag.
- folder_transforms = <transformation rules>
- Transform folder names according to the specified rules before tagging mails. <transformation rules> is a space separated list consisting of ‘folder:tag’ style pairs. The colon separates the name of the folder to be transformed from the tag it is to be transformed into.
- The default is to transform to folder names.
- You may use the rules e.g. to transform the name of your ‘Junk’ folder into your ‘spam’ tag or fix capitalization of your draft and sent folder:
folder_transforms = Junk:spam Drafts:draft Sent:sent
- maildir_separator = <sep>
- Use <sep> to split your maildir hierarchy into individual tags.
- The default is to split on ‘.’
- If your maildir hierarchy is represented in the filesystem as collapsed dirs, <sep> is used to split it again before applying tags. If your maildir looks like this:
[...] /path/to/maildir/devel.afew/[cur|new|tmp]/... /path/to/maildir/devel.alot/[cur|new|tmp]/... /path/to/maildir/devel.notmuch/[cur|new|tmp]/... [...]
the mails in your afew folder will be tagged with ‘devel’ and ‘afew’.
If instead your hierarchy is split by a more conventional ‘/’ or any other divider
[...] /path/to/maildir/devel/afew/[cur|new|tmp]/... /path/to/maildir/devel/alot/[cur|new|tmp]/... /path/to/maildir/devel/notmuch/[cur|new|tmp]/... [...]
you need to configure that divider to have your mails properly tagged:
maildir_separator = /
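Putting these options together, a FolderNameFilter section in your config might look like the following; the folder names are only examples:

[FolderNameFilter]
folder_blacklist = INBOX
folder_transforms = Junk:spam Drafts:draft Sent:sent
maildir_separator = .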
Customizing filters¶
To customize these filters, there are basically two different possibilities:
Let’s say you like the SpamFilter, but it is way too polite
- Create an filter object and customize it
[SpamFilter.0] # note the index message = meh
The index is required if you want to create a new SpamFilter in addition to the default one. If you need just one customized SpamFilter, you can drop the index and customize the default instance.
- Create a new type...
[ShitFilter(SpamFilter)] message = I hatez teh spam!
and create an object or two
[ShitFilter.0] [ShitFilter.1] message = Me hatez it too.
You can provide your own filter implementations too. You have to register your filters via entry points. See the afew setup.py for examples on how to register your filters. To add your filters, you just need to install your package in the context of the afew application. | http://afew.readthedocs.io/en/latest/filters.html | 2018-03-17T06:39:00 | CC-MAIN-2018-13 | 1521257644701.7 | [] | afew.readthedocs.io |
Django gives you a few ways to control how database transactions are managed, if you're using a database that supports transactions. The recommended default behavior ties transactions to HTTP requests: you enable it by adding django.middleware.transaction.TransactionMiddleware to your MIDDLEWARE_CLASSES setting.
The order is quite important. The transaction middleware applies not only to view functions, but also for all middleware modules that come after it. So if you use the session middleware after the transaction middleware, session creation will be part of the transaction.
The various cache middlewares are an exception: CacheMiddleware, UpdateCacheMiddleware, and FetchFromCacheMiddleware are never affected. Even when using database caching, Django’s cache backend uses its own database cursor (which is mapped to its own database connection internally).
For most people, implicit request-based transactions work wonderfully. However, if you need more fine-grained control over how transactions are managed, you can use a set of functions in django.db.transaction to control transactions on a per-function or per-code-block basis.
These functions, described in detail below, can be used in two different ways:
As a decorator on a particular function. For example:
from django.db import transaction

@transaction.commit_on_success
def viewfunc(request):
    # ...
    # this code executes inside a transaction
    # ...
As a context manager around a particular block of code:
from django.db import transaction

def viewfunc(request):
    # ...
    # this code executes using default transaction management
    # ...

    with transaction.commit_on_success():
        # ...
        # this code executes inside a transaction
        # ...
Both techniques work with all supported version of Python.
For maximum compatibility, all of the examples below show transactions using the decorator syntax, but all of the follow functions may be used as context managers, too.
Note
Although the examples below use view functions as examples, these decorators and context managers can be used anywhere in your code that you need to deal with transactions.
Use the autocommit decorator to switch a view function to Django’s default commit behavior, regardless of the global transaction setting.
Example:
from django.db import transaction

@transaction.autocommit
def viewfunc(request):
    ....

@transaction.autocommit(using="my_other_database")
def viewfunc2(request):
    ....
Within viewfunc(), transactions will be committed as soon as you call model.save(), model.delete(), or any other function that writes to the database. viewfunc2() will have this same behavior, but for the "my_other_database" connection.
Use the commit_on_success decorator to use a single transaction for all the work done in a function:
from django.db import transaction

@transaction.commit_on_success
def viewfunc(request):
    ....

@transaction.commit_on_success(using="my_other_database")
def viewfunc2(request):
    ....
If the function returns successfully, then Django will commit all work done within the function at that point. If the function raises an exception, though, Django will roll back the transaction.
Use the commit_manually decorator if you need full control over transactions. It tells Django you'll be managing the transaction on your own.
Manual transaction management looks like this:

from django.db import transaction

@transaction.commit_manually
def viewfunc(request):
    ...
    # You can commit/rollback however and whenever you want
    transaction.commit()
    ...
    # But you've got to remember to do it yourself!
    try:
        ...
    except:
        transaction.rollback()
    else:
        transaction.commit()

@transaction.commit_manually(using="my_other_database")
def viewfunc2(request):
    ....
Django requires that every transaction that is opened is closed before the completion of a request. If you are using autocommit() (the default commit mode) or commit_on_success(), this will be done for you automatically (with the exception of executing custom SQL). However, if you are manually managing transactions (using the commit_manually() decorator), you must ensure that the transaction is either committed or rolled back before a request is completed.
This applies to all database operations, not just write operations. Even if your transaction only reads from the database, the transaction must be committed or rolled back before you complete a request.
Control freaks can totally disable all transaction management by setting TRANSACTIONS_MANAGED.
A savepoint is a marker within a transaction that enables you to roll back part of a transaction, rather than the full transaction. Savepoints are available with the PostgreSQL 8, Oracle and MySQL (when using the InnoDB storage engine) backends. Other backends provide the savepoint functions, but they’re empty operations – they don’t actually do anything.
Savepoints aren't especially useful if you are using the default autocommit behavior of Django. However, if you are using commit_on_success or commit_manually, each open transaction builds up a series of database operations awaiting a commit or rollback; a savepoint then lets you roll back only part of that work instead of the full transaction.
Each of these functions takes a using argument which should be the name of a database for which the behavior applies. If no using argument is provided then the "default" database is used.
Savepoints are controlled by three methods on the transaction object:

transaction.savepoint(using=None)
Creates a new savepoint. This marks a point in the transaction that is known to be in a "good" state. Returns the savepoint ID (sid).

transaction.savepoint_commit(sid, using=None)
Updates the savepoint to include any operations that have been performed since the savepoint was created, or since the last commit.

transaction.savepoint_rollback(sid, using=None)
Rolls the transaction back to savepoint sid.
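To illustrate how these calls fit together, here is a short example in the spirit of the decorators above; a, b and want_to_keep_b are placeholders for your own model instances and condition:

from django.db import transaction

@transaction.commit_manually
def viewfunc(request):
    a.save()
    # open transaction now contains a.save()

    sid = transaction.savepoint()

    b.save()
    # open transaction now contains a.save() and b.save()

    if want_to_keep_b:
        transaction.savepoint_commit(sid)
        # open transaction still contains a.save() and b.save()
    else:
        transaction.savepoint_rollback(sid)
        # open transaction now contains only a.save()

    transaction.commit()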
When a call to a PostgreSQL cursor raises an exception (typically IntegrityError), all subsequent SQL in the same transaction will fail with the error "current transaction is aborted, queries ignored until end of transaction block". Whilst simple use of save() is unlikely to raise this exception, more advanced usage patterns might (for example saving objects with unique fields or running custom SQL); the way to recover is to create a savepoint before the risky operation and roll back to it if the exception occurs.
Note
This is not the same as the autocommit decorator. When using database level autocommit there is no database transaction at all. The autocommit decorator still uses transactions, automatically committing each transaction when a database modifying operation occurs.
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Represents the response from the server after AWS Device Farm makes a request to return information about the remote access session.
Namespace: Amazon.DeviceFarm.Model
Assembly: AWSSDK.DeviceFarm.dll
Version: 3.x.y.z
The ListRemoteAccessSessionsResponse type exposes the following members
The following example returns information about a specific Device Farm remote access session.
var response = client.ListRemoteAccessSessions(new ListRemoteAccessSessionsRequest { Arn = "arn:aws:devicefarm:us-west-2:123456789101:session:EXAMPLE-GUID-123-456", // You can get the Amazon Resource Name (ARN) of the session by using the list-sessions
remoteAccessSessions = response.RemoteAccessSessions;
| https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/DeviceFarm/TListRemoteAccessSessionsResponse.html | 2018-03-17T06:15:29 | CC-MAIN-2018-13 | 1521257644701.7 | [] | docs.aws.amazon.com |
Shorturl API PHP SDK
The OneAll API includes a custom url shortener that enables you to shorten your urls, to track trends in site registrations, returning visitors, social posts and resulting referral traffic. The corresponding reports can be viewed in the User Insights section of your account.
The URL Shortener can be used with our Advanced Sharing API as well as an independent module by using the following resources.
The shorturl_token is a key that uniquely identifies a shortened url. Please note that the shorturl_tokens have a shorter format than the other tokens used by the OneAll API.
Troubleshoot Other Hardware Issues
Applies To: Windows MultiPoint Server 2011
View the following topic to help solve some common problems related to hardware devices and MultiPoint Server.
Wireless Connection Issues
Tip
If you are connected to the Internet, you can see online resources for MultiPoint Server that include additional troubleshooting material. View MultiPoint Server resources on the Web.
See Also
Concepts
Wireless Connection Issues
Troubleshooting | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-multipoint-server/gg609257(v=technet.10) | 2018-04-19T16:21:03 | CC-MAIN-2018-17 | 1524125936981.24 | [] | docs.microsoft.com |
SSH Access¶
Every team member can get access to any of the team’s EC2 instances by using the Più command line tool:
$ sudo pip3 install --upgrade stups-piu $ # assumptions: region is Ireland, team name is "myteam", private EC2 instance has IP "172.31.146.1" $ piu 172.31.146.1 "Troubleshoot problem XY" # enter even URL (e.g.) # enter odd hostname "odd-eu-west-1.myteam.example.org" $ ssh -A odd-eu-west-1.myteam.example.org # agent-forwarding must be used! $ ssh 172.31.146.1 # jump from bastion to private instance
Tip
Use the
--connect flag to directly connect to the EC2 instance so you do not need to execute the SSH command yourself.
Tip
Use the interactive mode to experience an easy way to access instances. This mode prompts you for the AWS region where your instance is located, so it can present you a list of enumerated deployed stacks from which you can choose the one you want to access and provide a reason for it.
To get the most of this mode, it’s recommended that piu is invoked with the
--connect flag so you get into the instance as soon as the odd host authorizes your request:
$ piu request-access --interactive --connect. Alternatively, you can set the
PIU_CONNECT and
PIU_INTERACTIVE environment variables in your shell profile so you can invoke the command with the mentioned features enabled just with:
$ piu request-access.
Tip
If executing a piu command results in a message
Access to host odd-eu-west-1.myteam.example.org for user <myuser> was granted., but you get an error
Permission denied (publickey)., you can solve this by installing an ssh-agent and executing
ssh-add prior to piu.
Tip
Use the
--clip option to copy the output of piu to your clipboard.
On Linux it requires the package
xclip. On OSX it works out of the box.
Tip
Use
senza instances to quickly get the IP address of your EC2 instance.
See the Senza reference for details.
Più will remember the URL of even and the hostname of odd in the local config file (
~/.config/piu/piu.yaml on Linux).
You can overwrite settings on the command line:
$ piu 172.31.1.1 test -O odd-eu-west-1.myotherteam.example.org
Caution
All user actions are logged for auditing reasons, therefore all SSH sessions must be kept free of any sensitive and/or personal information.
Check the asciicast how using Più looks like:
Copying Files¶
As all access to an EC2 instance has to go through the odd SSH jump host, copying files from and to the EC2 instance appears unnecessary hard at first.
Luckily OpenSSH’s
scp supports jump hosts with the
ProxyCommand configuration option:
$ scp -o ProxyCommand="ssh -W %h:%p odd-eu-west-1.myteam.example.org" mylocalfile.txt 172.31.146.1:
See also the OpenSSH Cookbook on Proxies and Jump Hosts.
SSH Access Revocation¶
SSH access will automatically be revoked by even after the request’s lifetime (default: 60 minutes) expired.
You can specify a non-default lifetime by using Più’s
-t option.
Listing Access Requests¶
The even SSH access granting service stores all access requests and their status in a database. This information is exposed via REST and can be shown using Più’s “list-access-requests” command.
All current and historic access requests can be listed on the command line:
$ piu list # list the most recent requests to my odd host $ piu list -U jdoe -O '*' # list most recent requests by user "jdoe" $ piu list -O '*' -s GRANTED # show all active access requests | http://stups.readthedocs.io/en/latest/user-guide/ssh-access.html | 2018-04-19T15:28:01 | CC-MAIN-2018-17 | 1524125936981.24 | [] | stups.readthedocs.io |
As the amount of information shared through the internet is growing from year to year, as well as adoption of the Web as a mean for doing business, the protection of websites and web applications becomes one of the major Internet security issues. The obvious response to this is implementation of plenty of prevention tools. But before rushing to integrate some complex and/or costly protection solution, consider a few common security methods, as sometimes the most basic security becomes the most efficient one.
So, in this guide we’ll show you how to set a couple of simple protection mechanisms, that are available for any application that uses NGINX-balancer as a frontend, and which applying doesn’t require any additional costs.
Primarily the NGINX-balancer server is intended for performing the smart requests distribution between multiple application server nodes and thus ensuring high system availability and reliability. Herewith, it can be used for processing both HTTP and TCP traffic types (details can be found within the HTTP Load Balancing and TCP Load Balancing docs).
Load balancing node is automatically added to an environment if you pick up more than one application server node, and in addition, it can be added manually even for a single server. To do this, just select the Balancing wizard block above the chosen application server in the Environment Topology window.
Now when the environment is ready, you can proceed to configuring the desired protection method using the instructions below:
- Authentication makes application access protected with a password
IP Address Deny mechanism is used to forbid application access from a particular IP | https://docs.jelastic.com/ru/nginx-balancer-security | 2018-04-19T15:34:44 | CC-MAIN-2018-17 | 1524125936981.24 | [array(['https://download.jelastic.com/index.php/apps/files_sharing/publicpreview?file=%2F%2Fenv%20wiz.png&x=1904&a=true&t=f27c12aa4786676fb9a8ca1358f9407e&scalingup=0',
None], dtype=object) ] | docs.jelastic.com |
Overview
Google Compute Engine (GCE) provides a networking system in which you can create networks and manage how your instances connect to and interact with the outside world. You can build out real-world GCE workloads solely using RightScale. In addition to working within the CM Dashboard, you can use the RightScale API to automate network provisioning and you can leverage Self-Service CAT files to include GCE networks in CloudApps. This page describes the mapping of GCE Network concepts to RightScale network abstractions and provides the steps for creating Google network components using the RightScale CM Dashboard. You may want to visit the Google (GCE) Networks and Firewalls overview for additional information.
Mapping GCE Network Concepts into RS Network Abstractions
The following Table describes how the various GCE Network concepts have been mapped to network abstractions in RightScale.
GCE Networks
For more detailed information on how networks are handled in GCE, see the Networks section of the GCE documentation.
Subnetworks
GCE Subnetworks are displayed and can be used when launching instances, but currently cannot be created or managed through RightScale. Legacy networks can be fully managed.
Subnet mode is the new form of networks in which your network is subdivided into regional subnetworks. Each subnetwork controls the IP address range used for VMs that are allocated to that subnetwork. The IP ranges of the different subnetworks in a network might be non-contiguous. There are two options for using subnetworks:
Auto Subnet Network automatically assigns a subnetwork IP prefix range to each region in your network. The instances created in a zone in a specific region in your network get assigned an IP allocated from the regional subnetwork range. The default network for a new project is an auto subnet network.
Custom Subnet Network allows you to manually define subnetwork prefixes for each region in your network. There can be zero, one, or several subnetwork prefixes.
Legacy Networks
Legacy (non-subnetwork) mode is the original approach for networks, where IP address allocation occurs at the global network level. This means the network address space spans across all regions. It is still possible to create a legacy network, but subnetworks are the preferred approach and default behavior going forward.
You can use RightScale to customize the default network by adding or removing rules, or you can create new networks in that GCE project. GCE Instances not explicitly attached to a network on launch are attached to the default network.
GCE Firewalls
GCE firewalls provide similar functionality to AWS security groups. A firewall belongs to a network, it has rules that define what incoming connections are accepted by which instances. Each firewall rule specifies a permitted incoming connection request, defined by source, destination, ports, and protocol. Rules may additionally contain target tags that specify which instances on the network can accept requests from the specified sources (if no target tag is specified then the rules applies to all instances attached to the network). For more detailed information on how firewalls are handled in GCE, see the Firewalls section of the GCE documentation.
GCE Routes
By default, every GCE network has two default routes: a route that directs traffic to the Internet and a route that directs traffic to other instances within the GCE network. A single GCE route comprises a route name, a destination range, a next-hop specification, any instance tags, and a priority value. A route specifies the destination IP range in CIDR format, the next hop as an instance, an IP, a gateway or a VPN tunnel, and a priority as a numeric value. Optionally, instance tags are used to specify which instances on the network the route applies to (if no tag is specified then the rules applies to all instances attached to the network). For more detailed information on how routes are handled in GCE, see the Routes Collection section of the GCE documentation.
Prerequisites
- You must have the 'security_manager' user role privilege in order to work with GCE Networks in the RightScale Network Manager.
Create a New GCE Network
The first step in setting up a GCE network using RightScale is to create a new network using the Network Manager.
- In the RightScale Dashboard, navigate to Manage > Networks.
- Click New Network. The New Network dialog displays.
- Select the Google cloud and enter a name for the new network along with a short description. For this example, we will use 'my-google-network' as the network name. Click Next.
- Enter a value for the CIDR Block (10.0.0.0/16 in this example). The range you specify can contain 16 to 65,536 IP addresses. Note that you cannot change the size of the network after you create it.
- Click Create. You should see a 'growler' message near the top of the Dashboard indicating that the network was created successfully.
Create a New Firewall Rule (Security Group)
By default, a Google network includes one Security Group (or Firewall in Google nomenclature). In this step we will create a new firewall rule.
- In the RightScale Dashboard, navigate to Manage > Networks.
- Enter a meaningful Name for the new Security Group.
- Enter a Description for the new Security Group.
- Select the Common Ports you want to open for this Security Group.
- Click Create.
Create a New Route in the Default Route Table
By default, a Google network includes one route that directs traffic to the Internet and another route that directs traffic to other instances within the network. In this step, we will modify the default route table to include a new route.
- In the RightScale Dashboard, navigate to Manage > Networks.
- Navigate to the Route Tables tab. Click the default route table, then click New Route. The New Route dialog displays.
- Enter a Destination CIDR address block for the destination resource.
- Using the Next Hop drop-down, choose the appropriate target type for the next hop for this route (Instance, URL, IP Address). Choosing Instance displays a drop-down from which you can select an existing instance, while choosing URL or IP Address displays a field in which you can enter a valid URL or IP address value. In the above example, we've used an instance.
- Enter a meaningful Description of the route.
- Enter a numeric Priority value for the route. Lower values have higher priority.
- Enter one or more Instance Tags to which the route will apply. If you elect to leave this field empty, the route will apply to all instances.
- Click Create. | http://docs.rightscale.com/cm/dashboard/manage/networks/network_manager_gce.html | 2018-04-19T15:32:10 | CC-MAIN-2018-17 | 1524125936981.24 | [] | docs.rightscale.com |
Test-Owa
Connectivity
Syntax
Test-OwaConnectivity [-URL] <String> -MailboxCredential <PSCredential> [-AllowUnsecureAccess] [-Confirm] [-DomainController <Fqdn>] [-LightMode] [-ResetTestAccountCredentials] [-Timeout <UInt32>] [-TrustAnySSLCertificate] [-WhatIf] [<CommonParameters>]
Test-OwaConnectivity [[-ClientAccessServer] <ServerIdParameter>] [-AllowUnsecureAccess] [-Confirm] [-DomainController <Fqdn>] [-LightMode] [-MailboxServer <ServerIdParameter>] [-MonitoringContext] [-ResetTestAccountCredentials] [-RSTEndpoint <String>] [-TestType <Internal | External>] [-Timeout <UInt32>] [-TrustAnySSLCertificate] [-VirtualDirectoryName <String>] [-WhatIf] [<CommonParameters>]
Description-OwaConnectivity -URL: -MailboxCredential:(get-credential contoso\kweku)
This example tests the connectivity for the URL using the credentials for the user contoso\kweku.
-------------------------- Example 2 --------------------------
Test-OwaConnectivity -ClientAccessServer:Contoso12 -AllowUnsecureAccess
This example tests the connectivity of a specific Client Access server Contoso12 and tests all Exchange Outlook Web App virtual directories that support Exchange mailboxes. These include the virtual directories that don't require SSL.
Required Parameters
The MailboxCredential parameter specifies the mailbox credential for a single URL test.
The MailboxCredential parameter is required only when using the URL parameter.
The URL parameter specifies the URL to test. This parameter is required only when you want to test a single Outlook Web App URL.
If this parameter is used, the MailboxCredential parameter is also required.
You can't use the URL parameter with the TestType or ClientAccessServer parameters.
Optional Parameters
The AllowUnsecureAccess parameter specifies whether virtual directories that don't require SSL are tested. If the AllowUnsecureAccess parameter is included, it enables virtual directories that don't require SSL to be tested. If this parameter isn't included, the command skips virtual directories that don't require SSL, and an error is generated.
The ClientAccessServer parameter specifies the name of the Client Access server to test. If this parameter is included, all Exchange Outlook Web App virtual directories on the Client Access server are tested against all Exchange Mailbox servers in the local Active Directory site.
Don parameter isn't implemented for this diagnostic command. Using this parameter doesn't change the behavior of the command.
This parameter is implemented for other Exchange diagnostic commands where it's used to run a less intensive version of a command.
The MailboxServer parameter specifies the name of the Mailbox server to test. If not specified, all Mailbox servers in the local Active Directory site are tested.
The MonitoringContext parameter shows you what information is returned to System Center Operations Manager 2007. When Operations Manager 2007 executes the Test-OwaConnectivity cmdlet, it requires additional information to be returned. By setting this parameter to $true, you can see exactly what would be returned to Operations Manager 2007. This parameter is informational only and has no effect on Operations Manager 2007.
This parameter is reserved for internal Microsoft use.. You can't use this parameter with the URL parameter. When neither the TestType parameter nor the URL parameter is specified, the default is TestType:Internal.
The Timeout parameter specifies the amount of time, in seconds, to wait for the test operation to finish. The default value for the Timeout parameter is 30 seconds. You must specify a time-out value greater than 0 seconds and less than 1 hour (3,600 seconds). We recommend that you always configure this parameter with a value of 5 seconds or more.
The TrustAnySSLCertificate parameter specifies whether to check an internal URL without generating an SSL certificate validation error. The TrustAnySSLCertificate parameter allows SSL certificate validation failures to not be reported. This is useful for testing internal URLs because Internet Information Services (IIS) doesn't support assigning multiple certificates for a single virtual directory. If a directory has different URLs for internal and external access and has a certificate, that certificate is usually for the external URL. This parameter lets the task check an internal URL without generating an error when the certificate doesn't match the URL.
The VirtualDirectoryName parameter specifies the name of the virtual directory to test on a particular Client Access server. If this parameter isn't included, all Exchange Outlook Web App virtual directories that support Exchange mailboxes. | https://docs.microsoft.com/en-us/powershell/module/exchange/client-access/Test-OwaConnectivity?view=exchange-ps | 2018-04-19T16:31:37 | CC-MAIN-2018-17 | 1524125936981.24 | [] | docs.microsoft.com |
Содержание:
next adjacent selector
Описание: Selects all next elements matching "next" that are immediately preceded by a sibling "prev".
Добавлен в версии: 1.0jQuery( "prev + next" )
prev: Any valid selector.
next: A selector to match the element that is next to the first selector.
One important point to consider with both the next adjacent sibling selector (
prev + next) and the general sibling selector (
prev ~ siblings) is that the elements on either side of the combinator must share the same parent. | https://jquery-docs.ru/next-adjacent-selector/ | 2018-04-19T15:26:30 | CC-MAIN-2018-17 | 1524125936981.24 | [] | jquery-docs.ru |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
POSTrequest to the
/Route 53 API version/hostedzone/hosted Zone ID/rrsetresource. The request body must include a document with a
ChangeResourceRecordSetsRequestelement.
Changes are a list of change items and are considered transactional. For more information on transactional changes, also known as change batches, see POST ChangeResourceRecordSets in the Amazon Route 53 API Reference.
InvalidChangeBatcherror.
In response to a
ChangeResourceRecordSets request, your DNS data is changed
on all Amazon Route 53 DNS servers. Initially, the status of a change is
PENDING.
This means the change has not yet propagated to all the authoritative Amazon Route
53 DNS servers. When the change is propagated to all hosts, the change returns a status
of
INSYNC.
Note the following limitations on a
ChangeResourceRecordSets request:
Valueelements in a request cannot exceed 32,000 characters.
Namespace: Amazon.Route53
Assembly: AWSSDK.dll
Version: (assembly version)
Container for the necessary parameters to execute the ChangeResourceRecordSets service method.
.NET Framework:
Supported in: 4.5, 4.0, 3.5 | https://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MRoute53Route53ChangeResourceRecordSetsChangeResourceRecordSetsRequestNET35.html | 2018-04-19T16:07:19 | CC-MAIN-2018-17 | 1524125936981.24 | [] | docs.aws.amazon.com |
Guides
- Fastly Status
Date- and time-related VCL features
Last updated October 20, 2017
By default VCL includes the
now variable, which provides the current time (for example,
Wed, 17 Sep 2025 23:19:06 GMT). Fastly adds several new Varnish variables and functions that allow more flexibility when dealing with dates and times.
Variables
The following variables have been added:
Functions
The following functions have been added:
Examples
The following examples illustrate how to use the variables and functions.
TIP: Regular strings ("short strings") in VCL use
%xx escapes (percent encoding) for special characters, which would conflict with the
% used in the strftime format. For the strftime examples, we use VCL "long strings"
{"..."}, which do not use the
%xx escapes. Alternatively, you could use
%25 for each
%.
time.hex_to_time, time.add, and time.is_after
if (time.is_after(time.add(now, 10m), time.hex_to_time(1, "d0542d8"))) { ... }
strftime
set resp.http.Now = strftime({"%Y-%m-%d %H:%M"}, now) set resp.http.Start = strftime({"%a, %d %b %Y %T %z"}, time.start)
std.time and std.integer2time
set resp.http.X-Seconds-Since-Modified = strftime({"%s"}, time.sub(now, std.time(resp.http.Last-Modified, now))); std.integer2time(std.atoi("1445445162"));
Comparison operators like
> < >= <= == != do not work with
std.time or
std.integer2time. Instead, you can compare two times using something similar to this:
if (time.is_after(now, std.integer2time(std.atoi("1445445162")))) { # do something } | https://docs.fastly.com/guides/vcl/date-and-time-related-vcl-features.html | 2018-04-19T15:31:20 | CC-MAIN-2018-17 | 1524125936981.24 | [] | docs.fastly.com |
Distribute apps directly to stores from App Center
You can now publish upgrades of your existing store apps to the App Store and Google Play. App Center also enables enterprise line of business application developers to publish new and upgraded versions of LOB apps to the Intune Company Portal.
This is an early version of store distribution. If you have interest in any specific capabilities, please contact us. | https://docs.microsoft.com/en-us/appcenter/distribution/stores/ | 2018-04-19T15:47:48 | CC-MAIN-2018-17 | 1524125936981.24 | [] | docs.microsoft.com |
Private Dependencies
Analyzing a private project sometimes need an access to another private repository. Your team might be using Git repository to distribute private library. This kind of dependency is supported in some tools including Bundler, npm, and Glide.
Private Dependency in SideCI
We support using SSH to access private repository during an analysis session.
- You can specify your SSH private key for each project.
- During analysis session,
GIT_SSHenvironment variable will be set so that your Git access will use that key.
Uploading the SSH Key
- Visit the repository setting.
- Click Add button of SSH Key Config section.
- Fill the text field with the content of key file.
- You can specify the description of the SSH key.
Note that the private key cannot have a passphrase and must be an RSA key.
Using SSH
Currently, only a few analysis tools use SSH configuration.
- JavaScript tools use SSH via
npm installto access private repository.
- Go Meta Linter runs Glide to download dependencies from Git repository.
Other tools do not use SSH so adding SSH key for such tools are not needed at all. | https://docs.sideci.com/advanced-settings/private-dependencies.html | 2018-04-19T15:08:24 | CC-MAIN-2018-17 | 1524125936981.24 | [array(['/images/ssh-key-settings.png', 'SSH key settings'], dtype=object)] | docs.sideci.com |
HTTP optionally an Nginx proxy, then the HTTP Edge Server described in this document. Data is accepted via a POST/PUT request from clients, which the server will wrap in a Heka message and forward to two places: the Services Data Pipeline, where any further processing, analysis, and storage will be handled; as well as to a short-lived S3 bucket which will act as a fail-safe in case there is a processing error and/or data loss within the main Data Pipeline.
Namespaces
Namespaces are used to control the processing of data from different types of clients, from the metadata that is collected to the destinations where the data is written, processed and accessible. Namespaces are configured in Nginx using a location directive, to request a new namespace file a bug against the Data Platform Team with a short description of what the namespace will be used for and the desired configuration options.
Forwarding to the pipeline
The constructed Heka protobuf message to is written to disk and the pub/sub pipeline (currently Kafka). The messages written to disk serve as a fail-safe, they are batched and written to S3 (landfill) when they reach a certain size or timeout.
Edge Server Heka Message Schema
- required binary
Uuid; // Internal identifier randomly generated
- required int64
Timestamp; // Submission time (server clock)
- required string
Hostname; // Hostname of the edge server e.g.
ip-172-31-2-68
- required string
Type; // Kafka topic name e.g.
telemetry-raw
- required group
Fields
- required string
uri; // Submission URI e.g.
/submit/telemetry/6c49ec73-4350-45a0-9c8a-6c8f5aded0cf/main/Firefox/58.0.2/release/20180206200532
- required binary
content; // POST Body
- required string
protocol; // e.g.
HTTP/1.1
- optional string
args; // Query parameters e.g.
v=4
- optional string
remote_addr; // In our setup it is usually a load balancer e.g.
172.31.32.229
- // HTTP Headers specified in the production edge server configuration
- optional string
Content-Length; // e.g.
4722
- optional string
Date; // e.g.
Mon, 12 Mar 2018 00:02:18 GMT
- optional string
DNT; // e.g.
1
- optional string
Host; // e.g.
incoming.telemetry.mozilla.org
- optional string
User-Agent; // e.g.
pingsender/1.0
- optional string
X-Forwarded-For; // Last entry is treated as the client IP for geoIP lookup e.g.
10.98.132.74, 103.3.237.12
- optional string
X-PingSender-Version;// e.g.
1.0/[id[/dimensions]]$
Example Telemetry format:
/submit/telemetry/docId/docType/appName/appVersion/appUpdateChannel/appBuildID
Specific Telemetry example:
/submit/telemetry/ce39b608-f595-4c69-b6a6-f7a436604648/main/Firefox/61.0a1/nightly/20180328030202
Note that
id above is a unique document ID, which is used for de-duping
submissions. This is not intended to be the
clientId field from Telemetry.
If
id is omitted, we will not be able to de-dupe based on submission URLs. should send back 202 on body/path too long).
- 414 - request path too long (See above)
- 500 - internal error
Other Considerations
Compression
It is not desirable to do decompression on the edge node. We want to pass along messages from the HTTP Edge node without "cracking the egg" of the payload.
We may also receive badly formed payloads, and we will want to track the incidence of such things within the main pipeline. data warehouse loader performs the lookup and then discards the IP before the message hits long-term storage.
Data Retention
The edge server only stores data while batching and will have a retention time
of
moz_ingest_landfill_roll_timeout which is generally only a few minutes.
Retention time for the S3 landfill, pub/sub, and the data warehouse is outside
the scope of this document. | https://docs.telemetry.mozilla.org/concepts/pipeline/http_edge_spec.html | 2018-04-19T15:09:02 | CC-MAIN-2018-17 | 1524125936981.24 | [] | docs.telemetry.mozilla.org |
The Rate Grid displays the daily rates for all Rate Plans in a calendar view. The Rate Grid also gives you the ability to make a Daily Rate Change by clicking on the rate. Daily Rate Changes can be made to Parent Rates, but not to Child Rates. You will will see the Rate Type in the column next to the Rate Plan.
To make changes by date range or to change restrictions like Min LOS and CTA, use Manage Rates. Rate Grid
The calendar view can be sorted by Agent Channel. Each Agent Channel displays the room types and rate plans allocated to it. See Allocate Room Types
Agent Channels. See Manage Agent Relationships.The rate is changed on on Sept. 1 to $100 for the RACKQUEEN. Therefore, the rate for Sept. 1 is $100 in all three channels.
To make a Daily Rate Change
For Example:
Rates are displayed by the Agent chosen in the "Agent" drop-down menu. In this example, MyPMS Frontdesk
To see the Rates allocated to a particular Agent, then choose the Agent from the drop-down list and click "Refresh".
In this example, Agent "BookingCenter" (Booking Engine) is selected and displaying only the rates allocated to the channel. | https://docs.bookingcenter.com/plugins/viewsource/viewpagesrc.action?pageId=7373499 | 2018-02-18T03:19:56 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.bookingcenter.com |
Quick Start Windows 10 Publishing
Quick Start Windows 10 Publishing
Publishing your Progressive Web App is mainly a matter of adding the W3C manifest file and service worker scripts to your website. These assets are then referenced from your website, both with a link tag reference to the manifest file and the service worker registration scripts.
Windows 10 PWA support is still in Beta, so some features like service workers will not work on all user machines.
Self Publish through Windows Dev Center
You can easily create a listing for your PWA within the Microsoft Store on Windows 10. Your PWA package can then be uploaded for users to discover
Obtain a Windows Dev Account. The PWA builder team has a limited number of account tokens that can be used by web develper to cover the cost of a dev account with Microsoft for the beta. Reach out to [email protected] or @boyofgreen on twitter for a code.
Login to dev.windows.com and start a “new app” to reserve the name of your app. ** You must use the same name as the “name” or “short_name” on your W3C manifest.** The dev center will tell you if the name is not available.
In the Windows 10 Dev center, find the “publisher name”, “Publisher Identity” and “Package Family Name” of your app. You’ll find it under *******
Go to preview.pwabuilder.com and enter your url.
On step 3, choose “generate Appx” and enter the info the requested info from the dev center.
Go back to the dev center and upload the Appx from pwabuilder .
Publish via CLI
Your PWA can be created and app listing generated for supported platforms (using the -p option, in this case windows10).
Example:
pwabuilder package C:\Projects\HotBeats -p windows10 -a -l debug
| http://docs.pwabuilder.com/quickstart/2018/02/03/quick-start-windows10-publishing.html | 2018-02-18T03:11:56 | CC-MAIN-2018-09 | 1518891811352.60 | [array(['/assets/quickstart-pwa-cli-publish-windows10.png',
'Service Worker Code'], dtype=object) ] | docs.pwabuilder.com |
These pages are intended to be precise and detailed specification. For a tutorial introduction, see instead:
The SQLite interface elements can be grouped into three categories:
List Of Objects. This is a list of all abstract objects and datatypes used by the SQLite library. There are couple dozen objects in total, but the two most important objects are: A database connection object sqlite3, and the prepared statement object sqlite3_stmt.
List Of Constants. This is a list of numeric constants used by SQLite and represented by #defines in the sqlite3.h header file. These constants are things such as numeric result codes.
SQLite is in the Public Domain. | http://docs.w3cub.com/sqlite/c3ref/intro/ | 2018-02-18T02:59:26 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.w3cub.com |
Prerequisites
- Add WebEx as a LaunchPoint Service
- Create a New Event Program
- Set the appropriate flow actions to track engagement
Basic Information
Event Name – This name will be viewable in Marketo.
Unlisted Checkbox
- It's recommended that you do not list your event. This will ensure that all people register through your Marketo landing page. People who register through a mechanism other than Marketo will be displayed in Marketo after the event is concluded AND only if they attended the event.
- If you choose to list the event, it will appear on the List of Events page for anyone who visits your Event Center website.
- Registration – Check this box to set to “required.” You'll use a Marketo form/landing page to capture registration information that will be pushed to WebEx.
Event Password – (optional) If you use this field be sure to include it in your confirmation email!
Date & Time
Start date – Enter your start date. This will be viewable in Marketo.
Start time – Enter your start time. This will be viewable in Marketo.
Estimated duration – Specify the duration of the event. This will be viewable in Marketo.
Time Zones – Enter the applicable time zones. They will be viewable in Marketo.
Audio Conference Settings
These settings reside in WebEx only. They are not used by or viewable in Marketo, but they may be important for your webinar, so double-check them!
Event Description & Options
The following options are used by or viewable in Marketo. Other fields reside in WebEx only.
Description – Enter a description. This will be viewable but not modifiable in Marketo.
Post-event survey – Marketo isn't able to capture the information on a WebEx post-event survey at this time.
Destination URL – (optional) You can enter the URL of a Marketo landing page to serve as the destination URL to display after the session ends.
Attendees & Registration
You will be controlling the invitation list, registration form, and other emails using a Marketo Event. Other functionality will not be supported by Marketo, including:
Maximum number of registrants – Currently not supported using the Marketo-WebEx integration. Manual approval of registrants is available using the Pending Approval progression status in Marketo.
Registration ID required – Currently supported using the Marketo-WebEx integration. You can use Marketo to send out the confirmation email for your event. When the person registers, they receive a unique URL that they use to enter the event.
Tip.
Registration Password – (Optional) Currently not supported using the Marketo-WebEx integration.
Approval Rules – Currently not supported using the Marketo-WebEx integration. However, you can use smart campaigns in Marketo to control approvals.
Presenters & Panelists
The information configured in this section is not passed to Marketo.
You'll use Marketo to send out emails to your registrants, confirmation emails, etc. You don't need to configure anything in this section. Disable (uncheck) the email message options within WebEx.
Note
The Marketo-WebEx integration cannot support sending confirmation emails out of WebEx. The confirmation must be sent via Marketo. After you've scheduled the event, be sure to copy the event information to the Marketo confirmation email and set the email as Operational.
Now we're ready to jump into Marketo!
1. Select the event you created. Open the Event Actions drop-down. Choose Event Settings.
Note
The channel type of the event selected must be webinar.
2. Under Event Partner, select WebEx.
3. Under Login, choose your WebEx login.
4. Under Event, choose your freshly created WebEx event. Then, select an optional Back-up Page and click Save.
5. Select an optional Back-up Page for your WebEx event. Choose from the drop-down of approved Marketo landing pages or enter the URL of a non-Marketo landing page.
Tip
Set a Back-up Page to direct a member to a specific page if they click on their custom event URL prior to the event's start time.
Note
The fields Marketo sends over are: First Name, Last Name, Email Address.
Caution
Avoid using nested email programs to send out your confirmation emails. Use the event program's smart campaign instead, as shown above.
Tip
It can take up to 48 hours for the data to appear in Marketo. If after waiting that long you still don't see anything, select Refresh from Webinar Provider from the Event Actions menu in the Summary tab of your event.
Viewing the Schedule
In the program schedule view, click the calendar entry for your event. You can see the schedule on the right side of the screen!
Note
To change your event schedule you'll need to edit the webinar on WebEx. | https://docs.marketo.com/display/public/DOCS/Create+an+Event+with+WebEx | 2018-02-18T02:54:25 | CC-MAIN-2018-09 | 1518891811352.60 | [array(['/download/attachments/557070/attach.png', None], dtype=object)
array(['/download/attachments/557070/attach.png', None], dtype=object)
array(['/download/attachments/557070/pin_red.png', None], dtype=object)
array(['/download/attachments/557070/attach.png', None], dtype=object)
array(['/download/attachments/557070/burn.png', None], dtype=object)
array(['/download/attachments/557070/pin_red.png', None], dtype=object)
array(['/download/attachments/557070/attach.png', None], dtype=object)] | docs.marketo.com |
To deploy the VMware Identity Manager Desktop application to multiple Windows systems and have the same configuration settings applied to all of those systems, you can implement a script that installs theVMware Identity Manager Desktop application using the command-line installation options.
About this task #GUID-43458142-EC18-4EE7-AD9D-13B7CC6FEF44.
Prerequisites
Verify that the Windows systems are running Windows operating systems that are supported for the version of the VMware Identity Manager Desktop application you are installing. See the VMware Identity Manager User Guide or the release notes.
Verify that the Windows systems have supported browsers installed.
If you want the ability to run a command to familiarize yourself with the available options before you create the deployment script, verify that you have a Windows system on which you can run that command. The command to list the options is only available on a Windows system. See Command-Line Installer Options for VMware Identity Manager Desktop.
Procedure
- Obtain the VMware Identity Manager Desktop installer's executable file and locate that executable file on the system from which you want to silently run the installer.
One method for obtaining the executable file is to download it using the your VMware Identity Manager system's download page. If you have set up your VMware Identity Manager system to provide the Windows application installer from the download page, you can download the executable file by opening the download page's URL in a browser.
- Using the installer's command-line options, create a deployment script that fits the needs of your organization.
Examples of scripts you can use are Active Directory group policy scripts, login scripts, VB scripts, batch files, SCCM, and so on.
For example, if your VMware Identity Manager instance has a URL of, you want to silently install the Windows client to Windows systems that you expect will be used off the domain, with the ThinApp deployment mode set to download mode, and have the VMware Identity Manager Desktop application sync with the server every 60 seconds, that of your downloaded file.
- Run the deployment script against the Windows systems.
Results.
What to do next
Verify that VMware Identity Manager Desktop is properly installed on the Windows systems by trying some of the typical user tasks. | https://docs.vmware.com/en/VMware-AirWatch/9.2/com.vmware.wsp-resource/GUID-D88D19EA-41A0-4060-8E05-C1D724F25764.html | 2018-02-18T03:09:29 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.vmware.com |
The vmrun commands have syntax and other requirements that you must follow. Path to VMX FileVMware stores virtual machines as a package that includes the virtual machine settings file, filename.vmx, and the virtual disks. Disabling Dialog BoxesTo prevent the vmrun utility from failing when you provide user input through a dialog box, you can disable dialog boxes. Syntax of vmrun CommandsThe vmrun commands are divided into function categories. Examples of vmrun CommandsThe command-line examples that follow work on VMware Fusion. Ubuntu16 is the virtual machine example for Linux and Win10 is the virtual machine example for Windows. Parent topic: Using the vmrun Command to Control Virtual Machines | https://docs.vmware.com/en/VMware-Fusion/10.0/com.vmware.fusion.using.doc/GUID-3E063D73-E083-40CD-A02C-C2047E872814.html | 2018-02-18T03:06:51 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.vmware.com |
vCloud Director cells use a database to store shared information. This database must exist before you can complete installation and configuration of vCloud Director software.
About this task
Note:
Regardless of the database software you choose, you must create a separate, dedicated database schema for vCloud Director to use. vCloud Director cannot share a database schema with any other VMware product. | https://docs.vmware.com/en/vCloud-Director/9.0/com.vmware.vcloud.install.doc/GUID-A3CDF724-7BFA-4BD0-95C4-55AC7A9F4055.html | 2018-02-18T03:09:08 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.vmware.com |
Error getting tags :
error 404Error getting tags :
error 404
set the shadowOffset of button "Prospects" to 2
set the shadowOffset of field "Help" to -6
Use the shadowOffset property to change the appearance of drop shadows.
Value:
The shadowOffset of an object is an integer between -128 and 127.
By default, the shadowOffset of a newly created object is 4.
Because this also changes the position of the imaginary light source that casts the shadow, the shadowOffset of all objects on a card should usually be the same.
The shadowOffset specifies how far the shadow extends from the edge of the object. If the shadowOffset is positive, the imaginary light source is at the upper left corner of the screen, so the drop shadow falls below and to the right of the object. If the shadowOffset is negative, the direction of the light source is reversed, and the drop shadow falls above and to the left of the object.
If the object's shadow property is false, the shadowOffset has no effect. | http://docs.runrev.com/Property/shadowOffset | 2018-02-18T03:30:13 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.runrev.com |
Finance and Operations cloud platform monthly updates FAQ
This topic provides some key information about the monthly updates of the Microsoft Dynamics 365 for Finance and Operations cloud platform.
What's the rationale behind the cloud platform monthly updates?
The cloud platform is locked as of Dynamics 365 for Operations platform update 3. Locking the platform enables rich customizations that use extensions while allowing you to make updates without costly code upgrades. Starting with platform update 4, the cloud platform releases monthly updates so that new and existing environments can stay up-to-date with the latest innovations with a click of a button.
Monthly updates are backward compatible and non-breaking. An explicit opt-in option will be added for features that alter the behavior of existing features.
How can I update my environment to the latest monthly update?
To install the latest monthly platform update on an existing environment, go to Lifecycle Services (LCS). In the Shared asset library, select the Software deployable package tab. You will find the latest platform update package that you can deploy. For example, the deployable package for platform update 5 is shown below. This package can be imported to the project's asset library and then can be applied to a specific environment through the update flows. For more details, see Upgrade Finance and Operations to the latest platform update.
New environments that are deployed will include the latest platform update.
How do I know what's changed in the monthly platform update?
To see a list of the new or changed features in the latest monthly update, click here.
What should I test to approve the platform monthly update?
Monthly platform updates are backward compatible and non-breaking. We recommend that you run your main business process regressions tests, and then deploy into PROD.
We recommend that you automate functional validations to reduce the validation effort.
How long can I stay on a specific monthly update?
You can stay up to 12 months on a monthly platform update. However, any hotfix that you need will require you to take the latest monthly update available. Typically updates fix problems with or enable new features in Finance and Operations, so you are highly encouraged to keep up to date. For more information, see Online service and on-premises software lifecycle policy.
Can I get a hotfix instead of the full monthly update?
No. All hotfixes are rolled into the cumulative monthly updates. Platform updates have been cumulative in the past too. You will need to apply the latest monthly platform update to get a fix available in any of the interim updates. For instance, if you are on update 3 and the hotfix that you need is in update 4 but the latest update available is update 7, you need to apply update 7 which will include all fixes in update 4.
Will I need to update my customizations for a monthly update?
No. Monthly platform updates do not require you to upgrade your code customizations (Partner or ISV).
How do I get application updates?
Application updates (X++ and binary) are available in the update tiles based on those applicable to a specific environment. Application updates can be searched for and applied as needed. All available application updates are applicable to the latest platform update. See the details for the release in the Online service and on-premises software lifecycle policy.
If you are already on platform update 4 or later, applying an application binary update will also update your Finance and Operations platform to the latest release.
What is the guidance to customers who are going live?
The recommendation is that you sign off with testing the platform update that's no more than a month before go live. The expectation is that you will test all scenarios and sign off using the T - 1 month platform update. This ensures that you are on the latest platform update with all available fixes.
What is a planned maintenance update?
A planned maintenance update is explained in Planned maintenance window FAQ.
Is there any type of early access program for the Finance and Operations platform?
Yes. There are standard and targeted releases of the Finance and Operations platform. For more information, see Standard and targeted platform releases. | https://docs.microsoft.com/bg-bg/dynamics365/unified-operations/dev-itpro/sysadmin/faq-platform-monthly-updates | 2018-02-18T02:44:09 | CC-MAIN-2018-09 | 1518891811352.60 | [array(['media/deployable-package-in-lcs.png', 'Deployable package in LCS'],
dtype=object)
array(['media/application-and-binary-update-tiles-146x300.png',
'Application and binary update tiles'], dtype=object) ] | docs.microsoft.com |
AppDynamics uses match conditions in rules that specify entities to be monitored or excluded from monitoring. You configure match conditions to fine-tune transaction detection, backend detection, data collectors, EUM injection, health rules, etc.
A match condition is a comparison consisting of:
- A match criterion (such as a method name, servlet name, URI, parameter, hostname, etc.)
- A comparison operator typically selected from a drop-down list
- A value
The entities and values being compared and the list of appropriate comparison operators vary depending on the type of configuration.
Example Match Criteria for a Servlet-based Request
The following example is from a custom match rule named BuyBook used for detecting the business transaction. Detection is based on the discovery of the string "/shopping" in the URI and of a POST parameter with the value "Book" for the itemid parameter. When AppDynamics receives a servlet-based request matching these conditions, it monitors the business transaction.
Case-Sensitivity
Match rules are case sensitive. For example, a rule that specifies a match on "cart" will not match the string "addToCart".
To force a match rule to be case-insensitive use a regular expression. The following match rule matches "addToCart" as well as "addTocart".
To Negate a Match Condition
To reverse a match condition, use the gear icon and check the NOT condition check box.
For example, if you want to set a condition where "Port DOES NOT Equal 80":
- Configure Port Equals 80.
- Click the gear icon.
- Check the NOT checkbox. | https://docs.appdynamics.com/display/PRO14S/Match+Rule+Conditions | 2018-02-18T03:22:44 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.appdynamics.com |
Customizing Pawtucket
This page contains instructions on how to customize your installation of Pawtucket, the public-access front end for CollectiveAccess. Click here for installation instructions.
You can make many customizations and modifications to the base installation of Pawtucket by modifying the configuration files (in app/conf and your theme's conf directory) and the files within the themes directory. The themes directory houses the stylesheets, graphics and views used to generate the site as well as a set of configuration files that override those in app.conf. The more advanced user looking to change the core functionality of Pawtucket will need to access the controller files in app/controllers/. The controllers, which extend upon functionality in the underlying library, package and pass variables to their corresponding views.
Visit the following pages for instructions on customizing Pawtucket to meet your needs:
- Basic application configuration - Any customization process should begin with modifying the application configuration defined in app/conf/app.conf. This page explains what settings you should consider changing to quickly establish the overall functionality of your site.
- Styling Pawtucket - Using themes - In addition to an overview of how themes are used by Pawtucket, this page contains a description of each view in Pawtucket's default theme.
- Making static pages - This page contains instructions on how to add additional pages of static HTML content to your Pawtucket site.
- Pawtucket section by section - An overview of how to customize each section of Pawtucket.
- Plugins - An overview of how plugins are used to create the Gallery section in Pawtucket. For more information on creating plugins for Pawtucket, see Pawtucket plugins.
sphinx | http://docs.collectiveaccess.org/index.php?title=Customizing_Pawtucket&oldid=6147 | 2018-02-18T03:34:11 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.collectiveaccess.org |
Release Notes for Pawtucket2 1.7.5
Release date: August 5, 2017
Pawtucket2 is CollectiveAccess' public access interface. You can use it to publish your collection to the world or within your own network, with open access or password protection. If you want a web site to present the content of your Providence catalogue then Pawtucket2 is what you need.
Version 1.7.5 is a maintenance release with a variety of bug fixes and a handful of new features, including the reintroduction of in-document PDF searching. See the change log at for details.
The release is available at
Release notes for other 1.7 releases are available at:
Release notes for the compatible version of Providence are available here. | http://docs.collectiveaccess.org/index.php?title=Release_Notes_for_Pawtucket2_1.7.5&oldid=6397 | 2018-02-18T03:35:47 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.collectiveaccess.org |
The fields parameter is used to control which fields, of an resource, should be included in the JSON response. The fields parameter can be applied to any GET request. The fields parameter value is specified as a single field name, or a comma separated list of field names, that should be included in the response.
For example, this query returns only the id and name fields of all Deployments in the catalog:
GET Accept: application/json | https://release-2-11-0.docs.nirmata.io/restapi/url_parameters/fields/ | 2022-01-29T03:27:48 | CC-MAIN-2022-05 | 1642320299927.25 | [] | release-2-11-0.docs.nirmata.io |
A purchase journey involves three parts: the order, the product or service, and the payment. With our Unified Commerce solution you can offer your customers a flexible and consistent experience, regardless of where they placed their order, where the product comes from, and where you accept their payment.
Endless aisle: let shoppers place ecommerce orders in your store.
If a product is out of stock or not sold in your store, shoppers can order and pay for the product in your store and have it delivered to their home. This can involve a dedicated in-store kiosk with a tablet and a payment terminal, but there are various other setups possible.
Pay by Link: send your customers a payment link to complete their payment.
Pay by Link is an alternative way to reach your customers in combination with your existing online or point-of-sale integration. In addition to your web, in-app, or in-store store checkout, you can use Pay by Link to accept orders through channels like email, your call center, or social media. You can even launch a QR code with a payment link on your payment terminal, to let in-store shoppers pay online.
Referenced refunds: offer the flexibility of cross-channel returns.
On our unified payments platform, every payment has a unique reference. This reference allows you to keep track of payments and refunds, and to accept returns in your store regardless of whether the product was bought in-store or online.
In-app and mobile cross-channel journeys: customers use your app or web store on their mobile device to place an order, and then collect the product in person. Or in your store or other physical location, customers complete the payment on their own using their mobile device, instead of going to the cash register.
Some examples are:
- Click and collect: customers make a purchase online, and pick up the product in a store.
- Reserve and collect: the customer places a reservation online, and goes to the store or other physical location to make the payment and collect the product or service.
- Table self-order and Close bill at table: at your restaurant or bar, customers use their own mobile device to order, and to pay their bill online.
Self-scan: in your store, customers use your app on their mobile device to scan product barcodes and pay online.
- Pay anywhere in store: have mobile or alternative checkouts in and around the store.
Using mobile Verifone or Android payment terminals, Pay by Link, or kiosks, you are no longer limited to fixed cash registers in the store. Pay-anywhere options can eliminate queues and reduce congestion in certain areas of a store. | https://docs.adyen.com/pt/unified-commerce/shopper-experiences | 2022-01-29T05:17:22 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.adyen.com |
Overview – & Canada)' booking was made for 9:00 AM. If you switch your timezone to 'Pacific Time (US & Canada)', the same booking will be for 12:00PM.
So, after you change your Time Zone, you will have to go through all of your existing bookings and change the booking times to the correct times.
How to Safely Change your Time Zone:
- IMPORTANT: Go to Bookings -> Download CSV, choose the 2nd FORMAT and download a spreadsheet of all of your existing bookings — so you know what the original booking times were!
- Go to My Account -> Subdomain tab and change your Time Zone (please note how many hours ahead/behind your new Time Zone is to the original).
- Go through each of your existing Bookings and change the booking times to the correct one. | https://docs.launch27.com/knwbase/change-time-zone/ | 2022-01-29T05:04:46 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.launch27.com |
MicroPython string interning¶
MicroPython uses string interning to save both RAM and ROM. This avoids having to store duplicate copies of the same string. Primarily, this applies to identifiers in your code, as something like a function or variable name is very likely to appear in multiple places in the code. In MicroPython an interned string is called a QSTR (uniQue STRing).
A QSTR value (with type
qstr) is a index into a linked list of QSTR pools.
QSTRs store their length and a hash of their contents for fast comparison during
the de-duplication process. All bytecode operations that work with strings use
a QSTR argument.
Compile-time QSTR generation¶
In the MicroPython C code, any strings that should be interned in the final
firmware are written as
MP_QSTR_Foo. At compile time this will evaluate to
a
qstr value that points to the index of
"Foo" in the QSTR pool.
A multi-step process in the
Makefile makes this work. In summary this
process has three parts:
Find all
MP_QSTR_Footokens in the code.
Generate a static QSTR pool containing all the string data (including lengths and hashes).
Replace all
MP_QSTR_Foo(via the preprocessor) with their corresponding index.
MP_QSTR_Foo tokens are searched for in two sources:
All files referenced in
$(SRC_QSTR). This is all C code (i.e.
py,
extmod,
ports/stm32) but not including third-party code such as
lib.
Additional
$(QSTR_GLOBAL_DEPENDENCIES)(which includes
mpconfig*.h).
Note:
frozen_mpy.c (generated by mpy-tool.py) has its own QSTR generation
and pool.
Some additional strings that can’t be expressed using the
MP_QSTR_Foo syntax
(e.g. they contain non-alphanumeric characters) are explicitly provided in
qstrdefs.h and
qstrdefsport.h via the
$(QSTR_DEFS) variable.
Processing happens in the following stages:
qstr.i.lastis the concatenation of putting every single input file through the C pre-processor. This means that any conditionally disabled code will be removed, and macros expanded. This means we don’t add strings to the pool that won’t be used in the final firmware. Because at this stage (thanks to the
NO_QSTRmacro added by
QSTR_GEN_CFLAGS) there is no definition for
MP_QSTR_Fooit passes through this stage unaffected. This file also includes comments from the preprocessor that include line number information. Note that this step only uses files that have changed, which means that
qstr.i.lastwill only contain data from files that have changed since the last compile.
qstr.splitis an empty file created after running
makeqstrdefs.py spliton qstr.i.last. It’s just used as a dependency to indicate that the step ran. This script outputs one file per input C file,
genhdr/qstr/...file.c.qstr, which contains only the matched QSTRs. Each QSTR is printed as
Q(Foo). This step is necessary to combine the existing files with the new data generated from the incremental update in
qstr.i.last.
qstrdefs.collected.his the output of concatenating
genhdr/qstr/*using
makeqstrdefs.py cat. This is now the full set of
MP_QSTR_Foo’s found in the code, now formatted as
Q(Foo), one-per-line, with duplicates. This file is only updated if the set of qstrs has changed. A hash of the QSTR data is written to another file (
qstrdefs.collected.h.hash) which allows it to track changes across builds.
Generate an enumeration, each entry of which maps a
MP_QSTR_Footo it’s corresponding index. It concatenates
qstrdefs.collected.hwith
qstrdefs*.h, then it transforms each line from
Q(Foo)to
"Q(Foo)"so they pass through the preprocessor unchanged. Then the preprocessor is used to deal with any conditional compilation in
qstrdefs*.h. Then the transformation is undone back to
Q(Foo), and saved as
qstrdefs.preprocessed.h.
qstrdefs.generated.his the output of
makeqstrdata.py. For each
Q(Foo)in qstrdefs.preprocessed.h (plus some extra hard-coded ones), it outputs
QDEF(MP_QSTR_Foo, (const byte*)"hash" "Foo").
Then in the main compile, two things happen with
qstrdefs.generated.h:
In qstr.h, each QDEF becomes an entry in an enum, which makes
MP_QSTR_Fooavailable to code and equal to the index of that string in the QSTR table.
In qstr.c, the actual QSTR data table is generated as elements of the
mp_qstr_const_pool->qstrs.
Run-time QSTR generation¶
Additional QSTR pools can be created at runtime so that strings can be added to them. For example, the code:
foo[x] = 3
Will need to create a QSTR for the value of
x so it can be used by the
“load attr” bytecode.
Also, when compiling Python code, identifiers and literals need to have QSTRs created. Note: only literals shorter than 10 characters become QSTRs. This is because a regular string on the heap always takes up a minimum of 16 bytes (one GC block), whereas QSTRs allow them to be packed more efficiently into the pool.
QSTR pools (and the underlying “chunks” that store the string data) are allocated on-demand on the heap with a minimum size. | https://docs.micropython.org/en/v1.15/develop/qstr.html | 2022-01-29T04:15:44 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.micropython.org |
CONST
Understanding how to write a [CONST] section in a Parsing Rules file and the syntax to use.
A
CONSTsection is used to define strings and numbers that can be re-used multiple times within XQL statements in other
INGESTsections by using
$constName. This can be helpful to avoid writing the same value in multiple sections, similar to constants in modern programming languages.
For example:
[CONST] DEFAULT_DEVICE_NAME = "firewall3060"; // string FILE_REGEX = "c:\\users\\[a-zA-Z0-9.]*"; // complex string my_num = 3; /* int */
An example of using a
CONSTinside XQL statements in other
INGESTsections using
$constName:
The dollar sign (
$) must be adjacent to the
[CONST]name, without any whitespace in between.
... | filter device_name = $DEFAULT_DEVICE_NAME | alter new_field = JSON_EXTRACT(field, $FILE_REGEX) | filter age < $MAX_TIMEOUT | join type=$DEFAULT_JOIN_TYPE conflict_strategy=$DEFAULT_JOIN_CONFLICT_STRATEGY (dataset=my_lookup) as inn url=inn.url ...
NOTICE: Only quoted or integer terminal values are considered valid for
CONSTsections. For example, these will not compile:
[CONST] WORD_CONST = abcde; //invalid func_val = regex_extract(_raw_log, "regex"); // not possible RECURSIVE_CONST = $WORD_CONST; // not terminal - not possible
CONSTsections are meant to replace values. Other types, such as column names, are not supported:
... | filter $DEVICE_NAME = "my_device" // illegal ...
A few more points to keep in mind when writing
CONSTsections.
- CONSTnames are not case sensitive. They can be written in any user-desired casing, such as UPPER_SNAKE, lower_snake, camelCase, and CamelCase. For example,MY_CONST=My_Const=my_const.
- CONSTnames must be unique inside a section, and across all sections of the file. You cannot have the sameCONSTname defined again in the same section, or in any otherCONSTsections in the file.
- Since section order is unimportant, you do not have to declare aCONSTbefore using it. You can have theCONSTsection written below other sections that use thoseCONSTsections.
- ACONSTis an add-on to the Parsing Rule syntax and is optional to configure.
Recommended For You
Recommended Videos
Recommended videos not found. | https://docs.paloaltonetworks.com/cortex/cortex-xdr/cortex-xdr-pro-admin/data-management/create-parsing-rules/parsing-rules-file-structure/const.html | 2022-01-29T04:15:35 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.paloaltonetworks.com |
class Tempfile
A¶ ↑¶ ↑
Explicit close¶ ↑
When a
Tempfile object is garbage collected, or when the Ruby interpreter exits, its associated temporary file is automatically deleted. This means that file... end
Unlink after creation¶ ↑¶ ↑.
Public Class Methods, the created file is not removed automatically. You should use
File.unlink to remove it.
If a block is given, then a
File object will be constructed, and the block is invoked with the object as the argument. The
File object will be automatically closed and the temporary file is removed after the block terminates,
# File lib/tempfile.rb, line 349.
The
basename parameter is used to determine the name of the temporary file. You can either pass a
String or an
Array with 2
String elements. In the former form, the temporary file’s base name will begin with the given string. In the latter form, the temporary file’s base name will begin with the array’s first element, and end with the second element. For example:
file = Tempfile.new('hello') file.path # => something like: "/tmp/hello2843-8392-92849382--0" # Use the Array form to enforce an extension in the filename: file = Tempfile.new(['hello', '.jpg']) file.path # => something like: "/tmp/hello2843-8392-92849382--0.jpg"
The temporary file will be placed in the directory as specified by the
tmpdir parameter. By default, this is
Dir.tmpdir.
file = Tempfile.new('hello', '/home/aisaka') file.path # => something like: "/home/aisaka/hello2843-8392-92849382--0"
You can also pass an options hash. Under the hood,
Tempfile creates the temporary file using
File.open. These options will be passed to
File.open. This is mostly useful for specifying encoding options, e.g.:
Tempfile.new('hello', '/home/aisaka', encoding: 'ascii-8bit') # You can also omit the 'tmpdir' parameter: Tempfile.new('hello', encoding: 'ascii-8bit')
Note:
mode keyword argument, as accepted by
Tempfile, can only be numeric, combination of the modes defined in
File::Constants.
Exceptions¶ ↑
If
Tempfile.new cannot find a unique filename within a limited number of tries, then it will raise an exception.
# File lib/tempfile.rb, line 134
# File lib/tempfile.rb, line 312 def open(*args, **kw) tempfile = new(*args, **kw) if block_given? begin yield(tempfile) ensure tempfile.close end else tempfile end end
Public Instance Methods.
# File lib/tempfile.rb, line 168 def close(unlink_now=false) _close unlink if unlink_now end
Closes and unlinks (deletes) the file. Has the same effect as called
close(true).
# File lib/tempfile.rb, line 175 def close! close(true) end
Opens or reopens the file with mode “r+”.
# File lib/tempfile.rb, line 150 def open _close mode = @mode & ~(File::CREAT|File::EXCL) @tmpfile = File.open(@tmpfile.path, mode, **@opts) __setobj__(@tmpfile) end
Returns the size of the temporary file. As a side effect, the
IO buffer is flushed before determining the size.
# File lib/tempfile.rb, line 234 def size if [email protected]? @tmpfile.size # File#size calls rb_io_flush_raw() else File.size(@tmpfile.path) end end¶ ↑ 212 | https://docs.ruby-lang.org/en/3.1/Tempfile.html | 2022-01-29T04:36:39 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.ruby-lang.org |
Monitor Tunnel Status
Once you create a network tunnel, you can check its status through Umbrella's Overview page.
A network tunnel is displayed as Active when two conditions are met:
- Active traffic is traversing the tunnel
- Logging is enabled in the default cloud-delivered firewall policy
For more information about tunnel status, see What is the difference between Active Network Tunnels and Network Tunnel Status?
For Viptela SD-WAN devices, Umbrella supports Layer 7 health checks of the Cloud Delivered Firewall (CDFW) and Secure Web Gateway (SWG) service through the tunnel. Viptela SD-WAN devices support automated tunnel failover between the primary and standby tunnel depending on the result of the Layer 7 health check.
The Layer 7 health check is restricted to the CDFW and SWG services only, and a successful health check does not guarantee connectivity beyond these services to the Internet.
For more details, refer to the Cisco SD-WAN Security Configuration Guide.
Manual: Fortinet IPsec Deployment Guide < Monitor Tunnel Status > Manage Accounts
Updated 3 days ago | https://docs.umbrella.com/umbrella-user-guide/docs/monitoring-tunnel-status | 2022-01-29T04:46:48 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['https://files.readme.io/916c2ec-tunnel_widget.png',
'tunnel widget.png'], dtype=object)
array(['https://files.readme.io/916c2ec-tunnel_widget.png',
'Click to close...'], dtype=object) ] | docs.umbrella.com |
Namespace ToSic.Sxc.Dnn
This contains interfaces that are specific to 2sxc in Dnn.
The purpose is that both the EAV and 2sxc are meant to be platform agnostic, but Razor and WebApi developers in Dnn still need access to some helpers.
Classes
ApiController
This is the base class for all custom API Controllers.
With this, your code receives the full context incl. the current App, DNN, Data, etc.
DynamicCode
This is a base class for custom code files with context.
If you create a class file for dynamic use and inherit from this, then the compiler will automatically add objects like Link, Dnn, etc. The class then also has AsDynamic(...) and AsList(...) commands like a normal razor page.
Factory
This is a factory to create CmsBlocks, Apps etc. and related objects from DNN.
RazorComponent
The base class for Razor-Components in 2sxc 10+
Provides context infos like the Dnn object, helpers like Edit and much more.
RazorComponentCode
This is the type used by code-behind classes of razor components. Use it to move logic / functions etc. into a kind of code-behind razor instead of as part of your view-template. | https://docs.2sxc.org/api/dot-net/ToSic.Sxc.Dnn.html | 2022-01-29T05:16:16 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.2sxc.org |
Using Advanced Settings
The Advanced settings option enables you to customize mouse controls, keystrokes, screen captures, system and debug logs when you record a bot, and configure the proxy server settings for the web services commands. These settings are updated from .
Advanced settings for recording
- Record Mouse Moves
- Select this option to record the mouse moves that have application-specific meaning. For example, application menus.
- Record Mouse Clicks
- Select this option to record the mouse clicks.
- Record Keystrokes
- Select this option to record keystrokes.
- Capture Screen-shots While Recording
- Select this option to capture and display images of the screenshot when you record bots.
Advanced settings for application location
- Application Path
- Use this option to specify a different application path.
The default application path is the Automation Anywhere Files folder under My Documents. The application path can be set to a local drive or to a network path. The network path could be a mapped drive as well.To set up an application path, ensure that:
- It is unique and not shared between users.
- It is accessible at all times.
- Users have read and write privileges for the application path.
When changing this location, all the tasks are saved in the new location. The new path takes effect when you restart the Enterprise Client.
- After changing the application path, all the triggers, hotkeys, and scheduled bots run as normal.
- However, if the domain name changes, manually update the application path.
- If a network drive is specified, the speed of the bots is determined based on the network speed.
Advanced settings for editing and logging
- Edit Task on double-click in Task List
- Enable this option to change the default setting. By default, a double-click on a TaskBot runs or executes the TaskBot.
- Enable System Logging
- System logs show all the client activities.
- Enable Debug Logging
- To debug errors that appear in Automation Anywhere and related services, choose to enable the logs during task execution (the status bar of the application indicates debug logging is enabled).
Note: By default, the system stores a maximum of ten log files of 1 MB each. The system overwrites the existing log entries when this limit is reached.
- When debug logging is enabled, all Debug, Info, Warning, Error and Fatal logs are saved.
- When debug logging is disabled, only Error and Fatal logs along with a maximum of 256 lines of buffered data of the recently raised Warning and Info logs are saved.
If there are different log configuration files for the applications and services, then the Enable Debug Logging check box is set to an Indeterminate state and the system displays a Debug logging enabled message on the status bar.The following table shows the different states of the Enable Debug Logging check box when you enable and disable the debug logging.Note: If the debug log file is accidentally deleted, the system creates a new file using the default settings when the Client applications are started or when you make any updates on the Options screen.If the debug log file is corrupt, the system takes a backup of the existing file with the filename <originalFileName>_Date_Time_backup.xml and replaces it with the default log file when any application starts or if Options is accessed.
- Clear Logs
- Use this to delete all the application logs. Clearing the logs do not delete the service logs that are common for all users. To delete the service logs, delete them manually from the public documents\Enterprise Client Files\LogFiles folder.Note: The logs for applications that are running are not cleared. To clear the application logs, close all the running applications and manually delete all the files in Application Path\LogFiles folder.
- Export Logs
- To export logs to this folder:
- Create a new folder or select an existing folder.
- Click Select Folder.
Advanced settings for proxy server
- Proxy Server Settings (For Web Service)
- These are applicable for REST and SOAP web service commands. If the environment uses a proxy server, specify a Host Name or IP Address, and a Port Number that is active and within the 0 through 65535 range.Attention:
The restriction on using a Port number from 1023 and above is removed from Version 11.3.3. | https://docs.automationanywhere.com/zh-TW/bundle/enterprise-v11.3/page/enterprise/topics/aae-client/bot-creator/customizing-an-automation-client/using-advanced-settings.html | 2022-01-29T03:28:31 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.automationanywhere.com |
Migrating from an RPM-based Deployment to the Latest 1.10.0 CSD
This topic describes how to migrate from an RPM-based deployment to the latest 1.10.0 CSD and parcel-based deployment.
- Save a backup of the Cloudera Data Science Workbench configuration file located at
/etc/cdsw/config/cdsw.conf.
- Stop the Cloudera Data Science Workbench service in Cloudera Manager.
- Stop the Cloudera Data Science Workbench service in Cloudera Manager.
- Delete the 2 patch files:
/etc/cdsw/patches/default/deployment/ingress-controller.yaml and /etc/cdsw/patches/default/deployment/tcp-ingress-controller.yaml.
- Delete every empty folder from the
/etc/cdsw/patchesdirectory.
- Delete the
/etc/cdsw/patchesdirectory if it is empty.
- (Strongly Recommended) On the master host, backup all your application data that is stored in the
/var/lib/cdswdirectory.10.0 Using Cloudera Manager. You might be able to skip the first few steps assuming you have the wildcard DNS domain and block devices already set up.
- Use your copy of the backup
cdsw.confcreatedproperties and their corresponding Cloudera Manager properties (in bold). Use the search box to bring up the properties you want to modify.
- Click Save Changes.
- Cloudera Manager will prompt you to restart the service if needed.
-:
To upgrade a project to the new engine, go to the project's Settings > Engine page and select the new engine from the dropdown. If any of your projects are using custom extended engines, you will need to modify them to use the new base engine image.
-installed by default.Perform). | https://docs.cloudera.com/cdsw/1.10.0/upgrade/topics/cdsw-migrate-rpm-to-csd.html | 2022-01-29T05:25:00 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.cloudera.com |
Coaching
The coaching system is designed to help managers, supervisors, quality-monitoring personnel, and agents with training processes. In the system, coaching sessions are created for trainees to take part in. The sessions consist of lists of interactions that the trainee is asked to listen to, either in part or in their entirety. In addition, they may include notes, Action Items, and links to other sites. While anyone with the required permissions can open a new coaching session, it is the session's coach who designs the session by defining the Interaction Lists, Action Items, and links, and by adding notes to guide the trainee through the session.
Both novice and experienced personnel can benefit from taking part in a training session; new employees can be introduced to their jobs through a coaching session, and veterans can be encouraged to fine-tune their techniques by means of coaching sessions that highlight specific aspects of their jobs.
Using Coaching Sessions
Working with Coaching Sessions
The coaching Sessions screen lets you see which coaching sessions exist, gives general information about each session, and enables you to access the Session Details screen in which you can manage or take part in individual sessions. To open the Sessions screen:
- In the Main Menu, under Coaching, select Sessions.
1000px
The session list contains the following columns:
Session Statuses
Session statuses are indicated by icons in the Session List and by color-coding in the calendar. The following session statuses are defined:
Action Options
The Actions column contains buttons that you can use to perform an action related to a session in the list:
You can use filters to limit the sessions that are displayed in the list to those that meet criteria you specify. Two types of filters are available:
- General filters, which allow you to filter the list based on user and date, and let you choose whether to include completed sessions in the list.
- Column-based filters, which allow you to filter the list based on session status and trainee.
General Filters
The general filter controls are located at the top of the session list. The following filters are available:
Column-Based Filters
Column-based filters are located in the column headings of the Status and Trainee columns.
To use the Status filter:
- In the heading of the Status column, in the drop-down list, select the status you want to include in the session list. Only sessions with the selected status are displayed in the list. To view sessions with any status, select All.
To activate the Trainee filter:
- In the text box to the right of the Trainee heading, type part of the name or username of the trainee you want to include in the list. As you type, names and user names containing those letters are displayed in a drop-down list. Select the trainee from the list. Only sessions with the selected trainee are displayed in the list.
To deactivate the Trainee filter:
- Delete the text in the text field and then press Enter. Sessions for all trainees are displayed in the list.
You can choose to sort the session list by any column. To sort the session list by a column:
- Click the title of the column.
The lower left area of the Coaching > Session screen contains a calendar in which you can see when the sessions that appear in the list are scheduled to take place. You can choose a monthly, weekly, or daily display. You can also open the Session Details screen for a session directly from the calendar.
500px
To select the type of calendar display:
- In the upper-right of the calendar area, click month, week, or day. The calendar display changes to the selected type.
- In the upper-left of the calendar area, click file:Smicon_displayprevious.png to display the previous time period (for example, if you are displaying a weekly calendar, the previous week), or file:smicon_expandtable.png to display the next time period.
- Click the session in the calendar.
To view session details in the Sessions screen, click the specific session row. The session details are displayed on the right side of the screen.
See Working with the Coaching Session List
The display includes five boxes:
The Session Details page is used to manage an individual session. In the Session Details page you can add a new session, create reports, listen to interactions, and so on.
To edit a session:
- In the Main Menu, under Coaching, select Sessions.
- In the session list, under Actions, click the Set up Coaching Session icon (file:smicon_sessionactionopen.png) associated with the session you want to edit.
- Manage the specific session, properties, notes, links and so on. For additional information, refer to Create and Manage Coaching Sessions
The Sessions screen opens.
You can delete one or more coaching sessions. To delete coaching sessions:
Taking Part in a Session
Trainees take part in a coaching session by opening the session in the Session Details screen. In this screen, they can listen to the interactions, read the notes, open the links, and manage the Action Items attached to the session.
When you are ready to take part in a session, open it in the Session Details screen. To open a session in the Session Details screen, do one of the following:
- In the Views page, in the My Messages widget, click the subject of the message inviting you to the session.
- In the Sessions screen, in the session list, click the name of the session.
When you first open it, the Session Details screen contains the following elements:
Once the session is open, click Start Session to begin taking part in it. The start time of the session is recorded, and the status is changed to In Progress. During the session, click Pause Session to temporarily stop taking part. Click End Session when you have completed the session.
During the session, use the session notes to guide you through the session. Listen to the interaction s in the interaction lists and pay special attention to those parts that are mentioned in the notes.
The interaction lists attached to the session can be opened in an Interaction Grid, and the interaction s can be played back in a Media Player, in the Session Details screen. To view an interaction list:
- On the left side of the screen, under General Details, click the name of the interaction list. An Interaction Grid opens on the right side of the screen, in place of the General, Interaction Lists, and Notes boxes, and lists all of the interaction s in the list. For detailed information about Interaction Grids, see Using the Interaction Grid.
- In the Interaction Grid, click the Play button in the interaction's row. The Media Player opens below the grid and begins to play the interaction. For detailed information about the Media Player, see Using the Media Player.
Create and Manage a New Coaching Session
This section explains how to open a new coaching session, set it up by adding interaction lists, notes, Action Items, and links to the session, and manage its contents once it is set up.
New coaching sessions can be opened in a number of ways:
- In the Coaching page
- In the Views page, in the My Messages widget
- In an Interaction Grid or an Event Grid, from the More menu
This section explains how to open a new session in the Coaching page. For information about opening a coaching session in one of the other ways, please refer to the relevant sections of this manual. Regardless of how you open the coaching session, you will most likely want to add interaction lists, notes, and other items to the session, as explained in this section.
When you open a new coaching session, you specify the name of the session, the coach, the trainee, and the date and time when the trainee should take part in the session. If you wish, you can also add either a public or private note. The session then appears in the list of sessions, but it does not have any content. To open a new coaching session:
- In the Main Menu, in the Coaching drop-down menu, select Sessions. The Sessions screen opens.
- At the top of the Sessions List, click New Session. The New coaching session dialog box opens.
- Fill in the fields as follows:
- Click Save. The coaching session is opened; it appears in the Sessions List and in the schedule below it. It is selected in the Sessions List and its details are displayed in the Sessions screen, to the right of the Sessions List. An invitation message is sent to the My Messages boxes of the coach and the trainee. In addition, if their user profiles include e-mail addresses, notifications are sent to their e-mail addresses. The notification e-mails include an iCal file. When this attachment is opened, the session is automatically added to the user's Outlook calendar.
Once a coaching session is open, you can add interactions to it. Interactions are added as interaction lists. Two types of interaction lists can be added:
- Saved Searches: Sets of interaction search criteria that can be used to retrieve lists of interactions.
- Interaction Lists: Lists of interactions that were created manually by adding specific interactions. The interactions can be added from an Interaction or Event grid or from a Media Player.
Interaction lists can be added in the Session Details screen, in setup mode. Once they have been added, the lists can be modified by adding more interactions and removing existing interactions. Modifications that are made to an interaction list in the Coaching page do not affect the original interaction list.
Adding Existing Global Interaction Lists to a Session
The simplest way to add interaction lists to a coaching session is to select existing global interaction lists and copy them to the session. When a global interaction list is added to a coaching session, a copy of the original list is made and attached to the session. The copy becomes a coaching-session interaction list, and is not linked to the original global interaction list. It can only be viewed from the coaching session. From within the coaching session, you can add or remove interactions from an attached interaction list and change the search criteria of a Saved Search. These modifications do not affect the original, global interaction lists or Saved Searches.
- In the Session List screen, click the Edit icon file:Smicon edit.png associated with the session to which you want to add an interaction list.
- Under General Details, click the Add call list file:Smicon newsession.png icon next to Calls.
- Select an interaction list.The list closes and a copy of the interaction list is created and attached to the session. The added interaction list appears in the relevant section (Interaction Lists or Searches) under General Details.
A list of global interaction lists and global Saved Searches appears.
You can also add existing interaction lists to a coaching session from the Saved Searches page.
Creating New Interaction Lists within a Session
- In the Session List screen, click the Edit icon file:Smicon edit.png associated with the session in which you want to perform a search.
- In the General Details screen, on the left side of the screen click New Search.
- Fill in the search criteria.
- Click Search.
- Select the check box next to each interaction you want to add to the interaction list.
- At the top of the list, under Batch Actions, select Add To > Coaching > New Session.
- Give the session a name and fill in the remaining fields.
- Click Save.
An Interaction Search form opens on the right side of the screen.
The search results appear to the right of the form.
A new sessions window opens.
Modifying Session Interaction Lists
Once an interaction list is attached to a Coaching session, you can add interactions to it or remove them as necessary. Interactions can be added to an interaction list from any Interaction or Event Grid or from the Media Player. You can also remove interactions from a list. If the interaction list was copied from a global list, the global list is not affected by changes you make to the coaching-session list.
To remove interaction from an interaction list:
- Under General Details, click the name of the interaction list. The list opens in an Interaction Grid on the right side of the screen.
- Select the checkbox to the left of each interaction you want to remove from the interaction list.
- Under Batch Actions, select Delete From List. The selected interactions are removed from the list.
Removing an Interaction List from a Session
- Under General Details, click the x to the right of the name of the interaction list. You are prompted to confirm that you want to remove the interaction list from the session.
- Click OK. The interaction list is removed from the session.
Session notes can be defined as public or private.
- Public notes are visible to everyone who accesses the session. They are generally intended to help trainees understand the purpose of the coaching session and draw their attention to the aspects of the interactions that need to be highlighted.
- Private notes are only visible to the person who writes them. You can use them to add reminders or comments to yourself. For example, when you create a session, you can leave yourself a note reminding you to add a link later, when you have the URL of the link.
To add a note to a coaching session:
- In the Session Details screen, in the title bar of the Notes box, click New. A blank New Note form opens.ImportantIf the Notes box is not visible, in the upper-left corner of the screen, click General Details.
- Type the note in the text field.
- In the dropdown menu, select Public if you want the note to be visible to anyone who can access the session or Me if you only want the notes to be visible to you.
- Click Add. The note is added to the session.
To edit or delete a note:
- In the Session Details screen, in the Notes box, select Edit or Delete as necessary.
You can add Action Items to a coaching session. For example, if you want the trainee to perform a particular task as part of a coaching session, you could open an Action Item in that coaching session to specify that task. When you open an Action Item within a coaching session, the Action Item appears in the coaching session and can be managed from there. It also appears in the general Action Items box, like other Action Items, and can be managed from there as well.
To create an Action Item in a coaching session:
- In the Session Details screen, click the Action Items link in the top right corner of the screen.
- Click New Item and configure the fields as necessary.
- Click Ok.The Action Item is created and appears in the Action Items box in the Session Details screen. In addition, it is added to both your general Action Items box and in the assigners general Action Items box.
The Action Item dialog box opens below the link with a list of the existing action items.
You can add links to external files, websites, and SpeechMiner Permalinks to a coaching session. For example, if you want the trainee to look at a relevant website, you can add a link to the website. During the coaching session, when the trainee clicks the link, a new window is opened and the link is displayed. For example, if you add a link to a website, a new browser window opens and displays the website. To add a link to a coaching session:
- In the Session Details screen, on the left side of the screen, under General Details, click the + sign beside Links. An Add Link to Session dialog box opens.
- Under Description, enter a description for the link.
- Under Link Text, enter the location of the link. For example, to add a link to Google, type.
- Click OK. The dialog box closes, and the link is added to the list of links that appears under General Details under Links.
To edit a link:
- Under General Details, under Links, click the file:smicon_sessionactionopen.png beside the link you want to edit. An Add Link to Session dialog box opens.
- Modify the Description and Link Text as required.
- Click OK. The dialog box closes, and the link is modified.
To go to a link:
- Under General Details, click the link. A new browser window opens and displays the link.
To delete a link:
- Under General Details, click the x beside the link you want to delete. You are prompted to confirm that you want to remove the link from the session.
- Click Yes. The link is deleted.
Session properties - name, schedule date, trainee, coach, session type - are defined when you open a new session. You can modify these properties as necessary. To modify session properties:
- In the Session Details screen, in the title bar of the General box, click Edit.ImportantIf the General box is not visible, in the upper-left corner of the screen, click General Details.
- The properties of the session become text fields. Modify the text fields as necessary.
- Click Save. The changes are saved.
Coaching Session Reports
Coaching session reports provide statistical information about the coaching sessions in the Session List. The reports break down the sessions by status and indicate how many have not yet been started by their trainees, how many are in progress, and how many have been completed. In addition, they show how long trainees already spent, on average and in total, on the sessions in each category.
Additional details about the sessions included in the report appear in the lower half of the report. The details are divided into groups by trainee. You can expand a session to see additional information about it. You can also open its Session Details screen directly from the report.
Coaching reports can be generated in the Sessions screen. Two forms of the report are available:
- General report: This report includes all coaching sessions that are included in the set defined by the Session List's general filters. For example, if the general filter specifies My Sessions for Sept. 1st through Sept. 30th, and included Completed sessions, all session that meet these criteria are included in the report. (Column filters do not affect the criteria.)
- User report: This report includes all sessions that are included in the set defined by the Session List's general filters and belong to a specific user. That is, if the user is either the creator, coach, or trainee of a session, the session is included in the report.
- In the Sessions screen, at the top of the list, click the Run Report button file:Sm runreport.png. The report is generated and is displayed in a new browser window.
- In the Sessions screen, in a row in which the user is the trainee of the specified session, under Actions, click file:smicon_sessionactionreport.png . A report for all sessions in which the trainee is either the creator, the coach, or the trainee, is generated and is displayed in a new browser window.
The lower part of a generated report lists the sessions that were included in the report. The sessions are grouped by trainee. The name of each session, the name of the coach, the date it was scheduled to take place or the date on which its status was last changed, the status, and, if the session was already begun or completed, the amount of time the trainee spent on it so far, are displayed. You can view additional details about the session, listen to interactions that the trainee listened to during the session, and open the Session Details screen for the session. To view additional details about a session:
- Click the + beside the session name. Notes and resources (links) included in the session are displayed below the session name, as well as links to all the interactions that the trainee listened to during the session.
- Click the interaction link. The Media Player opens in the window, and plays the interaction.
To open a session in the Session Details screen:
- Click the name of the session. The Session Details screen opens in a new browser window.
Managing Session Types
Each coaching session has a type assigned to it. The type can help identify the purpose of the session and the type of trainee it is intended for. By default, one session type, General, is defined. You can add additional session types as appropriate for your organization.
Once a session type is saved, you cannot delete it. However, you can deactivate it if you do not want it to be available at present. (The General type cannot be deactivated.) In addition, you can modify the name of an existing session type as necessary.
To manage session types:
- In the main menu, under Coaching, select Session Types. The Session Type Manager opens and displays a list of the session types that are currently defined and their statuses.
To add a new session type
- Under New Type, type the name of the new session type.
- Click Add. The session type is added to the list and its status is set to Active.
- Click Save. The Session Type Manager closes, and the new session type is included in the list of available session types.
To activate or deactivate a session type
- Under Is Active, click the check box beside the session type to select or clear it. The session type is active when a checkmark appears in the checkbox.
- Click Save. The Session Type Manager closes, and the change is implemented.
To modify the name of a session type
- In the list of session types, modify the name as necessary.
- Click Save. The Session Type Manager closes, and the change is implemented. | https://docs.genesys.com/Documentation/SPMI/8.5.3/user/coachingsystem | 2022-01-29T03:52:40 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.genesys.com |
Low poly character for your game. Suitable for RPG, Actions and more games.
Disclaimer: The scene portrayed in the TEST and featured images are not included.
Features:
Technical Details
Rigged: (Yes)
Rigged to Epic skeleton: (Yes)
If rigged to the Epic skeleton, IK bones are included: (Yes)
Animated: (No)
Number of Animations: (No)
Animation types (Root Motion/In-place): [No]
Number of characters: 1
Vertex counts of characters: 52540
Number of Materials and Material Instances: Materials - 18, MI - 4
Number of Textures: 56
Texture Resolutions: 4K - All
Supported Development Platforms:
Windows: (Yes)
Mac: (Yes)
Documentation: No
Important/Additional Notes: No | https://docs.unrealengine.com/marketplace/ko/product/yokai | 2022-01-29T03:48:45 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.unrealengine.com |
How to create a custom range¶
Oscar ships with a range model that represents a set of products from your catalogue. Using the dashboard, this can be configured to be:
The whole catalogue
A subset of products selected by ID/SKU (CSV uploads can be used to do this)
A subset of product categories
Often though, a shop may need merchant-specific ranges such as:
All products subject to reduced-rate VAT
All books by a Welsh author
DVDs that have an exclamation mark in the title
These are contrived but you get the picture.
Custom range interface¶
A custom range must:
have a
nameattribute
have a
contains_productmethod that takes a product instance and return a boolean
have a
num_productsmethod that returns the number of products in the range or
Noneif such a query would be too expensive.
have an
all_productsmethod that returns a queryset of all products in the range.
Example:
class ExclamatoryProducts(object): name = "Products including a '!'" def contains_product(self, product): return "!" in product.title def num_products(self): return self.all_products().count() def all_products(self): return Product.objects.filter(title__icontains="!")
Create range instance¶
To make this range available to be used in offers, do the following:
from oscar.apps.offer.custom import create_range create_range(ExclamatoryProducts)
Now you should see this range in the dashboard for ranges and offers. Custom ranges are not editable in the dashboard but can be deleted.
Deploying custom ranges¶
To avoid manual steps in each of your test/stage/production environments, use Django’s data migrations to create ranges. | https://django-oscar.readthedocs.io/en/latest/howto/how_to_create_a_custom_range.html | 2022-01-29T04:21:17 | CC-MAIN-2022-05 | 1642320299927.25 | [] | django-oscar.readthedocs.io |
You can configure the SmartServer for your network and timezone using the SmartServer Configuration page. This section describes how to open the SmartServer Configuration page and use it to configure network and timezone settings.
There is also a video on the SmartServer IoT Training Videos page that describes the tabs of the Configuration UI for the SmartServer IoT. Click here for the Configuration UI Tour video.
This section consists of the following:
Accessing the SmartServer Configuration Page
To open the SmartServer configuration page, follow these steps:
Avoiding Browser Security Warnings
Note: To avoid having browsers display a security warning when users access the SmartServer CMS or the SmartServer Configuration Page, you can set up port forwarding for the SmartServer IoT and switch from using self-signed certificates. If you have access to the local DNS server, you can also set up a CNAME record for the SmartServer host address.
- Connect to the SmartServer as described in the section Connecting to Your SmartServer.
- On a client computer, open a compatible web browser as described in the section Open the SmartServer CMS.
- In your browser, specify one of the following:−nnnnnn(where nnnnnn is the SmartServer's install code. See Connecting to the Console using a LAN Connection as described in the section Connecting to Your SmartServer.)
The login page appears.
- Click Login.
The log in dialog box appears prompting you to enter a username and password.
- Enter the username and password. The default username is apollo and the default password is printed on the label on the bottom of the SmartServer. If you changed either the username or password, use the username and password you selected. Use the defaults if you did not change them.
- Click OK to login as appropriate for your web browser.
- The Network Configuration page appears enabling you to view and edit these settings. The menu bar at the top of the page provides the ability to go to the System, LON, BACnet, OPC UA, RS-485, and Features configuration pages, as well as the CMS. These configuration pages are described below.
Viewing System Information
To view your SmartServer IoT system information, click the System tab on the Network Configuration page .
The System Configuration page appears with the system information displayed at the top of the page. The system information includes the following:
- Version
- Serial Number
- MACID
- Install Code
- Segment Provisioning Status
- Segment ID
- LON Network Management Mode
- System Uptime
For more information about the settings on the System Configuration page, see (Optional) Secure Your SmartServer.
For more information about switching the LON Network Management Mode, see (Optional) Switch Off LON Management.
For more information about changing passwords, see Managing Passwords in the (Optional) Secure Your SmartServer section.
For more information about rebooting the system, see Rebooting Your SmartServer.
For more information about reseting the system to defaults, see Resetting the SmartServer to Factory Defaults.
For more information about updating the system, see Updating the SmartServer Software.
Configuring SmartServer Network Settings
You can configure IP network settings for the SmartServer from the Network tab of SmartServer Configuration page. The Network tab displays the LAN and WAN interface, current IP address, network mask (also known as the subnet mask), and default gateway. It also provides the option to set the network mode: Startup Mode - DHCP with rapid fallback to static, DHCP, or Static IP Address. The default setting is Startup Mode - DHCP with rapid fallback to static. Click the mode setting to change the network mode.
Select DHCP or Static IP Address as appropriate for your system from the list of modes. To automatically request and receive an IP address from a DHCP server, select DHCP. To assign a static IP address, select Static IP Address.
Select DHCP if you network uses DHCP for network device IP address assignment. For optimal performance do not select Startup Mode - DHCP with rapid fallback to static. Selecting DHCP in a network using DHCP ensures that the SmartServer always looks for an IP address from the local DHCP server, instead of quickly falling back to its static IP address. In this configuration, a SmartServer that reverts to a static IP address will no longer function as intended.
The Startup Mode - DHCP with rapid fallback to static is global for eth0 and eth1. Therefore, if you configure the eth0/eth1 to DHCP, then eth1/eth0 is no longer in Startup Mode - DHCP with rapid fallback to static. Changing one to DHCP, automatically sets the other to DHCP, and similarly for Startup Mode. Both LAN and WAN interfaces must be changed so neither is in Startup Mode - DHCP with rapid fallback to static.
The SmartServer hostname is displayed in the Hostname field and can be updated as needed if you are using self-signed certificates (not if you are using built-in, signed certificates).
You must save any changes you make to the network configuration settings by clicking the Update button at the bottom of the page.
Verifying the SmartServer Location
To verify or set the SmartServer latitude and longitude position, follow these steps:
- Click the SmartServer IoT device on the Devices widget.
Verify that the latitude and longitude settings are accurate for the location of your SmartServer.
If so, click CANCEL to return to the Devices widget.
If not, click the Set Location button ( ) and set the latitude and longitude values as appropriate for your system.Note: To best view all of the information on this view, click the Expand button ().
- Click SAVE, and then click SAVE again to return to the Devices widget.
Verifying the SmartServer Timezone
To verify or set the SmartServer timezone, follow these steps:
- Click the SmartServer IoT device on the Devices widget.
- Verify that the timezone setting is accurate for the location of your SmartServer.
If so, click CANCEL to return to the Devices widget.
If not, select the appropriate timezone.
- Click SAVE to return to the Devices widget.
Reboot to Occur
Note: If your changed location results in a change to the SmartServer's local timezone, the SAVE will cause the SmartServer to reboot. If a reboot occurs, allow 5-8 minutes for the process to complete before continuing to work.
Configuring SMTP Settings
The Settings widget, which provides the ability to configure SMTP server settings, is available with SmartServer 2.6 and higher.
You can configure SMTP setting to enable password recovery and alarm notification emails. To configure the SMTP settings, follow these steps:
- Open the CMS Settings widget.
- Click SMTP Settings.
The SMTP Settings view appears.
- Enter the SMTP settings, including:
- Host – hostname, 4-40 characters (required)
- Port – port value between 1 and 65536 (required)
- From – email address, 1-40 characters (required)
- User – username for your email account on the SMTP server, 1-40 characters (required)
- Password – password for your email account on the SMTP server (required), click the Show Password button (
) to display the password
- Set options, including:
- SMTP server requires authentication
– set to on if the
- SMTP server requires Transport Layer Security (TLS) connection– set to on if the
- Click TEST to send a test email.
- Click SAVE to save SMTP settings. | https://docs.adestotech.com/pages/viewpage.action?pageId=43376327 | 2022-01-29T05:42:19 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.adestotech.com |
Edge for Private Cloud v4.19.01
After you install the
apigee-mtls component on every node in the cluster, you
must configure and initialize it. You do this by generating a certificate/key pair and updating the
configuration file on your administration machine. You then deploy
the same generated files to all nodes in the cluster and initialize the local
apigee-mtls component.
Configure apigee-mtls (after initial installation)
This section describes how to configure Apigee mTLS for a single data center immediately after the initial installation. For information on making updates to an existing installation of Apigee mTLS, see Change an existing apigee-mtls configuration. For information on configuring multiple data centers, see Configure multiple data centers for Apigee mTLS.
The following is the general process for configuring
apigee-mtls:
- Update your configuration file: On your administration machine, update the configuration file to include the apigee-mtls settings.
- Install Consul and generate credentials: Install Consul and use it to generate the TLS credentials (once only).
In addition, edit your Apigee mTLS configuration file to:
- Add the credentials information
- Define the cluster's topology
Note that you can use your existing credentials or generate them with Consul.
- Distribute the configuration file and credentials: Distribute the same generated certificate/key pair and updated configuration file to all nodes in your cluster.
- Initialize apigee-mtls: Initialize the apigee-mtls component on each node.
Each of these steps is described in the sections that follow.
Step 1: Update your configuration file
This section describes how to modify your configuration file to include mTLS configuration properties. For more general information about the configuration file, see Creating a configuration file.
After you update the configuration file with the mTLS-related properties, you then copy it to
all nodes in the cluster before you initialize the
apigee-mtls component on those
nodes.
Commands that reference the configuration file use "config_file" to indicate that its location is variable, depending on where you store it on each node.
To update the configuration file:
- On your administration machine, open the configuration file.
- Copy the following set of mTLS configuration properties and paste them into the configuration file:
ALL_IP="ALL_PRIVATE_IPS_IN_CLUSTER" ZK_MTLS_HOSTS="ZOOKEEPER_PRIVATE_IPS" CASS_MTLS_HOSTS="CASSANDRA_PRIVATE_IPS" PG_MTLS_HOSTS="POSTGRES_PRIVATE_IPS" RT_MTLS_HOSTS="ROUTER_PRIVATE_IPS" MS_MTLS_HOSTS="MGMT_SERVER_PRIVATE_IPS" MP_MTLS_HOSTS="MESSAGE_PROCESSOR_PRIVATE_IPS" QP_MTLS_HOSTS="QPID_PRIVATE_IPS" LDAP_MTLS_HOSTS="OPENLDAP_PRIVATE_IPS" MTLS_ENCAPSULATE_LDAP="y" ENABLE_SIDECAR_PROXY="y" ENCRYPT_DATA="BASE64_GOSSIP_MESSAGE" PATH_TO_CA_CERT="PATH/TO/consul-agent-ca.pem" PATH_TO_CA_KEY="PATH/TO/consul-agent-ca-key.pem" APIGEE_MTLS_NUM_DAYS_CERT_VALID_FOR="NUMBER_OF_DAYS"
Set the value of each property to align with your configuration.
The following table describes these configuration properties:
In addition to the properties listed above, Apigee mTLS uses several additional properties when you install it on a multi-data center configuration. For more information, see Configure multiple data centers.
- Be sure to set the ENABLE_SIDECAR_PROXY property to "y".
- Update the IP addresses in the host-related properties. Be sure to use the private IP addresses when referring to each node, not the public IP addresses.
In later steps, you will set the values of the other properties such as
ENCRYPT_DATA,
PATH_TO_CA_CERT, and
PATH_TO_CA_KEY. You do not set their values yet.
When editing the apigee-mtls configuration properties, note the following (an illustrative example appears at the end of this procedure):
- All properties are strings; you must wrap the values of all properties in single or double quotes.
- If a host-related value has more than one private IP address, separate each IP address with a space.
- Use private IP addresses and not host names or public IP addresses for all host-related properties in the configuration file.
- The order of IP addresses in a property value must be in the same order in all configuration files across the cluster.
- Save your changes to the configuration file.
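As an illustration of the rules above, a host property that spans three ZooKeeper nodes holds the three private IP addresses quoted, space-separated, and in the same order in every node's copy of the configuration file. The addresses below are placeholders, not values from any real installation:

# Placeholder private IP addresses -- substitute your own and keep the
# order identical in every configuration file across the cluster.
ZK_MTLS_HOSTS="10.10.0.11 10.10.0.12 10.10.0.13"
CASS_MTLS_HOSTS="10.10.0.11 10.10.0.12 10.10.0.13"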
Step 2: Install Consul and generate credentials
This section describes how to install Consul and generate credentials.
You must choose one of the following methods to generate credentials:
- Create your own CA using Consul, as described in this section (recommended)
- Use the credentials of an existing CA with Apigee mTLS (advanced)
About the credentials
The credentials consist of the following:
- Certificate: The TLS certificate hosted on each node
- Key: The TLS public key hosted on each node
- Gossip message: A base-64 encoded encryption key
You generate a single version of each of these files once only. You copy the key and certificate files to all the nodes in your cluster, and add the encryption key to your configuration file that you also copy to all nodes.
For more information about Consul's encryption implementation, see the following:
Install Consul and generate credentials
You use a local Consul binary to generate credentials that Apigee mTLS uses for authenticating secure communications among the nodes in your Private Cloud cluster. As a result, you must install Consul on your administration machine before you can generate credentials.
To install Consul and generate mTLS credentials:
- On your administration machine, download the Consul 1.6.2 binary from the HashiCorp website.
- Extract the contents of the downloaded archive file. For example, extract the contents to
/opt/consul/.
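For example, the download and extraction might look like the following; the release URL and archive name are illustrative, so use the package that matches your platform:

# Illustrative commands -- adjust the release URL and file name for your platform.
curl -O https://releases.hashicorp.com/consul/1.6.2/consul_1.6.2_linux_amd64.zip
mkdir -p /opt/consul
unzip consul_1.6.2_linux_amd64.zip -d /opt/consul/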
- On your administration machine, create a new Certificate Authority (CA) by executing the following command:
/opt/consul/consul tls ca create
Consul creates the following files, which are a certificate/key pair:
- consul-agent-ca.pem (certificate)
- consul-agent-ca-key.pem (key)
By default, certificate and key files are X509v3 encoded.
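If you want to verify what was generated, you can inspect the certificate with a standard OpenSSL command (this assumes openssl is available on your administration machine):

# Print the CA certificate's subject and expiration date.
openssl x509 -in consul-agent-ca.pem -noout -subject -enddate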
Later, you will copy these files to all nodes in the cluster. At this time, however, you must only decide where on the nodes you will put these files. They should be in the same location on each node. For example,
/opt/apigee/.
- In the configuration file, set the value of
PATH_TO_CA_CERT to the location to which you will copy the consul-agent-ca.pem file on the node. For example:
PATH_TO_CA_CERT="/opt/apigee/consul-agent-ca.pem"
- Set the value of
PATH_TO_CA_KEY to the location to which you will copy the consul-agent-ca-key.pem file on the node. For example:
PATH_TO_CA_KEY="/opt/apigee/consul-agent-ca-key.pem"
- Create an encryption key for Consul by executing the following command:
/opt/consul/consul keygen
Consul outputs a randomized string that looks similar to the following:
QbhgD+EXAMPLE+Y9u0742X/IqX3X429/x1cIQ+JsQvY=
- Copy the string and set it as the value of the
ENCRYPT_DATA property in your configuration file. For example:
ENCRYPT_DATA="QbhgD+EXAMPLE+Y9u0742X/IqX3X429/x1cIQ+JsQvY="
- Save your configuration file.
The following example shows the mTLS-related settings in a configuration file (with example values):
...
IP1=10.126.0.121
IP2=10.126.0.124
IP3=10.126.0.125
IP4=10.126.0.127
IP5=10.126.0.130
ALL_IP="$IP1 $IP2 $IP3 $IP4 $IP5"
LDAP_MTLS_HOSTS="$IP3"
ZK_MTLS_HOSTS="$IP3 $IP4 $IP5"
CASS_MTLS_HOSTS="$IP3 $IP4 $IP5"
PG_MTLS_HOSTS="$IP2 $IP1"
RT_MTLS_HOSTS="$IP4 $IP5"
MS_MTLS_HOSTS="$IP3"
MP_MTLS_HOSTS="$IP4 $IP5"
QP_MTLS_HOSTS="$IP2 $IP1"
ENABLE_SIDECAR_PROXY="y"
ENCRYPT_DATA="QbhgD+EXAMPLE+Y9u0742X/IqX3X429/x1cIQ+JsQvY="
PATH_TO_CA_CERT="/opt/apigee/consul-agent-ca.pem"
PATH_TO_CA_KEY="/opt/apigee/consul-agent-ca-key.pem"
...
Step 3: Distribute the configuration file and credentials
Copy the following files to the nodes running ZooKeeper using a tool such as
scp:
- Configuration file: Copy the updated version of this file and replace the existing version on all nodes (not just the nodes running ZooKeeper).
- consul-agent-ca.pem: Copy to the location you specified as the value of
PATH_TO_CA_CERTin the configuration file.
- consul-agent-ca-key.pem: Copy to the location you specified as the value of
PATH_TO_CA_KEYin the configuration file.
Be sure that the locations to which you copy the certificate and key files match the values you set in the configuration file in Step 2: Install Consul and generate credentials.
Step 4: Initialize apigee-mtls
After you have updated your configuration file, copied it and the credentials to all nodes in the cluster, and installed apigee-mtls on each node, you are ready to initialize the apigee-mtls component on each node.
To initialize apigee-mtls:
- Log in to a node in the cluster as the root user. You can perform these steps on the nodes in any order you want.
- Make the apigee:apigee user an owner of the updated configuration file, as the following example shows:
chown apigee:apigee config_file
- Configure the apigee-mtls component by executing the following command:
/opt/apigee/apigee-service/bin/apigee-service apigee-mtls setup -f config_file
- (Optional) Execute the following command to verify that your setup was successful:
/opt/apigee/apigee-mtls/lib/actions/iptables.sh validate
- Start Apigee mTLS by executing the following command:
/opt/apigee/apigee-service/bin/apigee-service apigee-mtls start
After installing Apigee mTLS, you must start this component before any other components on the node.
- (Cassandra nodes only) Cassandra requires additional arguments to work within the security mesh. As a result, you must execute the following commands on each Cassandra node:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra setup -f config_file
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra configure
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restart
- (Postgres nodes only) Postgres requires additional arguments to work within the security mesh. As a result, you must do the following on the Postgres nodes:
(Master only)
- Execute the following commands on the Postgres master node:
(Standby only)
- Back up your existing Postgres data. To install Apigee mTLS, you must re-initialize the master/standby nodes, so there will be data loss. For more information, see Set up master/standby replication for Postgres.
- Delete all Postgres data:
rm -rf /opt/apigee/data/apigee-postgresql/pgdata
- Configure Postgres and then restart Postgres, as the following example shows:
If you are installing on a multi-data center topology, use an absolute path for the configuration file.
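The example commands themselves are missing above. By analogy with the Cassandra sequence shown earlier, the Postgres configure-and-restart step is likely of the following form (the apigee-postgresql component name and paths are assumptions, not confirmed by this page):
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup -f config_file
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql configure
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql restart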
- Start the remaining Apigee components on the node in the start order, as the following example shows:
/opt/apigee/apigee-service/bin/apigee-service component_name start
- Repeat this process for each node in the cluster.
- (Optional) Verify that the apigee-mtls initialization was successful by using one or more of the following methods:
- Validate the iptables configuration
- Verify remote proxy status
- Verify quorum status
Each of these methods is described in Verify your configuration.
Change an existing apigee-mtls configuration
To customize an existing
apigee-mtls configuration, you must uninstall and
reinstall
apigee-mtls.
To reiterate this point, when changing an existing Apigee mTLS configuration:
- If you change a configuration file, you must first uninstall apigee-mtls and re-run setup or configure:
# DO THIS:
/opt/apigee/apigee-service/bin/apigee-service apigee-mtls uninstall
# BEFORE YOU DO THIS:
/opt/apigee/apigee-service/bin/apigee-service apigee-mtls setup -f file
# OR
/opt/apigee/apigee-service/bin/apigee-service apigee-mtls configure
- You must uninstall and re-run setup or configure on all nodes in the cluster, not just a single node.
Integration
Introduction
The Mulberry Shopify app makes it extremely easy to get up and running with a best in class warranty program. Simply installing the app in your store is usually enough to begin making passive revenue almost immediately.
Onboarding
After choosing to install Mulberry, you'll be walked through a set of onboarding steps so we can get to know your business better. After the onboarding process is complete, you'll be logged into your Partner Dashboard. Once there, it's required that you complete the steps found below.
Connect your Bank Account/Credit card
As a retailer, you're only responsible for making payments to Mulberry for the warranties you sell. We currently support making payments both via bank account and credit card. To add an account, click the billing icon in the left-hand nav.
Enable the Mulberry service
After the onboarding process is complete you'll be logged into your Partner Dashboard. The Mulberry service by default is turned off. You must enable it to begin showing offers. Click the gear/settings icon in the left nav. Then select the "Settings" tab as seen below.
Configure offer types and placement
Mulberry has two "modes" which it can use to insert warranty offers into your product page. These are known as "automatic" and "manual".
Automatic
Automatic mode intelligently attempts to insert the offer(s) into your theme. Most of the time this works out just fine, especially on Shopify sanctioned themes. In the event this is not enough, Mulberry offers a more custom 'manual' mode as seen below.
Manual
Manual mode allows you to specify which DOM element to insert the inline offer into as well as to help Mulberry locate your "Add to cart" button. You can use an existing DOM element or you can insert a new one. For more information on how to edit your theme to insert these elements, visit the Shopify Custom Integration section.
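As a rough sketch of a manual placement (the element ID below is an illustrative assumption, not a documented Mulberry selector), you could add a placeholder element near your add-to-cart button in the product template and then point the manual settings at it:
<!-- sections/product-template.liquid (path varies by theme) -->
<!-- Hypothetical placeholder element for the inline warranty offer -->
<div id="mulberry-inline-offer"></div>
<!-- The existing add-to-cart button Mulberry needs to locate -->
<button type="submit" name="add" class="btn product-form__cart-submit">
  Add to cart
</button>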
@Target(value=TYPE) @Retention(value=RUNTIME) public @interface Import
Note that this annotation is likely to require more use of reflection if package protected members require injection.
public abstract Class<?>[] classes
public abstract String[] packages
Note that only types with a bean defining annotation will be imported.
public abstract String[] annotated
packages() attribute (this attribute has no effect when combined with classes()).
If set to "*", all non-abstract classes will be included. Defaults to only including types annotated with JSR-330 scopes or qualifiers.
TiDB Introduction
TiDB (/’taɪdiːbi:/, "Ti" stands for Titanium) is an open-source, distributed, NewSQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL compatible and features horizontal scalability, strong consistency, and high availability. TiDB can be deployed on-premise or in-cloud.
Designed for the cloud, TiDB provides flexible scalability, reliability and security on the cloud platform. Users can elastically scale TiDB to meet the requirements of their changing workloads. TiDB Operator helps manage TiDB on Kubernetes and automates operating tasks, which makes TiDB easier to deploy on any cloud that provides managed Kubernetes. TiDB Cloud (Public Preview), the fully-managed TiDB service, is the easiest, most economical, and most resilient way to unlock the full power of TiDB in the cloud, allowing you to deploy and run TiDB clusters with just a few clicks.
Follow the steps below to run the WSO2 Enterprise Integrator (EI) server via WSO2 EI Tooling.
<EI_HOME> directory, which is the parent directory of the product binary distribution.
In the Available section of the Add and Remove window, select any Composite Applications, which you created via WSO2 EI Tooling that you want to upload to WSO2 EI.
If you want to add multiple composite applications at once, select Add All.
Installing CSD and parcel
For installing Cloudera Streaming Analytics (CSA), you need to upload the downloaded Flink and SQL Stream Builder (SSB) Custom Service Descriptor (CSD) files to the default CSD directory, and add the CSA parcel to your cluster using Cloudera Manager.
- Download the Flink and SSB CSD and parcel files.
For more information about downloading Flink and SSB artifacts, see the Download location section.
- Install CDP Private Cloud Base.
For more information about installing CDP Private Cloud Base and Cloudera Manager, see the CDP Private Cloud Base documentation.
- Place the CSD files in the /opt/cloudera/csd/ folder (default CSD directory).
wget -P /opt/cloudera/csd/ <FLINK CSD download URL>
wget -P /opt/cloudera/csd/ <SQL_STREAM_BUILDER CSD download URL>
Cloudera Manager automatically detects the CSD files.
- Change the ownership of the CSD files.
chown cloudera-scm:cloudera-scm /opt/cloudera/csd/FLINK-1.14.0-csa1.6.0.0-cdh7.1.7.0-551-19591977.jar
chown cloudera-scm:cloudera-scm /opt/cloudera/csd/SQL_STREAM_BUILDER-1.14.0-csa1.6.0.0-cdh7.1.7.0-551-19591977.jar
- Restart Cloudera Manager and CMS services for the changes to take effect.
systemctl restart cloudera-scm-server
- Log into Cloudera Manager.
- Select Parcels on the tab in the main navigation bar.
- Click on Parcel Repositories & Network Settings tab.
- Add the new Remote Parcel Repository URL for CSA.
- Enter your download credentials to HTTP authentication username override for Cloudera Repositories and HTTP authentication password override for Cloudera Repositories.
- Click Save & Verify Configuration to commit the change.
- Click Close.You are redirected to the Parcels page.
- Search for Flink, and click Download to download the parcel to the local repository.
- After the download is completed, click Distribute to distribute the parcel to all clusters.
- After the parcel is distributed, click Activate to activate the parcel.
- Click OK when confirmation is required.
public class ViewResolutionResultHandler extends HandlerResultHandlerSupport implements HandlerResultHandler, Ordered
HandlerResultHandler that encapsulates the view resolution algorithm supporting the following return types:
- Void or no value -- default view name
- String -- view name unless @ModelAttribute-annotated
- View -- View to render with
- Model -- attributes to add to the model
- Map -- attributes to add to the model
- Rendering -- use case driven API for view resolution
- @ModelAttribute -- attribute for the model
A String-based view name is resolved through the configured
ViewResolver instances into a
View to use for rendering.
If a view is left unspecified (e.g. by returning
null or a
model-related return value), a default view name is selected.
By default this resolver is ordered at
Ordered.LOWEST_PRECEDENCE
and generally needs to be late in the order since it interprets any String
return value as a view name or any non-simple value type as a model attribute
while other result handlers may interpret the same otherwise based on the
presence of annotations, e.g. for
@ResponseBody.
public ViewResolutionResultHandler(List<ViewResolver> viewResolvers, RequestedContentTypeResolver contentTypeResolver)
Basic constructor with a default ReactiveAdapterRegistry.
viewResolvers- the resolver to use
contentTypeResolver- to determine the requested content type
public ViewResolutionResultHandler(List<ViewResolver> viewResolvers, RequestedContentTypeResolver contentTypeResolver, ReactiveAdapterRegistry registry)
Constructor with a ReactiveAdapterRegistry instance.
viewResolvers- the view resolver to use
contentTypeResolver- to determine the requested content type
registry- for adaptation to reactive types
public List<ViewResolver> getViewResolvers()
public void setDefaultViews(@Nullable List<View> defaultViews)
public List<View> getDefaultViews()
Return the configured default views.
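For context, the view resolvers this handler consults are usually registered through the WebFlux Java configuration. A minimal sketch (FreeMarker is chosen arbitrarily here; the template path and class names are assumptions for the example):
@Configuration
@EnableWebFlux
public class WebConfig implements WebFluxConfigurer {

    @Override
    public void configureViewResolvers(ViewResolverRegistry registry) {
        // Controllers can then return a String view name (or Rendering/Model)
        // and ViewResolutionResultHandler resolves and renders it.
        registry.freeMarker();
    }

    @Bean
    public FreeMarkerConfigurer freeMarkerConfigurer() {
        FreeMarkerConfigurer configurer = new FreeMarkerConfigurer();
        configurer.setTemplateLoaderPath("classpath:/templates");
        return configurer;
    }
}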
General Settings
This section focuses on the general User Interface default values for XDOC
Project Toolbar – This value describes what happens when launching the XDOC UI from an external application
Container Default - These values describe the default value for the "Loan Number" when opening that option. The two values are either "no default container" (loan number), or "last user container," which is the last loan number the user was working in. The "last user container" is only used if this option is configured in the LOS integration.
Container History - These two options determine whether XDOC will keep a history in the Audit Log for the Viewer and Document Compare option. It is recommended to always keep a history for the Viewer.
Document Type List - These settings allow for grouping the document type drop-down lists by Category, as well as sorting the field list.
Document Download – These default values describe the behavior for both the Document Print and Document Download icons in the Document Viewer
Document Email - These default values describe the behavior for both the Document Email option in the Document Viewer
For the Document Download and Email options, the following are a description of the fields and meanings
File Download - These default values describe the behavior for downloading files in the File Room and Monitor File Viewer
Image Toolbars – These default values describe which toolbars will show in the Document Viewer. All dropdowns are yes/no values.
Annotation Controls – These default values describe which annotation types will be available in the Document Viewer. All dropdowns are yes/no values.
Personal Signatures
XDOC has the ability for users to add their own personal signatures to documents, which enables a truly digital mortgage process. Each signature is completely secure, as the user is only able to add their own signature. Signatures are added either by the user from the Viewer screen, or by the XDOC Administrator from an XDOC Admin screen.
NOTE: Once the personal signature is added, the process for adding a personal signature is the same as adding any image annotation.
1. Log in to XDOC. If logging into XDOC via the LOS and going to the Document Viewer, you will see the Home Icon. If you log in directly into XDOC without going through the LOS, you will see the User Icon on the upper right of the screen, which is the icon you want. User Icon:
Home Icon:
2. If you see the Home Icon, click the Home Icon and the dashboard will appear, along with the User Icon at the top right of the screen. Click the User Icon. If you log in directly to the XDOC website and initially see the User Icon, click the User Icon
3. Click the Signatures Icon at the bottom and the following screen will appear
4. Click the ADD button and the signature add fields will appear
5. Enter the following information:
* Name: The name for the signature
* Active: Set to “yes” to allow you to add this signature to documents
* Project: Set this to your “Loan Documents” project
6. Click Choose File to select the signature file. NOTE: all files must be valid jpg, gif, or png files, less than 20KB, and less than 400 pixels wide by 300 pixels high. If your signature file is larger than this, the fields at the bottom will appear in red and you will have the opportunity to resize the file.
7. Choose the file, then click OPEN
8. The file appears in the signature editing box and the size and height of the inserted signature file appear at the bottom
9. If any sizes appear in red, this means the size is out of range and you must resize the image to conform to specifications. (20KB size, 400 pixels wide, 300 pixels high). You can resize the signature by clicking on the resize dropdown box to the left, choosing the amount of resizing, and then clicking RESIZE.
10. The new dimensions will appear at the bottom. You must ensure that no dimensions appear in red or you will not be able to save the signature annotation.
11. Alternately, you can also crop the signature to make it conform to the size limits. To crop the signature:
* Place the cursor in the signature box and crop it to the size you want
* Release the cursor
* Click the CROP button
* The signature will appear with the new sizes at the bottom
12. When you are done editing the signature, click SAVE. The changes are saved to the left and you can now use this signature to sign documents.
Delay/Wait command
Use the Delay/Wait command to add a timed delay or a wait condition to TaskBot/MetaBot Logic.
- Delay
- Delays the next step in the TaskBot/MetaBot Logic.
- Specify whether to delay for a specific time period or for a randomized time period based on a range.
- Specify milliseconds or seconds.
- Wait for window
- Adds a condition to wait for the contents of a screen (or an area in the application) to change before doing the next set of actions.
- Specify whether to wait for the window to open or close.
- From the drop-down menu, select the window.Note: If the window is active but does not appear in the drop-down menu, click Refresh.
- Specify the number of seconds to wait for the condition to become true.
- Specify the action to take if the condition is not satisfied:
- Continue with the next action.
- Stop the Task.
- Wait for screen change
- Adds a condition to wait until a rectangular shape on the screen changes before doing the next action:
- Specify whether the change is for a Window or Screen.
- From the drop-down menu, select the window or screen.
- Click Capture to identify the image to use for the comparison.
- Specify the number of seconds to compare the screen.
- Specify the action to take if the condition is not satisfied:
- Continue with the next action.
- Stop the Task.
- When Secure Recording Mode is enabled:
- Images are not captured.
Welcome to MELD
Welcome to the MELD documentation library.
MELD is the first decentralized and non-custodial liquidity protocol for borrowing fiat (USD and EUR) against your crypto assets and earning yield on deposits.
Introduction
MELD is a decentralized and trustless lending protocol built on the Cardano Blockchain using smart contracts and governed by the MELD token. It provides a fast, safe, and transparent set of tools for anyone to lend and borrow crypto and fiat currencies.
MELD lends fiat currency provided by fiat lenders to borrowers that collateralize their cryptocurrency in a MELD Smart Contract. The lender receives an interest rate from secure investments, whilst the borrower can maintain their crypto positions and see them grow; crypto has had an average annual growth rate of 32% CAGR (BTC: 196%). We stake the collateral in community-managed liquidity pools for our revenue, which is divided 50% to MELD and 50% to MELD token holders.
1. Fiat Liquidity Lending
Fiat liquidity providers lend fiat to the MELD protocol, through the MELDapp, to earn high-interest yields. The yields for lending fiat on MELD are sourced from various places, including interest paid from the borrower, trading fees APY from the liquidity pools of MELDed assets and protocol rewards.
2. Crypto Collateral
For a borrower to gain access to fiat loans, the borrower must deposit cryptocurrency (ADA, BTC, ETH, or BNB) to the MELD loan Smart Contract. Once deposited and locked into the Smart Contract, the borrower will be able to take up to 50% of the value held within the cryptocurrency through a crypto-backed loan or a line of credit.
3. Fiat Borrowing
MELD will offer two fiat borrowing services, crypto-backed loans, and a line of credit. From a collateral perspective, both services function similarly. A borrower will have to deposit 2x the desired fiat in cryptocurrency to utilize either service. Borrowers receive fiat currency via wire transfer directly into their account for crypto-backed loans or gain access to a line of credit utilized by the MELD Debit Card, after depositing their crypto.
4. MELD Vaults (Liquidity Pools)
The liquidity pools run by the MELD protocol are single-sided MELD/Token pools. When a user makes a crypto deposit, the deposit is locked to a smart contract and placed into the respective MELD/Token pool. The benefit of this is that the deposited crypto can be exposed to trading fees APY from external DEX aggregators/routers. The MELD protocol has integrated impermanent loss protection for crypto depositors.
5. The MELD Token
The MELD token provides a few utility functions for the holder. First, MELD is used to pay for some transactions on the protocol. Second, you can stake MELD and earn 4% APY rewards. The MELD staking pool acts as an insurance solution for the protocol. The staking pool protects against problems that might arise in the protocol and against impermanent loss in the MELD liquidity pools. The APY for the staking comes from 50% of all protocol fees, such as MELDed assets and trading fees.
6. Loan Repayment
MELD offers crypto-backed loans and a line of credit to crypto depositors. Borrowers of fiat through these instruments pay back the principal and interest monthly until the loan is paid off.
7. Crypto Collateral Returned
The crypto collateral is unlocked and withdrawn from the respective liquidity pool to the user’s wallet, and the smart contract is completed upon loan repayment.
8. Fiat Liquidity Returned
At any time, fiat liquidity providers can withdraw their money. If a crypto-backed fiat loan position suffers a liquidation event, then the underlying crypto asset is sold and transferred to fiat to ensure the fiat lender doesn’t suffer any losses.
Accounting
- Account
Fundamentals for most accounting needs.
- Asset
Depreciation of fixed assets.
- Belgian
Belgian accounting.
- Budget
Budgets for accounts.
- Cash Rounding
Round cash amounts.
- Credit Limit
Manages credit limit of parties.
- Deposit
Supports customer deposits.
- Dunning
Manages dunning on receivables.
- Dunning Email
Sends dunning emails.
- Dunning Fee
Adds fees to dunnings.
- Dunning Letter
Prints dunning letters.
- Spanish
Spanish accounting.
- Europe
Common European requirements.
- French
French accounting.
- French Chorus
Sends invoices via Chorus Pro.
- German
German accounting.
- Invoice
Manages customer and supplier invoices.
- Invoice Correction
Correct price on posted invoices.
- Invoice Defer
Defer expense and revenue.
- Invoice History
Historize invoice.
- Invoice Line Standalone
Supports invoice line without invoice.
- Invoice Secondary Unit
Adds a secondary unit of measure.
- Invoice Stock
Links invoice lines and stock moves.
- Move Line Grouping
Show move line grouped.
- Payment
Manages payments.
- Payment Braintree
Receives payment from Braintree.
- Payment Clearing
Uses clearing account for payments.
- Payment SEPA
Generates SEPA messages for payments.
- Payment SEPA CFONB
Adds CFONB flavors to SEPA.
- Payment Stripe
Receives payment from Stripe.
- Product
Adds accounting on product and category.
- Rule
Applies rules on accounts.
- Statement
Books bank statement, cash day book etc.
- Statement AEB43
Imports statements in AEB43 format.
- Statement CODA
Imports statements in CODA format.
- Statement OFX
Imports statements in OFX format.
- Statement Rule
Applies rules on imported statements.
- Stock Anglo-Saxon
Values stock using the anglo-saxon method.
- Stock Continental
Values stock using the continental method.
- Stock Landed Cost
Allocates landed cost.
- Stock Landed Cost Weight
Allocates landed cost based on weight.
- Stock Shipment Cost
Allocates shipment cost.
- Tax Cash
Reports tax on cash basis.
- Tax Rule Country
Applies taxes per country of origin and destination.
Source code for oscar.core.loading
import sys
import traceback
import warnings
from functools import lru_cache
from importlib import import_module

from django.apps import apps
from django.apps.config import MODELS_MODULE_NAME
from django.conf import settings
from django.core.exceptions import AppRegistryNotReady
from django.utils.module_loading import import_string

from oscar.core.exceptions import (
    AppNotFoundError, ClassNotFoundError, ModuleNotFoundError)
from oscar.utils.deprecation import RemovedInOscar32Warning

# To preserve backwards compatibility of loading classes which moved
# from one Oscar module to another, we look into the dictionary below
# for the moved items during loading.
MOVED_MODELS = {}


def get_class(module_label, classname, module_prefix='oscar.apps'):
    """
    Dynamically import a single class from the given module.

    This is a simple wrapper around `get_classes` for the case of loading a
    single class.

    Args:
        module_label (str): Module label comprising the app label and the
            module name, separated by a dot.  For example, 'catalogue.forms'.
        classname (str): Name of the class to be imported.

    Returns:
        The requested class object or `None` if it can't be found
    """
    return get_classes(module_label, [classname], module_prefix)[0]


@lru_cache(maxsize=100)
def get_class_loader():
    return import_string(settings.OSCAR_DYNAMIC_CLASS_LOADER)


def get_classes(module_label, classnames, module_prefix='oscar.apps'):
    class_loader = get_class_loader()
    return class_loader(module_label, classnames, module_prefix)


def default_class_loader(module_label, classnames, module_prefix):
    """
    Dynamically import a list of classes from the given module.

    This works by looking up a matching app from the app registry,
    against the passed module label.  If the requested class can't be found in
    the matching module, then we attempt to import it from the corresponding
    core app.

    This is very similar to ``django.db.models.get_model`` function for
    dynamically loading models.  This function is more general though as it
    can load any class from the matching app, not just a model.

    Args:
        module_label (str): Module label comprising the app label and the
            module name, separated by a dot.  For example, 'catalogue.forms'.
        classname (str): Name of the class to be imported.

    Returns:
        The requested class object or ``None`` if it can't be found

    Raises:
        AppNotFoundError: If no app is found in ``INSTALLED_APPS`` that
            matches the passed module label.
        ImportError: If the attempted import of a class raises an
            ``ImportError``, it is re-raised
    """
    if '.' not in module_label:
        # Importing from top-level modules is not supported, e.g.
        # get_class('shipping', 'Scale'). That should be easy to fix,
        # but @maikhoepfel had a stab and could not get it working reliably.
        # Overridable classes in a __init__.py might not be a good idea anyway.
        raise ValueError(
            "Importing from top-level modules is not supported")

    # import from Oscar package (should succeed in most cases)
    # e.g. 'oscar.apps.dashboard.catalogue.forms'
    oscar_module_label = "%s.%s" % (module_prefix, module_label)
    oscar_module = _import_module(oscar_module_label, classnames)

    # returns e.g. 'oscar.apps.dashboard.catalogue',
    # 'yourproject.apps.dashboard.catalogue' or 'dashboard.catalogue',
    # depending on what is set in INSTALLED_APPS
    app_name = _find_registered_app_name(module_label)
    if app_name.startswith('%s.' % module_prefix):
        # The entry is obviously an Oscar one, we don't import again
        local_module = None
    else:
        # Attempt to import the classes from the local module
        # e.g. 'yourproject.dashboard.catalogue.forms'
        local_module_label = '.'.join(app_name.split('.') + module_label.split('.')[1:])
        local_module = _import_module(local_module_label, classnames)

    if oscar_module is local_module is None:
        # This intentionally doesn't raise an ImportError, because ImportError
        # can get masked in complex circular import scenarios.
        raise ModuleNotFoundError(
            "The module with label '%s' could not be imported. This either"
            "means that it indeed does not exist, or you might have a problem"
            " with a circular import." % module_label
        )

    # return imported classes, giving preference to ones from the local package
    return _pluck_classes([local_module, oscar_module], classnames)


def _import_module(module_label, classnames):
    """
    Imports the module with the given name.
    Returns None if the module doesn't exist, but propagates any import errors.
    """
    try:
        return __import__(module_label, fromlist=classnames)
    except ImportError:
        # There are 2 reasons why there could be an ImportError:
        #
        #  1. Module does not exist. In that case, we ignore the import and
        #     return None
        #  2. Module exists but another ImportError occurred when trying to
        #     import the module. In that case, it is important to propagate
        #     the error.
        #
        # ImportError does not provide easy way to distinguish those two
        # cases. Fortunately, the traceback of the ImportError starts at
        # __import__ statement. If the traceback has more than one frame, it
        # means that application was found and ImportError originates within
        # the local app
        __, __, exc_traceback = sys.exc_info()
        frames = traceback.extract_tb(exc_traceback)
        if len(frames) > 1:
            raise


def _pluck_classes(modules, classnames):
    """
    Gets a list of class names and a list of modules to pick from.
    For each class name, will return the class from the first module that has
    a matching class.
    """
    klasses = []
    for classname in classnames:
        klass = None
        for module in modules:
            if hasattr(module, classname):
                klass = getattr(module, classname)
                break
        if not klass:
            packages = [m.__name__ for m in modules if m is not None]
            raise ClassNotFoundError("No class '%s' found in %s" % (
                classname, ", ".join(packages)))
        klasses.append(klass)
    return klasses


def _find_registered_app_name(module_label):
    """
    Given a module label, finds the name of the matching Oscar app from the
    Django app registry.
    """
    from oscar.core.application import OscarConfig

    app_label = module_label.split('.')[0]
    try:
        app_config = apps.get_app_config(app_label)
    except LookupError:
        raise AppNotFoundError(
            "Couldn't find an app to import %s from" % module_label)
    if not isinstance(app_config, OscarConfig):
        raise AppNotFoundError(
            "Couldn't find an Oscar app to import %s from" % module_label)
    return app_config.name


def get_profile_class():
    """
    Return the profile model class
    """
    # The AUTH_PROFILE_MODULE setting was deprecated in Django 1.5, but it
    # makes sense for Oscar to continue to use it. Projects built on Django
    # 1.4 are likely to have used a profile class and it's very difficult to
    # upgrade to a single user model. Hence, we should continue to support
    # having a separate profile class even if Django doesn't.
    setting = getattr(settings, 'AUTH_PROFILE_MODULE', None)
    if setting is None:
        return None
    app_label, model_name = settings.AUTH_PROFILE_MODULE.split('.')
    return get_model(app_label, model_name)


def feature_hidden(feature_name):
    """
    Test if a certain Oscar feature is disabled.
    """
    return (feature_name is not None
            and feature_name in settings.OSCAR_HIDDEN_FEATURES)


def get_model(app_label, model_name):
    """
    Fetches a Django model using the app registry.

    This doesn't require that an app with the given app label exists, which
    makes it safe to call when the registry is being populated. All other
    methods to access models might raise an exception about the registry
    not being ready yet.

    Raises LookupError if model isn't found.
    """
    oscar_moved_model = MOVED_MODELS.get(app_label, None)
    if oscar_moved_model:
        if model_name.lower() in oscar_moved_model[1]:
            original_app_label = app_label
            app_label = oscar_moved_model[0]
            warnings.warn(
                'Model %s has recently moved from %s to the application %s, '
                'please update your imports.' % (model_name, original_app_label, app_label),
                RemovedInOscar32Warning, stacklevel=2)

    try:
        return apps.get_model(app_label, model_name)
    except AppRegistryNotReady:
        if apps.apps_ready and not apps.models_ready:
            # If this function is called while `apps.populate()` is
            # loading models, ensure that the module that defines the
            # target model has been imported and try looking the model up
            # in the app registry. This effectively emulates
            # `from path.to.app.models import Model` where we use
            # `Model = get_model('app', 'Model')` instead.
            app_config = apps.get_app_config(app_label)
            # `app_config.import_models()` cannot be used here because it
            # would interfere with `apps.populate()`.
            import_module('%s.%s' % (app_config.name, MODELS_MODULE_NAME))
            # In order to account for case-insensitivity of model_name,
            # look up the model through a private API of the app registry.
            return apps.get_registered_model(app_label, model_name)
        else:
            # This must be a different case (e.g. the model really doesn't
            # exist). We just re-raise the exception.
            raise


def is_model_registered(app_label, model_name):
    """
    Checks whether a given model is registered. This is used to only
    register Oscar models if they aren't overridden by a forked app.
    """
    try:
        apps.get_registered_model(app_label, model_name)
    except LookupError:
        return False
    else:
        return True


@lru_cache(maxsize=128)
def cached_import_string(path):
    return import_string(path)
Download Anveo Mobile App
Public Downloads
Anveo Mobile App is available for download in Apple’s App Store, Google Play for Android and Microsoft Store. Please download the app onto your favorite device and connect with our demo environment for testing.
You can use the same app for development and live operation because it supports multiple accounts at the same time.
Mobile Device Management (MDM)
We highly recommend installing a Mobile Device Management (MDM) solution to manage the Anveo Mobile App versions on your devices. If you download the Anveo Mobile App from Google Play, App Store or Windows Store, you will automatically get updates that may affect your system stability. However, the Anveo team tests each new release very carefully to be backward compatible.
Local Anveo Mobile App Installation
If you need Anveo Mobile App for Android (APK file) or Windows for a manual installation, please download the app here.
Due to iOS restrictions, Anveo Mobile App for iOS is only available through a special "B2B app program". Please contact Anveo for more details.
Updates the Kubernetes version or AMI version of an Amazon EKS managed node group.
You can update a node group using a launch template only if the node group was originally deployed with a launch template. If you need to update a custom AMI in a node group that was deployed with a launch template, then update your custom AMI, specify the new ID in a new version of the launch template, and then update the node group to the new version of the launch template.
If you update without a launch template, then you can update to the latest available AMI version of a node group's current Kubernetes version by not specifying a Kubernetes version in the request. You can update to the latest AMI version of your cluster's current Kubernetes version by specifying your cluster's Kubernetes version in the request. For more information, see Amazon EKS optimized Amazon Linux 2 AMI versions in the Amazon EKS User Guide .
You cannot roll back a node group to an earlier Kubernetes version or AMI version.
When a node in a managed node group is terminated due to a scaling action or update, the pods in that node are drained first. Amazon EKS attempts to drain the nodes gracefully and will fail if it is unable to do so. You can force the update if Amazon EKS is unable to drain the nodes as a result of a pod disruption budget issue.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
update-nodegroup-version --cluster-name <value> --nodegroup-name <value> [--release-version <value>] [--launch-template <value>] [--force | --no-force] [--client-request-token <value>] [--kubernetes-version <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--cluster-name (string)
The name of the Amazon EKS cluster that is associated with the managed node group to update.
--nodegroup-name (string)
The name of the managed node group to update.
--release-version (string)
The AMI version of the Amazon EKS optimized AMI to use for the update. By default, the latest available AMI version for the node group's Kubernetes version is used. For more information, see Amazon EKS optimized Amazon Linux 2 AMI versions in the Amazon EKS User Guide . If you specify launchTemplate , and your launch template uses a custom AMI, then don't specify releaseVersion , or the node group update will fail. For more information about using launch templates with Amazon EKS, see Launch template support in the Amazon EKS User Guide.
--launch-template (structure)
An object representing a node group's launch template specification. You can only update a node group using a launch template if the node group was originally deployed with a launch template.
name -> (string)The name of the launch template.
version -> (string)The version of the launch template to use. If no version is specified, then the template's default version is used.
id -> (string)The ID of the launch template.
Shorthand Syntax:
name=string,version=string,id=string
JSON Syntax:
{ "name": "string", "version": "string", "id": "string" }
--force | --no-force (boolean)
Force the update if the existing node group's pods are unable to be drained due to a pod disruption budget issue. If an update fails because pods could not be drained, you can force the update after it fails to terminate the old node whether or not any pods are running on the node.
--client-request-token (string)
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
--kubernetes-version (string)
The Kubernetes version to update to. If no version is specified, then the Kubernetes version of the node group does not change. You can specify the Kubernetes version of the cluster to update the node group to the latest AMI version of the cluster's Kubernetes version. If you specify launchTemplate , and your launch template uses a custom AMI, then don't specify version , or the node group update will fail. For more information about using launch templates with Amazon EKS, see Launch template support in the Amazon EKS User Guide.
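For illustration (the cluster name, node group name, and version below are placeholders, not values from this reference):
aws eks update-nodegroup-version \
    --cluster-name my-cluster \
    --nodegroup-name my-nodegroup \
    --kubernetes-version 1.21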
Setup Push Notifications on iOS¶
Prerequisite¶
- Finished Getting Started with GetSocial iOS SDK guide.
By default, the GetSocial SDK will automatically register at the push server, and the user will start receiving push notifications.
To prevent auto registration and register for push notifications manually:
Set the autoRegister parameter to false in getsocial-sdk7.json:
{ ... "pushNotifications": { "autoRegister": false, ... }
To start receiving GetSocial push notifications call:
Notifications.registerDevice()
To enable push notifications, just remove the autoRegister parameter from getsocial.json or set it to true.
If you’re not using GetSocial iOS Installer Script check how to disable auto registration for push notifications in the Manual Integration Guide.Handler, action button handler, etc. In this case, we recommend you to override the default behavior and open GetSocial View by yourself.
Notification Handler¶
Receive Listener¶
You can set
OnNotificationReceivedListener, which will be triggered when notification is received and app is in foreground:
Notifications.setOnNotificationReceivedListener { notification in
    // Notification received while app was in foreground
}
To show notifications in the system notification center while the app is in foreground, set the foreground property to true in the getsocial.json file:
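The snippet is presumably shaped like the autoRegister example above (key nesting assumed by analogy, not confirmed here):
{
  ...
  "pushNotifications": {
    "foreground": true,
    ...
  }
}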
If
foreground is set to
true:
- Push notifications will be shown in the system notification center when your application is in foreground.
- OnNotificationReceivedListener will be called.
Rich Push Notifications & Badges
- Make sure the extension and the application have the same target iOS version.
Disable Push Notifications For User¶
If you want to disable push notifications for the user, use
[GetSocialUser setPushNotificationsEnabled:NO success:success failure:failure]. When push notifications are disabled, you still can query for GetSocial Notifications via data API.
To enable it back use
[GetSocialUser setPushNotificationsEnabled:YES success:success failure:failure]. To check current setting value use
[GetSocialUser isPushNotificationsEnabledWithSuccess:success failure:failure].
Next Steps¶
- Send targeted notifications from GetSocial Dashboard
- Send user to user notifications from the client
- Understand push notifications Analytics.
A Community is the DSpace term for a group of related Collections in the repository. This Community, which we've called DSpace, contains several Collections of useful documents relating to the software of that name. One of these Collections contains documents from Prosentient Systems explaining how to go about setting up and using DSpace and another is a collection of links to DSpace resources on the web.
Collections in this community
DSpace documentation [16]
Documentation on configuring the DSpace repository.
DSpace useful links and resources [5]
DSpace useful links and resources
Image resources [1]
Visit Prosentient Systems website
ColumnView.MoveFirst() Method
Moves focus to the first row.
Namespace: DevExpress.XtraGrid.Views.Base
Assembly: DevExpress.XtraGrid.v20.1.dll
Declaration
Remarks
If the first row is invisible on screen, calling the MoveFirst method scrolls the View to make the row visible.
End-users can focus the first row by pressing the "First" button of the embedded data navigator or by pressing the CTRL+HOME combination while the View has focus.
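For illustration, a call from code might look like this (gridView1 is an assumed instance name, not taken from this page):
// Moves focus to the first row and scrolls it into view if necessary.
gridView1.MoveFirst();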
NOTE
Detail pattern Views do not contain data and they are never displayed within XtraGrid. So, the MoveFirst member must not be invoked for these Views.
Menus Menu Item Article Single Article/et
From Joomla! Documentation
- Position of Article Info. (Use Global/Above/Below/Split). Puts the article information block above or below the text or splits it into two separate blocks. One block is above and the other is below.
- Article Info Title. (Use Global/Hide/Show). Displays the 'Article Info' title on top of the article information block.
- Associations. Multilingual only. If set to Show, the associated articles flags or URL Language Code will be displayed.
- Show Voting. Note: The Plugin Content - Vote has to be enabled.
- Show Icons. (Use Global/Hide/Show). If set to Show, Print and Email will use icons instead of text.
- Show Print. (Use Global/Hide/Show). Show or Hide the Print Article button.
- Show Email. (Use Global/Hide/Show). Show or Hide the Email Article button.
- Show Hits. (Use Global/Hide/Show). Show or Hide the number of times the article has been hit (displayed by a user).
- Tags. Show or hide the tags for this item.
- Use Global: Use the default value from the contacts options screen.
- Show: Show to allow users to select a contact in a drop down list.
- Hide: Do not display the Contact list.
Related Information
- Articles are created using Article Manager: Add or Edit.
App services have been deprecated in version 7.23.4 and are marked for removal in version 8.0.0. Use a consumed web service to consume existing app services.
App services are a way of connecting Mendix applications to each other. An app service can be imported and its content can be used. As for now, app services provide the following content:
- Microflow actions
- Domain model entities
In the project explorer, an app service can be selected in the ‘Add’ context menu on a module. See Select app service for more information.
See the Settings page for more information on document options.
App service actions are directly available in Microflows. If a new activity is added, new app service actions are shown below the standard microflow actions.
An app service action may require parameters, and usually it supplies a return value. The return value can be used in the rest of the microflow. Parameters and return values can be an object or a list type; the entities which are accepted by the app service are included in the domain model of the app service.
Problem
You do not see data in New Relic UI after installing a New Relic agent.
Solution
You should start seeing data within a few minutes after installing the agent and generating traffic for your app. If you do not see data, you can use the New Relic Diagnostics utility to automatically identify common issues. For additional troubleshooting tips, see the documentation for your agent.
APM agents
Follow the troubleshooting procedures for your New Relic APM agent:
In addition, you can try these troubleshooting steps that apply to all New Relic APM agents:
- Deleted or renamed applications in APM
If you delete an application from the New Relic APM index, the app needs to stop reporting data for at least an hour before you can reuse that name. It also needs to reconnect with the New Relic collector (be restarted) before new data will be accepted.
The app remains in the New Relic collector's cache for an hour before it is flushed. During that time it is marked as "deleted," so no new data is accepted. Also, the data is associated with an executing app that has been deleted until the agent is restarted.
For more information, see:
- No connection to collector
Your app will not be affected if the New Relic agent cannot connect to the collector. Data continues to be collected, and New Relic uploads it as soon as the connection is restored.
While the network is down or the collector unavailable, you may see gaps where data is missing in New Relic APM's CPU and memory charts. The agent will continue attempting to reconnect, and when it succeeds, you will again see data appearing in the UI.
During the time the agent is unable to communicate with the collector, it is still collecting data. Once it is able to connect again, it will upload the data and fill in the missing segment so there will not be any confusion about whether your application was down or just not reporting data. To save memory, the data will be aggregated and averaged over the period, so you will see flat bars and charts over the period when it was unable to communicate with the collector.
New Relic Browser
See Troubleshooting browser monitoring installation.
New Relic Infrastructure
Follow the troubleshooting procedures for your New Relic Infrastructure agent:
New Relic Mobile
Follow the troubleshooting procedures for your New Relic Mobile app:
Shipping Notifications allows you to give your subscribers an alert when you ship their orders.
Shipping notifications can be sent only when your customer had subscribed to push notifications before placing the order. If the customer hadn't opted in, it won't be possible to send them a shipping notification.
The shipping notification syncs with your Shopify orders dashboard and uses the same tracking link in the notification which is included in that order page. The shipping notification is sent after you mark the order fulfilled.
To enable shipping notifications and enhance customer experience:
1. Select 'Automation' on your Dashboard and click on 'Shipping Notifications'.
2. Click on the switch next to Shipping Notifications to enable it.
3. Click on the edit icon to customize the notification copy.
4. Click on ‘Save’ after you edit the notification to your liking.
Your Shipping notifications are set up!
SubtractImagesStep¶
- class
jwst.background.
SubtractImagesStep(name=None, parent=None, config_file=None, _validate_kwds=True, **kws)[source]¶
Bases:
jwst.stpipe.Step
SubtractImagesStep: Subtract two exposures from one another to accomplish background subtraction.
process(input1, input2)[source]¶
Subtract the background signal from a JWST data model by subtracting a background image from it.
- Parameters
input1 (JWST data model) – input science data model to be background-subtracted
input2 (JWST data model) – background data model
- Returns
result – background-subtracted science data model
- Return type
JWST data model
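A minimal usage sketch (file names are placeholders; invocation through the standard stpipe call interface is assumed rather than documented here):
from jwst.background import SubtractImagesStep

# Subtract a background exposure from a science exposure.
result = SubtractImagesStep.call("science_rate.fits", "background_rate.fits")
result.save("science_minus_background.fits")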
Custom Source
This tutorial demonstrates using a custom source to define a source with an arbitrary time profile.
Stochastic Dipole Emission in Light Emitting Diodes.
Method 1 uses stochastic dipoles which are modeled using a custom-src. The stochastic results for the radiated flux are averaged over multiple trials/iterations via Monte-Carlo sampling. Method 2 exploits the property of linear time-invariance of the materials/geometry and involves a sequence of separate runs each with a single deterministic dipole (i.e., pulse time profile, gaussian-src).
The simulation script is in examples/stochastic-emitter.ctl.
(set-param! resolution 50)       ;; resolution (pixels/um)

(define-param nr 20)             ;; number of random trials (method 1)
(define-param nd 10)             ;; number of dipoles
(define-param nf 500)            ;; number of frequencies
(define-param textured? false)   ;; flat (default) or textured surface
(define-param method 1)          ;; type of method (1 or 2)

(define dpml 1.0)
(define dair 1.0)
(define hrod 0.7)
(define wrod 0.5)
(define dsub 5.0)
(define dAg 0.5)

(define sx 1.1)
(define sy (+ dpml dair hrod dsub dAg))

(set! geometry-lattice (make lattice (size sx sy no-size)))

(set! pml-layers (list (make pml (direction Y) (thickness dpml) (side High))))

(define fcen 1.0)
(define df 0.2)
(define run-time (* 2 (/ nf df)))

(set! geometry (list (make block
                       (material (make medium (index 3.45)))
                       (center 0 (- (* 0.5 sy) dpml dair hrod (* 0.5 dsub)))
                       (size infinity dsub infinity))
                     (make block
                       (material Ag)
                       (center 0 (+ (* -0.5 sy) (* 0.5 dAg)))
                       (size infinity dAg infinity))))

(if textured?
    (set! geometry (append geometry (list (make block
                                            (material (make medium (index 3.45)))
                                            (center 0 (- (* 0.5 sy) dpml dair (* 0.5 hrod)))
                                            (size wrod hrod infinity))))))

(set! k-point (vector3 0 0 0))

(define (compute-flux . args)
  (let ((m (get-keyword-value args #:m 1))
        (n (get-keyword-value args #:n 0)))
    (reset-meep)
    (if (= m 1)
        ;; method 1
        (map (lambda (nn)
               (set! sources (append sources (list (make source
                                                     (src (make custom-src (src-func (lambda (t) (random:normal)))))
                                                     (component Ez)
                                                     (center (* sx (+ -0.5 (/ nn nd))) (+ (* -0.5 sy) dAg (* 0.5 dsub))))))))
             (arith-sequence 0 1 nd))
        ;; method 2
        (set! sources (list (make source
                              (src (make gaussian-src (frequency fcen) (fwidth df)))
                              (component Ez)
                              (center (* sx (+ -0.5 (/ n nd))) (+ (* -0.5 sy) dAg (* 0.5 dsub)))))))
    (set! geometry-lattice geometry-lattice)
    (set! pml-layers pml-layers)
    (set! geometry geometry)
    (let ((flux-mon (add-flux fcen df nf (make flux-region (center 0 (- (* 0.5 sy) dpml)) (size sx 0)))))
      (run-until run-time)
      (display-fluxes flux-mon))))

(if (= method 1)
    (map (lambda (t) (compute-flux #:m 1)) (arith-sequence 0 1 nr))
    (map (lambda (d) (compute-flux #:m 2 #:n d)) (arith-sequence 0 1 nd)))

At the end of each run, the flux spectra are printed to standard output.

# Method 1: flat surface
meep method=1 resolution=50 nf=500 nd=10 nr=500 stochastic-emitter.ctl > method1-flat.out
grep flux method1-flat.out |cut -d, -f2- > method1-flat.dat

# Method 1: textured surface
meep method=1 resolution=50 nf=500 nd=10 nr=500 textured?=true stochastic-emitter.ctl > method1-textured.out
grep flux method1-textured.out |cut -d, -f2- > method1-textured.dat

# Method 2: flat surface
meep method=2 resolution=50 nf=500 nd=10 stochastic-emitter.ctl > method2-flat.out
grep flux method2-flat.out |cut -d, -f2- > method2-flat.dat

# Method 2: textured surface
meep method=2 resolution=50 nf=500 nd=10 textured?=true stochastic-emitter.ctl > method2-textured.out
grep flux method2-textured.out |cut -d, -f2- > method2-textured.dat
Afterwards, the four data files containing all the flux spectra are used to plot the normalized flux for each method using Octave/Matlab.
nfreq = 500;
ntrial = 500;
ndipole = 10;

method1_f0 = dlmread('method1-flat.dat',',');
method1_f1 = dlmread('method1-textured.dat',',');

method1_freqs = method1_f0(1:nfreq,1);
method1_f0_flux = reshape(method1_f0(:,2),nfreq,ntrial);
method1_f1_flux = reshape(method1_f1(:,2),nfreq,ntrial);

method1_f0_mean = mean(method1_f0_flux,2);
method1_f1_mean = mean(method1_f1_flux,2);

method2_f0 = dlmread('method2-flat.dat',',');
method2_f1 = dlmread('method2-textured.dat',',');

method2_freqs = method2_f0(1:nfreq,1);
method2_f0_flux = reshape(method2_f0(:,2),nfreq,ndipole);
method2_f1_flux = reshape(method2_f1(:,2),nfreq,ndipole);

method2_f0_mean = mean(method2_f0_flux,2);
method2_f1_mean = mean(method2_f1_flux,2);

semilogy(method1_freqs,method1_f1_mean./method1_f0_mean,'b-');
hold on;
semilogy(method2_freqs,method2_f1_mean./method2_f0_mean,'r-');
xlabel('frequency');
ylabel('normalized flux');
legend('Method 1','Method 2');
See also the noisy-lorentzian-susceptibility feature in Meep. | https://meep.readthedocs.io/en/latest/Scheme_Tutorials/Custom_Source/ | 2020-07-02T16:38:50 | CC-MAIN-2020-29 | 1593655879532.0 | [array(['../../images/LED_layout.png', None], dtype=object)
array(['../../images/stochastic_emitter_trials.png', None], dtype=object)
array(['../../images/stochastic_emitter_normalized_flux_comparison.png',
None], dtype=object) ] | meep.readthedocs.io |
Support Home
"Learning Journey" Content
Memberships & Community
FAQs, Resources, and Docs
"Learning Journey" Content
About Better World Ed Content
How does Better World Ed content work?
How does this Global SEL align with our standards?
What's the age range for this content?
Why are the videos wordless?
What is "Integrated" Social Emotional Learning?
How will the curriculum evolve over time?
How are the videos designed?
What about skills like creativity, collaboration, grit, curiosity, and growth mindset?
What makes Better World Ed unique?
Why math? How?
Memberships & Community
Placing Online Orders and Purchase Orders (Billing and Payment)
What Learning Journey Stories are in each plan?
Does Better World Ed help educators find funding for plans?
How can I support this mission?
What kinds of schools do you all work with?
How much does Membership cost?
Coronavirus (COVID-19) Information and Discounts
Is there opportunity for Professional Development (PD)?
Is there a referral program?
Inviting your team onboard when you sign up for a multi-user plan
Batch Services¶
Dockstore tools and workflows can also be run through a number of online services that we're going to loosely call "commercial batch services." These services share the following characteristics: they spin up the underlying infrastructure and run commands, often in Docker containers, freeing you from running the batch computing software yourself. Although these services have no understanding of CWL, they can be used naively to run tools and workflows, or in a more sophisticated way to implement a CWL-compatible workflow engine.
AWS Batch¶
AWS Batch is built by Amazon Web Services. Look here for a tutorial on how to run a few sample tools via AWS.
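For a rough feel of the "naive" approach, here is a hedged sketch of running a Dockstore tool's Docker image as an AWS Batch job from the AWS CLI. The job definition and queue names, resource sizes, file paths, and the exact bamstats invocation are illustrative assumptions, not values from the tutorial; the linked tutorial remains the authoritative walkthrough.

# Register a job definition pointing at the Dockstore tool's image (illustrative values).
aws batch register-job-definition \
  --job-definition-name bamstats-example \
  --type container \
  --container-properties '{
    "image": "quay.io/collaboratory/dockstore-tool-bamstats:1.25-6_1.0",
    "vcpus": 4,
    "memory": 4096,
    "command": ["bash", "/usr/local/bin/bamstats", "4", "/data/NA12878.chrom20.ILLUMINA.bwa.CEU.low_coverage.20121211.bam"]
  }'

# Submit a job against an existing queue; input/output files would normally be staged through S3.
aws batch submit-job \
  --job-name bamstats-run-1 \
  --job-queue my-batch-queue \
  --job-definition bamstats-example

Note that AWS Batch itself knows nothing about CWL here; it simply runs the container command, which is why parameter handling and file staging are left to you in this naive mode.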
Azure Batch¶
Azure Batch and the associated batch-shipyard are built by Microsoft. Look here for a tutorial on how to run a few sample tools via Azure.
Google Pipelines¶
Google Pipelines and DataBiosphere dsub are also worth a look. In particular, both Google Genomics Pipelines and dsub provide tutorials on how to run (Dockstore!) tools if you have some knowledge of how to construct the command line for a tool yourself.
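As a flavor of the dsub route, the sketch below runs the same bamstats image on Google Cloud. The project, bucket, input file, and the command line inside the container are illustrative assumptions; see the dsub and Google Genomics Pipelines tutorials for tested examples.

# Run a Dockstore tool's image via dsub (illustrative values).
dsub \
  --provider google-v2 \
  --project my-gcp-project \
  --regions us-central1 \
  --logging gs://my-bucket/logs/ \
  --image quay.io/collaboratory/dockstore-tool-bamstats:1.25-6_1.0 \
  --input BAM=gs://my-bucket/NA12878.chrom20.ILLUMINA.bwa.CEU.low_coverage.20121211.bam \
  --output REPORT=gs://my-bucket/output/bamstats_report.zip \
  --command 'bash /usr/local/bin/bamstats 4 "${BAM}" && cp /home/ubuntu/bamstats_report.zip "${REPORT}"' \
  --wait

Here dsub localizes the --input file, runs the --command inside the container, and de-localizes the --output path, which is exactly the "construct the command line yourself" knowledge mentioned above.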
Consonance¶
Consonance pre-dates Dockstore and was the framework used to run much of the data analysis for the PCAWG project by running SeqWare workflows. Documentation for this incarnation of Consonance can be found at Working with PanCancer Data on AWS and ICGC on AWS.
Consonance has subsequently been updated to run Dockstore tools and has also been adopted at the UCSC Genomics Institute for this purpose. Also, using cwltool under the hood to provide CWL compatibility, Consonance provides DIY open-source support for provisioning AWS VMs and starting CWL tasks. We recommend having some knowledge of AWS EC2 before attempting this route.
Consonance's strategy is to provision either on-demand or spot-priced VMs depending on cost, and to delegate runs of CWL tools to these provisioned VMs, with one tool executing per VM. A Java-based web service and RabbitMQ provide for communication between workers and the launcher, while an Ansible playbook is used to set up workers for execution.
Usage¶
Look at the Consonance repo and, in particular, the Docker Compose-based setup instructions to set up the environment.
Once logged into the client, launch a tool with a command like the following:
consonance run --tool-dockstore-id quay.io/collaboratory/dockstore-tool-bamstats:1.25-6_1.0 --run-descriptor Dockstore.json --flavour <AWS instance-type> | https://docs.dockstore.org/en/final_doc_updates/advanced-topics/batch-services.html | 2020-07-02T16:41:25 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.dockstore.org |
DNAnexus¶
Dockstore integrates with the DNAnexus platform, allowing you to launch WDL-based workflows from Dockstore in DNAnexus. The mini tutorial below shows what that looks like from a user's point of view.
Exporting into DNAnexus¶
When browsing WDL workflows from within Dockstore, you will see a “Launch with DNAnexus” button on the right. The currently selected version of the workflow will be exported.
WDL workflow
If you are not logged into DNAnexus, you will be prompted to log in. Otherwise, or after logging in, you will be presented with the following screen.
WDL workflow import
You will need to pick a folder to export it into. You can either select a folder from an existing project, or you can create a new project.
Then hit the “Submit” button and continue from within the DNAnexus interface to configure and run your workflow.
Limitations¶
- While we support launching WDL workflows, tools listed in Dockstore are currently not supported.
- Only the WDL language is supported. | https://docs.dockstore.org/en/snapshot-doi-docs/launch-with/dnanexus-launch-with.html | 2020-07-02T16:11:50 | CC-MAIN-2020-29 | 1593655879532.0 | [array(['../_images/dnanexus_from_dockstore1.png', 'WDL workflow'],
dtype=object)
array(['../_images/dnanexus_from_dockstore2.png', 'WDL workflow import'],
dtype=object) ] | docs.dockstore.org |
Control playback behavior for seeking over custom ad markers
You can override the default behavior for how TVSDK seeks over ads when using custom ad markers.
By default, when a user seeks into or past ad sections that result from the placement of custom ad markers, TVSDK skips the ads. This might differ from the current playback behavior for standard ad breaks.
You can tell TVSDK to reposition the playhead to the beginning of the most recently skipped custom ad when the user seeks past one or more custom ads.
- Configure a Metadata instance with the DefaultMetadataKeys.METADATA_KEY_ADJUST_SEEK_ENABLED enumeration set to the string value "true" (not as a Boolean true).
Metadata metadata = new MetadataNode();
metadata.setValue(DefaultMetadataKeys.METADATA_KEY_ADJUST_SEEK_ENABLED.getValue(), "true");
- Create and configure a MediaResource instance, passing the additional configuration options to TimeRangeCollection.toMetadata. This method receives the additional configuration options through another generic metadata structure.
MediaResource mediaResource = MediaResource.createFromUrl("", timeRanges.toMetadata(metadata)); | https://docs.adobe.com/content/help/en/primetime/programming/tvsdk-1-4-for-android/configure-user-interface/ad-markers/android-1_4-ad-markers-control-seek.html | 2020-07-02T17:04:22 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.adobe.com |
One of the best ways to deliver short and instant updates to your customers is by using push notifications. With the integration between Shopify Flow and PushOwl, you can now set up automated push notifications to send updates to your customers about their orders.
Here’s how you can set up a push notification to confirm an order:
1. Click on ‘Shopify Flow’ from the ‘Apps’ page on your Shopify dashboard
2. Click on ‘Create Workflow’
3. Select ‘Add Trigger’ and click on ‘Order Created’
4. Select ‘Add Condition’
5. Under ‘If’, select the dropdown and click on ‘Can mark as paid’.
6. In the second dropdown, click on ‘is true’ from the options.
7. Then, select ‘Add Action’.
8. A sidebar will appear on the right. Scroll down to find ‘Send Push Notification’.
9. Fill in the fields as mentioned below, or customize them according to your needs.
- Customer Email: {{order.email}}
- Title for Push Notification: ‘Your order has been placed’
- Message for Push Notification (optional): ‘You will receive your order in 5 to 10 business days.’
- Primary Link: Select ‘Add Template Variable’ and click on ‘Url’ to take customers back to the store.
- Image (optional): (You can leave this field empty.)
10. Click ‘Save’ once you’re done.
11. Name the workflow ‘Order Created Push’ and click ‘Save’.
12. Click on the switch at the top right to enable the automation.
Here’s what the workflow will look like:
You can download the workflow here: | https://docs.pushowl.com/en/articles/2710114-automated-push-notification-for-order-placed-update | 2020-07-02T15:17:35 | CC-MAIN-2020-29 | 1593655879532.0 | [array(['https://downloads.intercomcdn.com/i/o/101735795/366d574c0ca8752094263e0b/screenshot4.2.1.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/101735839/df6095e8859f7793942da83c/screenshot4.2.2.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/101735874/81d69e237e89a805148571be/screenshot4.2.4.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/101735946/615104b98b0605398612c573/screenshot4.2.5.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/101735981/eb00841e2843f39d83e3b2a6/screenshot4.2.6.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/101736087/66cc004ca320c0cf03655f12/screenshot4.2.7.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/101736239/b46929c3f762080ca8524f1a/screenshot4.2.8.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/101736280/04f92d1ba06315e85352357a/screenshot4.2.9.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/101736309/04de24d5475bb551f5089293/screenshot4.2.10.png',
None], dtype=object) ] | docs.pushowl.com |
Settings
The Admin > Settings view enables Command Center administrators to configure settings for Command Center features.
History settings
Click Enable queries history data collection to turn on Command Center query history collection.
Command Center by default uses history data collected by the gpsmon and gpmmon agents that are installed and enabled when you create the gpperfmon database. These agents save history data to the "legacy" history tables in the gpperfmon database public schema. History and metrics collected by the Command Center ccagent agent can be saved to a different set of history tables, in the gpperfmon database gpmetrics schema.
When you enable queries history data collection:
- Command Center saves query and metrics history in the gpmetrics schema tables in the gpperfmon database for completed queries that execute for at least the number of seconds you specify. Plan node history is only saved for queries that run at least 10 seconds, or the number of seconds you specify if greater than 10. See gpmetrics Schema Reference for information about the history tables.
- The Command Center History view displays saved history and metrics from the gpmetrics tables instead of from the legacy gpperfmon tables.
- The gpperfmon gpmmon and gpsmon agents continue to populate the legacy gpperfmon tables.
If you disable history data collection, the Command Center History view again displays data from the legacy history tables. | https://gpcc.docs.pivotal.io/470/topics/ui/admin-settings.html | 2020-07-02T16:51:23 | CC-MAIN-2020-29 | 1593655879532.0 | [] | gpcc.docs.pivotal.io |
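Once queries history data collection is enabled, you can spot-check what is being saved by querying the gpmetrics schema directly. The table name below follows the gpmetrics Schema Reference, but treat it as an assumption and confirm the exact table and column names for your Command Center version.

# Connect to the gpperfmon database and peek at recently collected query history
# (gpcc_queries_history is assumed here based on the gpmetrics Schema Reference).
psql gpperfmon -c "SELECT * FROM gpmetrics.gpcc_queries_history LIMIT 10;"

If the table is empty, remember that only queries running at least the configured number of seconds are saved.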
New Lync app for Windows 8.1 now available
Many of you have installed the Windows update and are enjoying the new, cool features of Windows including a new Skype app. If you haven’t upgraded yet, you now have another great reason – the new update to Lync is available today and only for Windows 8.1. We kept a close eye on your reviews and comments in the Windows Store as well as other channels of feedback. Based on your feedback, we made a number of improvements in this update that we think you’ll like. Here’s an overview of what’s new:
Take control of shared screens or apps: During a Lync Meeting with app sharing or screen sharing, take control of the sharing started by someone else in the meeting.
Take control of app and screen sharing.
Lync side-by-side with Excel
Answer audio and video calls on the lock screen: Answer audio and video calls quickly when your device is locked, without having to unlock it.
Answer calls on lock screen
Mute and Control Call Volume from inside Lync: Mute the speakers or control the volume of your call from the conversation window without affecting your Windows speaker volume.
Mute or control the volume of your conversation
In-app contact search: Find your contacts faster with a simple, in-app contact search.
In-app search
Sign in reliably: This update includes a number of improvements to the sign-in experience so that you can more easily connect and stay connected.
To learn more about all that the app can do, look through our online Help topics.
To download the update, you need to first upgrade your device to Windows 8.1. Once you have Windows 8.1, Lync will upgrade automatically within 24 hours unless you have turned off Automatically update my apps in the Windows Store. To get the update sooner, you can check for app updates manually in the Windows Store.
We are proud to bring you this new version of Lync, and are excited to continue adding features, improving performance, and making the app better. Please leave us feedback and give us a new or updated rating in the Windows Store, or leave feedback in the blog comments so that we know what you would like to see in future updates.
Have fun using Lync! | https://docs.microsoft.com/en-us/archive/blogs/lync/new-lync-app-for-windows-8-1-now-available | 2020-07-02T16:49:06 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.microsoft.com |
If you want to receive an email notification every time an instructor signs up or submits a course for review, you can search for the "Notification" plugin from your admin dashboard and install it.
You can set custom triggers for everything you need. If you need assistance, feel free to send an email to support. | https://docs.themeum.com/tutor-lms/tutorials/custom-notification-email/ | 2020-07-02T15:37:29 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.themeum.com |