Columns: content (string, 0–557k), url (string, 16–1.78k), timestamp (timestamp[ms]), dump (string, 9–15), segment (string, 13–17), image_urls (string, 2–55.5k), netloc (string, 7–77)
The QMK Configurator is an online graphical user interface that generates QMK Firmware hex files. Watch the Video Tutorial; many people find that this is enough information to start programming their own keyboard. The QMK Configurator works best with Chrome or Firefox. Note: Files from other tools such as Keyboard Layout Editor (KLE) or kbfirmware are not compatible with QMK Configurator. Do not load them and do not import them; QMK Configurator is a DIFFERENT tool. Please refer to QMK Configurator: Step by Step.
https://beta.docs.qmk.fm/configurator/newbs_building_firmware_configurator
2020-03-28T21:16:32
CC-MAIN-2020-16
1585370493120.15
[]
beta.docs.qmk.fm
Headers: X_CLIENT_TOKEN: your_API_KEY Content-Type: application/json Body: The body of the request must contain the invalidation URL in the form /v7/original_image_url?operation&filter. To invalidate an image, use either /v7/sample.li/boat.jpg?width=300 or v7/sample.li/boat.jpg?width=300. Invalidate a single image: { "invalidation": { "scope": "url", "url": "/v7/sample.li/boat.jpg?width=300" } } Invalidate a single image and all its resizes: { "invalidation": { "scope": "original", "url": "sample.li/boat.jpg" } } Invalidate multiple images: { "invalidation": { "scope": "urls", "urls": ["/v7/sample.li/paris.jpg?width=400", "/v7/sample.li/flat.jpg?width=400", ... ] } }
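For illustration, here is a minimal sketch of such an invalidation request using Python's requests library. The endpoint URL and token value below are placeholders (the excerpt above does not state the actual API host), so substitute the values from your Cloudimage account.

import requests

# Placeholder endpoint -- replace with the real Cloudimage invalidation API URL.
INVALIDATION_ENDPOINT = "https://example.invalid/invalidation"

headers = {
    "X_CLIENT_TOKEN": "your_API_KEY",   # your Cloudimage API key
    "Content-Type": "application/json",
}

# Invalidate a single resized image, as in the first example above.
payload = {"invalidation": {"scope": "url", "url": "/v7/sample.li/boat.jpg?width=300"}}

response = requests.post(INVALIDATION_ENDPOINT, json=payload, headers=headers)
print(response.status_code, response.text)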
https://docs.cloudimage.io/go/cloudimage-documentation-v7/en/content-delivery-network-cdn/invalidation-api
2020-03-28T21:51:55
CC-MAIN-2020-16
1585370493120.15
[]
docs.cloudimage.io
Accessories for Clotheslines FAQ for clothesline accessory items such as sockets, basket holders and pegs etc - Ground Sleeve for Clothesline - Will the Clothesline Basket Holder fit my Hills Hoist clothesline? - Does A Folding Rotary Clothesline Come With A Ground Socket Or Tube? - What Is The Difference Between Plated And Standard Ground Mount Kits? - I want to install a Versaline clothesline from a wall to a post, however one bracket needs to be mounted at right angles to the wall. How can this be achieved? - I have two posts that I want to install my Versaline Slimline Clothesline on. Do I require the 90 Degree Mounting Plate? - Delivery to Post Office Box Address - Shipping to PO Box Addresses
https://docs.lifestyleclotheslines.com.au/category/72-accessories-for-clotheslines
2020-03-28T21:46:45
CC-MAIN-2020-16
1585370493120.15
[]
docs.lifestyleclotheslines.com.au
Forwarders Important: You set up inputs on a forwarder the same way you set them up on a Splunk indexer. The only difference is that the forwarder does not include Splunk Web, so you must configure inputs with either the CLI or inputs.conf. Before setting up the inputs, you need to deploy and configure the forwarder, as this recipe describes. You can use Splunk forwarders to send data to indexers, called receivers. This is usually the preferred way to get remote data into an indexer. To use forwarders, specifically universal forwarders, for getting remote data, you need to set up a forwarder-receiver topology, as well as configure the data inputs: 1. Install the Splunk instances that will serve as receivers. See the Installation Manual for details. 2. Use Splunk Web or the CLI to enable receiving on the instances designated as receivers. See "Enable a receiver" in the Forwarding Data manual. 3. Install, configure, and deploy the forwarders. Depending on your forwarding needs, there are a number of best practices deployment scenarios. See "Universal forwarder deployment overview" in the Forwarding Data manual for details. Some of these scenarios allow you to configure the forwarder during the installation process. 4. Specify data inputs for each universal forwarder, if you have not already done so during installation. You do this the same way you would for any Splunk instance. As a starting point, see "What Splunk can index" in this manual for guidance on configuring the different types of data inputs. Note: Since the universal forwarder does not include Splunk Web, you must configure inputs through either the CLI or inputs.conf; you cannot configure them with Splunk Web. 5. Specify the forwarders' output configurations, if you have not already done so during installation. You do this through the CLI or by editing the outputs.conf file. You get the greatest flexibility by editing outputs.conf. For details, see the Forwarding Data manual, including "Configure forwarders with outputs.conf". 6. Test the results to confirm that forwarding, along with any configured behaviors like load balancing or filtering, is occurring as expected. Go to the receiver to search the resulting data. For more information on forwarders, see the Forwarding Data manual, starting with "About forwarding and receiving".
https://docs.splunk.com/Documentation/Splunk/6.1.14/Data/Useforwardingagentstogetdata
2020-03-28T22:41:44
CC-MAIN-2020-16
1585370493120.15
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
We offer hands-on workshops on the Open Targets Platform as live webinars or face-to-face workshops. Check our blog post for more details. The upcoming and past training sessions can be found on the Outreach and tutorial page. If you want to arrange a training session at your institution (academia or industry), please email us and we will be happy to come to you. In the meantime, you may want to check some of the recorded Open Targets webinars, quick demos and tutorial videos on YouTube.
https://docs.targetvalidation.org/outreach/training-sessions
2020-03-28T19:53:54
CC-MAIN-2020-16
1585370493120.15
[]
docs.targetvalidation.org
Referral Program Refer Cloudimage and get money (or additional Cloudimage allowances)! A Cloudimage customer or lead can refer Cloudimage to their network by obtaining a link leading to the registration form with a pre-filled voucher code. The voucher is either the referrer's Cloudimage token OR a manually generated token for non-Cloudimage customers. The referee obtains: - 10% discount OR - an additional 10% GB on any plan (FREE included) And the referrer obtains: - the same discount on their current plan OR - an additional 10% GB on their plan OR - an Amazon voucher (relevant for non-Cloudimage customers) TODOS: 1) adapt the registration form to take into account the voucher code in the link or added manually 2) save into DB 3) save in SFDC as cLeadSourceDetail 4) create a monthly report in SFDC to pull leads / accounts coming from referral 5) add an option under Plans for the referee to claim the reward (10% discount OR 10% additional GB) 6) Cloudimage admin: section about the referral program 7) Cloudimage doc: present the referral program => Mantas 8) support rewards (see above) in Cloudimage DB > s_projects
https://docs.cloudimage.io/go/cloudimage-documentation-v7/en/referral-program
2020-03-28T20:22:01
CC-MAIN-2020-16
1585370493120.15
[]
docs.cloudimage.io
converts an XML or XER Primavera P6 file to JSON gantt.importFromPrimaveraP6({ data: file, taskProperties: ["Notes", "Name"], callback: function (project) { if (project) { gantt.clearAll(); if (project.config.duration_unit) { gantt.config.duration_unit = project.config.duration_unit; } gantt.parse(project.data); } } }); This functionality is currently in the Beta stage. The information given in this article may be substantially changed before the feature enters its stable stage. The method requires HTML5 File API support. This method is defined in the export extension, so you need to include it on the page: <script src=""></script> Read the details in the Export and Import from MS Project article. The method takes as a parameter an object with configuration properties of the imported file. The response will contain a JSON of the following structure: { data: {}, config: {}, resources: [], worktime: {} }
https://docs.dhtmlx.com/gantt/api__gantt_importfromprimaverap6.html
2020-03-28T20:21:06
CC-MAIN-2020-16
1585370493120.15
[]
docs.dhtmlx.com
sets the working time for the Gantt chart gantt.config.work_time = true; //changes the working time of working days from [8,17] to [9,18] gantt.setWorkTime({ hours:[9,18] }); //makes all Fridays day-offs gantt.setWorkTime({ day:5, hours:false }); //changes the working time for Fridays and Saturdays from [8,17] to [8,12] gantt.setWorkTime({day : 5, hours : [8,12]}); gantt.setWorkTime({day : 6, hours : [8,12]}); //makes March 31 a working day gantt.setWorkTime({date : new Date(2013, 2, 31)}); //makes January 1 a day-off gantt.setWorkTime({date:new Date(2013,0,1), hours:false}) //sets working time as 2 periods: 8:00-12:00, 13:00-17:00 (to keep time for lunch) gantt.setWorkTime({hours : [8, 12, 13, 17]}) The default working time is the following: The method is used to alter the default settings. Note that each subsequent call of the method for the same date will overwrite the previous working-time rule: gantt.setWorkTime({hours:[8,12]}); gantt.setWorkTime({hours:[13,17]}); //the result of these commands will be the working time 13:00-17:00 //and not a mix of both commands The configuration object can contain the following properties:
https://docs.dhtmlx.com/gantt/api__gantt_setworktime.html
2020-03-28T20:24:33
CC-MAIN-2020-16
1585370493120.15
[]
docs.dhtmlx.com
Check the Health of a ksqlDB Server Check a ksqlDB Server from the ksqlDB CLI¶ Check the streams, tables, and queries on the ksqlDB Server that you're connected to by using the DESCRIBE EXTENDED and EXPLAIN statements in the ksqlDB CLI. - Run SHOW STREAMS or SHOW TABLES, then run DESCRIBE EXTENDED <stream|table>. - Run SHOW QUERIES, then run EXPLAIN <query-name>. Check a ksqlDB Server running in a native deployment¶ If you installed ksqlDB Server by using a package manager, like a DEB or RPM, or from an archive, like a TAR or ZIP file, you can check the health of your ksqlDB Server instances by using shell commands. Check the ksqlDB Server process status¶ Use the ps command to check whether the ksqlDB Server process is running. Inspect runtime stats¶ You can check runtime stats for the ksqlDB Server that you're connected to by using the ksql-print-metrics command-line utility. On a server host, run ksql-print-metrics. This tool connects to a ksqlDB Server that's running on localhost and collects JMX metrics from the server process. Metrics include the number of messages, the total throughput, the throughput distribution, and the error rate. For more information, see JMX metrics. Check a ksqlDB Server by using the REST API¶ The ksqlDB REST API supports a "server info" request, which you access with a URL like http://<ksqldb-server-host>/info. The /info endpoint returns the ksqlDB Server version, the Apache Kafka® cluster ID, and the service ID of the ksqlDB Server. Also, the ksqlDB REST API supports a basic health check endpoint at /healthcheck. Important: This approach doesn't work for non-interactive, or headless, deployments of ksqlDB Server, because a headless deployment doesn't have a REST API server. Instead, check the JMX metrics port. For more information, see Introspect server status. Check a ksqlDB Server running in a Docker container¶ Check the JMX metrics port¶ In addition to the previous health checks, you can query the Java Management Extensions (JMX) port on a host that runs ksqlDB Server. This is useful when you need to check a headless ksqlDB Server that's running natively or in a Docker container, because headless deployments of ksqlDB Server don't have a REST server that you can query for health. Instead, you can probe the JMX port for liveness. A JMX probe is the most reliable way to determine readiness of a headless deployment. Note: JMX only indicates that the JVM is up and responsive. This test is similar to confirming that the ksqlDB process is running, but a successful response doesn't necessarily mean that the ksqlDB service is fully operational. To get better exposure, you can monitor the nodes from Confluent Control Center or JMX. The following command probes the JMX port by using the Netcat utility. nc -z <ksql-node>:1099 An exit code of 0 for an open port tells you that the container and ksqlDB JVM are running. This confirmation has a level of confidence that's similar to the REST health check. The general responsiveness on the port should be sufficient as a high-level health check. For a list of the available metrics you can collect, see JMX Metrics.
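As an illustration of the REST API health check described above, here is a minimal Python sketch. It assumes the ksqlDB REST API listens on its default port 8088 and that the server runs locally; adjust the host and port for your deployment.

import requests

# Assumed host/port -- the ksqlDB REST API commonly listens on 8088.
KSQLDB_SERVER = "http://localhost:8088"

# /info returns the server version, Kafka cluster ID, and service ID.
info = requests.get(f"{KSQLDB_SERVER}/info").json()
print(info)

# /healthcheck reports the overall health of the server.
health = requests.get(f"{KSQLDB_SERVER}/healthcheck").json()
print(health)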
https://docs.ksqldb.io/en/latest/operate-and-deploy/installation/check-ksqldb-server-health/
2020-03-28T20:56:35
CC-MAIN-2020-16
1585370493120.15
[]
docs.ksqldb.io
1. What Azure regions does Qubole support?¶ Note The information on this page may become out of date between releases; if you have access to the current version of the QDS UI, use the information there instead: go to the + New or Edit dialog in the Clusters section, click the Advanced Configuration tab and pull down the drop-down selector for the Location field. For more information about Azure regions, see the Microsoft documentation. Qubole currently supports the following regions: - eastus (East US) - eastus2 (East US 2) - centralus (Central US) - southcentralus (South Central US) - westus (West US) - centralindia (Central India) - southindia (South India) - westindia (West India) - westeurope (West Europe) - southeastasia (Southeast Asia)
https://docs.qubole.com/en/latest/faqs/azure-questions/azure-regions-supported-qubole.html
2020-03-28T21:17:54
CC-MAIN-2020-16
1585370493120.15
[]
docs.qubole.com
Configuration¶ Configuring doctrine-dbal for TYPO3 CMS is all about specifying the single database endpoints and handing over connection credentials. The framework supports the parallel usage of multiple database connections; a specific connection is mapped depending on its table name. The table space can be seen as a transparent layer that determines which specific connection is chosen for a query to a single table or a group of tables: it allows “swapping out” single tables from the Default connection to point them to a different database endpoint. As with other central configuration options, the database endpoint and mapping configuration happens within typo3conf/LocalConfiguration.php and ends up in $GLOBALS['TYPO3_CONF_VARS'] after core bootstrap. The specific sub-array is $GLOBALS['TYPO3_CONF_VARS']['DB']. A typical, basic example using only the Default connection with a single database endpoint: // LocalConfiguration.php // [...] 'DB' => [ 'Connections' => [ 'Default' => [ 'charset' => 'utf8', 'dbname' => 'theDatabaseName', 'driver' => 'mysqli', 'host' => 'theHost', 'password' => 'theConnectionPassword', 'port' => 3306, 'user' => 'theUser', ], ], ], // [...] Remarks: - The Default connection must be configured; this can not be left out or renamed. - For mysqli, if the host is set to localhost and the default PHP options in this area are not changed, the connection will be socket based. This saves a little overhead. To force a TCP/IP based connection even for localhost, the IPv4 or IPv6 address 127.0.0.1 or ::1/128 respectively must be used as the host value. - The connection options are handed over to doctrine-dbal without much manipulation from the TYPO3 CMS side. Please refer to the doctrine connection docs for a full overview of settings. - If the charset option is not specified, it defaults to utf8. - The option wrapperClass is used by TYPO3 to insert the extended Connection class TYPO3\CMS\Core\Database\Connection as the main facade around doctrine-dbal. A slightly more complex example with two connections, mapping the sys_log table to a different endpoint: // LocalConfiguration.php // [...] 'DB' => [ 'Connections' => [ 'Default' => [ // ... as above ], 'Syslog' => [ 'charset' => 'utf8', 'dbname' => 'theSyslogDatabaseName', 'driver' => 'mysqli', 'host' => 'theSyslogHost', 'password' => 'theSyslogPassword', 'port' => 3306, 'user' => 'theSyslogUser', ], ], 'TableMapping' => [ 'sys_log' => 'Syslog', ], ], // [...] Remarks: - The array key Syslog is just a name; it can be different, but it is good practice to give it a useful, speaking name. - It is possible to map multiple tables to a different endpoint by adding further table name / connection name pairs to TableMapping. - Mind that this “connection per table” approach is limited: if, in the above example, a join query that spans different connections is fired, an exception is raised. It is up to the administrator to group affected tables onto the same connection in those cases, or a developer should implement some fallback logic to suppress the join(). Attention: Connections to the databases postgres, maria and mysql are actively tested. However, mssql is currently not actively tested. Furthermore, the TYPO3 CMS installer supports only a single mysql or mariadb connection at the moment, and the connection details cannot be properly edited within the All configuration section of the Install Tool.
https://docs.typo3.org/m/typo3/reference-coreapi/master/en-us/ApiOverview/Database/Configuration/Index.html
2020-03-28T21:31:41
CC-MAIN-2020-16
1585370493120.15
[]
docs.typo3.org
Service Provider¶ Overview OpenLMI-Service is a CIM provider for managing Linux system services (using the systemd D-Bus interface). Clients The API can be accessed by any WBEM-capable client. OpenLMI already provides: - Python module lmi.scripts.service, part of OpenLMI scripts. - Command line tool: LMI metacommand, with the ‘service’ subcommand. Features - Enumerate system services and get their status. - Start/stop/restart/... a service and enable/disable a service. - Event-based monitoring of service status (emit an indication event upon service property change). Examples For examples of how to use the OpenLMI-Service provider remotely from LMIShell, see the usage section.
https://openlmi.readthedocs.io/en/latest/openlmi-providers/service-dbus/index.html
2020-03-28T20:59:55
CC-MAIN-2020-16
1585370493120.15
[]
openlmi.readthedocs.io
Journald command line reference¶ lmi journald is a command for LMI metacommand which allows you to query and watch system logs on a remote host with the OpenLMI journald provider installed. It can also log custom messages. journald¶ Journald message log management. Usage: lmi journald list [(--reverse | --tail)] lmi journald logger <message> lmi journald watch Commands: - list - Lists messages logged in the journal - logger - Logs a new message in the journal - watch - Watch for newly logged messages Options: - --reverse - List messages from newest to oldest - --tail - List only the last 50 messages
https://openlmi.readthedocs.io/en/latest/openlmi-tools/scripts/commands/journald/cmdline.html
2020-03-28T21:23:51
CC-MAIN-2020-16
1585370493120.15
[]
openlmi.readthedocs.io
HttpFS Authentication This section describes how to configure HttpFS in CDH 6 with Kerberos security on a Hadoop cluster. Using curl to access a URL Protected by Kerberos HTTP SPNEGO To configure curl to access a URL protected by Kerberos HTTP SPNEGO: - Log in to the KDC using kinit. $ kinit Please enter the password for tucu@LOCALHOST: - Use curl to fetch the protected URL: $ curl -.
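The curl command above is truncated in this excerpt. As a rough equivalent, here is a minimal Python sketch that fetches a SPNEGO-protected URL using the third-party requests and requests-kerberos packages (these packages, and the URL below, are assumptions of this example, not part of the Cloudera documentation); it relies on the Kerberos ticket obtained with kinit.

import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL

# Hypothetical HttpFS URL -- replace host, port and path with your own.
url = "http://httpfs-host.example.com:14000/webhdfs/v1/?op=LISTSTATUS"

# Negotiate (SPNEGO) authentication using the ticket cache created by kinit.
response = requests.get(url, auth=HTTPKerberosAuth(mutual_authentication=OPTIONAL))
print(response.status_code)
print(response.json())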
https://docs.cloudera.com/documentation/enterprise/6/6.2/topics/cdh_sg_httpfs_security.html
2020-03-28T22:07:17
CC-MAIN-2020-16
1585370493120.15
[]
docs.cloudera.com
Responsive Images made easy Responsive images adapt the image size according to the screen size of the end user, thereby allowing your website or mobile app to load faster across various screen sizes. For example, on an iPhone, Cloudimage will deliver smaller images than it would on a 15" computer screen, thus accelerating the page loading time. The HTML <picture> element with the underlying <source> elements allows you to specify which image size will be delivered to which screen size. Examples are below. Without responsive design The following HTML code delivers the same image size (800x180px) to all screen sizes (15" laptop monitor, iPad, smartphone, ...): <img src="" /> This image size might be optimized for 13" screens but is too large for 4" smartphone screens, where it would be better delivered at 400x90px. To achieve this, HTML 5 introduced the <picture> element to give developers more flexibility with image resources. With responsive design The following code must replace the <img> element above to make your website or app responsive. It also takes into account high-resolution screens such as Retina: 1.5x, 2x, 3x. <picture> <source media="(max-width:576px)" > <source media="(max-width:768px)" > <source media="(max-width:992px)" > <source media="(max-width:1200px)" > <source media="(max-width:1920px)" > </picture>
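The srcset attributes in the snippet above were stripped during extraction, so purely for illustration, here is a small Python sketch that generates a <picture> element for the breakpoints listed above. The URL pattern is a made-up placeholder, not the actual Cloudimage URL syntax.

# Generate <source> tags for a set of max-width breakpoints (pixels).
# The image URL template below is a placeholder -- substitute the real
# resizing URLs of your image CDN.
BREAKPOINTS = [576, 768, 992, 1200, 1920]
URL_TEMPLATE = "https://cdn.example.com/sample.jpg?width={width}"

def picture_markup(breakpoints, url_template):
    sources = []
    for bp in breakpoints:
        url = url_template.format(width=bp)
        sources.append(f'  <source media="(max-width:{bp}px)" srcset="{url}">')
    # Fallback <img> for browsers without <picture> support.
    fallback = f'  <img src="{url_template.format(width=breakpoints[-1])}" alt="">'
    return "<picture>\n" + "\n".join(sources) + "\n" + fallback + "\n</picture>"

print(picture_markup(BREAKPOINTS, URL_TEMPLATE))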
https://docs.cloudimage.io/go/cloudimage-documentation/en/responsive-images
2020-03-28T21:03:15
CC-MAIN-2020-16
1585370493120.15
[]
docs.cloudimage.io
GitBook supports the internationalization of a Space or a Variant, enabling public documentation user interface elements to be translated. Moreover, adding a language to a variant lets you create multi-language content (e.g. a Chinese version of your API, a French how-to guide, ...). The currently supported languages are: English, French, Spanish, Chinese (simplified) and Japanese. By default your space will have the English language selected. To customize it you can access the language options in the Customization panel. This will also set the fallback language for all of your variants. You can set a language for each of your variants, enabling your language-specific documentation to have its user interface translated. When creating a variant you can specify a language or leave the default value, which will fall back to the space's default language. You can update a variant's language at any time by accessing its options.
https://docs.gitbook.com/features/internationalization
2020-03-28T21:30:34
CC-MAIN-2020-16
1585370493120.15
[]
docs.gitbook.com
Terms & Conditions Last Updated: Jan 30, 2018 The Security Event Logbook will be available to the manager of the premises/site. These must be signed on a daily basis by The Client to ensure that all is in order. The Security Officer(s) on duty will then countersign the log. The Client is required to provide a base room for the Security Officer/Guard which is equipped with the basic amenities, namely shelter from the elements, lighting, heating and bathroom facilities. Where necessary, these are to be supplied by The Company at an additional cost. Security Officers will observe and comply with any Health and Safety documents which are in force at any site or premises where they are deployed. Please refer to the Health and Safety document. The Company invests considerable sums in training its staff to the highest standard. Consequently, where a member of staff is recruited directly by The Client, either whilst the member of staff is deployed to The Client or within six (6) months of leaving the employ of i3s, a sum equal to one month's charge will be payable by The Client to The Company. Security Officers on duty at these times are paid at double (2x) the standard rate; The Client is therefore charged double (2x) for these times. Services provided on 25, 26 & 01 January each year will be charged at double (2x) the standard rate, in addition to all bank and public holidays. Security Officer(s) will have a mobile phone on site, to be used for making "check calls" to the Control Centre and in cases of emergency only. The Company can supply additional communication equipment at an additional cost, e.g. radios. Where required, The Company can provide patrol monitoring equipment (Active-Guard real-time patrol monitoring system) for the Client at specially subsidized rates. Unless a separate arrangement is agreed in writing, invoices are prepared four (4) weeks in arrears. Payment should then be made within twenty (20) days of the invoice date. The Company reserves the right to charge interest at the prevailing base rate on all overdue amounts, including any further amounts that become payable under the agreement. The interest will be calculated on a daily basis from the date the amount falls due until the full amount is received. The Company may also charge for any expenses actually incurred in obtaining payment of any sum overdue. The Client undertakes to ensure, to the best of its endeavors, that The Company's access to the premises shall not be impeded; in circumstances where such access is impeded, The Client shall release The Company from its duties under this agreement. This agreement shall be terminable by either party giving to the other fifteen (15) days' notice in writing. Otherwise, either party shall only be entitled to terminate this agreement forthwith in the event that; Any change to the schedule or service(s) being provided by The Company must be notified to the Head Office in writing, giving at least twenty-four (24) hours' notice. Otherwise, The Company reserves the right to claim any costs incurred by interruptions.
http://docs.i3ssecurity.com/tnc.html
2020-03-28T21:39:11
CC-MAIN-2020-16
1585370493120.15
[]
docs.i3ssecurity.com
method recv Documentation for method recv, assembled from the following types: role IO::Socket (from IO::Socket).
https://docs-stage.perl6.org/routine/recv
2020-03-28T21:43:10
CC-MAIN-2020-16
1585370493120.15
[]
docs-stage.perl6.org
DC/OS 1.13.6 was released on 7.6. Fixed and Improved Issues in DC/OS 1.13.6: - Fixed an issue where, after upgrading to the latest version of macOS Catalina, DC/OS certificates were identified as invalid. (DCOS-60264, DCOS-60205, COPS-5417) - Fixed an issue where, if a UCR container was being destroyed while the container was in the provisioning state, the destroy would wait for the provisioner to finish before starting to destroy the container. This could cause the container to get stuck in the destroying state and, more seriously, could cause subsequent containers created from the same image to get stuck in the provisioning state. Fixed by adding support for destroying a container in the provisioning state, so that subsequent containers created from the same image are not affected. (COPS-5285, MESOS-9964) - Fixed an issue where Marathon begins crash-looping after receiving a very long error message from a task’s fetcher. (COPS-5365) - Improved the diagnosis of problems with pods. (DCOS_OSS-5616) - Fixed an issue occurring if a new secret was added and a secret with the combination of “secret” and the new index as a key already existed. (COPS-4928)
https://docs.d2iq.com/mesosphere/dcos/1.13/release-notes/1.13.6/
2020-03-28T20:10:17
CC-MAIN-2020-16
1585370493120.15
[]
docs.d2iq.com
Blog moved to burling.co.nz When I first started at Microsoft I had my own blog on my own site. However, after a few months I started blogging here anyway and removed the blog from my own site. Recently I've dusted off the old site and started afresh with a nice new blogging engine that I really like and with a nice design that I also like. If you are interested in following my adventures, please subscribe to the main feed at. If you would just like to get the work related blog posts, subscribe to the feed at - this is my work specific blog. If you are already subscribed to the work feed - you'll not have to do anything - nor will you be reading this through a feed reader. Hopefully the main feed will be more interesting to you, but you have the choice.
https://docs.microsoft.com/en-us/archive/blogs/darrylburling/blog-moved-to-burling-co-nz
2020-03-28T22:20:02
CC-MAIN-2020-16
1585370493120.15
[]
docs.microsoft.com
Azure Notebooks new UI and Project Features – Get your academic course content showcased. Refreshed Azure Notebooks User Interface The Azure Notebooks team last week launched a refreshed user interface. Some of the new enhancements included the following: 1. Libraries are now more appropriately named projects. They function just like the libraries you are used to, but we’ve made it easier to create and maintain your content. A great example of a Project is the AzureML Project. Azure Notebooks integration with Azure Machine Learning workspaces The team has integrated the Azure Machine Learning service with Azure Notebooks by making the Azure Machine Learning SDK available to all code running in the Python 3.6 kernel. This, combined with Azure Subscription support, makes it easy for data scientists to train and optimize machine learning models using the rich set of compute resources available on Azure, and deploy them to Azure for inferencing. Here is the example ML Project: Welcome to the Azure Machine Learning service through Azure Notebooks. First try the example 01.run-experiment to connect to your workspace and run a basic experiment using the Azure Machine Learning Python SDK, and then 02.deploy-web-service to deploy a model as a web service. Then move to the more comprehensive examples in the tutorials folder, or explore different features in the how-to-use-azureml folder. See also: Important: You must select Python 3.6 as the kernel for your notebooks to use the SDK. 2. The site now supports notifications that we will use to keep you informed of new features and additions to Azure Notebooks. 3. The site homepage is now more alive with featured and popular content from the Jupyter community. Showcase academic and institutional use of Azure Notebooks Have a cool notebook you use in teaching that you want to share and have showcased? Upload it and tweet it with #AzureNotebooks, or send us an email at [email protected].
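To illustrate the workflow that the 01.run-experiment sample walks through, here is a minimal sketch using the Azure Machine Learning Python SDK (azureml-core, SDK v1). The workspace config file and the experiment name are assumptions of this example, not taken from the post above.

from azureml.core import Workspace, Experiment

# Load the workspace from a config.json downloaded from the Azure portal
# (assumed to be present in the working directory).
ws = Workspace.from_config()

# Create (or reuse) an experiment and log a simple metric.
exp = Experiment(workspace=ws, name="sample-experiment")
run = exp.start_logging()
run.log("accuracy", 0.9)   # illustrative metric value
run.complete()

print("Run id:", run.id)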
https://docs.microsoft.com/en-us/archive/blogs/uk_faculty_connection/azure-notebooks-new-ui-and-project-features-get-you-academic-content-showcased
2020-03-28T22:28:04
CC-MAIN-2020-16
1585370493120.15
[]
docs.microsoft.com
CPU-frequency Scaling with Slackware Overview CPU-frequency scaling is done by the kernel. Slackware comes with all requirements necessary to use this feature. Requirements The kernel module which provides CPU-frequency scaling is powernow_k8 for AMD, for Intel ???. Slackware comes with the cpufrequtils package, which helps us configure the frequency scaling in an appropriate manner. Note that on a laptop the appropriate kernel module for CPU-frequency scaling is loaded automatically. If your computer is not a laptop, you'll have to load the module via modprobe. The CPU-frequency scaling can be done automatically by a so-called governor, or manually. The governors available are “conservative”, “ondemand”, “userspace” and “performance”. When you build your own kernel, you can choose one of the governors, or “userspace”, which allows for configuring the CPU frequency in userspace. Functions of the governors: the governor “performance” always sets the highest frequency available for the processor. “ondemand” switches dynamically between the available frequencies depending on the system load. “conservative” is similar to “ondemand”, but tries to always use the lowest frequency possible. “powersave” sets the frequency to the lowest possible. When you configure your kernel for “userspace” you can configure the different governors in userspace (which is most convenient). The cpufrequtils package provides two commands: with cpufreq-info you can find out which governor is configured, as well as other information, for example about the available frequencies of your processor. The cpufreq-set command can be used to set the appropriate governor. Sources
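As a small illustration of the governor concept described above, the sketch below reads the current and available governors through the cpufreq sysfs interface. It assumes the standard sysfs path exposed by the kernel's cpufreq subsystem; root is only needed to change the governor, not to read it.

from pathlib import Path

# Standard sysfs location of the cpufreq settings for the first CPU core.
CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(name):
    # Each cpufreq attribute is a small text file.
    return (CPUFREQ / name).read_text().strip()

print("current governor:   ", read("scaling_governor"))
print("available governors:", read("scaling_available_governors"))
print("current frequency:  ", read("scaling_cur_freq"), "kHz")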
https://docs.slackware.com/playground:testing-cpufreqhowto
2020-03-28T21:06:04
CC-MAIN-2020-16
1585370493120.15
[]
docs.slackware.com
I have changed Dovecot's MySQL connection string. The former was: host=localhost dbname=mailserver user=mailuser pass={your mailuser password} This produces the following error in /var/log/maillog: Apr 1 17:54:23 darkstar dovecot: auth: Fatal: mysql: Unknown connect string: pass The comments in /etc/dovecot/dovecot-sql.conf.ext clearly state: # Database connection string. This is driver-specific setting. ... # mysql: # Basic options emulate PostgreSQL option names: # host, port, user, password, dbname This is also confirmed by Dovecot's documentation: #connect = host=localhost dbname=mails user=admin password=pass I have also followed the advice on double quotes for special characters in the password. So, the new connection string is: "host=localhost dbname=mailserver user=mailuser password={your mailuser password}" This change refers to Dovecot v2.2.13. By the way, I would like to thank astrogeek for this article. It's quite precise and clear! — Deny Dias 2015/04/01 18:36
https://docs.slackware.com/talk:howtos:network_services:postfix_dovecot_mysql:dovecot?rev=1427924421&mddo=print
2020-03-28T21:33:46
CC-MAIN-2020-16
1585370493120.15
[array(['https://docs.slackware.com/lib/plugins/bookcreator/images/add.png', None], dtype=object) array(['https://docs.slackware.com/lib/plugins/bookcreator/images/del.png', None], dtype=object) ]
docs.slackware.com
Administrator’s Manual¶ Describes how to manage the extension from an administrator’s point of view. That relates to Page/User TSconfig, permissions, configuration etc., which administrator-level users have access to. Language should be non/semi-technical, explaining, using small examples. Target group: Administrators, Developers Installation¶ There are two ways to properly install the extension. Using git to clone the repository is deprecated and most likely will not work any more in the near future. 1. Composer installation¶ In case you use Composer to manage dependencies of your TYPO3 project, you can just issue the following Composer command in your project root directory: composer require helhum/typo3-console The typo3cms binary will be installed by Composer in the specified bin-dir (by default vendor/bin). In case you are unsure how to create a Composer-based TYPO3 project, you can check out this TYPO3 distribution, which already provides TYPO3 Console integration. 2. Installation with Extension Manager¶ For the extension to work, it must be installed in the typo3conf/ext/ directory, not in any other possible extension location. This is the default location when downloading it from TER with the Extension Manager. The typo3cms script will be copied to your TYPO3 root directory when you activate it. When you symlink the typo3cms script to a location of your preference, TYPO3 Console will work even when it is not marked as active in the Extension Manager. Shell auto complete¶ You can get shell auto completion by using the great autocomplete package. Install the package and make the binary available in your path. Please read the installation instructions of this package on how to do that. To temporarily activate auto complete in the current shell session, type eval "$(symfony-autocomplete --aliases=typo3cms)" You can also put this into your .profile or .bashrc file to have it always available. Auto completion is then always dynamic and reflects the commands you have available in your TYPO3 installation.
https://docs.typo3.org/p/helhum/typo3-console/master/en-us/AdministratorManual/Index.html
2020-03-28T21:55:36
CC-MAIN-2020-16
1585370493120.15
[]
docs.typo3.org
mkdir /etc/ambari-server/keys where the keys directory does not exist, but should be created. $JAVA_HOME/bin/keytool -import -trustcacerts -alias root -file $PATH_TO_YOUR_LDAPS_CERT -keystore /etc/ambari-server/keys/ldaps-keystore.jks Set a password when prompted. You will use this during ambari-server setup-ldap. ambari-server setup-ldap At the Primary URL* prompt, enter the server URL and port you collected above. Prompts marked with an asterisk are required values. At the Secondary URL* prompt, enter the secondary server URL and port. This value is optional. At the Use SSL* prompt, enter your selection. If using LDAPS, enter true. At the User object class* prompt, enter the object class that is used for users. At the User name attribute* prompt, enter your selection. The default value is uid. At the Group object class* prompt, enter the object class that is used for groups. At the Group name attribute* prompt, enter the attribute for group name. At the Group member attribute* prompt, enter the attribute for group membership. At the Distinguished name attribute* prompt, enter the attribute that is used for the distinguished name. At the Base DN* prompt, enter your selection. At the Referral method* prompt, enter follow or ignore for LDAP referrals. At the Bind anonymously* prompt, enter your selection. At the Manager DN* prompt, enter your selection if you have set Bind anonymously to false. At the Enter the Manager Password* prompt, enter the password for your LDAP manager DN. If you set Use SSL* = true in step 3, the following prompt appears: Do you want to provide custom TrustStore for Ambari? Consider the following options and respond as appropriate. More secure option: If using a self-signed certificate that you do not want imported to the existing JDK keystore, enter y. For example, you want this certificate used only by Ambari, not by any other applications run by JDK on the same host. If you choose this option, additional prompts appear. Respond to the additional prompts as follows: At the TrustStore type prompt, enter jks. At the Path to TrustStore file prompt, enter /keys/ldaps-keystore.jks (or the actual path to your keystore file). At the Password for TrustStore prompt, enter the password that you defined for the keystore. Less secure option: If using a self-signed certificate that you want to import and store in the existing, default JDK keystore, enter n. Convert the SSL certificate to X.509 format, if necessary, by executing the following command: openssl x509 -in slapd.pem -out <slapd.crt> Where <slapd.crt> is the path to the X.509 certificate. Import the SSL certificate to the existing keystore, for example the default jre certificates storage, using the following instruction: /usr/jdk64/jdk1.7.0_45/bin/keytool -import -trustcacerts -file slapd.crt -keystore /usr/jdk64/jdk1.7.0_45/jre/lib/security/cacerts Here Ambari is set up to use JDK 1.7, therefore the certificate must be imported into the JDK 7 keystore. Review your settings and, if they are correct, select y. Start or restart the Server: ambari-server restart The users you have just imported are initially granted the Ambari User privilege. Ambari Users can read metrics, view service status and configuration, and browse job information. For these new users to be able to start or stop services, modify configurations, and run smoke tests, they need to be Admins. To make this change, as an Ambari Admin, use Manage Ambari > Users > Edit. For instructions, see Managing Users and Groups.
https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.1.0/bk_ambari-security/content/configure_ambari_to_use_ldap_server.html
2018-01-16T17:28:23
CC-MAIN-2018-05
1516084886476.31
[]
docs.hortonworks.com
Minio Client Complete Guide Minio Client (mc) provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff. 1. Download Minio Client Docker Stable docker pull minio/mc docker run minio/mc ls play Docker. Homebrew (macOS) Install mc packages using Homebrew brew install minio/stable/mc mc --help Binary Download (GNU/Linux) chmod +x mc ./mc --help Binary Download (Microsoft Windows) mc.exe --help Snap (GNU/Linux) 2. Run Minio Client GNU/Linux chmod +x mc ./mc --help macOS chmod 755 mc ./mc --help Microsoft Windows mc.exe --help 3. Add a Cloud Storage Service Note: If you are planning to use mc only on POSIX compatible filesystems, you may skip this step and proceed to Step 4. To add one or more Amazon S3 compatible hosts, please follow the instructions below. mc stores all its configuration information in ~/.mc/config.json file. Usage mc config host add <ALIAS> <YOUR-S3-ENDPOINT> <YOUR-ACCESS-KEY> <YOUR-SECRET-KEY> <API-SIGNATURE> Alias is simply a short name to you 4./ 5. Everyday Use You may add shell aliases to override your common Unix tools. alias ls='mc ls' alias cp='mc cp' alias cat='mc cat' alias mkdir='mc mb' alias pipe='mc pipe' alias find='mc find' 6. Global Options Option [--debug] Debug option enables debug output to console. Example: Display verbose debug output for ls command. mc --debug ls play mc: <DEBUG> GET / HTTP/1.1 Host: play.minio.io:9000 User-Agent: Minio (darwin; amd64) minio-go/1.0.1 mc/2016-04-01T00:22:11Z Authorization: AWS4-HMAC-SHA256 Credential=**REDACTED**/20160408/us-east-1/s3/aws4_request, SignedHeaders=expect;host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED** Expect: 100-continue X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 X-Amz-Date: 20160408T145236Z Accept-Encoding: gzip mc: <DEBUG> HTTP/1.1 200 OK Transfer-Encoding: chunked Accept-Ranges: bytes Content-Type: text/xml; charset=utf-8 Date: Fri, 08 Apr 2016 14:54:55 GMT Server: Minio/DEVELOPMENT.2016-04-07T18-53-27Z (linux; amd64) Vary: Origin X-Amz-Request-Id: HP30I0W2U49BDBIO mc: <DEBUG> Response Time: 1.220112837s [...] [2016-04-08 03:56:14 IST] 0B albums/ [2016-04-04 16:11:45 IST] 0B backup/ [2016-04-01 20:10:53 IST] 0B deebucket/ [2016-03-28 21:53:49 IST] 0B guestbucket/ Option [--json] JSON option enables parseable output in JSON format. Example: List all buckets from Minio play service. mc --json ls play {"status":"success","type":"folder","lastModified":"2016-04-08T03:56:14.577+05:30","size":0,"key":"albums/"} {"status":"success","type":"folder","lastModified":"2016-04-04T16:11:45.349+05:30","size":0,"key":"backup/"} {"status":"success","type":"folder","lastModified":"2016-04-01T20:10:53.941+05:30","size":0,"key":"deebucket/"} {"status":"success","type":"folder","lastModified":"2016-03-28T21:53:49.217+05:30","size":0,"key":"guestbucket/"} Option [--no-color] This option disables the color theme. It useful for dumb terminals. Option [--quiet] Quiet option suppress chatty console output. Option [--config-folder] Use this option to set a custom config path. Option [ --insecure] Skip SSL certificate verification. 7. Commands Command ls - List Objects ls command lists files, objects and objects. Use --incomplete flag to list partially copied content. USAGE: mc ls [FLAGS] TARGET [TARGET ...] FLAGS: --help, -h Show help. --recursive, -r List recursively. --incomplete, -I List incomplete uploads. Example: List all buckets on. 
mc ls play [2016-04-08 03:56:14 IST] 0B albums/ [2016-04-04 16:11:45 IST] 0B backup/ [2016-04-01 20:10:53 IST] 0B deebucket/ [2016-03-28 21:53:49 IST] 0B guestbucket/ [2016-04-08 20:58:18 IST] 0B mybucket/ Command mb - Make a Bucket mb command creates a new bucket on an object storage. On a filesystem, it behaves like mkdir -p command. Bucket is equivalent of a drive or mount point in filesystems and should not be treated as folders. Minio does not place any limits on the number of buckets created per user. On Amazon S3, each account is limited to 100 buckets. Please refer to Buckets Restrictions and Limitations on S3 for more information. USAGE: mc mb [FLAGS] TARGET [TARGET...] FLAGS: --help, -h Show help. --region "us-east-1" Specify bucket region. Defaults to ‘us-east-1’. Example: Create a new bucket named "mybucket" on. mc mb play/mybucket Bucket created successfully ‘play/mybucket’. Command cat - Concatenate Objects cat command concatenates contents of a file or object to another. You may also use it to simply display the contents to stdout USAGE: mc cat [FLAGS] SOURCE [SOURCE...] FLAGS: --help, -h Show help. Example: Display the contents of a text file myobject.txt mc cat play/mybucket/myobject.txt Hello Minio!! Command pipe - Pipe to Object pipe command copies contents of stdin to a target. When no target is specified, it writes to stdout. USAGE: mc pipe [FLAGS] [TARGET] FLAGS: --help, -h Help of pipe. Example: Stream MySQL database dump to Amazon S3 directly. mysqldump -u root -p ******* accountsdb | mc pipe s3/ferenginar/backups/accountsdb-oct-9-2015.sql Command cp - Copy Objects cp command copies data from one or more sources to a target. All copy operations to object storage are verified with MD5SUM checksums. Interrupted or failed copy operations can be resumed from the point of failure. USAGE: mc cp [FLAGS] SOURCE [SOURCE...] TARGET FLAGS: --help, -h Show help. --recursive, -r Copy recursively. Example: Copy a text file to to an object storage. Command rm - Remove Buckets and Objects Use rm command to remove file or bucket USAGE: mc rm [FLAGS] TARGET [TARGET ...] FLAGS: --help, -h Show help. --recursive, -r Remove recursively. --force Force a dangerous remove operation. --prefix Remove objects matching this prefix. --incomplete, -I Remove an incomplete upload(s). --fake Perform a fake remove operation. --stdin Read object list from STDIN. --older-than value Remove objects older than N days. (default: 0) Example: Remove a single object. mc rm play/mybucket/myobject.txt Removed ‘play/mybucket/myobject.txt’. Example: Recursively remove a bucket and all its contents. Since this is a dangerous operation, you must explicitly pass --force option. mc rm --recursive --force play/myobject Removed ‘play/myobject/newfile.txt’. Removed 'play/myobject/otherobject.txt’. Example: Remove all incompletely uploaded files from mybucket. mc rm --incomplete --recursive --force play/mybucket Removed ‘play/mybucket/mydvd.iso’. Removed 'play/mybucket/backup.tgz’. Example: Remove object only if its created older than one day. mc rm --force --older-than=1 play/mybucket/oldsongs Command share - Share Access share command securely grants upload or download access to object storage. This access is only temporary and it is safe to share with remote users and applications. If you want to grant permanent access, you may look at mc policy command instead. Generated URL has access credentials encoded in it. Any attempt to tamper the URL will invalidate the access. 
To understand how this mechanism works, please follow Pre-Signed URL technique. USAGE: mc share [FLAGS] COMMAND FLAGS: --help, -h Show help. COMMANDS: download Generate URLs for download access. upload Generate ‘curl’ command to upload objects without requiring access/secret keys. list List previously shared objects and folders. Sub-command share download - Share Download share download command generates URLs to download objects without requiring access and secret keys. Expiry option sets the maximum validity period (no more than 7 days), beyond which the access is revoked automatically. USAGE: mc share download [FLAGS] TARGET [TARGET...] FLAGS: --help, -h Show help. --recursive, -r Share all objects recursively. --expire, -E "168h" Set expiry in NN[h|m|s]. Example: Grant temporary access to an object with 4 hours expiry limit. mc share download --expire 4h play/mybucket/myobject.txt URL: Expire: 0 days 4 hours 0 minutes 0 seconds Share: Sub-command share upload - Share Upload share upload command generates a ‘curl’ command to upload objects without requiring access/secret keys. Expiry option sets the maximum validity period (no more than 7 days), beyond which the access is revoked automatically. Content-type option restricts uploads to only certain type of files. USAGE: mc share upload [FLAGS] TARGET [TARGET...] FLAGS: --help, -h Show help. --recursive, -r Recursively upload any object matching the prefix. --expire, -E "168h" Set expiry in NN[h|m|s]. Example: Generate a curl command to enable upload access to play/mybucket/myotherobject.txt. User replaces <FILE> with the actual filename to upload mc share upload play/mybucket/myotherobject.txt URL: Expire: 7 days 0 hours 0 minutes 0 seconds Share: curl -F x-amz-date=20160408T182356Z -F x-amz-signature=de343934bd0ba38bda0903813b5738f23dde67b4065ea2ec2e4e52f6389e51e1 -F bucket=mybucket -F policy=eyJleHBpcmF0aW9uIjoiMjAxNi0wNC0xNVQxODoyMzo1NS4wMDdaIiwiY29uZGl0aW9ucyI6W1siZXEiLCIkYnVja2V0IiwibXlidWNrZXQiXSxbImVxIiwiJGtleSIsIm15b3RoZXJvYmplY3QudHh0Il0sWyJlcSIsIiR4LWFtei1kYXRlIiwiMjAxNjA0MDhUMTgyMzU2WiJdLFsiZXEiLCIkeC1hbXotYWxnb3JpdGhtIiwiQVdTNC1ITUFDLVNIQTI1NiJdLFsiZXEiLCIkeC1hbXotY3JlZGVudGlhbCIsIlEzQU0zVVE4NjdTUFFRQTQzUDJGLzIwMTYwNDA4L3VzLWVhc3QtMS9zMy9hd3M0X3JlcXVlc3QiXV19 -F x-amz-algorithm=AWS4-HMAC-SHA256 -F x-amz-credential=Q3AM3UQ867SPQQA43P2F/20160408/us-east-1/s3/aws4_request -F key=myotherobject.txt -F file=@<FILE> Sub-command share list - Share List share list command lists unexpired URLs that were previously shared USAGE: mc share list COMMAND COMMAND: upload: list previously shared access to uploads. download: list previously shared access to downloads. Command mirror - Mirror Buckets mirror command is similar to rsync, except it synchronizes contents between filesystems and object storage. USAGE: mc mirror [FLAGS] SOURCE TARGET FLAGS: --help, -h Show help. --force Force overwrite of an existing target(s). --fake Perform a fake mirror operation. --watch, -w Watch and mirror for changes. --remove Remove extraneous file(s) on target. Example: Mirror a local directory to 'mybucket' on. mc mirror localdir/ play/mybucket localdir/b.txt: 40 B / 40 B % 73 B/s 0 Example: Continuously watch for changes on a local directory and mirror the changes to 'mybucket' on. mc mirror -w localdir play/mybucket localdir/new.txt: 10 MB / 10 MB % 1 MB/s 15s Command find - Find files and objects find command finds files which match the given set of parameters. It only lists the contents which match the given set of criteria. 
USAGE: mc find PATH [FLAGS] FLAGS: --help, -h Show help. --exec value Spawn an external process for each matching object (see FORMAT) --name value Find object names matching wildcard pattern. ... ... Example: Find all jpeg images from s3 bucket and copy to minio "play/bucket" bucket continuously. mc find s3/bucket --name "*.jpg" --watch --exec "mc cp {} play/bucket" Command diff - Show Difference diff command computes the differences between the two directories. It only lists the contents which are missing or which differ in size. It DOES NOT compare the contents, so it is possible that the objects which are of same name and of the same size, but have difference in contents are not detected. This way, it can perform high speed comparison on large volumes or between sites USAGE: mc diff [FLAGS] FIRST SECOND FLAGS: --help, -h Show help. Example: Compare a local directory and a remote object storage. mc diff localdir play/mybucket ‘localdir/notes.txt’ and ‘’ - only in first. Command watch - Watch for files and object storage events. watch provides a convenient way to watch on various types of event notifications on object storage and filesystem. USAGE: mc watch [FLAGS] PATH FLAGS: --events value Filter specific types of events. Defaults to all events by default. (default: "put,delete,get") --prefix value Filter events for a prefix. --suffix value Filter events for a suffix. --recursive Recursively watch for events. --help, -h Show help. Example: Watch for all events on object storage mc watch play/testbucket [2016-08-18T00:51:29.735Z] 2.7KiB ObjectCreated [2016-08-18T00:51:29.780Z] 1009B ObjectCreated [2016-08-18T00:51:29.839Z] 6.9KiB ObjectCreated Example: Watch for all events on local directory mc watch ~/Photos [2016-08-17T17:54:19.565Z] 3.7MiB ObjectCreated /home/minio/Downloads/tmp/5467026530_a8611b53f9_o.jpg [2016-08-17T17:54:19.565Z] 3.7MiB ObjectCreated /home/minio/Downloads/tmp/5467026530_a8611b53f9_o.jpg ... [2016-08-17T17:54:19.565Z] 7.5MiB ObjectCreated /home/minio/Downloads/tmp/8771468997_89b762d104_o.jpg Command events - Manage bucket event notification. events provides a convenient way to configure various types of event notifications on a bucket. Minio event notification can be configured to use AMQP, Redis, ElasticSearch, NATS and PostgreSQL services. Minio configuration provides more details on how these services can be configured. USAGE: mc events COMMAND [COMMAND FLAGS | -h] [ARGUMENTS...] COMMANDS: add Add a new bucket notification. remove Remove a bucket notification. With '--force' can remove all bucket notifications. list List bucket notifications. FLAGS: --help, -h Show help. Example: List all configured bucket notifications mc events list play/andoria MyTopic arn:minio:sns:us-east-1:1:TestTopic s3:ObjectCreated:*,s3:ObjectRemoved:* suffix:.jpg Example: Add a new 'sqs' notification resource only to notify on ObjectCreated event mc events add play/andoria arn:minio:sqs:us-east-1:1:your-queue --events put Example: Add a new 'sqs' notification resource with filters Add prefix and suffix filtering rules for sqs notification resource. 
mc events add play/andoria arn:minio:sqs:us-east-1:1:your-queue --prefix photos/ --suffix .jpg Example: Remove a 'sqs' notification resource mc events remove play/andoria arn:minio:sqs:us-east-1:1:your-queue Command policy - Manage bucket policies Manage anonymous bucket policies to a bucket and its contents USAGE: mc policy [FLAGS] PERMISSION TARGET mc policy [FLAGS] TARGET mc policy list [FLAGS] TARGET PERMISSION: Allowed policies are: [none, download, upload, public]. FLAGS: --help, -h Show help. Example: Show current anonymous bucket policy Show current anonymous bucket policy for mybucket/myphotos/2020/ sub-directory mc policy play/mybucket/myphotos/2020/ Access permission for ‘play/mybucket/myphotos/2020/’ is ‘none’ Example : Set anonymous bucket policy to download only Set anonymous bucket policy for mybucket/myphotos/2020/ sub-directory and its objects to download only. Now, objects under the sub-directory are publicly accessible. e.g mybucket/myphotos/2020/yourobjectnameis available at mc policy download play/mybucket/myphotos/2020/ Access permission for ‘play/mybucket/myphotos/2020/’ is set to 'download' Example : Remove current anonymous bucket policy Remove any bucket policy for mybucket/myphotos/2020/ sub-directory. mc policy none play/mybucket/myphotos/2020/ Access permission for ‘play/mybucket/myphotos/2020/’ is set to 'none' Command session - Manage Sessions session command manages previously saved sessions for cp and mirror operations USAGE: mc session COMMAND [COMMAND FLAGS | -h] [ARGUMENTS...] COMMANDS: list List all previously saved sessions. clear Clear a previously saved session. resume Resume a previously saved session. FLAGS: --help, -h Show help. Example: List all previously saved sessions. mc session list IXWKjpQM -> [2016-04-08 19:11:14 IST] cp assets.go play/mybucket ApwAxSwa -> [2016-04-08 01:49:19 IST] mirror miniodoc/ play/mybucket Example: Resume a previously saved session. mc session resume IXWKjpQM ...assets.go: 1.68 KB / 1.68 KB ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓ 100.00 % 784 B/s 2s Example: Drop a previously saved session. mc session clear ApwAxSwa Session ‘ApwAxSwa’ cleared successfully. Command config - Manage Config File config host command provides a convenient way to manage host entries in your config file ~/.mc/config.json. It is also OK to edit the config file manually using a text editor. USAGE: mc config host COMMAND [COMMAND FLAGS | -h] [ARGUMENTS...] COMMANDS: add, a Add a new host to configuration file. remove, rm Remove a host from configuration file. list, ls Lists hosts in configuration file. FLAGS: --help, -h Show help. Example: Manage Config File Add Minio server access and secret keys to config file host entry. Note that, the history feature of your shell may record these keys and pose a security risk. On bash shell, use set -o and set +o to disable and enable history feature momentarily. set +o history mc config host add myminio OMQAGGOL63D7UNVQFY8X GcY5RHNmnEWvD/1QxD3spEIGj+Vt9L7eHaAaBTkJ set -o history Command update - Software Updates Check for new software updates from. Experimental flag checks for unstable experimental releases primarily meant for testing purposes. USAGE: mc update [FLAGS] FLAGS: --quiet, -q Suppress chatty console output. --json Enable JSON formatted output. --help, -h Show help. Example: Check for an update. mc update You are already running the most recent version of ‘mc’. 
Command version - Display Version Display the current version of mc installed USAGE: mc version [FLAGS] FLAGS: --quiet, -q Suppress chatty console output. --json Enable JSON formatted output. --help, -h Show help. Example: Print version of mc. mc version Version: 2016-04-01T00:22:11Z Release-tag: RELEASE.2016-04-01T00-22-11Z Commit-id: 12adf3be326f5b6610cdd1438f72dfd861597fce
https://docs.minio.io/docs/minio-client-complete-guide
2018-01-16T16:53:54
CC-MAIN-2018-05
1516084886476.31
[array(['https://slack.minio.io/slack?type=svg', 'Slack'], dtype=object)]
docs.minio.io
Incremental Submission Feature Overview Incremental Submission adds the ability to only upload new data for the current release. All data from previous releases will be maintained by the system and automatically copied to the current release. Additionally, submitters are now able to download their submitted data. This provides a model where submitters are in full control of their effective data set and can use the submission system as a canonical data store. File Naming regex: "^donor(\.[a-zA-Z0-9]+)?\.txt(?:\.gz|\.bz2)?$" Thus one may choose to adopt a naming scheme such as: donor.01.txt, donor.02.txt, donor.03.txt Alternatively one could embed a date: donor.20130101.txt, donor.20130201.txt, donor.20130301.txt With this scheme in place, a submitter can upload donor.20130101.txt in Release 1, donor.20130201.txt in Release 2 and donor.20130301.txt in Release 3. The effective submission will be the combined set of files. Data Management It is the responsibility of the submitter to ensure data remains consistent from release to release. In the case of deleted records, one must remove the records and their dependent records from all files and resubmit. The appropriate file split strategies should be chosen by submitters to simplify operations between releases and interoperate with existing pipelines. Notes In the current implementation, each time a validation is performed it will validate the entire data set. However, in combination with _Selective Validation_ the total validation time should be greatly reduced.
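As a quick check of the file-naming rule above, here is a small Python sketch that tests candidate file names against the published regex; the example names are taken from the naming schemes described above.

import re

# The donor file-naming pattern quoted in the documentation above.
DONOR_PATTERN = re.compile(r"^donor(\.[a-zA-Z0-9]+)?\.txt(?:\.gz|\.bz2)?$")

candidates = [
    "donor.txt",              # plain name, no infix
    "donor.01.txt",           # numbered scheme
    "donor.20130101.txt",     # date scheme
    "donor.20130201.txt.gz",  # compressed files are also accepted
    "donor-2013.txt",         # invalid: '-' is not allowed by the pattern
]

for name in candidates:
    status = "matches" if DONOR_PATTERN.match(name) else "does not match"
    print(f"{name}: {status}")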
http://docs.icgc.org/submission/guide/incremental-submission-feature/
2018-01-16T17:02:46
CC-MAIN-2018-05
1516084886476.31
[]
docs.icgc.org
Basic installation Please refer to README for bulk of the instructions CPU build Generally, pytorch GPU build should work fine on machines that don’t have a CUDA-capable GPU, and will just use the CPU. However, you can install CPU-only versions of Pytorch if needed with fastai. pip The pip ways is very easy: pip install pip install fastai Just make sure to pick the correct torch wheel url, according to the needed platform, python and CUDA version, which you will find here. conda The conda way is more involved. Since we have only a single fastai package that relies on the default pytorchpackage working with and without GPU environment, if you want to install something custom you will have to manually tweak the dependencies. This is explained in detail here. So follow the instructions there, but replace pytorchwith pytorch-cpu, and torchvisionwith torchvision-cpu. Also, please note, that if you have an old GPU and pytorch fails because it can’t support it, you can still use the normal (GPU) pytorch build, by setting the env var CUDA_VISIBLE_DEVICES="", in which case pytorch will not try to check if you even have a GPU. Jupyter notebook dependencies The fastai library doesn’t require the jupyter environment to work, therefore those dependencies aren’t included. So if you are planning on using fastai in the jupyter notebook environment, e.g. to run the fastai course lessons and you haven’t already setup the jupyter environment, here is how you can do it. conda conda install jupyter notebook conda install -c conda-forge jupyter_contrib_nbextensions Some users also seem to need this conda package to be able to choose the right kernel environment, however, most likely you won’t need this package. conda install nb_conda pip pip install jupyter notebook jupyter_contrib_nbextensions Custom dependencies If for any reason you don’t want to install all of fastai’s dependencies, since, perhaps, you have limited disk space on your remote instance, here is how you can install only the dependencies that you need. First, install fastaiwithout its dependencies using either pipor conda: # pip pip install --no-deps fastai # conda conda install --no-deps -c fastai fastai The rest of this section assumes you’re inside the fastaigit repo, since that’s where setup.pyresides. If you don’t have the repository checked out, do: git clone cd fastai tools/run-after-git-clone Next, find out which groups of dependencies you want: python setup.py -q deps You should get something like: Available dependency groups: core, text, vision You need to use at least the coregroup. Do note that the depscommand is a custom distutilsextension, i.e. it only works in the fastaisetup. Finally, install the custom dependencies for the desired groups. For the sake of this demonstration, let’s say you want to get the core dependencies ( core), plus dependencies specific to computer vision ( vision). The following command will give you the up-to-date dependencies for these two groups: python setup.py -q deps --dep-groups=core,vision It will return something which can be fed directly to pip install: pip install $(python setup.py -q deps --dep-groups=core,vision) Since conda uses a slightly different syntax/package names, to get the same output suitable for conda, add --dep-conda: python setup.py -q deps --dep-groups=core,vision --dep-conda If your shell doesn’t support $()syntax, it most likely will support backticks, which are deprecated in modern bash. (The two are equivalent, but $()has a superior flexibility). 
If that’s your situation, use the following syntax instead: pip install `python setup.py -q deps --dep-groups=core,vision` Manual copy-n-paste case: If, instead of feeding the output directly to pipor conda, you want to do it manually via copy-n-paste, you need to quote the arguments, in which case add the --dep-quoteoption, which will do it for you: # pip: python setup.py -q deps --dep-groups=core,vision --dep-quote # conda: python setup.py -q deps --dep-groups=core,vision --dep-quote --dep-conda So the output for pip will look" Summary: pip selective dependency installation: pip install --no-deps fastai pip install $(python setup.py -q deps --dep-groups=core,vision) same for conda: conda install --no-deps -c fastai fastai conda install -c pytorch -c fastai $(python setup.py -q deps --dep-conda --dep-groups=core,vision) adjust the --dep-groupsargument to match your needs. Full usage: # show available dependency groups: python setup.py -q deps # print dependency list for specified groups python setup.py -q deps --dep-groups=core,vision # see all options: python setup.py -q deps --help Development dependencies As explained in Development Editable Install, if you want to work on contributing to fastai you will also need to install the optional development dependencies. In addition to the ways explained in the aforementioned document, you can also install fastai with developer dependencies without needing to check out the fastai repo. To install the latest released version of fastaiwith developer dependencies, do: pip install "fastai[dev]" To accomplish the same for the cutting edge master git version: pip install "git+[dev]" Virtual environment It’s highly recommended to use a virtual python environment for the fastai project, first because you could experiment with different versions of it (e.g. stable-release vs. bleeding edge git version), but also because it’s usually a bad idea to install various python package into the system-wide python, because it’s so easy to break the system, if it relies on python and its 3rd party packages for its functionality. There are several implementations of python virtual environment, and the one we recommend is conda (anaconda), because we release our packages for this environment and pypi, as well. conda doesn’t have all python packages available, so when that’s the case we use pip to install whatever is missing. You will find the instructions for installing conda on each platform here. Once you followed the instructions and installed anaconda, you’re ready to build you first environment. For the sake of this example we will use an environment name fastai, but you can name it whatever you’d like it to be. The following will create a fastai env with python-3.6: conda create -n fastai python=3.6 Now any time you’d like to work in this environment, just execute: conda activate fastai It’s very important that you activate your environment before you start the jupyter notebook if you’re using fastai notebooks. Say, you’d like to have another env to test fastai with python-3.7, then you’d create another one with: conda create -n fastai-py37 python=3.7 and to activate that one, you’d call: conda activate fastai-py37 If you’d like to exit the environment, do: conda deactivate To list out the available environments conda env list Also see bash-git-prompt which will help you tell at any moment which environment you’re in.
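For readers who prefer to drive the selective-dependency flow from a script rather than shell substitution, here is a small, illustrative Python sketch. It assumes you are inside a fastai git checkout (where setup.py and its custom deps command live) and that the deps command prints a whitespace-separated package list that can be fed to pip, as the examples above suggest.

# Sketch: install fastai without deps, then install only the chosen dependency groups.
import subprocess
import sys

def fastai_deps(groups=("core", "vision")):
    """Return the dependency list printed by `python setup.py -q deps --dep-groups=...`."""
    out = subprocess.run(
        [sys.executable, "setup.py", "-q", "deps", f"--dep-groups={','.join(groups)}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.split()  # assumes a whitespace-separated package list

def install(groups=("core", "vision")):
    deps = fastai_deps(groups)
    subprocess.run([sys.executable, "-m", "pip", "install", "--no-deps", "fastai"], check=True)
    subprocess.run([sys.executable, "-m", "pip", "install", *deps], check=True)

if __name__ == "__main__":
    print(fastai_deps(("core",)))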
https://docs.fast.ai/install.html
2020-01-17T17:23:58
CC-MAIN-2020-05
1579250589861.0
[]
docs.fast.ai
Universal Resource Scheduling for Field Service This article describes how Dynamics 365 Field Service uses Universal Resource Scheduling (URS). We'll also take a look at how to configure URS for onsite field service scenarios. Overview Universal Resource Scheduling (URS) is a Dynamics 365 solution that allows organizations from different industries with different scenarios to assign resources to jobs and tasks. URS assigns the best resources to jobs and tasks based on: - Resource availability - Required skills - Promised time windows - Business unit - Geographic territory and more Field service organizations most frequently use URS to schedule mobile resources to location-specific jobs and tasks (known as work orders) as the resources travel to various customer locations. Because work orders are generally performed onsite, URS schedules the resources with closest proximity to work orders, reducing travel time and costs. In this topic, we'll take a quick look at: - URS components - How URS works with Field Service work orders - How to schedule work orders with URS - Basic configuration For more detailed information on URS, visit the Universal Resource Scheduling documentation. Components When Dynamics 365 Field Service is installed, URS installs automatically, and appears in the menu as shown in the following screenshot. In general, work orders and related entities are a part of Field Service, while resource- and requirement-related entities are part of URS. All work seamlessly together. In other words, field service work orders define what work needs to be done and where, while URS defines who can perform the work and when. The following list shows which components correspond to Field Service and URS: - Work orders (Field Service) - Bookable resources (Universal Resource Scheduling) - Resource requirements (Universal Resource Scheduling) - Resource bookings (Universal Resource Scheduling) - Schedule tools - schedule board, schedule assistant (Universal Resource Scheduling) - Resource Scheduling Optimization (installed separately) (Universal Resource Scheduling) For more information, visit the Universal Resource Scheduling documentation. How URS works with Field Service work orders Now that we've looked at how the various components correspond with Field Service and URS, let's look at what happens when URS interacts with Field Service work orders. Creating work orders creates requirements When a work order is created and saved, a related requirement automatically generates in the background. This requirement (which is a separate entity) outlines the specific details for resources that can perform the work order. The requirement is what will be scheduled to resources, and it simply references the work order. By default, one requirement is created but a single work order can have multiple requirements. Additionally, a requirement group with multiple requirements can also be added to a work order. Fields passed from work order to requirements When a requirement is created, it inherits attributes from the work order, including but not limited to: - Name (work order number text) - Work order (lookup reference to work order) - Work location - Latitude - Longitude - Service Territory - Duration - Start / End date - Priority - Characteristics - Preferred/restricted resources - Fulfillment preference Updating work order attributes will update requirement attributes. Manual edits to requirements can be made before scheduling, too. 
Note Many work order attributes are added to the work order when work order incident types are created, including duration and characteristics. Scheduling work orders with URS After a work order and related requirement are ready to be scheduled, URS scheduling tools can be used to book the requirement to the most appropriate resource. After a work order requirement is booked, a bookable resource booking record is created documenting the resource, travel time, and start/end time. The booking relates to both the work order and requirement. You can book from: - Work orders - Requirements - Schedule board - Resource Scheduling Optimization (RSO) Book from the work order Selecting Book from the work order will trigger the URS schedule assistant to match the related work order requirement with available resources. Book from the requirement Like with work orders, the same booking experience can be triggered from the requirement entity, by selecting Book while on the requirement. Book from the schedule board The lower schedule board pane displays requirement records and can be configured to show only requirements related to work orders with a view filter. The requirement can be dragged and dropped onto a resource on the schedule board to schedule the work order. Alternatively, selecting find availability on the requirement in the lower pane will trigger the schedule assistant, which recommends the most appropriate resources. Book with Resource Scheduling Optimization Resource Scheduling Optimization can automatically schedule work order requirements based on predefined schedules or triggers. You can also manually accomplish this by selecting the Run Now button. Configure URS for Field Service There are a few things you'll need to configure before getting started with URS for Field Service. Enable work orders for scheduling Navigate to Resource Scheduling > Administration > Enable Resource Scheduling for Entities. This is where administrators decide which entities can be scheduled to resources. When Field Service is installed, work orders are enabled for resource scheduling by default. Double-click work orders to define default behavior when scheduling work order requirements. Connect to maps Connecting to a mapping service is critical if you want to geographically display work orders and route field technicians. - To connect a mapping service, navigate to Resource Scheduling > Administration > Scheduling Parameters. - Set Connect to Maps to Yes. Then save and close. The API key will populate automatically and use the Bing Maps API. Configure booking statuses Resources (field technicians) interact with booking statuses to communicate to stakeholders the progress of their work. For field service, booking statuses can update work order system statuses. This is done by noting a Field Service Status on the Booking Status. Navigate to Resource Scheduling > Booking Statuses See the following screenshot for the recommended out-of-the-box values. Geo-locate resources Work order locations are defined by the latitude / longitude of either the work order form, or the related service account. It's important to also geo-locate resources. Navigate to Resource Scheduling > Resources. To ensure resources can appear on the schedule board map, they must have a geocoded starting and ending location. There are two ways to geocode your resources. 
Option one Set resource start/end location to Resource address and ensure the related resource record (User, Account, Contact) as defined by the resource type has latitude and longitude values. For example, in the following screenshot, the bookable resource has resource type = Contact; this means the related contact record must be geo-coded, meaning latitude and longitude fields must have values. Note For routing purposes, the location of a resource is defined as the current work order location, current location of the mobile device, or the start/end location defined here when the other options are not applicable. Option two Set resource start/end location to Organizational Unit Address and ensure the related organizational unit record is geo-coded, meaning latitude and longitude fields must have values. Note You may need to add the latitude/longitude fields to the organizational unit entity form. Confirm geocoding works appropriately To make sure resources are geocoded properly, navigate to Universal Resource Scheduling > Schedule Board. The resource should appear on the map. Select a resource's name to highlight their location pin on the map. Additional notes If the work order or requirement doesn't have a latitude or longitude, the location is treated as location-agnostic, which means the location of resources isn't considered during scheduling. If the work order or requirement has a latitude and longitude and work location is set to onsite, resource locations, travel time, and routes are considered during scheduling. See also Feedback
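To make the work-order-to-requirement inheritance concrete, here is a purely illustrative Python sketch. It is not the Dynamics 365 API; the field names are informal labels standing in for the attributes listed earlier.

# Illustrative only: a requirement record inherits the listed attributes from its work order.
INHERITED_FIELDS = [
    "name", "work_order", "work_location", "latitude", "longitude",
    "service_territory", "duration", "start_date", "end_date", "priority",
    "characteristics", "preferred_resources", "restricted_resources",
    "fulfillment_preference",
]

def derive_requirement(work_order: dict) -> dict:
    """Copy the inherited attributes from a work order into a new requirement."""
    return {field: work_order.get(field) for field in INHERITED_FIELDS}

work_order = {"name": "WO-00042", "latitude": 47.64, "longitude": -122.13,
              "duration": 90, "priority": "High"}
requirement = derive_requirement(work_order)
print(requirement["name"], requirement["priority"])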
https://docs.microsoft.com/en-us/dynamics365/field-service/universal-resource-scheduling-for-field-service
2020-01-17T16:13:37
CC-MAIN-2020-05
1579250589861.0
[array(['media/scheduling-urs-apps.png', 'Screenshot of Screenshot of URS and Field Service apps'], dtype=object) array(['media/scheduling-urs-work-order-related-requirement.png', 'Screenshot of related requirement'], dtype=object) array(['media/scheduling-urs-work-order-related-requirement-number.png', 'Screenshot of Requirement data1'], dtype=object) array(['media/scheduling-urs-work-order-related-requirement-fields.png', 'Screenshot of Requirement data'], dtype=object) array(['media/scheduling-urs-work-order-book.png', 'Screenshot of booking from work order'], dtype=object) array(['media/scheduling-urs-work-order-related-requirement-book.png', 'Screenshot of booking requirement'], dtype=object) array(['media/scheduling-urs-schedule-board-schedule-assistant.png', 'Screenshot of schedule board'], dtype=object) array(['media/scheduling-urs-rso-schedule.png', 'Screenshot of Resource scheduling Optimization schedule'], dtype=object) array(['media/perform-initial-configurations-image8.png', 'Screenshot of Enabling entities for scheduling'], dtype=object) array(['media/perform-initial-configurations-image6.png', 'Screenshot of Resource Scheduling Administration in Dynamics 365 dropdown menu'], dtype=object) array(['media/perform-initial-configurations-image7.png', 'Screenshot of setting Connect to Maps to yes'], dtype=object) array(['media/scheduling-booking-status-fs.png', 'Screenshot of Booking Statuses'], dtype=object) array(['media/scheduling-resource-address.png', 'Screenshot of resource address'], dtype=object) array(['media/scheduling-urs-resource-type.png', 'Screenshot of resource address'], dtype=object) array(['media/scheduling-urs-resource-organizational-unit.png', 'Screenshot of resource address'], dtype=object) array(['media/scheduling-urs-schedule-board-locate-resource.png', 'Screenshot of geo coded resource on map'], dtype=object) ]
docs.microsoft.com
Configuring a Pexip Exchange Integration The VMR Scheduling for Exchange feature allows you to create an add-in that enables Microsoft Outlook desktop and Web App users in Office 365 or Exchange environments to schedule meetings using Pexip VMRs as a meeting resource. The first step in enabling VMR Scheduling for Exchange is Configuring Exchange or Office 365 for scheduling. You must do that before you can complete the following configuration on the Pexip Infinity Management Node. Adding a Pexip Exchange Integration to Pexip Infinity and generating the add-in file A Pexip Exchange Integration defines a specific connection between your Pexip Infinity deployment and a Microsoft Exchange deployment. In some cases a single Pexip Infinity deployment will have more than one Pexip Exchange Integration. Adding a new Pexip Exchange Integration involves the following steps: - add details of your Microsoft Exchange deployment and accounts to your Pexip Infinity deployment - configure the aliases that will be used for VMRs created using the VMR Scheduling for Exchange service - configure the text used by the add-in - generate the add-in file. From the Pexip Infinity Management Node go to and complete the following fields: Signing in to the service account if OAuth has been enabled If you have enabled OAuth for the first time, you must sign in to the service account after saving the configuration of the Pexip Exchange Integration. You may also need to re-sign in to the service account if: - you disable and then subsequently re-enable OAuth - you update any of the following configuration for the Pexip Exchange Integration: - Service account username - OAuth client ID - OAuth token endpoint - the Management Node has been offline for more than 90 days. To sign in to the service account: - Ensure you have signed out of all Microsoft accounts on your computer, including the Microsoft Azure portal. From the Management Node, go to and select the Pexip Exchange Integration. At the bottom of the page, select . You will be taken to the Sign in to service account page. - Copy the Sign in link and paste into a new browser tab. You will be asked to permit the Application registration to access the service account: You should be returned to the Sign in to service account page and see the message Successfully signed in. Saving and checking configuration When you have finished, select Infinity Connect platform will attempt to contact the Microsoft Exchange deployment, and if there are any issues, it will raise an alarm on the Management Node.. You will be taken back to the main page. The Formatting the email text All the templates and text specified in the Email text section can be entered as HTML. This allows you to customize the text (for example, the font, size, and color). When using HTML, you must ensure all HTML tags are closed properly, otherwise you may affect the format of any existing text in the email body. The add-in pane headings and text can also be formatted using HTML, although some formatting may be overridden by the base HTML. We recommend that you check that any formatting applied to add-ins appears as expected in all clients used in your environment. Working with jinja2 templates VMR Scheduling for Exchange uses a subset of the jinja2 templating language () to create the text used in emails. For more reference information and to see where else jinja2 templates are used within Pexip Infinity, see Jinja2 templates and filters. 
Variables The following variables can be used when creating the jinja2 templates used for VMR Scheduling for Exchange: Deleting and replacing Pexip Exchange Integrations If you delete an existing Pexip Exchange Integration and replace it with another, you must also re-generate and re-install the add-in XML file, even if the configuration of the new Pexip Exchange Integration is identical to that of the old one. Using multiple Pexip Exchange Integrations Different groups of users within the same Microsoft Exchange deployment You can provide different groups of users within your Microsoft Exchange deployment with different options when using the VMR Scheduling for Exchange feature. For example, you may wish to vary the prefix used as part of the VMR alias, or use different text for the joining instructions. To do this, create multiple Pexip Exchange Integrations that connect to the same Exchange environment. (Note however that each Pexip Exchange Integration must have a separate equipment resource.) Each Pexip Exchange Integration that you create will have an associated add-in which you can then make available to specific users by using Exchange PowerShell commands. The diagram below shows a single Pexip Infinity deployment with two Pexip Exchange Integrations to the same Microsoft Exchange deployment. Each Pexip Exchange Integration uses the same EWS URL and is configured with the FQDNs of all the Exchange servers in the Exchange deployment. The first connection provides an add-in for sales staff; the second provides an add-in for development staff. Both add-ins are uploaded to Microsoft Exchange, but each user will only see the add-in relevant to their group. Different Microsoft Exchange deployments If you are a service provider, you can configure one or more Pexip Exchange Integrations for each of your customers. The diagram below shows a single Pexip Infinity deployment with two Pexip Exchange Integrations to two different Microsoft Exchange deployments. The first connection provides an add-in for everyone at Example Corp; the second provides an add-in everyone at Acme Corp. Next step - Making the add-in available to users within your Exchange deployment.
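Since the email text is built with jinja2 templates, a quick way to prototype a template outside Pexip is to render it with the jinja2 library directly. The variable names below (meeting_subject, vmr_alias, pin) are hypothetical placeholders for illustration only; the variables actually supported are the ones documented for VMR Scheduling for Exchange.

# Prototype a joining-instructions template with the jinja2 library.
from jinja2 import Template

JOIN_INSTRUCTIONS = Template(
    "<p>You have been invited to <b>{{ meeting_subject }}</b>.</p>"
    "<p>Join the meeting by dialing {{ vmr_alias }}"
    "{% if pin %} and entering PIN {{ pin }}{% endif %}.</p>"
)

print(JOIN_INSTRUCTIONS.render(
    meeting_subject="Weekly sync",          # hypothetical variable
    vmr_alias="meet.weekly@example.com",    # hypothetical variable
    pin="1234",                             # hypothetical variable
))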
https://docs.pexip.com/admin/scheduling_infinity_config.htm
2020-01-17T17:42:03
CC-MAIN-2020-05
1579250589861.0
[array(['../Resources/Images/admin_guide/scheduling_multi_groups_840x205.png', None], dtype=object) array(['../Resources/Images/admin_guide/scheduling_multi_exchange_840x138.png', None], dtype=object) ]
docs.pexip.com
Persistent and ephemeral storage. Ephemeral storage, if defined, is used by the cluster to store information that does not need to persist. This aids in optimization and helps to reduce the load on the persistent storage. Critical: DO NOT confuse persistent or ephemeral storage on this page with Redis persistence or AWS ephemeral drives used in other areas of Redis Enterprise Software. For disk size requirements refer to the following sections: - Hardware requirements, for general guidelines regarding the ideal disk size for each type of storage - Disk size requirements for extreme write scenarios, for special considerations when dealing with a high rate of write commands
https://docs.redislabs.com/latest/rs/administering/designing-production/persistent-ephemeral-storage/
2020-01-17T17:27:08
CC-MAIN-2020-05
1579250589861.0
[]
docs.redislabs.com
Requesting Personal Certificates
https://docs.uabgrid.uab.edu/sgw/index.php?title=Requesting_Personal_Certificates&diff=prev&oldid=416
2020-01-17T16:20:07
CC-MAIN-2020-05
1579250589861.0
[]
docs.uabgrid.uab.edu
// Attach this script to a camera, this will make it render in wireframe function OnPreRender() { GL.wireframe = true; } function OnPostRender() { GL.wireframe = false; } using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { void OnPreRender() { GL.wireframe = true; } void OnPostRender() { GL.wireframe = false; } }
https://docs.unity3d.com/2017.1/Documentation/ScriptReference/GL-wireframe.html
2020-01-17T17:33:09
CC-MAIN-2020-05
1579250589861.0
[]
docs.unity3d.com
Journey Manager (JM) Previously known as Transact Manager (TM). | System Manager / DevOps | v5.1 & Higher This feature is related to v5.1 and higher. Manager captures a wide range of system and service run-time information and stores it in different logs. It provides a convenient interface for viewing this information, making it easier to troubleshoot diverse production problems. Manager maintains the following logs, which reside both on the file system and in the database.
https://docs.avoka.com/Logs/SystemLog.htm
2020-01-17T15:52:54
CC-MAIN-2020-05
1579250589861.0
[]
docs.avoka.com
Drop #2 of Claims Identity Guide on CodePlex Second drop of samples and draft chapters is now available on CodePlex. Highlights: - All 3 samples for ACS v2: ("ACS as a Federation Provider", "ACS as a FP with Multiple Business Partners" and "ACS and REST endpoints"). These samples extend all the original "Federation samples" in the guide with new capabilities (e.g. protocol transition, REST services, etc.) - Two new ACS specific chapters and a new appendix on message sequences Most samples will work without an ACS account, since we pre-provisioned one for you. The exception is the “ACS and Multiple Partners”, because this requires credentials to modify ACS configuration. You will need to subscribe to your own instance of ACS to fully exercise the code (especially the “sign-up” process). The 2 additions to the appendix are: Message exchanges between Client/RP/ACS/Issuer: And the Single-Sign-Out process (step 10 below): You will also find the Fiddler sessions with explained message contents. Feedback always welcome!
https://docs.microsoft.com/en-us/archive/blogs/eugeniop/drop-2-of-claims-identity-guide-on-codeplex
2020-01-17T17:35:11
CC-MAIN-2020-05
1579250589861.0
[]
docs.microsoft.com
Microsoft CRM form and columns Lots of partners and customers ask the question if it is possible to change the layout of the sections on a Microsoft CRM form. Of course this is possible, but many people know only the option to display two columns. Talking with Michiel van den Heuvel of Capgemini today he reminded me that there is an option within the form to display three columns on a section instead of two columns. It is not possible to adjust or change this on an existing section on a form, but you have to define it when creating a new section. One of the advantages of using three columns is to display and order certain information in a better way on a form for the users. The below screenshot shows how it can look.
https://docs.microsoft.com/en-us/archive/blogs/mscrmfreak/microsoft-crm-form-and-columns
2020-01-17T18:00:07
CC-MAIN-2020-05
1579250589861.0
[]
docs.microsoft.com
NOTE: You must use square brackets or the dot operator to index in a saveInto variable. You cannot use the index() function to save to a specific index. When saving sections of your expression into rules, you can pass a load()
https://docs.appian.com/suite/help/18.4/enabling_user_interaction.html
2019-08-17T15:02:25
CC-MAIN-2019-35
1566027313428.28
[]
docs.appian.com
Log in using SSH and configure repository access. The repository configuration file is installdir/repository/conf/svnserve.conf. The variables anon-access and auth-access can be set to the values none, read, or write. Setting the value to none prohibits both reading and writing; read allows read-only access to the repository, and write allows complete read/write access to the repository. For example, uncomment these lines for a reasonable starting configuration: [general] anon-access = read auth-access = write password-db = passwd Edit the passwd file in the same directory to manage Subversion users. For example, uncomment these lines to create two subversion users: harry and sally. [users] harry = harryssecret sally = sallyssecret Restart the Subversion server to load the changes. $ sudo installdir/ctlscript.sh restart subversion Import a project directory to Subversion from your local machine and check the files in your browser: $ svn import /path/to/project/ svn://localhost/repository/ -m "First import"
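If you prefer to script the svnserve.conf change rather than edit it by hand, the following Python sketch applies the same starting configuration using configparser. The INSTALLDIR path is a placeholder for your Bitnami installation directory, and note that rewriting the file this way discards its comments.

# Apply the [general] settings shown above programmatically.
import configparser

INSTALLDIR = "/opt/bitnami"  # hypothetical path -- adjust to your installation
CONF = f"{INSTALLDIR}/repository/conf/svnserve.conf"

config = configparser.ConfigParser()
config.read(CONF)
if not config.has_section("general"):
    config.add_section("general")
config.set("general", "anon-access", "read")
config.set("general", "auth-access", "write")
config.set("general", "password-db", "passwd")

with open(CONF, "w") as fh:
    config.write(fh)
print("svnserve.conf updated; restart Subversion with ctlscript.sh to apply.")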
https://docs.bitnami.com/installer/apps/subversion/get-started/get-started-subversion/
2019-08-17T16:17:48
CC-MAIN-2019-35
1566027313428.28
[]
docs.bitnami.com
Updating & Deleting Records Follow this guide to change the values of your records, or to delete them. Use the records table from within a zone to modify and delete them. - Click DNS in the nav menu on the left hand side. - Select the zone with records you want to modify from the list. Updating a Record All your records for this zone will appear on the table. The Name field appears first, followed by the type, then all relevant fields last. Update any fields you want to change, then select "Update Selected" from the action select field at the bottom of the form. Click Apply. Deleting a Record Check the box next to the records you wish to delete. Then, select "Delete Selected" from the action select field at the bottom of the form. Click Apply.
https://docs.cycle.io/dns/records/updating-deleting-records/
2019-08-17T15:26:36
CC-MAIN-2019-35
1566027313428.28
[array(['/static/record-table-new-e990bf8083ffeaa4f6e212c7fdfc639a.png', None], dtype=object) ]
docs.cycle.io
Introduction Campaigns Contact Lists Compliance - Suppress contact attempts based on a set of prescribed rules - Assign a time zone to phone numbers containing a specific country code or area code - Suppress contact attempts based on the location of contacts - Define allowable calling windows for each day of the week for a given region - Suppress contact attempts by date - Add or manage a suppression list
https://docs.genesys.com/Documentation/PSAAS/latest/Administrator/CXContact
2019-08-17T15:48:33
CC-MAIN-2019-35
1566027313428.28
[]
docs.genesys.com
Microsoft Intune Windows 10 Team device restriction settings This article shows you the Microsoft Intune device restrictions settings that you can configure for devices running Windows 10 Team. Apps and experience - Wake screen when someone in room - Allows the device to wake automatically when its sensor detects someone in the room. - Meeting information displayed on welcome screen - Enable this option to choose the information that is displayed on the Meetings tile of the Welcome screen. You can: - Show organizer and time only - Show organizer, time, and subject (subject hidden for private meetings) - Welcome screen background image URL - Enable this setting to display a custom background on the Welcome screen of Windows 10 Team devices from the URL you specify. The image must be in PNG format and the URL must begin with https://. Azure operational insights - Azure Operational Insights - Azure Operational Insights, part of the Microsoft Operations Manager suite collects, stores, and analyzes log file data from Windows 10 Team devices. To connect to Azure Operational insights, you must specify a Workspace ID and a Workspace Key. Maintenance - Maintenance window for updates - Configures the window when updates can take place to the device. You can configure the Start time of the window and the Duration in hours (from 1-5 hours). Wireless projection - PIN for wireless projection - Specifies whether you must enter a PIN before you can use the wireless projection capabilities of the device. - Miracast wireless projection - If you want to let the Windows 10 Team device use Miracast enabled devices to project, select this option. - Miracast wireless projection channel - Choose the Miracast channel that is used to establish the connection. Next steps Use the information in How to configure device restriction settings to save, and assign the profile to users and devices. Feedback
https://docs.microsoft.com/en-in/intune/device-restrictions-windows-10-teams
2019-08-17T15:52:15
CC-MAIN-2019-35
1566027313428.28
[]
docs.microsoft.com
NextGen There was a previous problem regarding NextGen and Page Builder Sandwich. Prior to version 2.12, PBS somehow triggered NextGen to show an error message that made your site get a blank white screen. This has been fixed as of version 2.12, so if you're using an older version, please update.
https://docs.pagebuildersandwich.com/article/94-nextgen
2019-08-17T14:43:51
CC-MAIN-2019-35
1566027313428.28
[]
docs.pagebuildersandwich.com
Provide external authentication to Agorakit You can provide Facebook, Google, Twitter and/or GitHub authentication to your users. This feature is a work in progress; help is highly welcome. To do so, get a client ID and a client secret from each provider you want to use. These are strings of characters you need to put in your .env file. Agorakit will automatically put the links on the login and registration page for each provider when [PROVIDER]_ID is set - For Twitter go to : - For Facebook, go to: - For GitHub go to - For Google go to : It's a bit of a pain to find where to get the client_id and client_key, so good luck :-) You also need to set [PROVIDER]_URL to http[s]://[yourdomain]/auth/[provider]/callback for each provider, in your .env file
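As a small convenience, the following illustrative Python helper prints the .env lines for the providers you want to enable. The *_ID and *_URL names follow the pattern described above; the *_SECRET key name is an assumption, so check Agorakit's .env.example for the exact variable expected for the client secret.

# Print .env lines for the chosen OAuth providers (illustrative only).
def env_lines(domain: str, providers: dict) -> str:
    lines = []
    for provider, creds in providers.items():
        p = provider.upper()
        lines.append(f"{p}_ID={creds['id']}")
        lines.append(f"{p}_SECRET={creds['secret']}")  # key name assumed, verify against .env.example
        lines.append(f"{p}_URL=https://{domain}/auth/{provider.lower()}/callback")
    return "\n".join(lines)

print(env_lines("example.org", {
    "github": {"id": "your-client-id", "secret": "your-client-secret"},
    "google": {"id": "your-client-id", "secret": "your-client-secret"},
}))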
https://agorakit.readthedocs.io/en/latest/external_authentification.html
2019-08-17T14:39:41
CC-MAIN-2019-35
1566027313428.28
[]
agorakit.readthedocs.io
Certain correspondence relating to the P-TECH pilot program Any correspondence, including but not limited to emails, letters and memos, sent between - The Department of Education - OR the office of the Federal Education Minister (including the Assistant Minister) - OR the office or the Prime Minister AND any company, entity or organisation EXPRESSING INTEREST IN PARTICIPATING in the P-TECH pilot program. Note: documents as released under the Freedom of Information Act, redactions applied to exempt material.
https://docs.education.gov.au/documents/certain-correspondence-relating-p-tech-pilot-program
2019-08-17T14:54:03
CC-MAIN-2019-35
1566027313428.28
[]
docs.education.gov.au
In this tutorial, we're going to flash-back to the simple server we created a while back. We'll create a very simple server where everything takes place in one thread. Once we have a solid understanding there, we'll move on to the next tutorial where we begin to introduce concurrency concepts. There are four C++ source files in this tutorial: server.cpp, client_acceptor.h, client_handler.h and client_handler.cpp. I'll talk about each of these in turn with the usual color commentary as we go. In addition, I'll briefly discuss the Makefile and a short perl script I've added.
https://docs.huihoo.com/ace_tao/ACE-5.2+TAO-1.2/ACE_wrappers/docs/tutorials/005/page01.html
2019-08-17T15:20:51
CC-MAIN-2019-35
1566027313428.28
[]
docs.huihoo.com
Appliance.
https://docs.infrascale.com/dr/cfa/management-console/system/basic
2019-08-17T14:47:07
CC-MAIN-2019-35
1566027313428.28
[array(['/images/cfa-management-console/system-tab-imgs/image1.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image1.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image2.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image2.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image3.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image3.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image4.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image4.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image5.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image5.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image6.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image6.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image7.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image7.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image11.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image11.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image13.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image13.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image14.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image14.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image15.png', None], dtype=object) array(['/images/cfa-management-console/system-tab-imgs/image15.png', None], dtype=object) ]
docs.infrascale.com
After an initial full backup, an ECS continues to be backed up incrementally by default. VBS allows you to use any backup (no matter it is a full or incremental one) to restore the data of the entire EVS disk. By virtue of this, manual or automatic deletion of a backup will not affect the restoration function. Suppose EVS disk X has backups A, B, and C (in time sequence) and every backup involves data changes. If backup B is deleted, you can still use backup A or C to restore data.
https://docs.otc.t-systems.com/en-us/usermanual/vbs/en-us_topic_0126537915.html
2019-08-17T15:47:00
CC-MAIN-2019-35
1566027313428.28
[]
docs.otc.t-systems.com
Javascript Hooks The bulk of Page Builder Sandwich is Javascript, and we've added a bunch of methods so that developers can extend and add more functionality to our page builder. To make things easier to developers, we've patterned things to how WordPress does things - via hooks! In WordPress, you typically use the functions add_filter() and add_action() to modify or add to the behavior of WP. Similarly in PBS, you can use wp.hooks.addFilter() and wp.hooks.addAction(). The way you use them is also the same: use filters to modify stuff, and use actions to perform additional stuff. Those two function calls are essentially the same as with their PHP counterparts, except they run in Javascript. So as not to be confused between PHP hooks and Javascript hooks, we've named the Javascript hooks with a "period-case" (invented word). PHP hooks all use snake case: - pbs.save.payload - "period-case", a JS hook - pbs_save_content_data - snake case, a PHP hook Refer to the Javascript Actions and Javascript Filters categories on the left menu to browse all the hooks. Credits For those curious, we're using to add those nifty hook calls in Javascript
https://docs.pagebuildersandwich.com/article/220-javascript-hooks
2019-08-17T14:57:32
CC-MAIN-2019-35
1566027313428.28
[]
docs.pagebuildersandwich.com
In this tutorial, we will walk you through the creation of a simple calculator assistant with the Snips Console. It will be able to calculate arithmetic sums. That is, it will be capable of understanding sentences such as: What is 25 plus 12? or variants thereof, such as Can you tell me the sum of 75 and 9? Once the assistant has been created in the console, it can be deployed to a device such as a Raspberry Pi, and Android or iOS device, or a Mac. This is the topic of other guides in this Getting Started series. If you haven't already, head over to console.snips.ai and create a new account: Once signed up, you will be prompted to create your first assistant. If it's not your first sign in, click on Create a New Assistant: Enter the basic assistant info. For this tutorial, we will name our assistant HelloSnips, and use English as language: Next, we will create a simple app. Click on Add an App, which opens the Snips app store window. You may select among preexisting apps, but for this tutorial, we will create our own. Therefore, click on Create a New App and give your app some basic information. We are creating a calculator app, so we will name our app Calculator: Tap Create, and the select the newly created card from the list of apps. You should see the following: Congratulations! Your app is now created, and we can start training it with intents. In the app's properties panel, click Edit App. This brings up the App Editor. Click on Create New Intent. We will create a simple intent that handles queries like "What is 3 plus 5?", that is, asking the sum of two numbers. We will call this intent ComputeSum: Once created, the Intent Editor is brought up. This is where we will feed the intent with some training sentences. We start by defining the type of slots that our intent needs. In a sentence like "What is 3 plus 5?", we want to extract the numbers 3 and 5 as slots to our intent. So we name these slots firstTerm and secondTerm and indicate that they are of type snips/number. If you want to learn more about slot types, go to the slot types page. Now, we are ready to train our assistant with some examples. In the Training Examples panel, we enter a few sentences, and tag the slots accordingly. A few phrases will do for now: Once done, we save the intent, which will automatically trigger a training of the model. Once done, we can test that it works by entering a sample phrase. This will display a JSON output, with the correctly parsed intent and slots. Your assistant is now ready for use! It's time to deploy it to your device, and write code that reacts when an intent has been detected. If you plan to deploy your assistant to a Raspberry Pi, you can create code directly in the console and deploy it to your device without further setup. Make sure to check our the following articles: For other platforms, make sure to go through the following:
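Once the assistant is deployed, your handler code receives the parsed intent and slots. The sketch below is a simplified illustration of acting on the ComputeSum intent defined above; the JSON field names are assumptions for the example, not the exact structure emitted by the Snips platform.

# Simplified handler for the ComputeSum intent with firstTerm/secondTerm slots.
parsed = {
    "intent": "ComputeSum",
    "slots": {"firstTerm": 25, "secondTerm": 12},
}

def handle(result: dict) -> str:
    if result.get("intent") != "ComputeSum":
        return "Sorry, I did not understand."
    slots = result.get("slots", {})
    first, second = slots.get("firstTerm"), slots.get("secondTerm")
    if first is None or second is None:
        return "I need two numbers to add."
    return f"{first} plus {second} is {first + second}."

print(handle(parsed))  # -> "25 plus 12 is 37."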
https://docs.snips.ai/getting-started/quick-start-console
2019-08-17T14:50:36
CC-MAIN-2019-35
1566027313428.28
[]
docs.snips.ai
Provides data for the ASPxClientUploadControl.FilesUploadComplete client event, which enables you to perform specific actions after all selected files have been uploaded. Gets a string that contains specific information (if any) passed from the server side for further client processing. Gets the error text to be displayed within the upload control's error frame.
https://docs.devexpress.com/AspNet/js-ASPxClientUploadControlFilesUploadCompleteEventArgs._properties
2019-08-17T15:35:06
CC-MAIN-2019-35
1566027313428.28
[]
docs.devexpress.com
Check for Existing Callback V2 This block checks whether a callback already exists for a caller's phone number in the specified virtual queue. Important: This check is performed separately for each virtual queue. Keep in mind that if a caller is using different virtual queues, they could potentially book multiple callbacks with the same phone number. Inputs tab Provide the Virtual Queue and Phone Number that are to be checked for existing callbacks.
https://docs.genesys.com/Documentation/PSAAS/latest/Administrator/CallbackExisting
2019-08-17T14:58:19
CC-MAIN-2019-35
1566027313428.28
[]
docs.genesys.com
Note You can access the Test Team Management folder from the Excel Reports folder for the team project in Team Explorer. You can access this folder only if your team project portal has been enabled and is provisioned to use SharePoint Products. For more information, see Access a Team Project Portal and Process Guidance.
https://docs.microsoft.com/en-us/previous-versions/ee795293(v=vs.100)
2019-08-17T14:37:57
CC-MAIN-2019-35
1566027313428.28
[]
docs.microsoft.com
Intermine Motivation The CCTS/BMI group plans to establish an Intermine server at UAB in order to support genomics (and other -omic) research. The tool provides a way to link datasets with disparate datatypes together in a single warehouse. We particularly expect it to be useful for the analysis of gene sets: storage, collaboration/sharing and researcher self-service. Initial Configuration Requirements TBA - Ranjit
https://docs.uabgrid.uab.edu/w/index.php?title=Intermine&oldid=3439
2019-08-17T14:57:24
CC-MAIN-2019-35
1566027313428.28
[]
docs.uabgrid.uab.edu
jBASIC Language Overview jBASIC, sometimes referred to as jBC, is a BASIC-style language modeled after the PICK system. It is used mostly to build business applications that can utilize a multivalue database or a third-party DBMS, depending on the configuration it is given. Benefits of jBASIC - jBASIC comes with a built-in debugger. - Applications built using jBASIC are fast without the overhead. - Calls can be made to OS functions and vice versa. - jBASIC is able to read/open operating system files and vice versa. - Since the source code is converted to pre-compiled 'C', jBASIC-specific modifications will have been implemented by jBASE in the run-time libraries. - SQL support is provided, making it possible to use jBASIC programs with a third-party SQL database. Variables Variable names begin with an alphabetic character, which can be followed by any combination of letters, digits, periods, dollar signs and underscores. File and Directory Organization jBASIC is able to create files and directories that can be read by the operating system.
https://docs.jbase.com/36868-jbase-basic/263498-jbase-basic
2019-08-17T15:34:51
CC-MAIN-2019-35
1566027313428.28
[]
docs.jbase.com
Common Problems - The "Edit with Page Builder Sandwich" Button Doesn't Appear - I Keep Getting A "Session Expired" Popup When Editing - Wordfence Firewall Setup & Compatibility - Caching & Minify Plugins (W3 Total Cache, WP Super Cache, Comet Cache, Better WordPress Minify, etc) - CloudFlare & Rocket Loader Compatibility - I Can't Find Where to Put in My License Key - How to Manually Update PBS with a Plugin Zip File - I Cannot See My Changes After Refreshing - Padding, Margins and Other Styles Are Not Applying On Elements - How to Update Page Builder Sandwich - My Screen Is Stuck in The "Please Wait..." Stage - Where Can I Download The Premium Plugin? - Where Can I Find My License Key? - All Elements Disappearing from Elements Sidebar - My Iframe Content doesn't Resize properly in Mobile
https://docs.pagebuildersandwich.com/category/86-common-problems
2019-08-17T14:46:04
CC-MAIN-2019-35
1566027313428.28
[]
docs.pagebuildersandwich.com
- Currently, the feature supports the following complex data types (CDTs), as they have multiple transform groups: - JSON - XML - ST_GEOMETRY - DATASET JSON and XML use transforms only when in Field, Record, or Indicator modes; for MultipartRecord mode and External JSON/XML data transfer, transforms are not used. - This feature cannot be applied to user-defined types. - In the same session, this statement can be executed multiple times for the same CDT to switch to different transforms. - Any transform group change for a CDT using SET TRANSFORM GROUP FOR TYPE should occur before a statement’s preparation or after the prepared statement’s execution. The transform group for a CDT cannot be changed between a statement’s preparation and its execution, as that would lead to undefined behavior. The ODBC driver will throw an error message (mentioned in Error Message) in this scenario. - If the logged-on USER already has transform settings, the SET TRANSFORM GROUP FOR TYPE statement will modify the transform setting only for the current session.
https://docs.teradata.com/reader/pk_W2JQRhJlg28QI7p0n8Q/a2inC7MJ~QOl96tT9vb8Iw
2019-08-17T14:33:35
CC-MAIN-2019-35
1566027313428.28
[]
docs.teradata.com
When a key definition is bound to a resource addressed by @href or @keyref and does not specify "none" for the @linking attribute, all references to that key definition become navigation links to the bound resource. When a key definition is not bound to a resource or specifies "none" for the @linking attribute, references to that key do not become navigation links. When a key definition has a <topicmeta> subelement, elements that refer to that key and that are empty may get their effective content from the first matching subelement of the <topicmeta> subelement of the key-defining topicref. If no matching element is found, the contents of the <linktext> tag, if present, should be used. Elements within <linktext> that do not match the content model of the key reference directly or after generalization should be skipped. For <link> tags with a keyref attribute, the contents of the <shortdesc> tag in the key-defining element should provide the <desc> contents. removed. For key reference elements that become navigation links, if there is no matching element in the key definition, normal link text determination rules apply as for <xref>. If a referencing element contains a key reference with an undefined key, it is processed as if there were no key reference, and the value of the @href attribute is used as the reference. If the @href attribute is not specified either, the element is not treated as a navigation link. If it is an error for the element to be empty, an implementation may give an error message, and may recover from this error condition by leaving the key reference element empty. The effective resource bound to the <topicref> element is determined by resolving all intermediate key references. Each key reference is resolved either to a resource addressed directly by URI reference in an @href attribute, or to no resource. Processors may impose reasonable limits on the number of intermediate key references they will resolve. Processors should support at least three levels of key references. The attributes that are common to a key definition element and a key reference element using that key, other than the @keys, @processing-role, and @id attributes, are combined as for content references, including the special processing for the @xml:lang, @dir, and @translate attributes. There is no special processing associated with either the @locktitle or the @lockmeta attributes when attributes are combined.
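The resolution behaviour described above can be pictured with a small, illustrative sketch: follow a chain of key definitions until one is bound directly to a URI (or to nothing), within a bounded number of intermediate references. The key names and mapping structure below are invented for the example.

# Toy key-resolution sketch: each key maps to a keyref, an href, or nothing.
KEY_DEFINITIONS = {
    "product-home": {"keyref": "product-home-en"},
    "product-home-en": {"keyref": "product-home-en-web"},
    "product-home-en-web": {"href": "http://example.com/product"},
}

def resolve_key(key: str, max_depth: int = 3):
    """Return the bound href, or None if the key is undefined or unbound."""
    for _ in range(max_depth + 1):
        definition = KEY_DEFINITIONS.get(key)
        if definition is None:
            return None            # undefined key: fall back to normal @href handling
        if "href" in definition:
            return definition["href"]
        if "keyref" in definition:
            key = definition["keyref"]
            continue
        return None                # key defined but bound to no resource
    return None                    # too many intermediate references

print(resolve_key("product-home"))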
http://docs.oasis-open.org/dita/v1.2/os/spec/archSpec/processing_key_references.html
2018-02-17T21:09:23
CC-MAIN-2018-09
1518891807825.38
[]
docs.oasis-open.org
Creates a VPC with the CIDR block you specify. The smallest VPC you can create uses a /28 netmask (16 IP addresses), and the largest uses a /18 netmask (16,384 IP addresses). To help you decide how big to make your VPC, go to the topic about creating VPCs in the Amazon Virtual Private Cloud Developer Guide. By default, each instance you launch in the VPC has the default DHCP options (the standard EC2 host name, no domain name, no DNS server, no NTP server, and no NetBIOS server or node type). AWS might delete any VPC that you create with this operation if you leave it inactive for an extended period of time (inactive means that there are no running Amazon EC2 instances in the VPC). Assembly: AWSSDK (Module: AWSSDK) Version: 1.5.60.0 (1.5.60.0) Inheritance Hierarchy
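A quick arithmetic check of the netmask sizes quoted above (before any addresses the platform reserves), as a minimal Python sketch:

# Address count for a given CIDR prefix length.
def addresses_in_block(prefix_length: int) -> int:
    return 2 ** (32 - prefix_length)

for prefix in (28, 24, 18):
    print(f"/{prefix}: {addresses_in_block(prefix)} IP addresses")
# /28 -> 16 addresses, /18 -> 16384 addresses, matching the figures above.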
https://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/T_Amazon_EC2_Model_CreateVpcRequest.htm
2018-02-17T21:50:24
CC-MAIN-2018-09
1518891807825.38
[array(['../icons/collapse_all.gif', None], dtype=object) array(['../icons/collapse_all.gif', None], dtype=object) array(['../icons/collapse_all.gif', None], dtype=object) array(['../icons/collapse_all.gif', None], dtype=object)]
docs.aws.amazon.com
Staging and Packages Staging is the process of ingesting your application source code into Apcera and making sure that all its dependencies are met. A stager is a job in Apcera that's responsible for some part of the staging process, such as running the source code through a test suite, or checking for a virus. A staging pipeline is a collection of stagers that run in a specified order. The end result of the staging process is a package, a collection of binary data (your application source code, for instance) and metadata. A package could represent a specific version of Ubuntu, a Java runtime, a capsule snapshot, a Ruby on Rails application, or a Bash script, to name a few. This section of the guide provides documentation on working with and creating stagers and packages, including:
https://docs.apcera.com/packages/packages-toc/
2018-02-17T21:00:33
CC-MAIN-2018-09
1518891807825.38
[]
docs.apcera.com
Setting Up a Profiling Environment Note There have been substantial changes to profiling in the .NET Framework 4. When a managed process (application or service) starts, it loads the common language runtime (CLR). When the CLR is initialized, it evaluates the following two environmental variables to decide whether the process should connect to a profiler: COR_ENABLE_PROFILING: The CLR connects to a profiler only if this environment variable exists and is set to 1. COR_PROFILER: If the COR_ENABLE_PROFILING check passes, the CLR connects to the profiler that has this CLSID or ProgID, which must have been stored previously in the registry. The COR_PROFILER environment variable is defined as a string, as shown in the following two examples. set COR_PROFILER={32E2F4DA-1BEA-47ea-88F9-C5DAF691C94A} set COR_PROFILER="MyProfiler" To profile a CLR application, you must set the COR_ENABLE_PROFILING and COR_PROFILER environment variables before you run the application. You must also make sure that the profiler DLL is registered. Note Starting with the .NET Framework 4, profilers do not have to be registered. Note To use .NET Framework versions 2.0, 3.0, and 3.5 profilers in the .NET Framework 4 and later versions, you must set the COMPLUS_ProfAPI_ProfilerCompatibilitySetting environment variable. Environment Variable Scope How you set the COR_ENABLE_PROFILING and COR_PROFILER environment variables will determine their scope of influence. You can set these variables in one of the following ways: If you set the variables in an ICorDebug::CreateProcess call, they will apply only to the application that you are running at the time. (They will also apply to other applications started by that application that inherit the environment.) If you set the variables in a Command Prompt window, they will apply to all applications that are started from that window. If you set the variables at the user level, they will apply to all applications that you start with File Explorer. A Command Prompt window that you open after you set the variables will have these environment settings, and so will any application that you start from that window. To set environment variables at the user level, right-click My Computer, click Properties, click the Advanced tab, click Environment Variables, and add the variables to the User variables list. If you set the variables at the computer level, they will apply to all applications that are started on that computer. A Command Prompt window that you open on that computer will have these environment settings, and so will any application that you start from that window. This means that every managed process on that computer will start with your profiler. To set environment variables at the computer level, right-click My Computer, click Properties, click the Advanced tab, click Environment Variables, add the variables to the System variables list, and then restart your computer. After restarting, the variables will be available system-wide. If you are profiling a Windows Service, you must restart your computer after you set the environment variables and register the profiler DLL. For more information about these considerations, see the section Profiling a Windows Service. Additional Considerations The profiler class implements the ICorProfilerCallback and ICorProfilerCallback2 interfaces. In the .NET Framework version 2.0, a profiler must implement ICorProfilerCallback2. If it does not, ICorProfilerCallback2will not be loaded. 
Only one profiler can profile a process at one time in a given environment. You can register two different profilers in different environments, but each must profile separate processes. The profiler must be implemented as an in-process COM server DLL, which is mapped into the same address space as the process that is being profiled. This means that the profiler runs in-process. The .NET Framework does not support any other type of COM server. For example, if a profiler wants to monitor applications from a remote computer, it must implement collector agents on each computer. These agents will batch results and communicate them to the central data collection computer.

Because the profiler is a COM object that is instantiated in-process, each profiled application will have its own copy of the profiler. Therefore, a single profiler instance does not have to handle data from multiple applications. However, you will have to add logic to the profiler's logging code to prevent log file overwrites by other profiled applications.

Initializing the Profiler

When both environment variable checks pass, the CLR creates an instance of the profiler in a similar manner to the COM CoCreateInstance function. The profiler is not loaded through a direct call to CoCreateInstance. Therefore, a call to CoInitialize, which requires setting the threading model, is avoided. The CLR then calls the ICorProfilerCallback::Initialize method in the profiler. The signature of this method is as follows.

HRESULT Initialize(IUnknown *pICorProfilerInfoUnk)

The profiler must query pICorProfilerInfoUnk for an ICorProfilerInfo or ICorProfilerInfo2 interface pointer and save it so that it can request more information later during profiling.

Setting Event Notifications

The profiler then calls the ICorProfilerInfo::SetEventMask method to specify which categories of notifications it is interested in. For example, if the profiler is interested only in function enter and leave notifications and garbage collection notifications, it specifies the following.

ICorProfilerInfo* pInfo;
pICorProfilerInfoUnk->QueryInterface(IID_ICorProfilerInfo, (void**)&pInfo);
pInfo->SetEventMask(COR_PRF_MONITOR_ENTERLEAVE | COR_PRF_MONITOR_GC);

By setting the notifications mask in this manner, the profiler can limit which notifications it receives. This approach helps the user build a simple or special-purpose profiler. It also reduces CPU time that would otherwise be wasted sending notifications that the profiler would just ignore.

Certain profiler events are immutable. This means that as soon as these events are set in the ICorProfilerCallback::Initialize callback, they cannot be turned off and new events cannot be turned on. Attempts to change an immutable event will result in ICorProfilerInfo::SetEventMask returning a failed HRESULT.

Profiling a Windows Service

Profiling a Windows Service is like profiling a common language runtime application. Both profiling operations are enabled through environment variables. Because a Windows Service is started when the operating system starts, the environment variables discussed previously in this topic must already be present and set to the required values before the system starts. In addition, the profiling DLL must already be registered on the system. After you set the COR_ENABLE_PROFILING and COR_PROFILER environment variables and register the profiler DLL, you should restart the target computer so that the Windows Service can detect those changes. Note that these changes enable profiling on a system-wide basis.
To prevent every managed application that subsequently runs from being profiled, you should delete the system environment variables after you restart the target computer. Because setting the variables system-wide causes every CLR process to be profiled, the profiler should add logic to its ICorProfilerCallback::Initialize callback to detect whether the current process is of interest. If it is not, the profiler can fail the callback without performing the initialization.
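Putting the initialization and event-mask steps together, the skeleton below sketches what an Initialize callback typically looks like. It is an illustration only: the class name CMyProfiler and the m_pInfo member are placeholders and not taken from the original article. It queries for ICorProfilerInfo, keeps the pointer for later use, and sets the same event mask shown above.

// Illustrative sketch of a profiler's Initialize callback (names are placeholders).
#include <cor.h>
#include <corprof.h>

HRESULT CMyProfiler::Initialize(IUnknown *pICorProfilerInfoUnk)
{
    ICorProfilerInfo *pInfo = NULL;

    // Ask the runtime for the info interface used for later requests.
    HRESULT hr = pICorProfilerInfoUnk->QueryInterface(IID_ICorProfilerInfo,
                                                      (void **)&pInfo);
    if (FAILED(hr))
        return hr;   // failing Initialize tells the CLR not to load this profiler

    // Subscribe only to enter/leave and garbage-collection notifications,
    // mirroring the SetEventMask example above.
    hr = pInfo->SetEventMask(COR_PRF_MONITOR_ENTERLEAVE | COR_PRF_MONITOR_GC);
    if (FAILED(hr))
    {
        pInfo->Release();
        return hr;
    }

    m_pInfo = pInfo;   // hold the interface pointer for the lifetime of the profiler
    return S_OK;
}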
https://docs.microsoft.com/en-us/dotnet/framework/unmanaged-api/profiling/setting-up-a-profiling-environment
2018-02-17T21:43:28
CC-MAIN-2018-09
1518891807825.38
[]
docs.microsoft.com
The appthemes_add_submenu_page action hook is triggered within the WordPress admin theme option pages. This hook provides no parameters but must be used in conjunction with the appthemes_add_submenu_page_content function in order to work properly.

Usage

This example creates a brand new option menu item called “My Menu Name” within the AppThemes option menu. You will need to paste this code within your functions.php theme file (a reconstructed sketch appears after this section). As you can see, this admin hook uses the WordPress function add_submenu_page. The parameters can all be changed except for the following:

- $parent_slug – must always be set to ‘admin-options.php’
- $capability – should be set to ‘manage_options’ or higher permissions (see WordPress Roles and Capabilities)
- $function – must match the name of your appthemes_add_submenu_page_content function

Example

This example puts all the pieces together by using both the appthemes_add_submenu_page and appthemes_add_submenu_page_content functions to create a new admin theme sub-menu, sub-menu page, and to save the options to the WordPress database. You would need to place this code in your functions.php file.

So what does all that code actually do? It creates a new theme admin sub-menu called “My Menu Name”. When you click on it, you’ll see a new page with one drop-down option called “Option Name”. If you save the page, you’ll instantly save that admin option in the WordPress options table. Then you can write a function or if statement to execute based on the value. A simpler test would be to just echo the value by using the WordPress ‘get_option’ function. Paste that anywhere in your theme index.php template and you should see either a ‘yes’ or ‘no’ printed on your screen. If not, make sure you’ve actually saved a value and that $app_abbr is globally set (i.e. global $app_abbr;).

Changelog

- since appthemes-functions.php version 1.2

Source File

appthemes_add_submenu_page() is located in includes/admin/admin-options.php.
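Because the original code listings did not survive on this page, here is a hedged reconstruction of what the example could look like, based only on the constraints described above. The function names, menu label, slug, and option key are placeholders, and the exact hook wiring may differ from the official AppThemes example, so treat this as a sketch rather than the canonical snippet.

// Hypothetical sketch for functions.php -- names and option keys are placeholders.
add_action( 'appthemes_add_submenu_page', 'my_theme_submenu_page' );
function my_theme_submenu_page() {
    add_submenu_page(
        'admin-options.php',        // $parent_slug: must always be 'admin-options.php'
        'My Menu Name',             // $page_title
        'My Menu Name',             // $menu_title
        'manage_options',           // $capability: 'manage_options' or higher
        'my-menu-name',             // $menu_slug
        'my_theme_submenu_content'  // $function: must match the content callback below
    );
}

function my_theme_submenu_content() {
    global $app_abbr;
    // Render the option form here and save its value with update_option().
    echo get_option( $app_abbr . '_my_option' ); // quick check that the value was saved
}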
https://docs.appthemes.com/hook/appthemes_add_submenu_page/
2019-08-17T16:52:49
CC-MAIN-2019-35
1566027313436.2
[]
docs.appthemes.com
Crate gpsd_proto

The gpsd_proto module contains types and functions to connect to gpsd to get GPS coordinates and satellite information. gpsd_proto uses a plain TCP socket to connect to gpsd, and reads and writes JSON messages. The main motivation to create this crate was independence from C libraries, like libgps (provided by gpsd), to ease cross compiling.

An example demo application is provided in the example sub directory. Check the repository for up to date sample code.

Testing

gpsd_proto has been tested against gpsd version 3.17 on macOS with a GPS mouse (Adopt SkyTraQ Venus 8) and the iOS app GPS2IP. Feel free to report any other supported GPS by opening a GitHub issue.

Reference documentation

Important reference documentation of gpsd are the JSON protocol and the client HOWTO.

Development notes

Start gpsd with a real GPS device:

/usr/local/sbin/gpsd -N -D4 /dev/tty.SLAB_USBtoUART

Or start gpsd with a TCP stream to a remote GPS:

/usr/local/sbin/gpsd -N -D2 tcp://192.168.177.147:11123

Test the connection to gpsd with telnet localhost 2947 and send the string:

?WATCH={"enable":true,"json":true};
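To make the handshake concrete, the sketch below opens the same TCP connection and sends the ?WATCH command shown above using only the Rust standard library; it deliberately avoids assuming any particular gpsd_proto API, since only the crate's overview is reproduced here. It assumes a gpsd instance listening on the default localhost:2947.

// Minimal sketch: talk to gpsd over TCP and print the first few JSON messages.
use std::io::{BufRead, BufReader, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("127.0.0.1:2947")?;

    // Enable watching in JSON mode, exactly as in the telnet test above.
    stream.write_all(b"?WATCH={\"enable\":true,\"json\":true};\r\n")?;

    // gpsd answers with one JSON object per line (VERSION, DEVICES, TPV, SKY, ...).
    let reader = BufReader::new(stream.try_clone()?);
    for line in reader.lines().take(5) {
        println!("{}", line?);
    }
    Ok(())
}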
https://docs.rs/gpsd_proto/0.4.1/x86_64-apple-darwin/gpsd_proto/
2019-08-17T17:26:16
CC-MAIN-2019-35
1566027313436.2
[]
docs.rs
A Password Reset process specifies the credential store, verification methods and settings, and enrollment settings. You also specify which users the process applies to. Configure Password Reset to auto-enroll users or to enable users to enroll for the program. See Configure your Password Reset process to auto-enroll users and Enable users to enroll for Password Reset.

About this task

A Password Reset process consists of the following elements:

- The credential store that contains user login credentials.
- Optionally, the user groups that are authorized to use the Password Reset process.
- The verifications that verify the identity of the requesting user and that enable the service desk agents to authorize reset of the password. (Verifications are implemented by script includes.)

Procedure

1. Navigate to Password Reset > Processes.
2. Click New and then specify a meaningful Name and Description for the process.
3. Select the Credential store that contains the user credentials that the process applies to.
4. Specify the process that you are defining: select the Password Reset check box, the Password change check box, or both check boxes.
5. Specify the Apply to all users setting.
6. For Password Reset, configure settings on the Password Reset Details tab.

Table 1. Settings on the Details tab (Field: Value)

Public access: The check box is available only when Password reset is selected. Select the check box to enable a self-service process with public user access to the Password Reset or Password Change form through a URL. Clear the check box to define a service desk-assisted process in which only service desk agents can reset a password at the request of a user.

Public URL: The field is available only when Public access is selected. URL of the page where users go to reset or change the password. The value from the URL suffix field is appended to the URL when you tab out of the URL suffix field. For the Default self-service Password Reset process, this value must be /$pwd_reset.do?sysparm_url=ss_default.

URL suffix: The field is available only when Public access is selected. Suffix used to create a unique URL for the Password Reset or Password Change form.

Display CAPTCHA: The check box is available only when Public access is selected. Select the check box to display a CAPTCHA on the user identification page. The Password Reset application uses Google reCAPTCHA as the default CAPTCHA service. See Configure Google reCAPTCHA for the password reset process. Note: The Password Reset Windows Application uses the base-system CAPTCHA service even if the Password Reset application is configured to use Google reCAPTCHA. Because on-premises instances do not have access to the Internet, the instances cannot use the Google reCAPTCHA service. Set the password_reset.captcha.google.enabled system property to false for on-premises instances. To use the base system CAPTCHA, change the password_reset.captcha.google.enabled system property to false.

Identification type: Method that the user employs to claim identity for the public Password Reset or Password Change process. Any selection overrides the default identification that is associated with the process. The base system includes the Email and Username Identification identification types. You can create a custom identification type (some knowledge of JavaScript is recommended). See Personal data identification types and confirmation type verifications.

Post-reset URL: URL to go to after successfully resetting a password; typically, the URL of the original login page.
Enter a complete path, including the protocol (for example,). If the path is under the same domain as the Public URL, then start the path with the / character.Note: If the Auto-generate password check box is selected, then the instance displays the new password. The user must click Done to go to the URL. Minimum verifications Number of verifications that a user must successfully submit to reset the password. If the number exceeds the number of mandatory verifications, then the user must submit enough additional optional verifications to meet the number specified for Minimum verifications.Note: Each user must submit all mandatory verifications regardless of the number specified.By default, during the password reset process, the system presents optional verifications to the user based on the Order values for the verifications. If you selected Allow user to choose from optional verifications, then the Verification page presents all optional verifications to the user. The user then selects the appropriate number of verifications. In this example, the Minimum verifications value is 1. Because no mandatory verifications are configured, the user can choose an optional verification.Also, see Allow user to choose from optional verifications. Allow user to choose from optional verifications Select the check box to enable a user, on the Verifications page during the process of resetting the password, to select which optional verifications to use. The choice of optional verifications appears only if the Minimum verifications setting is greater than the number of mandatory verifications. The number that you specify for Minimum verifications determines how many optional verifications that the user is allowed to select.In the example, the Minimum verifications setting is 2 and there are no mandatory verifications. The user has selected two optional verifications, so cannot select a third verification. Email Password Reset URL Select the check box to enable users to reset the password by clicking a link in an email that the instance sends to them. By default, the self-service Password Reset processes enable this option. When you select this option, the Auto-generate password check box is not available.Note: See Example: The default self-service Password Reset process for an outline of the process that is enabled by default. Enable account unlock This check box is available only when Password reset is selected. Select the check box to allow user accounts on credential stores to be unlocked without resetting the password. Note: Not supported by the default self-service Password Reset process. Unlock user account Select the check box to unlock user accounts on credential stores after a password reset. Auto-generate password Select the check box to auto-generate a new password for the user. When this check box is selected, you must select the Email password or Display password check box, or both. This setting is useful for service desk-assisted processes.This check box is available only when: The Password reset check box is selected. The Email Password Reset URL check box is cleared. Note: If you use the credential store on your local ServiceNow instance or an Active Directory credential store: Clear the check box to enable the Enforce history policy option for a credential store. See Configure the connection to a credential store for the Password Reset processes. User must reset password This check box is available only when Auto-generate password is selected. 
Select the check box to require users to reset their password immediately after logging in with the auto-generated password.Note: Users whose credentials are held in the local ServiceNow instance credential store are prompted to change their password the first time that they log in. Users whose credentials are held in an Active Directory credential store are not prompted to change their passwords in the instance. Such users must change their passwords from a computer on the domain. Display password This check box is available only when Auto-generate password is selected.Select the check box to display the new password on the screen. In a self-service process, the password appears on the user screen. In a service desk-assisted process, the password appears on the service desk agent screen. Email password This check box is available only when Auto-generate password is selected. Select the check box to email the new password to the user. The setting is useful in both self-service and service desk-assisted processes. The setting can add a layer of security by requiring that users access their email to view the password. In a service desk-assisted process, emailing the password to users ensures that only the user requesting the password reset can view the password. Table 2. Related lists on the Details tab List Description Verifications One or more verifications that the Password Reset process uses. See Password Reset verifications.The Verifications related list is available only after the record has been saved. Groups ServiceNow user groups to associate with the Password Reset process.The Groups related list is available only after the record has been saved and if the Apply to all users check box is cleared. For Password Reset, configure settings of interest on the Advanced tab. Table 3. Advanced tab Field Description Entry UI macro UI macro that displays a customized message to users when they access the initial Password Reset screen. Success UI macro UI macro that displays a customized message to users on the final Password Reset screen when their password is successfully reset. Failure UI macro UI macro that displays a customized message to users on the final Password Reset screen when their password reset fails. Post reset script Script include that performs actions after the Password Reset process completes whether the outcome is success or failure. For more information on customizing post processor scripts, see the Post reset script category as described in Password Reset extension script categories. Header UI macro / Footer UI macro Macros that add a header or footer to customize the appearance of the pages that end users work in while resetting a password (the Identify, Verify, and Reset pages. See Add a custom header or footer to the user pages for Password Reset. For Password Reset, fill in any fields of interest on the Enrollment Reminder tab. Save your changes on the Password Reset Process. Save the record and then table [sys_user] or an Active Directory server.Password Reset verificationsEach verification specifies the method and process for verifying the identity of the user that is requesting password reset.Configure your Password Reset process to auto-enroll usersTo simplify management, many organizations auto-enroll users in the Password Reset program. 
Every base-system verification type enables you to specify automatic enrollment for your process.

Enable users to enroll for Password Reset: To enable users to enroll for the Password Reset program, you specify a UI macro that takes the user through the enrollment process and a script that processes the enrollment data that the user entered. The base system includes a functioning macro and script.

Configure Password Reset properties: You can specify properties that configure the Password Reset experience for end users.

Send email to remind users to enroll for Password Reset: You can automatically send messages that remind users to enroll in the Password Reset process. You specify the text of the message and can configure the messages to repeat at intervals.

Configure the required strength for passwords: The password that a user defines must meet certain requirements; for example, it must contain at least 12 characters, it must include a numeral, and so on. You can configure the requirements as needed for your organization.

Specify lockout for failed login attempts: The system provides inactive script actions that enable you to specify the number of failed login attempts before a user account is locked and to reset the count after a successful login.

Configure Google reCAPTCHA for the password reset process: To use the Google reCAPTCHA service, instances that are running on a domain other than service-now.com require an API key pair from Google.

Settings on the Advanced tab of the Password Reset Processes form: UI macros and script includes can extend the basic functionality of a Password Reset process.

Related tasks: Plan your Password Reset processes, Configure the required strength for passwords
https://docs.servicenow.com/bundle/madrid-servicenow-platform/page/administer/login/task/t_CreateAPasswordResetProcess.html
2019-08-17T17:37:54
CC-MAIN-2019-35
1566027313436.2
[]
docs.servicenow.com
How and When Do Features Become Available? Supported. Salesforce Overall. Lightning Essentials. Sales. Service. Analytics Drive decisions with insights from reports and dashboards and Einstein Analytics. Drill down in Lightning Experience reports to get the details behind the numbers. Create Einstein Analytics assets with templates, enhanced sharing security, file-based data connectors, and much more. Communities. Chatter Automatically save your posts as drafts as you’re composing. Now when you post, the cursor moves to the text body instead of the To field so you can start writing your post right away. Files Use public links to share folders, so you can deliver collections of files to your employees, partners, customers, and prospects. Access file details on the go with mobile file detail pages. Financial Services Cloud Standardize customer engagement processes with Action Plans task templates. Find out what’s on customers’ minds with Salesforce Surveys. Give bankers a more holistic view of customers with the Commercial Banking Console Lightning app. And help customers help themselves with Einstein Bots for Financial Services Cloud. Plus, accelerate data insert and update operations with the enhanced rollup framework in Financial Services Cloud. Health Cloud. Customization. Security and Identity. Salesforce IoT. Development Whether you’re using Lightning components, Visualforce, Apex, or our APIs with your favorite programming language, these enhancements to the Salesforce Platform help you develop amazing applications, integrations, and packages for resale to other organizations.
http://docs.releasenotes.salesforce.com/en-us/winter19/release-notes/rn_feature_impact.htm
2019-08-17T17:16:29
CC-MAIN-2019-35
1566027313436.2
[]
docs.releasenotes.salesforce.com
Settings¶

How to customize the settings to suit your needs.

PACKAGINATOR_SEARCH_PREFIX (Default: "django")¶

Autocomplete searches for something like 'forms' were problematic because so many packages start with 'django'. This prefix is accommodated in searches to prevent this sort of problem.

example: PACKAGINATOR_SEARCH_PREFIX = 'django'

PACKAGINATOR_HELP_TEXT (Default: Included in settings.py)¶

Permissions Settings¶

Django Packages provides several ways to control who can make what changes to things like packages, features, and grids.

Settings that are on by default¶

By default registered users can do the following:

Packages
- Can add package
- Can change package

Grids
- Can add Package
- Can change Package
- Can add feature
- Can change feature
- Can change element

In the default condition, only super users or those with permission can delete.

Testing permissions in templates¶

A context processor will add the user profile to every template context; the profile model also handles checking for permissions:

{% if profile.can_edit_package %} <edit package UI here> {% endif %}

The following properties can be used in templates:
- can_add_package
- can_edit_package
- can_edit_grid
- can_add_grid
- can_add_grid_feature
- can_edit_grid_feature
- can_delete_grid_feature
- can_add_grid_package
- can_delete_grid_package
- can_edit_grid_element
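Since the PACKAGINATOR_HELP_TEXT example did not survive on this page, the following sketch shows how these two settings might sit together in a Django settings.py. The help-text keys and strings are purely illustrative placeholders; check the project's own settings.py for the real default dictionary.

# settings.py sketch -- values below are illustrative, not the project defaults.
PACKAGINATOR_SEARCH_PREFIX = 'django'

# Hypothetical help-text dictionary; the real keys live in the project's settings.py.
PACKAGINATOR_HELP_TEXT = {
    'REPO': 'Enter the URL of the repository that hosts this package.',
    'PYPI': 'Leave this blank if the package has no PyPI release.',
}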
https://djangopackages.readthedocs.io/en/latest/opencomparison_settings.html
2019-08-17T18:23:17
CC-MAIN-2019-35
1566027313436.2
[]
djangopackages.readthedocs.io
Examples include Splunk Add-on for Checkpoint OPSEC LEA, Splunk Add-on for Box, and Splunk Add-on for McAfee.

App and add-on support

Anyone can develop an app or add-on for Splunk software. Splunk and members of our community create apps and add-ons and share them with other users of Splunk software.

Is there any particular reason why Technology add-ons are not mentioned on this page?

Adeniel8, an add-on is a technology add-on. In the past, Splunk referred to add-ons as Technology add-ons, or TAs. We've dropped the "Technology" from the term, and now refer to them simply as Add-ons. On Splunk Apps, you can still find add-ons that use the older terminology. The Splunk Enterprise Admin manual provides overview information and general guidance for administering add-ons. See Supported Add-ons () for the documentation available for add-ons officially supported by Splunk. Consult the documentation that is provided with other add-ons that are available from Splunk Apps.
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Whatsanapp
2019-08-17T17:29:03
CC-MAIN-2019-35
1566027313436.2
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
DEPRECATION WARNING This documentation is not using the current rendering mechanism and will be deleted by December 31st, 2020. The extension maintainer should switch to the new system. Details on how to use the rendering mechanism can be found here.

Preparing setup¶

Unpack the TYPO3 source as usual, and unpack a dummy package. Set everything up as explained in the setup documentation, until you come to the point where you are asked to start the install tool – DON'T DO THIS YET!

Extensions DBAL and ADOdb are part of the system extensions shipped with TYPO3. As such, you only have to load them and add a DBAL handler configuration (a sketch of such a configuration is shown below). Of course you need to adjust the DBAL configuration to your needs; the example does nothing but route everything through ADOdb inside the DBAL extension. See the appendices for specific DBMS setup tutorials.
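Only a fragment of the configuration example survives on this page, so here is an approximate reconstruction of what a minimal DBAL handler configuration in typo3conf/localconf.php could look like. The array layout and driver name are assumptions based on the surrounding description (route everything through ADOdb with a MySQL driver); consult the DBAL appendices for the exact, DBMS-specific syntax.

// Approximate sketch for typo3conf/localconf.php -- adjust driver and options to your DBMS.
$GLOBALS['TYPO3_CONF_VARS']['EXTCONF']['dbal']['handlerCfg'] = array(
    '_DEFAULT' => array(
        'type'   => 'adodb',          // route all queries through ADOdb
        'config' => array(
            'driver' => 'mysql',      // the DBMS driver ADOdb should use
        ),
    ),
);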
https://docs.typo3.org/typo3cms/extensions/dbal/7.6/InstallingWithDbal/PreparingSetup/Index.html
2019-08-17T18:13:53
CC-MAIN-2019-35
1566027313436.2
[]
docs.typo3.org
Open States API v1¶

Open States provides a JSON API for accessing state legislative information.

Basics¶

- All API calls are URLs in the form
- Responses are JSON unless otherwise specified.
- If an error occurs the response will be a plain text error message with an appropriate HTTP error code (404 if object is not found, 401 if authentication fails, etc.).
- To use the API you must register for an API key.
- Once activated, pass your API key via the apikey query parameter or the X-API-KEY header.
- All changes to the API will be announced on the Open States Google Group. It is recommended you subscribe if you're using the API.
- For Python users, there's an official pyopenstates package available.

Data Types¶

Open States provides data about six core data types.

- State Metadata - Details on what data is available, including terms, sessions, and state-specific names for things.
- Bills - Details on bills & resolutions, including actions & votes.
- Legislators - Details on legislators, including contact details.
- Committees - Details on committees as they currently stand.
- Districts - Details on districts and their boundaries.
- Events - Events endpoints have been deprecated and will no longer return current data as of 2018.

Requesting A Custom Fieldset¶

On essentially every method in the API it is possible to specify a custom subset of fields on an object by specifying a fields parameter. There are two use cases that this functionality aims to serve:

First, if you are writing an application that loads a lot of data but only uses some of it, specifying a limited subset of fields can reduce response time and bandwidth. We've seen this approach be particularly useful for mobile applications where bandwidth is at a premium. An example would be a legislator search with fields=first_name,last_name,leg_id specified. All legislator objects returned will only have the three fields that you requested.

Second, you can actually specify a set of fields that includes fields excluded in the default response. For instance, if you are conducting a bill search, it typically does not include sponsors, though many sites may wish to use sponsor information without making a request for the full bill (which is typically much larger as it includes versions, votes, actions, etc.). A bill search that specifies fields=bill_id,sponsors,title,chamber will include the full sponsor listing in addition to the standard bill_id, title and chamber fields.

Extra Fields¶

You may notice that the fields documented are sometimes a subset of the fields actually included in a response. Many times as part of our scraping process we take in data that is available for a given state and is either not available or does not have an analog in other states. Instead of artificially limiting the data we provide to the smallest common subset we make this extra data available. To make it clear which fields can be relied upon and which are perhaps specific to a state or subset of states we prefix non-standard fields with a +.

If you are using the API to get data for multiple states, it is best to restrict your usage to the fields documented here. If you are only interested in data for a small subset of our available states it might make sense to take a more in depth look at the API responses for the state in question to see what extra data we are able to provide.
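To make the key and fields mechanics concrete, here is a small sketch using Python and the requests library. The base URL, endpoint path, and state parameter are assumptions for illustration (the exact URL form is not reproduced on this page), and the API key value is a placeholder; consult the Open States documentation or the pyopenstates package for the authoritative details.

# Illustrative sketch only -- base URL and endpoint are assumed, the key is a placeholder.
import requests

API_KEY = "YOUR-OPENSTATES-API-KEY"
BASE_URL = "https://openstates.org/api/v1"   # assumed v1 base URL

# Legislator search restricted to a small fieldset, as described above.
response = requests.get(
    f"{BASE_URL}/legislators/",
    params={"state": "nc", "fields": "first_name,last_name,leg_id"},
    headers={"X-API-KEY": API_KEY},
)
response.raise_for_status()

for legislator in response.json():
    print(legislator.get("leg_id"), legislator.get("first_name"), legislator.get("last_name"))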
http://docs.openstates.org/en/latest/api/
2018-02-18T05:10:22
CC-MAIN-2018-09
1518891811655.65
[]
docs.openstates.org
When upgrading your project to a newer version of the Telerik controls, carefully check the references to all Telerik controls in your project and make sure that they are the same version, considering the suffix as well (.20 or .40). Better yet, you can remove all references and add them anew by using the DLLs from your fresh installation. Delete the license.licx file. After that, you should rebuild your project, close Visual Studio and open it again to make sure that no references are kept in the memory by Visual Studio.
https://docs.telerik.com/devtools/winforms/installation-deployment-and-distribution/versions-upgrade
2018-02-18T05:11:26
CC-MAIN-2018-09
1518891811655.65
[array(['images/installation-deployment-and-distribution-versions-upgrade001.png', 'installation-deployment-and-distribution-versions-upgrade 001'], dtype=object) ]
docs.telerik.com
This documentation is only valid for older versions of Wordfence. If you are using Wordfence 7 or later, please visit our new documentation.

My Scan terminated with an error "...error connecting to the the Wordfence scanning servers..."

Errors about connecting to the Wordfence scanning servers usually mean that your web server (the machine that runs your WordPress website) can't connect to our scanning servers. An example message is:

Scan terminated with error: We received an error response when trying to contact the Wordfence scanning servers. The HTTP status code was [0] and the error from CURL was couldn't connect to host.

If you are having issues connecting to the Wordfence servers when scanning or getting a key, here are some things you can try (many thanks to Patrick). Run:

telnet noc1.wordfence.com 80 and telnet noc1.wordfence.com 443

You should get a response that says "Connected to noc1.wordfence.com." As long as you can connect to both port 80 and port 443, you should be able to scan.
https://docs.wordfence.com/index.php?title=My_Scan_terminated_with_an_error_%22...error_connecting_to_the_the_Wordfence_scanning_servers...%22&oldid=914
2018-02-18T05:11:28
CC-MAIN-2018-09
1518891811655.65
[]
docs.wordfence.com
Using Automatic Reflection-Based PDX Serialization

You can configure your cache to automatically serialize and deserialize domain objects without having to add any extra code to them. You can automatically serialize and deserialize domain objects without coding a PdxSerializer class. You do this by registering your domain objects with a custom PdxSerializer called ReflectionBasedAutoSerializer that uses Java reflection to infer which fields to serialize. You can also extend the ReflectionBasedAutoSerializer to customize its behavior. For example, you could add optimized serialization support for BigInteger and BigDecimal types. See Extending the ReflectionBasedAutoSerializer for details.

Note: Your custom PDX autoserializable classes cannot use the org.apache.geode package. If they do, the classes will be ignored by the PDX auto serializer.

Prerequisites

- Understand generally how to configure the GemFire cache.
- Understand how PDX serialization works and how to configure your application to use PdxSerializer.

In your application where you manage data from the cache, provide the following configuration and code as appropriate:

In the domain classes that you wish to autoserialize, make sure each class has a zero-arg constructor. For example:

public PortfolioPdx(){}

Using one of the following methods, set the PDX serializer to ReflectionBasedAutoSerializer.

In gfsh, execute the following command prior to starting up any members that host data:

gfsh>configure pdx --auto-serializable-classes=com\.company\.domain\..*

By using gfsh, this configuration can be propagated across the cluster through the Cluster Configuration Service.

Alternately, in cache.xml:

<!-- Cache configuration configuring auto serialization behavior -->
<cache>
  <pdx>
    <pdx-serializer>
      <class-name> org.apache.geode.pdx.ReflectionBasedAutoSerializer </class-name>
      <parameter name="classes">
        <string>com.company.domain.DomainObject</string>
      </parameter>
    </pdx-serializer>
  </pdx>
  ...
</cache>

The parameter, classes, takes a comma-separated list of class patterns to define the domain classes to serialize. If your domain object is an aggregation of other domain classes, you need to register the domain object and each of those domain classes explicitly for the domain object to be serialized completely.

Using the Java API:

Cache c = new CacheFactory()
  .setPdxSerializer(new ReflectionBasedAutoSerializer("com.company.domain.DomainObject"))
  .create();

Customize the behavior of the ReflectionBasedAutoSerializer using one of the following mechanisms:

- By using a class pattern string to specify the classes to auto-serialize and customize how the classes are serialized. Class pattern strings can be specified in the API by passing strings to the ReflectionBasedAutoSerializer constructor or by specifying them in cache.xml. See Customizing Serialization with Class Pattern Strings for details.
- By creating a subclass of ReflectionBasedAutoSerializer and overriding specific methods. See Extending the ReflectionBasedAutoSerializer for details.

If desired, configure the ReflectionBasedAutoSerializer to check the portability of the objects it is passed before it tries to autoserialize them. When this flag is set to true, the ReflectionBasedAutoSerializer will throw a NonPortableClassException error when trying to autoserialize a non-portable object.
To set this, use the following configuration:

In gfsh, use the following command:

gfsh>configure pdx --portable-auto-serializable-classes=com\.company\.domain\..*

By using gfsh, this configuration can be propagated across the cluster through the Cluster Configuration Service.

In cache.xml:

<!-- Cache configuration configuring auto serialization behavior -->
<cache>
  <pdx>
    <pdx-serializer>
      <class-name> org.apache.geode.pdx.ReflectionBasedAutoSerializer </class-name>
      <parameter name="classes">
        <string>com.company.domain.DomainObject</string>
      </parameter>
      <parameter name="check-portability">
        <string>true</string>
      </parameter>
    </pdx-serializer>
  </pdx>
  ...
</cache>

Using the Java API:

Cache c = new CacheFactory()
  .setPdxSerializer(new ReflectionBasedAutoSerializer(true,"com.company.domain.DomainObject"))
  .create();

For each domain class you provide, all fields are considered for serialization except those defined as static or transient and those you explicitly exclude using the class pattern strings.

Note: The ReflectionBasedAutoSerializer traverses the given domain object's class hierarchy to retrieve all fields to be considered for serialization. So if DomainObjectB inherits from DomainObjectA, you only need to register DomainObjectB to have all of DomainObjectB serialized.
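As a companion to the registration shown above, the sketch below illustrates what a domain class needs in order to work with the auto-serializer: a zero-arg constructor and ordinary instance fields. The class name and fields are hypothetical examples, not taken from the GemFire samples.

// Hypothetical auto-serializable domain class (field names are illustrative).
package com.company.domain;

public class DomainObject {

    private String name;
    private int quantity;

    // transient and static fields are skipped by the auto-serializer
    private transient String cachedDisplayString;

    // Required zero-arg constructor, as noted in the steps above.
    public DomainObject() {
    }

    public DomainObject(String name, int quantity) {
        this.name = name;
        this.quantity = quantity;
    }
}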
http://gemfire.docs.pivotal.io/geode/developing/data_serialization/auto_serialization.html
2018-02-18T04:43:06
CC-MAIN-2018-09
1518891811655.65
[]
gemfire.docs.pivotal.io
Developer's Guide Overview

This article is relevant to entity models that utilize the deprecated Visual Studio integration of Telerik Data Access. The current documentation of the Data Access framework is available here.

Telerik Data Access is a tool that aids the development of data oriented applications. As an object-relational mapper, it solves the object-relational impedance mismatch – a set of conceptual and technical problems which occur when using an RDBMS from within applications developed with an object-oriented language. Telerik Data Access enables you to work with data in the form of domain-specific hierarchies of objects without concerning yourself with the underlying database tables and columns. This adds a level of abstraction over the data, allows the developer to concentrate on solving the business logic problems at hand, and reduces the amount of code required to develop and maintain well designed, data-oriented applications.

Developer's Guide

Oftentimes all that a developer needs to accomplish or progress the task at hand is guidance on how to use a certain set of features. The Developer's Guide section aims to provide you with information on how to combine the features of Telerik Data Access in order to accomplish a concrete task. Each section within the Developer's Guide deals with a specific topic and contains "How to" articles regarding various Telerik Data Access use cases, such as defining your Domain Model mapping, retrieving data using LINQ, calling stored procedures and functions, integrating Telerik Data Access in Web Applications, and many others. For more information regarding specific features of the product, you can also refer to the Feature Reference section. For actual end-to-end projects, do not hesitate to try the Telerik Data Access Samples Kit.
https://docs.telerik.com/data-access/deprecated/developers-guide/developers-guide-overview.html
2018-02-18T05:11:02
CC-MAIN-2018-09
1518891811655.65
[]
docs.telerik.com
How HAWQ Manages Resources

HAWQ manages resources (CPU, memory, I/O and file handles) using a variety of mechanisms including global resource management, resource queues and the enforcement of limits on resource usage.

Globally Managed Environments

In Hadoop clusters, resources are frequently managed globally by YARN. YARN provides resources to MapReduce jobs and any other applications that are configured to work with YARN. In this type of environment, resources are allocated in units called containers. In a HAWQ environment, segments and node managers control the consumption of resources and enforce resource limits on each node. The following diagram depicts the layout of a HAWQ cluster in a YARN-managed Hadoop environment:

When you run HAWQ natively in a Hadoop cluster, you can configure HAWQ to register as an application in YARN. After configuration, HAWQ's resource manager communicates with YARN to acquire resources (when needed to execute queries) and return resources (when no longer needed) back to YARN. Resources obtained from YARN are then managed in a distributed fashion by HAWQ's resource manager, which is hosted on the HAWQ master.

Cluster Memory to Core Ratio

The HAWQ resource manager chooses a cluster memory to core ratio when most segments have registered and when the resource manager has received a cluster report from YARN (if the resource manager is running in YARN mode). The HAWQ resource manager selects the ratio based on the amount of memory available in the cluster and the number of cores available on registered segments. The resource manager selects the smallest ratio possible in order to minimize the waste of resources. HAWQ trims each segment's resource capacity automatically to match the selected ratio. For example, if the resource manager chooses 1GB per core as the ratio, then a segment with 5GB of memory and 8 cores will have 3 cores cut. These cores will not be used by HAWQ. If a segment has 12GB and 10 cores, then 2GB of memory will be cut by HAWQ.

After the HAWQ resource manager has selected its ratio, the ratio will not change until you restart the HAWQ master node. Therefore, memory and core resources for any segments added dynamically to the cluster are automatically cut based on the fixed ratio.

To find out the cluster memory to core ratio selected by the resource manager, check the HAWQ master database logs for messages similar to the following:

Resource manager chooses ratio 1024 MB per core as cluster level memory to core ratio, there are 3072 MB memory 0 CORE resource unable to be utilized.

You can also check the master logs to see how resources are being cut from individual segments due to the cluster memory to core ratio. For example:

Resource manager adjusts segment localhost original resource capacity from (8192 MB, 5 CORE) to (5120 MB, 5 CORE)
Resource manager adjusts segment localhost original global resource manager resource capacity from (8192 MB, 5 CORE) to (5120 MB, 5 CORE)

See HAWQ Database Server Log Files for more information on working with HAWQ database server log files.
http://hdb.docs.pivotal.io/220/hawq/resourcemgmt/HAWQResourceManagement.html
2018-02-18T04:44:28
CC-MAIN-2018-09
1518891811655.65
[array(['../images/hawq_high_level_architecture.png', 'Hawq high level architecture'], dtype=object)]
hdb.docs.pivotal.io
Authentication and Authorization in Azure Mobile Apps What is App Service Authentication / Authorization? Note This article will be migrated to a consolidated App Service Authentication / Authorization article, which covers Web, Mobile, and API Apps. App Service Authentication / Authorization is a feature that allows your application to log in users with no code changes required on the app backend. It provides an easy way to protect your application and work with per-user data. App Service uses federated identity, in which a 3rd-party identity provider ("IDP") stores accounts and authenticates users. The application uses this identity instead of its own. App Service supports five identity providers out of the box: Azure Active Directory, Facebook, Google, Microsoft Account, and Twitter. You can also expand this support for your apps by integrating another identity provider or your own custom identity solution. Your app can use any number of these identity providers, so you can provide your end users with options for how they log in. If you wish to get started right away, see one of the following tutorials: - Add authentication to your iOS app - Add authentication to your Xamarin.iOS app - Add authentication to your Xamarin.Android app - Add Authentication to your Windows app How authentication works In order to authenticate using one of the identity providers, you first need to configure the identity provider to know about your application. The identity provider will then provide you with IDs and secrets that you provide back to the application. This completes the trust relationship and allows App Service to validate identities provided to it. These steps are detailed in the following topics: - Once everything is configured on the backend, you can modify your client to log in. There are two approaches here: - Using a single line of code, let the Mobile Apps client SDK sign in users. - Use an SDK published by a given identity provider to establish identity and then gain access to App Service. Tip Most applications should use a provider SDK to get a more native-feeling login experience and to leverage refresh support and other provider-specific benefits. How authentication without a provider SDK works If you do not wish to set up a provider SDK, you can allow Mobile Apps to perform the login for you. The Mobile Apps client SDK opens a web view to the provider of your choosing to complete the sign-in. Occasionally, you might see this workflow referred to as the "server flow" or "server-directed flow" since the server is managing the login, and the client SDK never receives the provider token. The code needed to start this flow is covered in the authentication tutorial for each platform. At the end of the flow, the client SDK has an App Service token, and the token is automatically attached to all requests to the backend. How authentication with a provider SDK works Working with a provider SDK allows the log-in experience to interact more tightly with the platform OS the app is running on. The provider SDK also gives you a provider token and some user information on the client, making it much easier to consume graph APIs and customize the user experience. Occasionally, you might see this workflow referred to as the "client flow" or "client-directed flow" since code on the client is handling the login, and the client code has access to a provider token. Once a provider token is obtained, it needs to be sent to App Service for validation. 
At the end of the flow, the client SDK has an App Service token, and the token is automatically attached to all requests to the backend. The developer can also keep a reference to the provider token if they so choose. redirect. Documentation The following tutorials show how to add authentication to your mobile clients using App Service: - Add authentication to your iOS app - Add authentication to your Xamarin.iOS app - Add authentication to your Xamarin.Android app - Add Authentication to your Windows app The following tutorials show how to configure App Service to use different authentication providers: - If you wish to use an identity system other than the ones provided here, you can also use the preview custom authentication support in the .NET server SDK.
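To show how little client code the server-directed flow described above requires, here is a sketch using the .NET (Xamarin/Windows) client SDK. The site URL is a placeholder, Facebook stands in for whichever provider you configured, and some platforms also require passing the UI context to the login call; refer to the platform-specific tutorials listed in the Documentation section above for complete, copy-ready code.

// Server flow sketch with the Azure Mobile Apps .NET client SDK (URL is a placeholder).
using Microsoft.WindowsAzure.MobileServices;

// ... inside an async method of your client app:
var client = new MobileServiceClient("https://contoso.azurewebsites.net");

// Opens the provider's login page in a web view; on success the SDK holds an
// App Service token and attaches it to subsequent requests automatically.
MobileServiceUser user =
    await client.LoginAsync(MobileServiceAuthenticationProvider.Facebook);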
https://docs.microsoft.com/en-us/azure/app-service-mobile/app-service-mobile-auth
2018-02-18T04:53:24
CC-MAIN-2018-09
1518891811655.65
[]
docs.microsoft.com
Route Blob storage events to a custom endpoint created with third-party tools, either RequestBin or Hookbin.

Note: RequestBin and Hookbin are not intended for high throughput usage; they are suitable for demonstration and testing only.

There are a few ways to launch the Cloud Shell. If you choose to install and use the CLI locally, this article requires that you are running the latest version of Azure CLI (2.0.24 or later).

Note: Availability for Storage events is tied to Event Grid availability and will become available in other regions as Event Grid does.

Rather than write code to respond to the event, let's create an endpoint that collects the messages so you can view them. RequestBin and Hookbin are third-party tools that enable you to create an endpoint, and view requests that are sent to it. Go to RequestBin, and click Create a RequestBin, or go to Hookbin and click Create New Endpoint. Copy the bin URL, because you need it when subscribing to the topic.

You subscribe to a topic to tell Event Grid which events you want to track. The following example subscribes to the storage account you created, and passes the URL from RequestBin or Hookbin as the endpoint for event notification. Replace <event_subscription_name> with a unique name for your event subscription, and <endpoint_URL> with the value from the preceding section. By specifying an endpoint when subscribing, Event Grid handles the routing of events to that endpoint. For <resource_group_name> and <storage_account_name>, use the values you created earlier.

storageid=$(az storage account show --name <storage_account_name> --resource-group <resource_group_name> --query id --output tsv)
az eventgrid event-subscription create \
  --resource-id $storageid \
  --name <event_subscription_name> \
  --endpoint <endpoint_URL>

To view the event, go to the endpoint URL that you created earlier, or click refresh in your open RequestBin or Hookbin page.
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-event-quickstart?toc=%2Fazure%2Fevent-grid%2Ftoc.json
2018-02-18T04:55:26
CC-MAIN-2018-09
1518891811655.65
[array(['media/storage-blob-event-quickstart/request-result.png', 'Event data'], dtype=object) ]
docs.microsoft.com
Configure the remote access Server Configuration Option

This topic is about the "Remote Access" feature. This configuration option is an obscure SQL Server to SQL Server communication feature that is deprecated, and you probably shouldn't be using it. If you reached this page because you are having trouble connecting to SQL Server, see one of the following topics instead:

Tutorial: Getting Started with the Database Engine
Connect to SQL Server When System Administrators Are Locked Out
Connect to a Registered Server (SQL Server Management Studio)
Connect to Any SQL Server Component from SQL Server Management Studio
Connect to the Database Engine With sqlcmd
How to Troubleshoot Connecting to the SQL Server Database Engine

Programmers may be interested in the following topics:

How To: Connect to SQL Server Using SQL Authentication in ASP.NET 2.0
Connecting to an Instance of SQL Server
How to: Create Connections to SQL Server Databases

The main body of this topic starts here. This topic describes how to configure the remote access server configuration option in SQL Server 2017 by using SQL Server Management Studio or Transact-SQL. To prevent local stored procedures from being run from a remote server or remote stored procedures from being run on the local server, set the option to 0.

Important: This feature will be removed in the next version of Microsoft SQL Server. Do not use this feature in new development work, and modify applications that currently use this feature as soon as possible. Use sp_addlinkedserver instead.

When remote access is not enabled, execution of a stored procedure on a linked server fails when using four part naming, such as the syntax EXEC SQL01.TestDB.dbo.proc_test;. Use EXECUTE ... AT syntax instead, such as EXEC(N'TestDB.dbo.proc_test') AT [SQL01];.

In This Topic

Before you begin: Limitations and Restrictions
To configure the remote access option, using: SQL Server Management Studio, Transact-SQL
Follow Up: After you configure the remote access option

Before You Begin

Limitations and Restrictions

- The remote access option only applies to servers that are added by using sp_addserver, and is included for backward compatibility.

Using SQL Server Management Studio

To configure the remote access option:

In Object Explorer, right-click a server and select Properties. Click the Connections node. Under Remote server connections, select or clear the Allow remote connections to this server check box.

Using Transact-SQL

To configure the remote access option:

Connect to the Database Engine. From the Standard bar, click New Query. Copy and paste the following example into the query window and click Execute. This example shows how to use sp_configure to set the value of the remote access option to 0.

EXEC sp_configure 'remote access', 0 ;
GO
RECONFIGURE ;
GO

For more information, see Server Configuration Options (SQL Server).

Follow Up: After you configure the remote access option

This setting does not take effect until you restart SQL Server.

See Also

RECONFIGURE (Transact-SQL)
Server Configuration Options (SQL Server)
sp_configure (Transact-SQL)
https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/configure-the-remote-access-server-configuration-option
2018-02-18T05:14:08
CC-MAIN-2018-09
1518891811655.65
[array(['../../includes/media/yes.png', 'yes'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object)]
docs.microsoft.com
Test if an IBAN code for GB can be set in a partner's bank account

Check if an IBAN for GB can be set in a partner's bank account:

- Go to the window Land, Region, Stadt and select the England entry.
- Go to its IBAN section and make sure all the fields are filled according to the specification from this document: (page 29, 2.23 GB – United Kingdom, IBAN section).
- Go to the Organization BPartner for the org you logged on with.
- Go to their BankAccount tab (Bank Konto).
- Create a new entry and set the IBAN GB29NWBK60161331926819 (taken from the document above).
- Create a manual payment (use Zahlung) and use this new account for it.
http://docs.metasfresh.org/tests_collection/testcases/Testcase_FRESH-886.html
2018-02-18T05:06:22
CC-MAIN-2018-09
1518891811655.65
[]
docs.metasfresh.org
Button Types There are 3 types of buttons; Primary, Secondary and Tertiary. Each of those has different colors to suit different actions (to convey meaning). The CSS below is the base style for all buttons. All button styles have specific override deviations from the button base style — all documented on this page. CSS: Button Basics /* Button */ border-radius: 5px; padding: 0 50px; font-family: Apercu-Bold; font-size: 14px; text-align: center; Primary Button There are 4 different types of primary button, each has guidelines on their correct usage. See below: Action, Navigation, Light and Cancel. - Primary buttons pair best with Tertiary buttons. - Used for actionable and navigational links to a new page, submit content, or continue to a next step. - All primary buttons share the same disabled state. CSS: Same as button basics, with the following differences: /* Primary button */ height: 44px; /* Disabled */ background-color: $gray30; color: $white; Primary Action Button - Should be used for key, or the most important action on a page (e.g. main CTA on the home page, or log in, or submit). - Used for 'actions' like booking, signing up, logging in, saving, updating, paying or submitting. - If the button is not a key action, but to navigate to another page, consider using the Primary Navigation or Primary Light button style instead. - Try not to include too many primary action buttons on the same page — focus on one main objective. More than a one action button is likely too much choice. - Pairs well with a Secondary Action button. CSS: Same as Primary, plus: /* Primary Action */ background-color: $yellow50; color: $black50; /* :hover */ background-color: $yellow60; Primary Navigation Button - Used for navigational links to other pages. - Note: A Primary Action button can also be used for navigational links, if the link is a high priority action (e.g. the solo button/link on the home page). - Pairs well with a Tertiary Navigation button. CSS: Same as Primary, plus: /* Primary Navigation */ background-color: $blue50; color: $white; /* :hover */ background-color: $blue60; Primary Light Button - Use only to overlay images or dark color backgrounds. - Never use on a light background color. - Used sparingly as an alternative to a Primary Navigation button, when blue doesn't work well (e.g. overlaying an image). - Pairs well with a Secondary Light or Tertiary Light button. - Avoid using as the main call to action for a page — consider using the Primary Action button for important CTAs. CSS: Same as Primary, plus: /* Primary Light */ background-color: $white; color: $black50; /* :hover */ background-color: $yellow30; /* Disabled */ background-color: $white; color: $black50; opacity: 0.1; Primary Cancel Button - Used sparingly to clearly communicate the intent to cancel or delete something. - More appropriate to product scenarios, than marketing. CSS: Same as Primary, plus: /* Primary Cancel */ background-color: $red50; color: $white; /* :hover */ background-color: $red60; Secondary Button - For navigation to other content on the same page — to anchor up/down the page. - If the link leads to a different page, consider using a Primary button. - Can be used to navigate to content on other pages in special circumstances, if a Primary button style doesn't work so well. - Never pair with a Tertiary button (use a Primary button to pair with a tertiary). 
CSS: Same as button basics, with the following differences: /* Secondary button */ height: 40px; border: 2px solid; background: none; /* Disabled (secondary) */ border-color: $gray30; Secondary Action Button - Not for use on its own. - Secondary action oriented button to pair with a Primary Action button. CSS: Same as Primary, plus: /* Secondary Action */ background-color: $yellow30; color: $black50; /* :hover */ background-color: $yellow40; Secondary Navigation Button - Primary use to link/anchor to content on the same page, for use on a light background. - If the button navigates to a different page, use the 'solid' Primary Navigation button instead. CSS: Same as Secondary, plus: /* Secondary Navigation */ border-color: $blue50; color: $blue50; /* :hover */ border-color: $blue70; color: $blue70; Secondary Light Button - Link/anchor to content on the same page, for use on a dark background, or image. - Never use on a light background color. CSS: Same as Secondary, plus: /* Secondary Light */ border-color: $white; color: $white; /* :hover */ border-color: $yellow30; color: $yellow30; /* Disabled (secondary — light) */ border-color: $white; color: $white; opacity: 0.2; Tertiary Button - Tertiary buttons can be used in isolation, for less prominent actions, or paired with a Primary or Secondary button, or multiple CTAs around the same subject (see some example use cases). - A Tertiary button has the same dimensions (height) as primary and secondary, only it has no background or border color. CSS: Same as button basics, with the following differences: /* Tertiary button */ background: none; padding: 0; text-align: left; /* Disabled (tertiary) */ color: $gray30; Tertiary Navigation Button - Use for navigational links to other pages. - Works well paired with a Primary Navigation button. CSS: Same as Tertiary, plus: /* Tertiary Navigation */ color: $blue50; /* :hover */ color: $blue70; Tertiary Light Button - Use for navigational links to other pages. - Use only to overlay images or dark color backgrounds. - Never use on a light background color. - Works well paired with a Primary Light button. CSS: Same as Tertiary, plus: /* Tertiary Light */ color: $white; /* :hover */ color: $yellow30; /* Disabled */ color: $white; opacity: 0.2;
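Pulling the pieces above together, a composed class for the most common case (a Primary Action button) might look like the following. The .btn-primary-action class name is a placeholder of our own choosing, and the $-prefixed values are the design system's Sass color variables referenced throughout this page.

/* Hypothetical composed class: button basics + Primary + Action */
.btn-primary-action {
  /* base button style */
  border-radius: 5px;
  padding: 0 50px;
  font-family: Apercu-Bold;
  font-size: 14px;
  text-align: center;

  /* primary button sizing */
  height: 44px;

  /* action colors */
  background-color: $yellow50;
  color: $black50;
}

.btn-primary-action:hover {
  background-color: $yellow60;
}

.btn-primary-action:disabled {
  background-color: $gray30;
  color: $white;
}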
http://rivendell-docs.netlify.com/button-specs/
2018-02-18T05:14:11
CC-MAIN-2018-09
1518891811655.65
[array(['buttons.png', 'Button basics'], dtype=object) array(['but-primary-action.png', 'Primary Action Button'], dtype=object) array(['but-primary-navi.png', 'Primary Navigation Button'], dtype=object) array(['but-primary-light.png', 'Primary Light Button'], dtype=object) array(['but-primary-cancel.png', 'Primary Cancel Button'], dtype=object) array(['but-secondary-action.png', 'Secondary Action Button'], dtype=object) array(['but-secondary-navi.png', 'Secondary Navigation Button'], dtype=object) array(['but-secondary-light.png', 'Secondary Light Button'], dtype=object) array(['but-tertiary-navi.png', 'Tertiary Navigation Button'], dtype=object) array(['but-tertiary-light.png', 'Tertiary Light Button'], dtype=object)]
rivendell-docs.netlify.com
This documentation is only valid for older versions of Wordfence. If you are using Wordfence 7 or later, please visit our new documentation. Error when trying to add a username to cell phone sign-in The username that you enter when setting up the cellphone sign-in needs to be a user account in your WordPress installation. This may or may not be the same as the username you created on the Wordfence.com site to manage your Wordfence account. See the instructions here: Activating Cellphone Sign-in for a user
https://docs.wordfence.com/index.php?title=Error_when_trying_to_add_a_username_to_cell_phone_sign-in&oldid=394
2018-02-18T05:07:21
CC-MAIN-2018-09
1518891811655.65
[]
docs.wordfence.com
NAME
lappend - Append list elements onto a variable
SYNOPSIS
lappend varName ?value value value ...?
DESCRIPTION
This command treats the variable given by varName as a list and appends each of the value arguments to that list as a separate element, with spaces between elements. If varName does not exist, it is created as a list with elements given by the value arguments.
EXAMPLE
Using lappend to build up a list of numbers.
% set var 1
1
% lappend var 2
1 2
% lappend var 3 4 5
1 2 3 4 5
SEE ALSO
list, lindex, linsert, llength, lset, lsort, lrange
KEYWORDS
append, element, list, variable
http://docs.activestate.com/activetcl/8.5/tcl/tcl/TclCmd/lappend.html
2018-02-18T05:25:42
CC-MAIN-2018-09
1518891811655.65
[]
docs.activestate.com
Developer Guide - 1. NiFi Developer's Guide - Introduction - NiFi Components - Processor API - Supporting API - AbstractProcessor API - Component Lifecycle - Restricted - State Manager - Reporting Processor Activity - Documenting a Component - Provenance Events - Common Processor Patterns - Error Handling - General Design Considerations - Controller Services - Reporting Tasks - Command Line Tools - Testing - NiFi Archives (NARs) - Per-Instance ClassLoading - How to contribute to Apache NiFi
https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.0/bk_developer-guide/content/error-handling.html
2018-02-18T05:18:24
CC-MAIN-2018-09
1518891811655.65
[array(['../common/images/loading.gif', 'loading table of contents...'], dtype=object) ]
docs.hortonworks.com
Metabase with TreasureData (Experimental)
Metabase is an open-source data collaboration and visualization platform. Users can visualize their own Treasure Data data with Metabase.
Setup Metabase
See Metabase installation. This page describes a way to connect to Treasure Data Presto from Metabase using the Mac OS X application.
Setup Presto connection
After installing Metabase, visit the Databases page on the Admin menu and click “Add database”. You'll then see a list of database types. To connect Treasure Data Presto to Metabase, enter the following information based on your account:
- Database type: Presto
- Name: <Any name>
- Host: api-presto.treasuredata.com
- Port: 443
- Database name: td-presto
- Database username: <YOUR TREASURE DATA APIKEY>
- Database password: <any password (Treasure Data Presto validates only the APIKEY)>
- Use an SSH-tunnel for database connections: OFF (not supported)
- This is a large database, so let me choose when Metabase syncs and scans: YES
After the connection is established, Metabase inspects the metadata of the fields in your tables and automatically assigns each one a field type. You'll also select one of three scan options in the Scheduling tab. See here for more details. If you'd like to sync your database manually at any time, click on it in the Databases list in the admin panel and click the Sync button on the right side of the screen. You can see your table schema under View Schema after the schema load is completed. Now you can create a Native Query under New Question. You can specify a table with FROM <database_name>.<table_name>.
Next Step
See the following doc for more detail about Metabase.
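To sanity-check these settings before wiring up Metabase, you can query the same Presto endpoint from Python. The sketch below is an assumption: it uses the third-party presto-python-client (prestodb) package, and the schema and table names (sample_datasets, www_access) are placeholders for your own database and table.

# pip install presto-python-client   (assumed client; any Presto DB-API client works similarly)
import prestodb

conn = prestodb.dbapi.connect(
    host='api-presto.treasuredata.com',   # same host as the Metabase setting above
    port=443,
    user='YOUR_TREASURE_DATA_APIKEY',     # Treasure Data validates only the API key
    catalog='td-presto',
    schema='sample_datasets',             # placeholder database name
    http_scheme='https',
)
cur = conn.cursor()
cur.execute('SELECT COUNT(1) FROM www_access')  # placeholder table
print(cur.fetchall())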
https://docs.treasuredata.com/articles/metabase
2018-02-18T04:45:42
CC-MAIN-2018-09
1518891811655.65
[array(['https://t.gyazo.com/teams/treasure-data/3271d24f764d6674c5895551316f3117.png', None], dtype=object) array(['https://t.gyazo.com/teams/treasure-data/cc67e5f574619ed2cc3f8887eca26494.png', None], dtype=object) array(['https://t.gyazo.com/teams/treasure-data/9802457930a2e68e7881b4bbf5483279.png', None], dtype=object) array(['https://t.gyazo.com/teams/treasure-data/3520350eff822c61ddfb0ed95e7f1c3c.png', None], dtype=object) array(['https://t.gyazo.com/teams/treasure-data/c6f951da89ba738e10e0ef2f2fae634c.png', None], dtype=object) array(['https://t.gyazo.com/teams/treasure-data/508467d550207499d38b89a6b5db1e49.png', None], dtype=object) ]
docs.treasuredata.com
Issue #19435: CGI directory traversal
An error in separating the path and filename of the CGI script to run in http.server.CGIHTTPRequestHandler allows running arbitrary executables in the directory under which the server was started.
- Disclosure date: 2013-10-29 (Python issue #19435 reported)
Fixed In
- Python 2.7.6 (2013-11-10) fixed by commit 1ef959a (2013-10-30)
- Python 3.2.6 (2014-10-11) fixed by commit 04e9de4 (2013-10-30)
- Python 3.3.4 (2014-02-09) fixed by commit 04e9de4 (2013-10-30)
- Python 3.4.0 (2014-03-16) fixed by commit 04e9de4 (2013-10-30)
Python issue
Directory traversal attack for CGIHTTPRequestHandler.
- Python issue: issue #19435
- Creation date: 2013-10-29
- Reporter: Alexander Kruppa
Timeline
Timeline using the disclosure date 2013-10-29 as reference:
- 2013-10-29: Python issue #19435 reported by Alexander Kruppa
- 2013-10-30 (+1 days): commit 04e9de4
- 2013-10-30 (+1 days): commit 1ef959a
- 2013-11-10 (+12 days): Python 2.7.6 released
- 2014-02-09 (+103 days): Python 3.3.4 released
- 2014-03-16 (+138 days): Python 3.4.0 released
- 2014-10-11 (+347 days): Python 3.2.6 released
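For context, the affected component is the standard library's CGI-enabled request handler. The sketch below shows how such a server is typically started; the address and directory layout are illustrative only. On the unpatched versions listed above, a crafted request path could make the handler execute a program in the directory the server was started from, rather than only scripts under the configured CGI directories.

# Minimal CGI server using the class named in the advisory (Python 3 stdlib only).
from http.server import HTTPServer, CGIHTTPRequestHandler

# Requests under these prefixes are treated as CGI scripts and executed.
CGIHTTPRequestHandler.cgi_directories = ['/cgi-bin', '/htbin']  # stdlib defaults

if __name__ == '__main__':
    httpd = HTTPServer(('127.0.0.1', 8000), CGIHTTPRequestHandler)  # illustrative address
    httpd.serve_forever()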
http://python-security.readthedocs.io/vuln/issue_19435_cgi_directory_traversal.html
2018-02-18T05:13:24
CC-MAIN-2018-09
1518891811655.65
[]
python-security.readthedocs.io
Create a change request template You can create a template that can be used to create change requests with pre-defined supporting tasks. Before you beginRole required: admin Procedure Navigate to System Definition > Templates. Click New or open an existing change request template to modify. Click Configure > Form Layout to add the following fields to the template: Next related template, Next related child template, Link element. From the Table, choose from one of two default change request template configuration items: ItemLink element Change_request None. This object does not have a link element, because it is at root level. Change_task Parent. Because this task object is one level below root level, it uses the parent table as a link element. In this case, the parent is change_request. Edit the fields on the change request template as needed: FieldDescription Name Unique and descriptive name for this template. Table Select the table the template applies to. Active Check to make template available for Enter a unique short description for the template. Template This field automatically displays after selecting a table and used to auto-populate records. Click and select the field from the table. You can select multiple fields. Enter the information that auto-populates. Next related template Using this field creates a record at the same hierarchical level (sibling) as the current template. Using this field on a child template specifies an extra child template under the same parent template. This field is not supported on top-level templates. Next related child template This field creates a record at the hierarchical level below (child) the current template. You can assign a child template to a child template. Link element Use this field to link a record created from a child template to the record created from the parent template. The template script include chooses the first valid reference field that can link to the parent record when this field is left blank. Click Save to save the change request template. Related TasksCreate a change request from a CIRequest a standard change from the catalogCopy a change request
https://docs.servicenow.com/bundle/geneva-it-service-management/page/product/change_management/task/create-a-change-request-template.html
2018-02-18T05:12:49
CC-MAIN-2018-09
1518891811655.65
[]
docs.servicenow.com
Elastic Query Execution Runtime HAWQ uses dynamically allocated virtual segments to provide resources for query execution. In HAWQ 1.x, the number of segments (compute resource carrier) used to run a query is fixed, no matter whether the underlying query is big query requiring many resources or a small query requiring little resources. This architecture is simple, however it uses resources inefficiently. To address this issue, HAWQ now uses the elastic query execution runtime feature, which is based on virtual segments. HAWQ allocates virtual segments on demand based on the costs of queries. In other words, for big queries, HAWQ starts a large number of virtual segments, while for small queries HAWQ starts fewer virtual segments. Storage In HAWQ, the number of invoked segments varies based on cost of query. In order to simplify table data management, all data of one relation are saved under one HDFS folder. For all the HAWQ table storage formats, AO (Append-Only) and Parquet, the data files are splittable, so that HAWQ can assign multiple virtual segments to consume one data file concurrently to increase the parallelism of a query. Physical Segments and Virtual Segments In HAWQ, only one physical segment needs to be installed on one host, in which multiple virtual segments can be started to run queries. HAWQ allocates multiple virtual segments distributed across different hosts on demand to run one query. Virtual segments are carriers (containers) for resources such as memory and CPU. Queries are executed by query executors in virtual segments. Note: In this documentation, when we refer to segment by itself, we mean a physical segment. Virtual Segment Allocation Policy Different number of virtual segments are allocated based on virtual segment allocation policies. The following factors determine the number of virtual segments that are used for a query: - Resources available at the query running time - The cost of the query - The distribution of the table; in other words, randomly distributed tables and hash distributed tables - Whether the query involves UDFs and external tables - Specific server configuration parameters, such as default_hash_table_bucket_numberfor hash table queries and hawq_rm_nvseg_perquery_limit
http://hdb.docs.pivotal.io/220/hawq/overview/ElasticSegments.html
2018-02-18T04:38:52
CC-MAIN-2018-09
1518891811655.65
[]
hdb.docs.pivotal.io
Event ID 301 — Software Installation Processing Applies To: Windows Server 2008 The Software Installation client-side extension is responsible for installing software, applied through Group Policy, to both computers and users. Event Details Resolve This is a normal condition. No further action is required. Related Management Information Software Installation Processing Group Policy Infrastructure
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc727263(v=ws.10)
2018-02-18T05:43:57
CC-MAIN-2018-09
1518891811655.65
[array(['images/dd300121.green%28ws.10%29.jpg', None], dtype=object)]
docs.microsoft.com
When an Identity Mechanism using the key generator is chosen in a composite primary key there must be an Identity Mechanism Member specified for the persistent type. This article is relevant to entity models that utilize the deprecated Visual Studio integration of Telerik Data Access. The current documentation of the Data Access framework is available here. This error occurs when a persistent class in the domain model uses Identity Mechanism different from Default but it does not have a Identity Mechanism Member specified. Solution To resolve this error you can either select which one of the primary key members will use the Autoinc key generator, or set the Identity Mechanism of the class to default. To select which primary key member will use the Autoinc key generator: - Double-click the error to open the Validation Dialog. - Choose the Select one of the primary key members which will use the Autoinc key generator option. Select a primary key member from the drop-down list. Click the Fix Selected button. To set the Identity Mechanism of the class to default: - Double-click the error to open the Validation Dialog. Select the Set the identity mechanism of the class to default option. Click the Fix Selected button.
https://docs.telerik.com/data-access/deprecated/developers-guide/data-access-domain-model/validating-the-domain-model/validation-errors/data-access-tasks-model-tools-validate-rules-backend-calculated-identity-mecha-member-not-selected.html
2018-02-18T04:50:21
CC-MAIN-2018-09
1518891811655.65
[array(['/data-access/images/1oatasks-modeltools-validation-rules-backendcalcedidentitymechmembernotselected-010.png', None], dtype=object) ]
docs.telerik.com
Service Generation Outcome (Version1) This article is relevant to entity models that utilize the deprecated Visual Studio integration of Telerik Data Access. The current documentation of the Data Access framework is available here. The Service Wizard allows you to generate WCF Data Services based on Telerik Data Access. This topic discusses the changes that the wizard will make to your project when you generate WCF Data Service version 1. For more information about the WCF Data Service versioning, please read here. References The wizard adds references to: - System.Data.Entity.dll - System.Data.Services.dll - System.Data.Services.Client.dll - System.ServiceModel.dll - System.ServiceModel.Web.dll - Telerik.OpenAccess.dll - Telerik.OpenAccess.35.Extensions.dll - <Your Telerik Data Access Model Project> Generated Files When WCF Data Service version1 is generated, the wizard will automatically create the actual service files (.svc and .cs) and will add them to the output project. - DataServiceKeys.cs - defines which properties are key for each entity type exposed by the service. - DataManager.cs - this is a wrapper around OpenAccessContext that exposes data manipulation endpoints to the DataService. - EntitiesModelService.svc - this is the hosting svc file for the service. EntitiesModelService.svc.cs - contains the implementation of the service contract. The implementation uses DataService<T> as base type and sets access rights for entity endpoints. To see the EntitiesModelService.svc.vb file in VB projects, use the Show All Files command from the Solution Explorer toolbar.
https://docs.telerik.com/data-access/deprecated/developers-guide/using-web-services/data-services/developer-guide-wcfservices-data-service-generation-outcome-1.html
2018-02-18T04:50:05
CC-MAIN-2018-09
1518891811655.65
[array(['/data-access/images/1oatasks-workingdsw-wcfdataservices-010.png', None], dtype=object) ]
docs.telerik.com
Milagro MFA Overview A zero-knowledge proof protocol is a method by which one party (the prover) can prove to another party (the verifier) that a given statement is true, without conveying any additional information apart from the fact that the statement is indeed true. Proving that one possesses certain knowledge is, in most cases, trivial if one is allowed to simply reveal that knowledge; the challenge is proving that one has such knowledge without revealing it or without revealing anything else. Background Milagro authentication is the Apache licensed version of the M-Pin Protocol. It was first introduced in academic circles over a decade ago by Dr. Michael Scott, MIRACL's chief cryptographer, and has been cited over three thousand times in cryptographic research since initial publication. To date, no known theoretical or practical attacks exist against it. For more information on the M-Pin Protocol in general, refer to the M-Pin cryptographic white papers available on the MIRACL website in the MIRACL Labs section. Protocol Milagro Authentication is based on a zero-knowledge proof authentication protocol using proven, strong, standards-based elliptic curve cryptography: - Server Keys and Client Keys are issued according to elliptic curve cryptography principles, and the server can tell whether a client key comes from the right elliptic curve set. - The server can prove who a user is without having to store client credentials, or in a database with its current set up of passwords. - Credentials (Client Keys) are NEVER exchanged (encrypted or unencrypted) between a user and the server. - Server Key compromise does not reveal anything about users or their credentials, eliminating scenarios like password database breaches. - The code that manipulates the Client Key (a user's credential) runs in the user's browser or app, therefore no separate hardware tokens or software installations are required. The picture below represents schematically the operation of the M-Pin Authentication Protocol:
http://docs.milagro.io/en/mfa/getting-started/milagro-mfa-overview.html
2017-04-23T13:49:08
CC-MAIN-2017-17
1492917118707.23
[array(['http://cdn2.hubspot.net/hub/230906/file-2034175627-jpg/Images-cos/diagrams/1-m-pin-authentication-overlay.jpg', '1-m-pin-authentication-overlay'], dtype=object) ]
docs.milagro.io
This change log is for SQL Server Data Tools (SSDT) for Visual Studio 2015. For detailed posts about what’s new and changed, please visit the SSDT Team blog. SSDT 17.0 (supports up to SQL Server vNext) Build number: 14.0.61704.140 What's New? Database projects: - Amending a clustered index on a view will no longer block deployment. - Schema comparison strings relating to column encryption will use vNext" - Support for CDC Control Task, CDC Splitter and CDC Source when targeting SQL Server vNext. AS projects: - Analysis Services PowerQuery Integration (1400 compat-level tabular models): - DirectQuery is available for SQL Oracle, And Teradata if user has installed 3rd Party drivers - Add columns by example in PowerQuery - Data access options in 1400 models (model-level properties used by M engine) - heirarchy compat compat compat-level models calculated table UI when using default formatting for column type to allow changing the formatting type from the UI. SSDT June . SSDT April (for SQL Server 2016 RC3) Released: April 15, 2016 Build number: 14.0.60413.0. - Support for using LocalDB 2014. SSDT Hotfix (for SQL Server 2016 RC2) Released: April 5, 2016 Build number: 14.0.60329.0 This build contains a hotfix for the version of SSDT that provides features for SQL Server Integration Services. Build 14.0.60316.0 can also be used with Analysis Services and Reporting Services in SQL Server 2016. To get this hotfix, use the download links on this blog post. Report developers, if you build new reports using this build of SSDT, read the known issue and workaround for a for a temporary issue in SSRS reports found only in this hotfix. SSDT Hotfix (for SQL Server 2016 RC0) Released: March 18, 2016 Build number: 14.0.60316.0 This build contains a hotfix for the version of SSDT that provides features for SQL Server 2016 RC0. There is no RC1 version of SSDT at this time. Build 14.0.60316.0 can be used with either RC0 or RC1 of SQL Server 2016. SSDT February 2016 Preview (for SQL Server 2016 RC0) Released: March 7, 2016 Build number: 14.0.60305.0 SQL Server project templates No announcements for this SSDT preview release. See What's New in Database Engine to learn about other features in this release. SSIS package project templates SSIS Designer creates and maintains packages for SQL Server 2016, 2014, or 2012. New templates renamed as parts. SSIS Hadoop connector supports for ORC format. See What's New in Integration Services for details. SSAS project templates (Tabular model projects) This month’s update to Analysis Services delivers support for display folders for Tabular models and any models created with new SQL Server 2016 compatibility level is now supported in SSIS packages. For more information. see What's New in Analysis Services (blog post) for details. SSRS report project templates No announcements for this SSDT preview release. See What's New in Reporting Services to learn about other features in this release. SSDT January 2016 Preview Released: Feb 4, 2016 Build number: 14.0.60203.0 SQL Server project templates No announcements for this SSDT preview release. See What's New in Database Engine to learn about other features in this CTP. SSIS package project templates Adds support for ODBC source and destination components, a CDC control task, a CDC source and splitter component, a Microsoft Connector for SAP BW, and an Integration Services Feature Pack for Azure. See What's New in Integration Services for details. 
SSAS project templates Includes enhancements for Tabular models at 1200 compatibility level, calculated columns and row-level security for models in DirectQuery mode, translations of model metadata, TMSL script execution in the SSIS Analysis Services Execute DDL Task, and numerous bug fixes. See What's New in Analysis Services (msdn) or What's New in Analysis Services (blog post) for details. SSRS report project templates No announcements for this SSDT preview release. See What's New in Reporting Services to learn about other features in this CTP. SSDT December 2015 Preview SQL Server project templates include bug fixes for the Connection dialog box, recent history lists, proper use of authentication context set in the connection property when loading a database list. Changed test connection timeout value to 15 seconds. Create an Azure SQL Database server firewall rule if the client IP is not registered when loading a database list. SQL Server 2016 CTP3.2 feature programmability support. SSAS project templates add support for creating calculated tables based on DAX expressions and other objects already defined in the model. SSIS package project template additions include SSIS Hadoop connector support for Avro file format and Kerberos authentication. Please note that SSIS designer support for SSIS 2012 and 2014 is not yet included in this update. SSDT November 2015 Preview SQL Server project templates. Preview of improved connection experience for SQL Server and Azure SQL Database. SSIS package project templates. SSIS catalog performance improvement: The performance for most SSIS catalog views for non-ssis-admin user is improved. SSAS project templates include enhancements for Tabular model projects in Analysis Services. You can use the View Code command to view the model definition in JSON. If you aren't using a full-featured edition Visual Studio 2015, you will need one to get the JSON editor. You can download the Visual Studio Community edition for free. SSDT October 2015 Preview New project templates for BI (Analysis Services models, Reporting Services reports, and Integration Services packages). All SQL Server project templates are now in one SSDT. New SSIS features including Hadoop connector, control flow template, relaxed max buffer size of data flow task. SQL Server 2016 CTP 3.0 feature support for relational database projects. Various bug fixes in SSIS and support for Windows 7 OS. SSDT September 2015 Preview - Multi-language support is new in this preview. SSDT August 2015 Preview - New standalone Setup.exe program for installing SSDT. You no longer need to use a modified version of SQL Server Setup. This version of SSDT includes a project template for building relational databases deployed to SQL Server or Azure SQL Database. See Also Download SQL Server Data Tools (SSDT) Previous releases of SQL Server Data Tools (SSDT and SSDT-BI) What's New in Database Engine What's New in Analysis Services What's New in Integration Services
https://docs.microsoft.com/en-us/sql/ssdt/changelog-for-sql-server-data-tools-ssdt
2017-04-23T13:48:32
CC-MAIN-2017-17
1492917118707.23
[]
docs.microsoft.com
JavaScript unit tests¶ As an alternative to the black-box whole-app testing, you can unit test individual JavaScript files. You can run tests as follow: tools/test-js-with-node The JS unit tests are written to work with node. You can find them in frontend_tests/node_tests. Here is an example test from frontend_tests/node_tests/stream_data.js: (function test_get_by_id() { stream_data.clear_subscriptions(); var id = 42; var sub = { name: 'Denmark', subscribed: true, color: 'red', stream_id: id }; stream_data.add_sub('Denmark', sub); sub = stream_data.get_sub('Denmark'); assert.equal(sub.color, 'red'); sub = stream_data.get_sub_by_id(id); assert.equal(sub.color, 'red'); }()); The names of the node tests generally align with the names of the modules they test. If you modify a JS module in static/js you should see if there are corresponding test in frontend_tests/node_tests. If there are, you should strive to follow the patterns of the existing tests and add your own tests. Coverage reports¶ You can automatically generate coverage reports for the JavaScript unit tests like this: tools/test-js-with-node --coverage If tests pass, you will get instructions to view coverage reports in your browser. Note that modules that we don’t test at all aren’t listed in the report, so this tends to overstate how good our overall coverage is, but it’s accurate for individual files. You can also click a filename to see the specific statements and branches not tested. 100% branch coverage isn’t necessarily possible, but getting to at least 80% branch coverage is a good goal. Handling dependencies in unit tests¶ The following scheme helps avoid tests leaking globals between each other. You want to categorize each module as follows: - Exercise the module’s real code for deeper, more realistic testing? - Stub out the module’s interface for more control, speed, and isolation? - Do some combination of the above? For all the modules where you want to run actual code, add statements like the following toward the top of your test file: zrequire('util'); zrequire('stream_data'); zrequire('Filter', 'js/filter'); For modules that you want to completely stub out, please use a pattern like this: set_global('page_params', { email: '[email protected]' }); // then maybe further down: // Import real code. zrequire('narrow'); // And later... narrow.
https://zulip.readthedocs.io/en/stable/testing/testing-with-node.html
2018-12-10T06:17:45
CC-MAIN-2018-51
1544376823318.33
[]
zulip.readthedocs.io
Imports data into the data warehouse. This data is in a specified file that has first been generated from the eureport-export-data command. eucadw-import-data -e filename -p password [-r] None. Eucalyptus returns a message detailing the number of entries imported and the timeframe of those entries. eucadw-import-data -e iReport.dat -p mypassword Imported 45 entries from 2012-11-07 23:08:17 to 2012-11-07 23:37:59
http://docs.eucalyptus.cloud/eucalyptus/4.4.4/euca2ools-guide/eucadw-import-data.html
2018-12-10T07:25:00
CC-MAIN-2018-51
1544376823318.33
[]
docs.eucalyptus.cloud
API & Addons¶ Valispace intends to be open to integration into various other tools. This is how we imagine Valispace integrating with your engineering workflow: For this vision to become reality, Valispace offers a series of Addons and encourages you, to build additional ones yourself. REST API¶ Most of the data stored in the Valispace database is accessible in machine readable (.json) format through a REST-API. You can find it in yourdeployment.valispace.com/rest/ (e.g. here in our demo deployment) The REST API is very powerful and gives you direct read/write access to your data. However with great power comes great responsibility. In case you would like to write your own application which can interface to Valispace, don't hesitate to contact us to support you. However we have already built some addons for your convenience: Python API¶ The Valispace python API lets you access and update objects in your Valispace deployment with python code. Install the Valispace python API with pip: pip install valispace Import valispace API module in a python script: import valispace and initialize with valispace = valispace.API() More information about the functionalities and functions in the API can be found at github.com/valispace/ValispacePythonAPI. The Valispace python API is licensed under the MIT license, which means that anyone can contribute to the code by cloning the GitHub repository. To store your Valispace login credentials we recommend using Keyring, instead of storing them as variables in your source code. MS Word Addon¶ The MS Word Addon lets you insert values from your Valispace deployment into your document as a field and let's you refresh them with the newest values at any time. With Add Vali you can start typing to use the auto-complete function or select a vali from the dropdown: With Refresh all Valis all values which have been inserted with this plugin are refreshed with the newest value. Don't worry. When you give your Word Document to another person who does not have the plugin or access to your Valispace Deployment, all values will still be available in the document. Download and installation¶ Download it from GitHub here and install it by copying this file into the following folder: %APPDATA%/Microsoft/Word/STARTUP. When you start MS Word, a new ribbon will appear: Vali-Tools. Fill in your user credentials using the ValiAddon Settingsbutton: Remember to use https:// in the URL if you want the connection to be securely encrypted. MS Excel Addon¶ The MS Excel Addon lets you exchange data bi-directionally between your Excel Worksheet and your Valispace deployment. (Pull) Valis¶¶¶ MATLAB Toolbox¶ The Matlab Toolbox lets you read from and write to your Valispace deployment directly from your Matlab simulations. 
Example Usage: % 1) Valispace Login ValispaceInit("","username","password") % 2) optional: pull all Valis for faster access or access via name ValispacePull() % 3a) get Vali as a struct 3a) ValispaceGetVali("MySat.Mass") % 3b) get value 3b) ValispaceGetValue("MySat.Mass") % 4) push value to Valispace 4) ValispacePushValue("MySat.Mass",0) % 4b) update dataset (x values as first row and y values as second row) 4b) ValispacePushDataset("MySat.Mass", [0,1,2,3,4,5,6; 10,20,30,40,50,60,70]) % get matrix values from matrix ID 5) ValispaceGetMatrix(217) % push matrix values 6) ValispacePushMatrix(217,[2,3;4,5]) % post data through REST API 7) ValispacePost(url, data) % get data in json format through REST API 8) ValispaceGet(url) Please note: Until you run clear all the all ValispaceGetVali() and ValispaceGetValue() will use the cached values from your last ValispacePull() call. ValispaceGetVali() / ValispaceGetValue() / ValispacePushValue() work with the argument as a string (name) or integer (id) i.e. ValispaceGetValue("MySat.Mass") and ValispaceGetValue(217). When using these functions with an integer id, step 2) can be skipped. In this case the WebInterface will be accessed with every individual call. Both ValispacePushValue() and ValispacePushMatrix() can also push formulas (e.g. $MySat.Mass*5) instead of values Download and installation¶ Download it from GitHub here and install the toolbox via double click. You can then activate it by clicking the Valispace Toolbox Icon in your Apps Ribbon inside Matlab. The Matlab plugin is tested with Matlab version R2017a. Satsearch integration¶ The satsearch integration enables you to populate your Valispace environment with subsystems and components from the satsearch database. Setting up the integration on satsearch.co¶ To get started, sign up for an account at satsearch.co/register. Once you’ve activated your account, log in, which will take you to your account home. Click on Integrations to take you to the Integrations home. Click on the Valispace integration tile. You can watch the video on this page for an overview of how the satsearch integration with Valispace works. Click on the Activate button the left, to activate the integration with Valispace for your account. Click on the View Settings button or Settings tab that appear after successful activation to edit the URL for your Valispace server. You can use demo.valispace.com for testing purposes (if you’ve created a demo account previously). Using the integration¶ To add a satsearch part to your Valispace project, navigate to a project page, e.g., Honeywell’s HR 0610 reaction wheel. On the product page, below the image you’ll see an Add button. Click on this button to push an instance of this reaction wheel to Valispace. You’ll be prompted to log in with your Valispace credentials and authorize the integration to write data to your project. If the part was successfully added to your Valispace project, you’ll be returned to the product page on satsearch and greeted with a success message. To see the results of your push action, go to your Valispace project, where you will find the new component listed in your project tree. Support¶ For support relating to the satsearch integration, join the satsearch Slack community or send an email to [email protected].
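Tying the Python API section above together, a minimal usage sketch might look like the following. Only pip install valispace, import valispace and valispace.API() come from the text above; the method names get_vali_by_name and update_vali are hypothetical stand-ins for illustration, so check the ValispacePythonAPI README on GitHub for the real function names and signatures.

import valispace

# Initialize the API; you will be prompted for the deployment URL and credentials
# (or supply them as arguments / via Keyring, as recommended above).
api = valispace.API()

# Hypothetical method names for illustration only -- see the GitHub README.
mass = api.get_vali_by_name("MySat.Mass")   # read a Vali (assumed helper)
api.update_vali(mass["id"], formula="250")  # push a new formula/value (assumed helper)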
http://docs.valispace.com/user-guide/addons/
2018-12-10T06:06:08
CC-MAIN-2018-51
1544376823318.33
[array(['../../img/integration.png', 'Valispace Engineering Tool Integration'], dtype=object) array(['../../img/word_addon_ribbon.png', 'Valispace MS Word Ribbon'], dtype=object) array(['../../img/word_addon_dropdown.png', 'Valispace MS Word Dropdown'], dtype=object) array(['../../img/word_addon_settings.png', 'Valispace MS Word Addon Settings'], dtype=object) array(['../../img/excel_addon_menu.png', 'Excel Addon Menu'], dtype=object) array(['../../img/plugins/satsearch/180724_satsearch_valispace_integration_addon_docs_1.png', 'Account home on satsearch'], dtype=object) array(['../../img/plugins/satsearch/180724_satsearch_valispace_integration_addon_docs_2.png', 'Integrations home on satsearch'], dtype=object) array(['../../img/plugins/satsearch/180724_satsearch_valispace_integration_addon_docs_3.png', 'Valispace integration home on satsearch'], dtype=object) array(['../../img/plugins/satsearch/180724_satsearch_valispace_integration_addon_docs_4.png', 'Successful activation of Valispace integration on satsearch'], dtype=object) array(['../../img/plugins/satsearch/180724_satsearch_valispace_integration_addon_docs_5.png', 'A product page on satsearch with Valispace integration button'], dtype=object) array(['../../img/plugins/satsearch/180724_satsearch_valispace_integration_addon_docs_6.png', 'Product added to Valispace through satsearch integration'], dtype=object) ]
docs.valispace.com
Phoenix2 ID for board option in “platformio.ini” (Project Configuration File): [env:phoenix_v2] platform = espressif8266 board = phoenix_v2 You can override default Phoenix 2.0 settings per build environment using board_*** option, where *** is a JSON object path from board manifest phoenix_v2.json. For example, board_build.mcu, board_build.f_cpu, etc. [env:phoenix_v2] platform = espressif8266 board = phoenix_v2 ; change microcontroller board_build.mcu = esp8266 ; change MCU frequency board_build.f_cpu = 80000000L Debugging¶ PIO Unified Debugger currently does not support Phoenix 2.0 board.
http://docs.platformio.org/en/latest/boards/espressif8266/phoenix_v2.html
2018-12-10T07:09:44
CC-MAIN-2018-51
1544376823318.33
[]
docs.platformio.org
#define PLATFORM_GCC
Defined if the compiler is GCC (or compatible).
Detailed Description
Deprecated. Use STDLIB_VS, STDLIB_GNU or STDLIB_LLVM to know which standard library is currently used, or use COMPILER_MSVC, COMPILER_GCC, COMPILER_LINTEL, COMPILER_WINTEL or COMPILER_CLANG to know which compiler is currently used.
Data Races
Thread safety unknown!
http://docs.seqan.de/seqan/develop/macro_PLATFORM_95GCC.html
2018-12-10T06:07:23
CC-MAIN-2018-51
1544376823318.33
[]
docs.seqan.de
CL Surround Lines
Examples
Parameters
Under Inspector – Crosshair Layer category:
- Line Size: the x,y size of each line
- Surround Arrangement/N Lines: the number of lines
- Surround Arrangement/Center Gap: the distance between the element center and the crosshair center
- Surround Arrangement/Step Angle Force Equal Division: switch to equally divide the 360 degrees to place elements
- Surround Arrangement/Step Angle: the angle between adjacent elements
- Recoil Response/Enable Recoil Response: switch to dynamically respond to recoil
- Recoil Response/Recoil to Center Gap Percentage: the percentage to apply the recoil to the center gap
- Recoil Response/Recoil to Arc Radius Percentage: the percentage to apply the recoil to the circle radius
- Transform/Element Alignment: the size/scale pivot
- Transform/Element Scale: the scale (x,y) of each line
- Transform/Element Rotation Pivot: the rotation pivot
- Transform/Element Angle: the angle of the element (rotation)
- Transform/Layer Angle: the angle of the layer (rotation)
- Transform/Layer Translation: the translation of the layer (encounters screen resolution change)
- Animation – Auto Rotation/Auto Rotation RPM (Element): the auto rotation speed of the element
- Animation – Auto Rotation/Auto Rotation RPM (Layer): the auto rotation speed of the layer
- Color/Ignore Global Color: switch to ignore the global tint color when deciding the final color
https://docs.jiffycrew.com/docs/crosshair-assembler-unity/the-10-crosshair-building-blocks/cl-surround-lines
2018-12-10T06:58:19
CC-MAIN-2018-51
1544376823318.33
[]
docs.jiffycrew.com
If you remove a virtual machine or template from a host but do not remove it from the host datastore, you can return it to the host's inventory. Procedure - Click Storage in the VMware Host Client inventory. - Right-click a datastore from the list and click Register a VM. - Select the virtual machine you want to register from the list and click Register.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.html.hostclient.doc/GUID-55B11BF1-2F7E-4426-9DE8-ADA7700BA77F.html
2018-12-10T06:39:39
CC-MAIN-2018-51
1544376823318.33
[]
docs.vmware.com
Miscellaneous Ops
The pyro.ops module implements high-level utilities that are mostly independent of the rest of Pyro.
velocity_verlet(z, r, potential_fn, step_size, num_steps=1)
Second order symplectic integrator that uses the velocity verlet algorithm.
single_step_velocity_verlet(z, r, potential_fn, step_size, z_grads=None)
A special case of the velocity_verlet integrator where num_steps=1. It is particularly helpful for the NUTS kernel.
newton_step_2d(loss, x, trust_radius=None)
Performs a Newton update step to minimize loss on a batch of 2-dimensional variables, optionally regularizing to constrain to a trust region. loss must be twice-differentiable as a function of x. If loss is 2+d-times differentiable, then the return value of this function is d-times differentiable. When loss is interpreted as a negative log probability density, the returned update remains differentiable, for example:
x = newton_step_2d(loss, x, trust_radius=1.0)  # the final x is still differentiable
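To make the integrator above concrete, here is a plain Python sketch of the velocity-Verlet update for a single unit-mass particle. This is an illustration of the algorithm only, not Pyro's implementation: Pyro's version operates on dictionaries of tensors via a potential_fn, and its exact return signature may differ between releases.

def velocity_verlet_step(z, r, grad_potential, step_size):
    # One kick-drift-kick step: half momentum update, full position update,
    # then another half momentum update (unit mass assumed).
    r_half = r - 0.5 * step_size * grad_potential(z)
    z_new = z + step_size * r_half
    r_new = r_half - 0.5 * step_size * grad_potential(z_new)
    return z_new, r_new

# Example: harmonic oscillator with potential U(z) = 0.5 * z**2, so dU/dz = z.
z, r = 1.0, 0.0
for _ in range(100):
    z, r = velocity_verlet_step(z, r, lambda q: q, step_size=0.1)
print(z, r)  # total energy stays close to its initial value (0.5), as expected of a symplectic method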
http://docs.pyro.ai/en/0.2.1-release/ops.html
2018-12-10T07:21:38
CC-MAIN-2018-51
1544376823318.33
[]
docs.pyro.ai
Benefits of Structured Storage COM provides a set of services collectively called structured storage. Among the benefits of these services is the reduction of performance penalties and overhead associated with storing separate objects in a flat file. Instead of a flat file, COM stores the separate objects in a single, structured file consisting of two main elements: storage objects and stream objects. Together, they function like a file system within a file.. At the same time, structured storage enables end users to interact and manage a compound file as if it were a single file rather than a nested hierarchy of separate objects. Structured storage also has other benefits: - Incremental access. If a user needs access to an object within a compound file, the user can load and save only that object, rather than the entire file. - Multiple use. More than one end user or application can concurrently read and write information in the same compound file. - Transaction processing. Users can read or write to COM compound files in transacted mode, where changes made to the file are buffered and can subsequently either be committed to the file or reversed. - Low-memory saves. Structured storage provides facilities for saving files in low-memory situations.
https://docs.microsoft.com/en-us/windows/desktop/Stg/benefits-of-structured-storage
2018-12-10T07:06:35
CC-MAIN-2018-51
1544376823318.33
[]
docs.microsoft.com
Get a code signing certificate Before you can establish a Partner Center account, you need to get a. Step 1: Determine which type of code signing certificate you need Microsoft accepts standard code signing and extended validation (EV) code signing certificates from partners enrolled and authorized for Kernel Mode Code Signing as part of the Microsoft Trusted Root Certificate Program. If you already have an approved standard or EV certificate from one of these authorities, you can use it to establish a Partner Center account. If you don’t have a certificate, you’ll need to buy a new one. The table below provides the details of the Certificate requirements for each of the dashboard services. Note The Partner Center will enforce mandatory EV certificates for submissions later this year. Code signing certificates for Partner Center There are two types of code signing certificates available today: Standard Code Signing Provides standard level of identity validation Requires shorter processing times and lower cost Can be used for all Partner Center services except LSA, and UEFI file signing services. In Windows 10 for desktop editions (Home, Pro, Enterprise, and Education), standard code signing cannot be used for kernel-mode drivers. For more info about these changes, see Code Signing FAQ. Extended Validation (EV) Code Signing Provides the highest level of identity validation Requires longer processing times and higher cost due to an extensive verification process Can be used for all Partner Center services, and is required for LSA and UEFI file signing services In Windows 10 for desktop editions, all kernel-mode drivers must be signed by the Partner Center and the Partner Center requires an EV certificate. For more info about these changes, see Code Signing FAQ. Step 2: Buy a new code signing certificate If you don’t have an approved standard or EV code signing certificate, you can buy one from one of the certificate authorities below. Standard code signing certificates Buy a Symantec standard code signing certificate Buy a Certum standard code signing cert (Supported only in the Partner Center) Buy an Entrust standard code signing cert Buy a GlobalSign standard code signing cert Buy a Comodo standard code signing cert Buy a DigiCert standard code signing certificate On the DigiCert Code Signing Certificates for Sysdevs page, click Start. On the DigiCert Order Form page (Step 1), in the Code Signing section, click Code Signing Certificate. Still on Step 1, scroll down to the Platform section, select Microsoft Authenticode from the drop-down list, and then click Continue. Follow the instructions provided by DigiCert to buy a certificate. Extended validation code signing certificates(required for UEFI, kernel-mode drivers, and LSA certifications) Buy a Symantec EV code signing certificate Buy a Certum EV code signing cert Buy an Entrust EV code signing cert Buy a GlobalSign EV code signing certificate Buy a Comodo EV code signing certificate Buy a DigiCert EV code signing certificate Once the certificate authority has verified your contact information and your certificate purchase is approved, follow their directions to retrieve the certificate. Note You must use the same computer and browser to retrieve your certificate. Next steps If you’re setting up a new Partner Center account, follow the steps in Register for the Hardware Program. If you’ve already set up a Partner Center account and need to renew a certificate, follow the steps in Update a code signing certificate. 
Code Signing FAQ This section provides answers to frequently asked questions about code signing for Windows 10. Additional code signing information is available on the Windows Hardware Certification blog. HLK Tested and Dashboard Signed Drivers - A dashboard signed driver that has passed the HLK tests will work on Windows Vista through Windows 10, including Windows Server editions. This is the recommended method for driver signing, because it allows a single process for all OS versions. In addition, HLK tested drivers demonstrate that a manufacturer has rigorously tested their hardware to meet all of Microsoft's requirements with regards to reliability, security, power efficiency, serviceability, and performance, so as to provide a great Windows experience. This includes compliance with industry standards and adherence with Microsoft specifications for technology-specific features, helping to ensure correct installation, deployment, connectivity and interoperability. For more information about the HLK, see Windows Hardware Compatibility Program. -. Cross-Signing and SHA-256 Certificates Cross-signing describes a process where a driver is signed with a certificate issued by a Certificate Authority (CA) that is trusted by Microsoft. For more information, see Cross-Certificates Overview. - Windows 8 and later versions support SHA-256. - Windows 7, if patched, supports SHA-256. If you need to support unpatched devices that run Windows 7, you need to either cross-sign with a SHA-1 certificate or submit to the Dashboard for signing. Otherwise, you can either cross-sign with SHA-1 or SHA-2 certificate or create an HLK/HCK submission for signing. - Because Windows Vista doesn’t support SHA-256, you need to either cross-sign with a SHA-1 certificate or create an HLK/HCK submission for Windows Vista driver signing. - A driver cross-signed with a SHA-256 certificate (including an EV certificate) issued prior to July 29th, 2015 will work on Windows 8 and later. It will not work on Windows Vista or Windows Server 2008. - A driver cross-signed with a SHA-256 certificate (including an EV certificate) issued prior to July 29th, 2015 will work on Windows 7 or Server 2008R2 if the patch issued through Windows Update earlier this year has been applied. For more information, see Availability of SHA-2 Hashing Algorithm for Windows 7 and Windows Server 2008 R2 and Microsoft security advisory: Availability of SHA-2 code signing support for Windows 7 and Windows Server 2008 R2: March 10, 2015. - A cross-signed driver using a SHA-1 certificate issued prior to July 29th, 2015 will work on all platforms starting with Windows Vista through Windows 10. - A cross-signed driver using a SHA-1 or SHA-256 certificate issued after July 29th, 2015 is not recommended for Windows 10. - For more information about the effort to move to SHA-256 Certificates, see Windows Enforcement of Authenticode Code Signing and Timestamping Device Guard - Enterprises may implement a device guard policy to modify the driver signing requirements using Windows 10 Enterprise edition. Device Guard provides an enterprise-defined code integrity policy, which may be configured to require at least an attestation-signed driver. For more information about Device Guard, see Device Guard certification and compliance. Windows Server - The dashboard will not accept attested device and filter driver signing submissions for Windows Server 2016. - The dashboard will only sign device and filter drivers that have successfully passed the HLK tests. 
- Windows Server 2016 will only load dashboard signed drivers that have successfully passed the HLK tests. EV Certs - As of October 31, 2015, your Sysdev dashboard account must have at least one EV certificate associated with it to submit binaries for attestation signing or to submit binaries for HLK certification. - You can sign with either your EV certificate or your existing standard certificates until May 1, 2016. After May 1, 2016, you need to use an EV certificate to sign the cab file that is submitted. - The submitted binaries themselves do not need to be signed. Only the submission cab file needs to be signed with an EV certificate. OS Support Summary This table summarizes the driver signing requirements for Windows. *Configuration Dependent –With Windows 10 Enterprise edition, organizations can use Device Guard to define custom driver signing requirements. For more information about Device Guard, see Device Guard certification and compliance. (1) Driver signing is required for manufacturers building retail products (i.e. for a non-development purpose) with IoT Core. For a list of approved Certificate Authorities (CAs), see Cross-Certificates for Kernel Mode Code Signing. Note that if UEFI Secure Boot is enabled, then drivers must be signed.
https://docs.microsoft.com/ja-jp/windows-hardware/drivers/dashboard/get-a-code-signing-certificate
2018-12-10T07:27:58
CC-MAIN-2018-51
1544376823318.33
[]
docs.microsoft.com