Dataset columns:
- content: string (length 0 to 557k)
- url: string (length 16 to 1.78k)
- timestamp: timestamp[ms]
- dump: string (length 9 to 15)
- segment: string (length 13 to 17)
- image_urls: string (length 2 to 55.5k)
- netloc: string (length 7 to 77)
The frontend markdown processor exposes markdown.contains_bugdown, which is used to check whether a message contains any syntax that needs to be rendered to HTML on the backend. If markdown.contains_bugdown returns true, the frontend simply won't echo the message for the sender until it receives the rendered HTML from the backend. If there is a bug where markdown.contains_bugdown returns the wrong answer, the locally echoed message may not match the HTML rendered by the backend, so it is important that markdown.contains_bugdown is always correct. Testing¶ The Python-Markdown implementation is tested by zerver/tests/test_bugdown.py, and the marked.js implementation and markdown.contains_bugdown are tested by frontend_tests/node_tests/markdown.js. A shared set of fixed test data ("test fixtures") is present in zerver/fixtures/bugdown-data.json, and is automatically used by both test suites; as a result, it is the preferred place to add new tests for Zulip's markdown system. Changing Zulip's markdown processor¶ When changing Zulip's markdown processor, you will usually also need to update: - markdown.contains_bugdown, if your changes won't be supported in the frontend processor. - If desired, the typeahead logic in static/js/composebox_typeahead.js. - The test suite, probably via adding entries to zerver/fixtures/bugdown-data.json. - The in-app markdown documentation (templates/zerver), an important detail to know about until we do that work. - Testing: Every new feature should have both positive and negative tests; they're easy to write and give us the flexibility to refactor frequently.
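To make the echo-gating idea above concrete, here is a small TypeScript sketch. It is purely illustrative and not Zulip's actual code: the function name containsBackendOnlySyntax and the regular expressions are hypothetical stand-ins for the checks that markdown.contains_bugdown performs.

```typescript
// Hypothetical sketch: decide whether a message can be echoed locally or must
// wait for backend-rendered HTML. The patterns below are illustrative only.
const BACKEND_ONLY_SYNTAX: RegExp[] = [
  /@\*\*[^*]+\*\*/, // mentions that the server must resolve
  /#\*\*[^*]+\*\*/, // stream links that the server must resolve
  /\$\$[^$]+\$\$/,  // math that may need server-side rendering
];

function containsBackendOnlySyntax(content: string): boolean {
  return BACKEND_ONLY_SYNTAX.some((pattern) => pattern.test(content));
}

function shouldLocallyEcho(content: string): boolean {
  // If any backend-only syntax is present, skip the local echo and wait
  // for the rendered HTML from the backend.
  return !containsBackendOnlySyntax(content);
}

console.log(shouldLocallyEcho("hello world"));       // true
console.log(shouldLocallyEcho("hi @**Jane Doe**!")); // false
```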
http://zulip.readthedocs.io/en/1.6.0/markdown.html
2017-11-18T00:56:28
CC-MAIN-2017-47
1510934804125.49
[array(['', None], dtype=object)]
zulip.readthedocs.io
Changes related to "Help35:Glossary" ← Help35:Glossary:16(Page translation log) MATsxm (Talk | contribs) marked Search Engine Friendly URLs for translation 23 April 2016 m 03:33Help35:Content Article Manager (diff; hist; +15) Dw1Rianto
https://docs.joomla.org/Special:RecentChangesLinked/Help33:Glossary
2016-04-29T06:54:08
CC-MAIN-2016-18
1461860110764.59
[array(['/extensions/CleanChanges/images/Arr_r.png', None], dtype=object) array(['/extensions/CleanChanges/images/Arr_d.png', 'Hide details -'], dtype=object) array(['/extensions/CleanChanges/images/Arr_r.png', None], dtype=object) array(['/extensions/CleanChanges/images/Arr_d.png', 'Hide details -'], dtype=object) array(['/extensions/CleanChanges/images/Arr_r.png', None], dtype=object) array(['/extensions/CleanChanges/images/Arr_d.png', 'Hide details -'], dtype=object) array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object) array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object) array(['/extensions/CleanChanges/images/showuserlinks.png', 'Show user links Show user links'], dtype=object) ]
docs.joomla.org
The multiple data centers HA configuration ensures availability across geographically-distributed and redundant Jive platforms as an active/passive configuration. Note that you cannot have Jive running in multiple data centers simultaneously. As an example, here is how a multiple data center HA configuration might look (your configuration may vary). Click on the image to enlarge it. In this configuration, the web application nodes are configured in a cluster and deployed behind a load balancer, preferably an enterprise-grade load balancer such as the F5 BIG-IP (for more information about how to set up a cluster, see Clustering in Jive). In the passive standby data center system, you can leave the web application nodes booted up at the operating system level, but not the Jive application (while the active production data center is running). However, the cache node(s), the Document Conversion service nodes, the Activity Engine nodes, and the database nodes in the passive standby data center may be left on. Be sure to read Starting Up After a Failover to learn how to bring up Data Center B in the case of a failure.
https://docs.jivesoftware.com/jive/6.0/community_admin/topic/com.jivesoftware.help.sbs.online_6.0/admin/HADesigningaMultipleConfig.html
2016-04-29T06:01:04
CC-MAIN-2016-18
1461860110764.59
[]
docs.jivesoftware.com
Changes related to "Help35:Extensions Module Manager Language Switcher" ← Help35:Extensions Module Manager Language Switcher This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. 27 April 2016 20:19Chunk30:Help-3x-module-site-common-tabs (diff; hist; +4) MATsxm 25 April 2016 m 09:38Chunk30:Module Details (diff; hist; +6) MATsxm
https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20140404163033&target=Help32%3AExtensions_Module_Manager_Language_Switcher
2016-04-29T06:54:50
CC-MAIN-2016-18
1461860110764.59
[array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object) array(['/extensions/CleanChanges/images/showuserlinks.png', 'Show user links Show user links'], dtype=object) array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object) array(['/extensions/CleanChanges/images/showuserlinks.png', 'Show user links Show user links'], dtype=object) ]
docs.joomla.org
Revision history of "JSessionStorageEaccelerator/1.5"/1.5 (cleaning up content namespace and removing duplicated API references)
https://docs.joomla.org/index.php?title=JSessionStorageEaccelerator/1.5&action=history
2016-04-29T07:39:52
CC-MAIN-2016-18
1461860110764.59
[]
docs.joomla.org
Difference between revisions of "Moving sensitive files outside the web root" From Joomla! Documentation Revision as of 00:10, 13 December 2009 by Phild (talk | contribs), 6 years ago. Moving sensitive files outside the public_html: 1. Create its own directory outside of public_html to contain its configuration.php file. 2. Move configuration.php to the design2-files directory and rename it. If you need to change configuration settings, do so manually by downloading the relocated joomla.conf file, making the needed edits, and uploading it back. Do not use the Joomla web administrator interface global configuration button to edit the global configuration.
https://docs.joomla.org/index.php?title=Moving_sensitive_files_outside_the_web_root&diff=20729&oldid=20728
2016-04-29T06:50:35
CC-MAIN-2016-18
1461860110764.59
[]
docs.joomla.org
Information for "Tags-not-installed-in-3.1.0" Basic information Display titleTalk:Tags-not-installed-in-3.1.0 Default sort keyTags-not-installed-in-3.1.0 Page length (in bytes)48 Page ID287lin (Talk | contribs) Date of page creation05:49, 27 April 2013 Latest editorElin (Talk | contribs) Date of latest edit05:49, 27 April 2013 Total number of edits1 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’
https://docs.joomla.org/index.php?title=Talk:Tags-not-installed-in-3.1.0&action=info
2016-04-29T07:34:32
CC-MAIN-2016-18
1461860110764.59
[]
docs.joomla.org
JFTP::pwd Description Method to retrieve the current working directory on the FTP server. Description:JFTP::pwd public function pwd () - Returns string Current working directory - Defined on line 356 of libraries/joomla/client/ See also JFTP::pwd source code on BitBucket Class JFTP Subpackage Client - Other versions of JFTP::pwd SeeAlso:JFTP::pwd User contributed notes
https://docs.joomla.org/index.php?title=JFTP::pwd/11.1&oldid=56782
2016-04-29T07:38:47
CC-MAIN-2016-18
1461860110764.59
[]
docs.joomla.org
Graphics.ClearRect From Xojo Documentation Method Graphics.ClearRect(X As Double, Y As Double, Width As Double, Height As Double) Supported for all project types and targets. Clears the rectangle described by the parameters passed by filling it with the background color of the parent window (on Windows/Linux) or by clearing the context so that the background comes through (on Mac). Sample Code This code clears the entire drawing area: g.ClearRect(0, 0, g.Width, g.Height)
http://docs.xojo.com/index.php?title=Graphics.ClearRect&oldid=72281
2022-05-16T21:04:24
CC-MAIN-2022-21
1652662512249.16
[]
docs.xojo.com
Windows Installation Guide¶ Supported versions¶ Warning - Running Genymotion Desktop in a virtual machine (VirtualBox, VMWare, Parallels, Hyper-V or VirtualPC) is not supported. For more details, please refer to Can I run Genymotion Desktop in a virtual machine? - Running Genymotion Desktop on Windows Server editions is not supported. For more information, please refer to Can I run Genymotion Desktop on a server? - Windows 7 is no longer supported. - We no longer provide Genymotion Desktop for 32-bit Windows. - Supported Windows editions are Windows 10 (64-bit) and Windows 8/8.1 (64-bit). Genymotion Desktop has been tested on the following Windows versions: - Windows 8, 8.1 (64-bit) - Windows 10 (64-bit) Installation steps¶ - Go to the Genymotion download page. - Download the ready-to-run Genymotion for Windows with VirtualBox, or the installer without VirtualBox1. - Run the downloaded genymotion-3.X.X.exe or genymotion-3.X.X-vbox.exe file. - Select the setup language and click OK. By default, the Genymotion. Tip It is recommended to reboot your PC after installing Genymotion Desktop. If you chose the Genymotion installer without VirtualBox, you must first download and install VirtualBox 6.1.14 for Windows at ↩
https://docs.genymotion.com/desktop/Get_started/011_Windows_install/
2022-05-16T22:14:42
CC-MAIN-2022-21
1652662512249.16
[]
docs.genymotion.com
RayCast¶ Inherits: Spatial < Node < Object Query the closest object intersecting a ray. Description¶ A RayCast represents a line from its origin to its destination position, cast_to. It is used to query the 3D space in order to find the closest object along the path of the ray. RayCast can ignore some objects by adding them to the exception list via add_exception or by setting proper filtering with collision layers and masks. RayCast can be configured to report collisions with Areas (collide_with_areas) and/or PhysicsBodys (collide_with_bodies). Only enabled raycasts will be able to query the space and report collisions. If true, collision with Areas will be reported. If true, collision with PhysicsBodys will be reported. The ray's collision mask. Only objects in at least one collision layer enabled in the mask will be detected. See Collision layers and masks in the documentation for more information. The custom color to use to draw the shape in the editor and at run-time if Visible Collision Shapes is enabled in the Debug menu. This color will be highlighted at run-time if the RayCast is colliding with something. If set to Color(0.0, 0.0, 0.0) (by default), the color set in ProjectSettings.debug/shapes/collision/shape_color is used. If set to 1, a line is used as the debug shape. Otherwise, a truncated pyramid is drawn to represent the RayCast. Requires Visible Collision Shapes to be enabled in the Debug menu for the debug shape to be visible at run-time. If true, collisions will be reported. If true, collisions will be ignored for this RayCast's immediate parent. Returns true if the bit index passed is turned on. Note: Bit indices range from 0-19. Sets the bit index passed to the value passed. Note: Bit indices range from 0-19.
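The collision-mask bit getter and setter described above amount to ordinary bit manipulation over a 20-bit mask. The following TypeScript sketch is only an illustration of that logic; the function names are hypothetical and are not Godot's API.

```typescript
// Illustrative sketch of 20-bit collision-mask handling (bit indices 0-19).
function getCollisionMaskBit(mask: number, bit: number): boolean {
  if (bit < 0 || bit > 19) throw new RangeError("bit index must be 0-19");
  return (mask & (1 << bit)) !== 0; // true if the bit is turned on
}

function setCollisionMaskBit(mask: number, bit: number, value: boolean): number {
  if (bit < 0 || bit > 19) throw new RangeError("bit index must be 0-19");
  return value ? mask | (1 << bit) : mask & ~(1 << bit); // set or clear the bit
}

// Example: enable layer 3 in an empty mask, then check it.
const mask = setCollisionMaskBit(0, 3, true);
console.log(getCollisionMaskBit(mask, 3)); // true
```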
https://docs.godotengine.org/en/3.4/classes/class_raycast.html
2022-05-16T22:32:18
CC-MAIN-2022-21
1652662512249.16
[]
docs.godotengine.org
# Patching Dependencies Only Available in MetaMask Flask Snaps is only available in MetaMask Flask (opens new window). A problem that may arise as you develop your snap is that some dependencies make use of APIs that aren't available in the snaps execution environment. To work around this, we firstly recommend that you check if another library is available that makes use of the APIs made available for snaps (see Snaps Development Guide for a list of APIs). If you are unable to find another library (or version) that works with the snaps execution environment, another way of solving the problem is by patching the dependency yourself. For this we can leverage patch-package (opens new window). patch-package works by allowing you to make changes to your dependencies, saving the changes as a patch and replaying it when installing dependencies. To use it, install it using the following command: yarn add -D patch-package postinstall-postinstall. Then add a postinstall script to your package.json. "scripts": { + "postinstall": "patch-package" } Now you can make changes to your dependencies inside node_modules and run yarn patch-package package-name to save the changes as a patch. This will create a .patch file containing your dependency patch. These patches can be committed to your Git repository and will be replayed when you re-install your dependencies. If you need guidance in how you can patch your dependencies or otherwise need help troubleshooting dependency problems, please create an issue on the MetaMask/snaps-skunkworks (opens new window) repository. # Patching the use of XMLHttpRequest The XMLHttpRequest API is not exposed in the snaps execution environment and will not be in the future. Because of this, you may run into issues with dependencies in your dependency tree attempting to leverage this API for their network requests. Below we've included some examples of popular libraries that use XMLHttpRequest and are therefore not compatible with the snaps execution environment. Below you'll also find some patching strategies for fixing dependencies that try to make use of these libraries. # cross-fetch cross-fetch is a popular library used for cross-platform access to the fetch API across multiple environments. Under the hood, however, the library does make use of XMLHttpRequest and therefore it will cause issues when used in a snap. Luckily, this issue is fairly easy to patch with patch-package. To do this, open up node_modules/cross-fetch/browser-ponyfill.js and find the following lines (it's close to the bottom): // Choose between native implementation (global) or custom implementation (__self__) // var ctx = global.fetch ? global : __self__; var ctx = __self__; // this line disable service worker support temporarily You can replace that with the following snippet: // Choose between native implementation (global) or custom implementation (__self__) var ctx = global.fetch ? { ...global, fetch: global.fetch.bind(global) } : __self__; // var ctx = __self__; // this line disable service worker support temporarily After replacing it, run yarn patch-package cross-fetch which saves the patch for future use. If you find that it's easier you can also use the following patch, copy and paste this to patches/cross-fetch+3.1.5.patch and re-install your dependencies. 
diff --git a/node_modules/cross-fetch/dist/browser-ponyfill.js b/node_modules/cross-fetch/dist/browser-ponyfill.js index f216aa3..6b3263b 100644 --- a/node_modules/cross-fetch/dist/browser-ponyfill.js +++ b/node_modules/cross-fetch/dist/browser-ponyfill.js @@ -543,8 +543,8 @@ __self__.fetch.ponyfill = true; // Remove "polyfill" property added by whatwg-fetch delete __self__.fetch.polyfill; // Choose between native implementation (global) or custom implementation (__self__) -// var ctx = global.fetch ? global : __self__; -var ctx = __self__; // this line disable service worker support temporarily +var ctx = global.fetch ? { ...global, fetch: global.fetch.bind(global) } : __self__; +// var ctx = __self__; // this line disable service worker support temporarily exports = ctx.fetch // To enable: import fetch from 'cross-fetch' exports.default = ctx.fetch // For TypeScript consumers without esModuleInterop. exports.fetch = ctx.fetch // To enable: import {fetch} from 'cross-fetch' Using either of these methods allows your dependencies to access the fetch API correctly and makes cross-fetch compatible with the snaps execution environment. # axios axios is another popular networking library that leverages XMLHttpRequest under the hood. At the time of writing there is no known way of patching axios to work with the snaps execution environment. Instead, you may have to resort to replacing the usage of axios with another library such as isomorphic-fetch or isomorphic-unfetch, or simply use the global fetch available in the snaps execution environment. Below is a small example of how you can rewrite your dependency to use fetch instead of axios. Note: In a production environment this may be a large task depending on the usage of axios. axios const instance = axios.create({ baseURL: ' }); instance .get('users/MetaMask') .then((res) => { if (res.status >= 400) { throw new Error('Bad response from server'); } return res.data; }) .then((user) => { console.log(user); }) .catch((err) => { console.error(err); }); fetch fetch(' .then((res) => { if (!res.ok) { throw new Error('Bad response from server'); } return res.json(); }) .then((json) => console.log(json)) .catch((err) => console.error(err)); More resources:
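The endpoint URLs in the axios/fetch comparison above were lost during extraction, so here is a self-contained TypeScript sketch of the same rewrite. The base URL https://api.example.com is a placeholder assumption, not the URL from the original docs.

```typescript
// Hypothetical endpoint used only for illustration.
const BASE_URL = "https://api.example.com";

// Equivalent of an axios GET call, using the fetch global available in the
// snaps execution environment.
async function getUser(username: string): Promise<unknown> {
  const res = await fetch(`${BASE_URL}/users/${username}`);
  if (!res.ok) {
    throw new Error("Bad response from server");
  }
  return res.json();
}

getUser("MetaMask")
  .then((user) => console.log(user))
  .catch((err) => console.error(err));
```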
https://docs.metamask.io/guide/snaps-patching-dependencies.html
2022-05-16T22:35:29
CC-MAIN-2022-21
1652662512249.16
[]
docs.metamask.io
Attestation Depending on the security requirements for your application, you may need to use attestation to do business. Attestation is a method for software to prove its identity during normal operations. The goal of attestation is to prove to a remote party that your operating system and application software are intact and trustworthy. The best method of implementing attestation is through Security policies, specifically using the policy type drop-down and selecting Multi-factor authentication or User consent. For more information, see: - Enforcing policies from the Security Policies landing page. - Security policies. - Security policies settings. - Using the login policies settings. Attestations at Pega Pega keeps pace with emerging and established international and local standards and regulations, maintaining extensive compliance certifications, attestations, and accessibility, plus third-party assessments. Pega Platform supports the following types of attestation: - California Consumer Privacy Act (CCPA). - United States Food and Drug administration (FDA). - General Data Protection Regulation (GDPR). - Health Insurance Portability and Accountability Act (HIPAA). - Health Information Technology for Economic and Clinical Health (HITECH). - Privacy Shield Framework. For more general information about these policies, see their official websites. For more information on how these are used in Pega Platform, see: Previous topic Verifying a one-time password by calling an API Next topic Configuring a token credentials authentication service
https://docs.pega.com/security/85/attestation
2022-05-16T22:28:06
CC-MAIN-2022-21
1652662512249.16
[]
docs.pega.com
Quickstart Tip If you want to quickly see what Contember can do, check out our Headless CMS template. It's the easiest way to try out what Contember can do in minutes. Create new project with Contember In this tutorial we will show you how to set up both Contember Engine and Contember Admin to create your first project. Prerequisites - Installed NPM version 7+ - Installed Docker with Docker Compose npm exec "@contember/[email protected]" quickstart It will create a new folder with a basic Contember setup. After it is done: cd quickstart And then install dependencies: npm install And you're ready to go. Just start the Docker containers and Contember will be running. npm start Now, the administration UI should be running on address Started services - Admin UI at - API endpoints at (you can authorize with token 0000000000000000000000000000000000000000) For advanced use-cases there is also: - Adminer database management tool at - Minio local S3 provider at (you can sign in with contembeer / contember credentials). - Mailhog testing SMTP at - PostgreSQL database at localhost:1482 (you can sign in with contember / contember credentials). Create first project From the first step you should have a folder structure that looks like this: quickstart/ ├── admin/ │ ├── components/ │ ├── pages/ │ ├── .env │ ├── index.html │ ├── index.tsx │ └── vite-env.d.ts ├── api/ │ ├── migrations/ │ ├── model/ │ ├── acl.ts │ └── index.ts ├── docker/ ├── node_modules/ ├── docker-compose.yaml ├── package.json ├── package-lock.json └── tsconfig.json Create your first data model First, you have to tell Contember Engine what your data model looks like. The init command automatically created the api/model/index.ts file, so go there. Here you start defining your project schema. A really simple example looks like this: import { SchemaDefinition as def } from '@contember/schema-definition' export class Article { title = def.stringColumn() content = def.stringColumn() } - Import SchemaDefinition so you'll get TypeScript autocompletion. - Define your first entity - Article. For this example let's just add two columns named title and content, both strings. Then you need to generate a database migration for Contember Engine: npm run contember migrations:diff quickstart add-article Contember CLI npm run contember is the Contember CLI; if you call it without any arguments you'll see all the available commands. We'll use the migrations:diff command. It goes through your schema and generates a migration: instructions for Contember on how to get from the previous state to your new one. This command needs two parameters: the first is the name of your project (quickstart in our example) and the second is a name for your migration. It can be anything you want. Run this command and choose the option Yes and execute immediately. It will create your migration and after confirmation execute it. Now if you look into your database, you will see a table article with three columns: id, title, content. Nice. Create your administration UI Now we have something we want to edit in the UI. Let's start by adding a listing page for our articles. Add listing page Go to admin/pages and create a new file articleList.tsx. import * as React from 'react' import { DataGridPage, TextCell } from '@contember/admin' export default () => ( <DataGridPage entities="Article" rendererProps={{ title: 'Articles' }}> <TextCell field="title" header="Title" /> </DataGridPage> ) - Import the @contember/admin package for TypeScript autocompletion. - Export the page component as a default export.
- Use the DataGridPage component to show the data in a simple datagrid. - Tell it which entities you'd like to edit. In our case it's Article (it has to be the same name we used in the model). - Use TextCell to add a text column. If you go to localhost:1480/article-list you'll see a list of your articles, which is empty as we didn't add any data there yet. Let's add some data. Add create page import * as React from 'react' import { CreatePage, RichTextField, TextField } from '@contember/admin' export default () => ( <CreatePage entity="Article" rendererProps={{ title: 'Create Article' }}> <TextField field="title" label="Title" /> <RichTextField field="content" label="Content" /> </CreatePage> ) - We'll create a new file named articleCreate.tsx. - This time we'll use the CreatePage component. - We'll tell it what we want to add (Article). - We'll use two new components, TextField and RichTextField, and tell them which fields to edit. Now at localhost:1480/article-create you can create a new article. And if you go to the list of articles you'll see the data are there. But it doesn't work very well. One of the things missing is going to edit mode after you create a new article. So let's start by adding an edit page: Add edit page We'll create a new page named articleEdit. It looks almost the same as the create page, but we have to tell it which article to edit: import * as React from 'react' import { EditPage, RichTextField, TextField } from '@contember/admin' export default () => ( <EditPage entity="Article(id = $id)" rendererProps={{ title: 'Edit Article' }}> <TextField field="title" label="Title" /> <RichTextField field="content" label="Content" /> </EditPage> ) Let's use it. We'll redirect users from our create page to the edit page after the article is successfully created: import * as React from 'react' import { CreatePage, RichTextField, TextField } from '@contember/admin' export default () => ( <CreatePage entity="Article" rendererProps={{ title: 'Create Article' }} redirectOnSuccess="articleEdit(id: $entity.id)"> <TextField field="title" label="Title" /> <RichTextField field="content" label="Content" /> </CreatePage> ) This is done with the redirectOnSuccess prop, where we specify the link to the page where the user should be redirected. Now if you create a new article you're automatically redirected to the edit page. What's missing is an edit and a delete button in the list of articles. import * as React from 'react' import { DataGridPage, DeleteEntityButton, GenericCell, Link, TextCell } from '@contember/admin' export default () => ( <DataGridPage entities="Article" rendererProps={{ title: 'Articles' }}> <TextCell field="title" header="Title" /> <GenericCell shrunk><Link to="articleEdit(id: $entity.id)">Edit</Link></GenericCell> <GenericCell shrunk><DeleteEntityButton immediatePersist /></GenericCell> </DataGridPage> ) - We've added two GenericCells. - The first with a Link component targeting the articleEdit page and passing id as a parameter. - The second with a delete button provided by DeleteEntityButton. - A minor touch is the use of shrunk, which tells the cell to be as small as possible. Add pages to side menu One last thing is to add our pages to the left sidebar: import * as React from 'react' import { Menu } from '@contember/admin' export const Navigation = () => ( <Menu.Item> <Menu.Item <Menu.Item <Menu.Item </Menu.Item> ) And that's it! You have just created a simple data model and a custom interface, so you can edit the data.
Fetch data via GraphQL API We recommend reading the Content API topic; however, if you're looking to quickly play with the API, we've prepared an Insomnia project for you to import and quickly try it out. To use it, download it here and just drag & drop it into Insomnia.
https://docs.contember.com/intro/quickstart/
2022-05-16T21:03:09
CC-MAIN-2022-21
1652662512249.16
[]
docs.contember.com
High CPU usage by opscenterd Increasing the nodelist polling period or setting a sleep delay can reduce excessive CPU usage when starting or running opscenterd. In some environments, you might notice CPU usage for the opscenterd spiking dramatically (almost to 100%) upon startup or while already running. Typically, this spike is caused by the retrieval and parsing of cluster topology performed during startup, and every 60 seconds by default while opscenterd is running. When OpsCenter is managing multiple clusters with vnodes enabled, the impact of this CPU spike can cause performance issues or even stop opscenterd from properly starting. If your environment is experiencing excessive CPU consumption, try the available workarounds to alleviate the issue. Configuring the polling period for CPU issues while running opscenterd Increasing the nodelist polling period can reduce CPU usage when running opscenterd. Increasing the nodelist polling period (nodelist_poll_period) can reduce CPU usage when running opscenterd. cluster_name.conf The location of the cluster_name.conf file depends on the type of installation: - Package installations: /etc/opscenter/clusters/cluster_name.conf - Tarball installations: install_location/conf/clusters/cluster_name.conf Procedure - Open cluster_name.conf for editing. - Add a [collection] section and set the nodelist_poll_period value: [collection] nodelist_poll_period = 43200 # this would be every 12 hours. The nodelist_poll_period represents the interval in seconds that OpsCenter polls the nodes and token lists in a cluster. Polling the node list determines whether there were any topology changes since the last poll. If you do not anticipate any topology changes, set it to a high value. - Optional: If there were any topology changes and the polling interval is set to a high value, restart opscenterd. Otherwise, wait for the next poll. - Optional: Refresh the browser. Configuring a sleep delay for CPU issues when starting opscenterd Configuring a delay between clusters during startup helps alleviate opscenterd CPU usage on startup, allowing OpsCenter to function properly. Set the startup sleep time to control how long OpsCenter waits between connecting to clusters on startup. opscenterd.conf The location of the opscenterd.conf file depends on the type of installation: - Package installations: /etc/opscenter/opscenterd.conf - Tarball installations: install_location/conf/opscenterd.conf Procedure - Open opscenterd.conf for editing. - Under the [clusters] section, set the value of startup_sleep to 5: [clusters] startup_sleep = 5 The default value is 0 seconds, which results in no wait time between connecting to each cluster. Depending on your environment, you might need to adjust the value. After configuring the sleep value to a value other than zero, wait until all clusters have started before using the OpsCenter UI or API. Otherwise, OpsCenter can become unresponsive and log multiple errors. - Restart opscenterd.
https://docs.datastax.com/en/dse-trblshoot/doc/troubleshooting/opsc/highCPUusageOpscenterd.html
2022-05-16T21:28:22
CC-MAIN-2022-21
1652662512249.16
[]
docs.datastax.com
How to Report A Bug Warning Against Accidental Vulnerability Disclosure You may not realize it, but your bug could be a security issue. Please review our Security Policy to determine if your bug is a security issue before you create a public ticket on GitHub issues. GitHub Issues are public reports, and security issues must be reported in private. Steps Before You Report If you find a bug in ClamAV, please do the following before you submit a bug report: Verify if the bug exists in the most recent stable release or ideally check if the bug exists in the latest unreleased code in Git before reporting the issue. Review the open issues to make sure someone else hasn't already reported the same issue. Tip: Before switching to GitHub Issues, ClamAV used Bugzilla. You can also review older open tickets from the Bugzilla archive. Collect the required information, described below, to include with your report. Create a new ticket on GitHub. Please do not submit bugs for third party software. Required Information Please be sure to include all of the following information so that we can effectively address your bug: ClamAV version, settings, and system details: On the command line, run: clamconf -n ClamConf will print out configuration information and some system information. The -n option prints out only non-default options. This information will help the team identify possible triggers for your bug. 3rd party signatures: Please tell us if you are using any unofficial signature databases, that is, anything other than main.cvd/cld, daily.cvd/cld, and bytecode.cvd/cld. How to reproduce the problem: Include specific steps needed to reproduce the issue. If the issue is reproducible only when scanning a specific file, attach it to the ticket. Large Files: The maximum size for file attachments on GitHub Issues is 25MB and the maximum size for images is 10MB. If the file is too big to mail it, you can upload it to a password protected website and send us the URL and the credentials to access it. If your file must be kept confidential you can reach out on the ClamAV Discord chat server to exchange email addresses and to share the zipped file or to share the zip password. CAUTION: Don't forget to encrypt it or you may cause damage to the mail servers between you and us! On the command line, run: zip -P virus -e file.zip file.ext
https://docs.clamav.net/manual/Usage/ReportABug.html
2022-05-16T21:51:44
CC-MAIN-2022-21
1652662512249.16
[]
docs.clamav.net
Introduction Grenadine lets you print badges for your attendees, speakers, and anyone else who needs one at your event. Grenadine supports a variety of badge formats from different manufacturers. The most common badge size is an Avery 5392 Nametag Insert Refill, which measures 3" x 4". Printing a badge within Grenadine Event Management Software does not require any special kind of printer. Our system generates a PDF which you can easily send to any printer. Decide What Gets Printed On The Badge You can include the following elements on your badges: - The event name - The event logo - Cut marks (thin lines that help you determine where the label boundaries are) - The attendee or person's name (this information is the only mandatory information on the badge) - The person's organization or company - The person's title - The city where the person is from - The country where the person is from - A professional photo of the person - The ticket type(s) for the person - The categories for the person (i.e. the categories that you have set on this person's profile) - The person's registration number - A scannable QR code (can be scanned by the event organizer using the check-in app, or by another attendee using the attendee mobile app) Here's an example of what a badge may look like: Printing Badges Printing badges can be done in a few easy steps. Grenadine gives you the option to print them from the Reports section of Grenadine Event Manager or the People section. Where you print badges from will not change how they look; the different printing options exist simply for your convenience. Reports Select the Badges report from the People section in reports. After selecting Badges, the following will appear on the right-hand side of the screen as shown below. - Label Type: Select the size you would like your badge to be printed here. - People: To select specific people, type their name(s) in here. To print all badges, leave this section empty. People To print badges in the People table, select the person or people whose badge(s) you would like to print, then click the button shown highlighted above. The badges will appear as a PDF on the bottom left of your screen, as shown highlighted below. Click the PDF at the bottom left of your screen to open it. From here click the printer icon in the top right corner of the screen to print the necessary badges.
https://docs.grenadine.co/badges.html
2022-05-16T22:59:33
CC-MAIN-2022-21
1652662512249.16
[]
docs.grenadine.co
debops.mosquitto¶ Mosquitto is an open source Message Queue Telemetry Transport (MQTT) broker used in the Internet of Things paradigm. The debops.mosquitto Ansible role can be used to install and configure Mosquitto on Debian/Ubuntu hosts. The role can use other DebOps roles to manage firewall access to Mosquitto, publish Avahi services and configure an nginx frontend for the Mosquitto WebSockets API. - Getting started - debops.mosquitto default variables - APT packages - PyPI packages - User, group, additional groups - Network configuration - Websocket support - Global Mosquitto configuration - Mosquitto listeners - Mosquitto bridges - Public Key Infrastructure - PKI inventory variables - Avahi/ZeroConf support - Password file configuration - Access Control List support - User authentication, ACL configuration - Configuration for other Ansible roles - Default variable details debops.mosquitto - Manage a Mosquitto MQTT broker
https://docs.debops.org/en/stable-3.0/ansible/roles/mosquitto/index.html
2022-05-16T21:23:35
CC-MAIN-2022-21
1652662512249.16
[]
docs.debops.org
Notification emails sent by GitLab can be signed with S/MIME for improved security. Be aware that S/MIME certificates and TLS/SSL certificates are not the same and are used for different purposes: TLS creates a secure channel, whereas S/MIME signs and/or encrypts the message itself. Enable S/MIME signing This setting must be explicitly enabled and a single pair of key and certificate files must be provided: - Both files must be PEM-encoded. - The key file must be unencrypted so that GitLab can read it without user intervention. - Only RSA keys are supported. Optionally, you can also provide a bundle of CA certs (PEM-encoded) to be included on each signature. This will typically be an intermediate CA. For Omnibus installations: Edit /etc/gitlab/gitlab.rb and adapt the file paths: gitlab_rails['gitlab_email_smime_enabled'] = true gitlab_rails['gitlab_email_smime_key_file'] = '/etc/gitlab/ssl/gitlab_smime.key' gitlab_rails['gitlab_email_smime_cert_file'] = '/etc/gitlab/ssl/gitlab_smime.crt' # Optional gitlab_rails['gitlab_email_smime_ca_certs_file'] = '/etc/gitlab/ssl/gitlab_smime_cas.crt' Save the file and reconfigure GitLab for the changes to take effect. The key needs to be readable by the GitLab system user (git by default). For installations from source: Edit config/gitlab.yml: email_smime: # Uncomment and set to true if you need to enable email S/MIME signing (default: false) enabled: true # S/MIME private key file in PEM format, unencrypted # Default is '.gitlab_smime_key' relative to Rails.root (i.e. root of the GitLab app). key_file: /etc/pki/smime/private/gitlab.key # S/MIME public certificate key in PEM format, will be attached to signed messages # Default is '.gitlab_smime_cert' relative to Rails.root (i.e. root of the GitLab app). cert_file: /etc/pki/smime/certs/gitlab.crt # S/MIME extra CA public certificates in PEM format, will be attached to signed messages # Optional ca_certs_file: /etc/pki/smime/certs/gitlab_cas.crt Save the file and restart GitLab for the changes to take effect. The key needs to be readable by the GitLab system user (git by default). How to convert S/MIME PKCS #12 format to PEM encoding Typically S/MIME certificates are handled in binary Public Key Cryptography Standards (PKCS) #12 format (.pfx or .p12 extensions), which contain the following in a single encrypted file: - Public certificate - Intermediate certificates (if any) - Private key To export the required files in PEM encoding from the PKCS #12 file, the openssl command can be used: #-- Extract private key in PEM encoding (no password, unencrypted) $ openssl pkcs12 -in gitlab.p12 -nocerts -nodes -out gitlab.key #-- Extract certificates in PEM encoding (full certs chain including CA) $ openssl pkcs12 -in gitlab.p12 -nokeys -out gitlab.crt
https://docs.gitlab.com/ee/administration/smime_signing_email.html
2022-05-16T21:55:47
CC-MAIN-2022-21
1652662512249.16
[]
docs.gitlab.com
Analytics for knowledge articles and search terms Business value Knowledge authors must keep their knowledge bases relevant, accurate, and easy to access from different channels. By leveraging the built-in, historical view of knowledge article usage and other related metrics, knowledge authors and managers can understand the effectiveness of knowledge content and identify opportunities for improving their knowledge bases. Feature details The following are some of the key capabilities of knowledge article analytics, including detailed reports that provide historical trends for key metrics such as: - Number of views - Number of visitors - Average feedback rating - Number of links to cases - Number of shares
https://docs.microsoft.com/en-us/dynamics365-release-plan/2022wave1/service/dynamics365-customer-service/analytics-knowledge-articles-search-terms?WT.mc_id=BA-MVP-5003431
2022-05-16T23:29:57
CC-MAIN-2022-21
1652662512249.16
[]
docs.microsoft.com
Twist# You can use these credentials to authenticate the following nodes with Twist. Prerequisites# Using OAuth# Callback URL with Twist Note: The Redirect URL should be a URL in your domain. For example, Twist doesn't accept the localhost callback URL. Refer to the FAQs to learn how to configure the credentials for the local environment. - Access your Twist workspace. - Click on your avatar in the top right corner. - Select 'Add integrations...' from the dropdown list. - Click on Build on the top. - Click on the Add a new integration button. - Enter a name in the Integration name field. - Enter a description in the Description field. - Select 'General integration' from the Integration type dropdown list. - Click on the Create my integration button. - Click on OAuth Authentication from the left sidebar. - Copy the 'OAuth Callback URL' provided in the Twist OAuth2 API credentials in n8n and paste it in the OAuth 2 redirect URL field on your Twist integration page. - Click on the Update integration button. - Use the Client ID and Client Secret with your Twist node credentials in n8n. - Click on the circle button in the OAuth section to connect a Twist account to n8n. - Click the Save button to save your credentials in n8n. FAQs# How to configure the OAuth credentials for the local environment?# Twist doesn't accept the localhost callback URL. However, you can follow the steps mentioned below to configure the OAuth credentials for the local environment: 1. We will use ngrok to expose the local server running on port 5678 to the internet. In your terminal, run the ngrok command to expose that port. 2. Replace <YOUR-NGROK-URL> with the URL that you get from the previous step. 3. Start your n8n instance. 4. Follow the instructions mentioned in the Using OAuth section to configure your credentials.
https://docs.n8n.io/integrations/credentials/twist/
2022-05-16T22:40:40
CC-MAIN-2022-21
1652662512249.16
[]
docs.n8n.io
Policy settings¶ You can change policy settings to affect when 0install looks for updates and which versions it prefers. The first part shows how to set policy settings that apply to all applications of the current user. The last section shows how to change per-application settings. Policy affects which versions 0install chooses (do you want test versions, ...). General policy settings¶ You can change the policy settings using the Preferences dialog. To open it run 0install config or choose Zero Install -> Manage Applications from the Applications menu, click on the edit properties icon next to an application and click Preferences. You can change the policy settings using the Configuration dialog. To open it run 0install-win config or click on the Options in the bottom left of the main GUI. Network use¶ Affects how much 0install will rely on the network. Possible values are: Freshness¶ 0install caches feeds and checks for updates from time to time. The freshness indicates how old a feed may get before 0install automatically checks for updates to it. Note that 0install only checks for updates when you actually run a program; so if you never run something, it won't waste time checking for updates. Help test new versions¶ By default, 0install tries not to select new versions while they're still in the "testing" phase. If checked, 0install will instead always select the newest version, even if it's marked as "testing". Per-application policy settings¶ You can change per-application policy settings in the application information dialog. To open this dialog: Run 0install run with the --gui option and the URI of the application: 0install run --gui -or- Choose Zero Install -> Manage Applications from the Applications menu, click on the edit properties icon next to the application. Double-click the application in the list. For example, double-clicking on Edit displays this dialog box: Run 0install run with the --customize option and the URI of the application: 0install run --customize -or- In the main GUI open the dropdown menu next to an App's Run button, select Run with options, set the Customize version checkbox and click OK. Click on the Change link next to the application. This displays this dialog box: Feeds¶ In the Feeds tab, a list of feeds shows all the places where Zero Install looks for versions of the app. By default, there is just one feed with the URL you just entered. You can register additional feeds to be considered (e.g., a local feed with custom builds or an alternate remote feed). This can be done either using the GUI or with the 0install add-feed command. Versions¶ In the Versions tab, you can use the Preferred Stability setting in the interface dialog to choose which versions to prefer. You can also change the stability rating of any implementation by clicking on it and choosing a new rating from the popup menu (drop-down in the Override column on Windows). User-set ratings are shown in capitals. As you make changes to the policy and ratings, the selected implementation will change. The version shown in bold (or at the top of the list, in some versions) is the one that will actually be used. In addition to the ratings below, you can set the rating to Preferred. Such versions are always preferred above other versions, unless they're not cached and you are in Off-line mode.
The following stability ratings are allowed: - Stable (this is the default if Help test new versions is unchecked) - Testing (this is the default if Help test new versions is checked) - Developer - Buggy - Insecure Stability ratings are kept independently of the implementations, and are expected to change over time. When any new release is made, its stability is initially set to Testing. If you have selected Help test new versions in the Preferences dialog box, then you will start using it. Otherwise, you will continue with the previous stable release. After a while (days, weeks or months, depending on the project) with no serious problems found, the author will change the implementation's stability to Stable so that everyone will use it. If problems are found, it will instead be marked as Buggy or Insecure. Neither will be selected by default, but it is useful to see the reason (you might opt to continue using a buggy version if it works for you, but should never use an insecure one). Developer is like a more extreme version of Testing, where the program is expected to have bugs. Tip.
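To summarize how these ratings interact with the Preferences settings, here is a rough TypeScript sketch. It is a simplified illustration under stated assumptions (implementations ordered newest-first, a single flat list), not 0install's actual solver or API.

```typescript
type Stability = "preferred" | "stable" | "testing" | "developer" | "buggy" | "insecure";

interface Implementation {
  version: string;
  stability: Stability;
  cached: boolean;
}

// Simplified sketch: pick the best implementation given the user's settings.
function chooseVersion(
  impls: Implementation[],
  helpTestNewVersions: boolean,
  offline: boolean,
): Implementation | undefined {
  // Buggy and insecure versions are never selected by default,
  // and uncached versions are unusable while off-line.
  const usable = impls.filter(
    (i) => i.stability !== "buggy" && i.stability !== "insecure" && (!offline || i.cached),
  );
  // A user-set "Preferred" rating wins over everything else.
  const preferred = usable.find((i) => i.stability === "preferred");
  if (preferred) return preferred;
  // Otherwise accept stable versions, plus testing ones if "Help test new versions" is on.
  const acceptable = usable.filter(
    (i) => i.stability === "stable" || (helpTestNewVersions && i.stability === "testing"),
  );
  // Assume impls are ordered newest-first, so the first acceptable one wins.
  return acceptable[0];
}

// Example: with "Help test new versions" off, the stable release is chosen.
const picked = chooseVersion(
  [
    { version: "2.1-pre", stability: "testing", cached: false },
    { version: "2.0", stability: "stable", cached: true },
  ],
  false,
  false,
);
console.log(picked?.version); // "2.0"
```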
https://docs.0install.net/details/policy-settings/
2022-05-16T21:29:25
CC-MAIN-2022-21
1652662512249.16
[]
docs.0install.net
AI-generated conversation summary sets context for Teams-based collaboration Important Some of the functionality described in this release plan has not been released. Delivery timelines may change and projected functionality may not be released (see Microsoft policy). Learn more: What's new and planned Business value Reading through a long conversation transcript to understand the context can be time-consuming. On top of that, writing a summary of the conversation adds even more time. With an AI-generated conversation summary, when an agent has a conversation with a customer and wants to collaborate with other agents, supervisors, SMEs, and so on using embedded Microsoft Teams, AI automatically provides a summary of the conversation for agents to share with their collaborators. Feature details Some of the key capabilities of this feature are: - Auto-generated summaries that agents can use to share the context of their service conversations. - A summary format structure that provides insights about the customer's issue and any solutions that the agent tried. - The ability for agents to edit the auto-generated summary.
https://docs.microsoft.com/en-us/dynamics365-release-plan/2022wave1/service/dynamics365-customer-service/ai-generated-conversation-summary-set-context-teams-based-collaboration?WT.mc_id=BA-MVP-5003431
2022-05-16T23:05:55
CC-MAIN-2022-21
1652662512249.16
[]
docs.microsoft.com
Rolling back hotfixes from the user interface If an uncommitted hotfix introduces unwanted changes to your system, you can remove it by rolling it back. - In Dev Studio, click. - In the Hotfix Details area, click the Uncommitted tab. - Click Rollback All. - Click Yes. Previous topic Rolling back hotfixes Next topic Rolling back hotfixes by submitting a request to an active instance
https://docs.pega.com/keeping-current-pega/85/rolling-back-hotfixes-user-interface
2022-05-16T22:21:24
CC-MAIN-2022-21
1652662512249.16
[]
docs.pega.com
candidates, and still need to have the content type requested (via file extension, parameter, or Accept header, described above). Inherited fields and methods: logger; HIGHEST_PRECEDENCE, LOWEST_PRECEDENCE; getServletContext, getTempDir, getWebApplicationContext, initApplicationContext, isContextRequired, setServletContext; getApplicationContext, getMessageSourceAccessor, initApplicationContext, obtainApplicationContext, requiredContextClass, setApplicationContext; clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait. public ContentNegotiatingViewResolver() public void setContentNegotiationManager(@Nullable ContentNegotiationManager contentNegotiationManager) @Nullable public ContentNegotiationManager getContentNegotiationManager() - the ContentNegotiationManager to use to determine requested media types. boolean isUseNotAcceptableStatusCode() public void setDefaultViews(List<View> defaultViews) - the default views to use when a more specific view cannot be obtained from the ViewResolver chain. public List<View> getDefaultViews() public void setViewResolvers(List<ViewResolver> viewResolvers) If this property is not set, view resolvers will be detected automatically. public List<ViewResolver> getViewResolvers() public void setOrder(int order) public View resolveViewName(String viewName, Locale locale) locale - the Locale in which to resolve the view. ViewResolvers that support internationalization should respect this. Returns null if not found (optional, to allow for ViewResolver chaining). Throws Exception if the view cannot be resolved (typically in case of problems creating an actual View object). @Nullable protected List<MediaType> getMediaTypes(HttpServletRequest request) - the list of MediaType for the given HttpServletRequest. request - the current servlet request
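Conceptually, the resolver compares the media types requested by the client with the content types the candidate views can produce and picks the best match (or signals 406 Not Acceptable). The TypeScript sketch below illustrates that idea only; it is not Spring's API, and it ignores quality factors and wildcards beyond */*.

```typescript
interface CandidateView {
  name: string;
  contentType: string; // e.g. "application/json"
}

// Very small Accept-header parser: returns media types in the order given,
// ignoring quality factors for simplicity.
function parseAccept(acceptHeader: string): string[] {
  return acceptHeader.split(",").map((part) => part.split(";")[0].trim());
}

// Pick the first candidate view whose content type matches a requested media type.
function resolveBestView(
  acceptHeader: string,
  candidates: CandidateView[],
): CandidateView | undefined {
  const requested = parseAccept(acceptHeader);
  for (const mediaType of requested) {
    const match = candidates.find(
      (view) => mediaType === "*/*" || view.contentType === mediaType,
    );
    if (match) return match;
  }
  return undefined; // caller may then respond with 406 Not Acceptable
}

// Example usage
const views: CandidateView[] = [
  { name: "jsonView", contentType: "application/json" },
  { name: "htmlView", contentType: "text/html" },
];
console.log(resolveBestView("text/html,application/json;q=0.9", views)?.name); // "htmlView"
```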
https://docs.spring.io/spring-framework/docs/5.2.10.RELEASE/javadoc-api/org/springframework/web/servlet/view/ContentNegotiatingViewResolver.html
2022-05-16T23:16:46
CC-MAIN-2022-21
1652662512249.16
[]
docs.spring.io
By default, the web client uses port 3005. NOTE: Any client firewall software must be configured to enable access on this port. This port can be changed. For more information, see Change Listening Port. If Trifacta Self-Managed Enterprise Edition is integrated with a Hadoop cluster, the Edge Node must have access to the following Hadoop components. Their default ports are listed below: NOTE: These ports vary between installations. Please verify your environment's ports. If the Trifacta node is on a different network from the Hadoop cluster, please verify that these additional ports are opened on the firewall. For additional details, please refer to the documentation provided with your Hadoop distribution. If you are integrating with an EMR cluster, please verify that the following nodes and ports are available to the Trifacta node.
https://docs.trifacta.com/exportword?pageId=118229156
2022-05-16T22:25:09
CC-MAIN-2022-21
1652662512249.16
[]
docs.trifacta.com
This document describes the kinds of rules that apply to documents. The document path rule controls the folder that the document is created in. All named ranges and cells are displayed in the Named Ranges window. Current values of any named ranges will be restored in the new document unless a rule is applied. Portable.
http://docs.driveworkspro.com/Topic/DocumentRulesWordDocument
2018-06-17T21:48:39
CC-MAIN-2018-26
1529267859817.15
[]
docs.driveworkspro.com
When DriveWorks Autopilot is used to round trip through SOLIDWORKS to produce 3D content in a 3D Preview control, please use this checklist to ensure everything is set correctly. DriveWorks 3D Preview takes the first active model it finds in your model rules. It looks in model rules from the bottom up until it finds a viable root component set. A viable component set is a top-level model with a valid file name. Component Sets can be bypassed in the model rules tree by setting their file name to "DELETE". DriveWorks won't use these models and will move on to the next one. During the Specification, you can set file names to "DELETE" and then at release time you can switch them back so models generate. This is the machine name that has DriveWorks Autopilot installed, which is to be used as the 3D preview service. In DriveWorks General settings there is a setting for Preview Service Location. This must be set to the name of the machine where DriveWorks Autopilot is running 3D Preview. If you are running DriveWorks Live through IIS you must also set the Preview Service Location in the DriveWorks.config file. Make sure you have the DriveWorks 3D Export add-in enabled in the SOLIDWORKS application running on the DriveWorks Autopilot machine. DriveWorks Autopilot must be running as a Windows Administrator. This is so that the 3D Preview service can run and access applications like SOLIDWORKS. In DriveWorks Autopilot under 3D Preview Settings there is a check box to "Enable 3D Preview". Make sure this is checked on the machine that you want to generate the 3D Preview. DriveWorks Applications such as DriveWorks Live and IIS request 3D previews through port 8900. If this port is not accessible, then the requests will not go through.
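For the last point on this checklist, a quick way to confirm that port 8900 is reachable from another machine is a small Node.js (TypeScript) probe like the sketch below; the host name autopilot-machine is a placeholder for your DriveWorks Autopilot machine, and this is only a generic TCP check, not a DriveWorks tool.

```typescript
import * as net from "net";

// Check whether a TCP port is reachable; "autopilot-machine" is a placeholder host.
function checkPort(host: string, port: number, timeoutMs = 3000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.createConnection({ host, port });
    const finish = (ok: boolean) => {
      socket.destroy();
      resolve(ok);
    };
    socket.setTimeout(timeoutMs, () => finish(false));
    socket.once("connect", () => finish(true));
    socket.once("error", () => finish(false));
  });
}

checkPort("autopilot-machine", 8900).then((open) =>
  console.log(open ? "Port 8900 is reachable" : "Port 8900 is not reachable"),
);
```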
http://docs.driveworkspro.com/Topic/HowToTroubleshootAutopilot3DPreview
2018-06-17T21:48:58
CC-MAIN-2018-26
1529267859817.15
[]
docs.driveworkspro.com
Using Blog Options Goto Dashboard => Appearance => Customize => Curtains. - Click Save & Publish when you're done with the settings. Curtains Pro. - This option lets you display the post's Author Biodata in single post view. - This option lets you display Social Sharing Icons in single post view. - Lets you show Related Posts below single post view.
http://docs.webulous.in/curtains-free/theme-options/blog.php
2018-06-17T21:56:41
CC-MAIN-2018-26
1529267859817.15
[array(['http://docs.webulous.in/curtains-free/images/theme-options/blog.png', None], dtype=object) ]
docs.webulous.in
Using Recent Work Widget With Page Builder After learning How to use Page Builder, you're now ready to add widgets. The Recent Work Widget is used to display your portfolio anywhere in your site. To use the Recent Work Widget, choose the Recent Work widget from the popup window when you click on the Add Widget button in the page builder editor. Now, you're presented with the following popup screen: - Enter a Title for your Recent Work - Enter the No. of Work you would like to display - Enter the Type; available choices are Carousel and Isotope. - Click on Done when you're done.
http://docs.webulous.in/waves-pro/widgets/recent-work.php
2018-06-17T21:56:25
CC-MAIN-2018-26
1529267859817.15
[array(['http://docs.webulous.in/waves-pro/images/widget/recent-work-isotope.png', None], dtype=object) array(['http://docs.webulous.in/waves-pro/images/widget/recent-work-carousel.png', None], dtype=object) array(['http://docs.webulous.in/waves-pro/images/recent-work-widget.png', None], dtype=object) ]
docs.webulous.in
Tagged timeline Use: Filter a network over time using years defined in the tag field. Note: If you include timeline data using the tag field, this control provides clickable dates that can be used to filter the map. We recommend using years to define when an element/connection was present. You can supply multiple years to allow disjointed timelines (e.g. 2012, 2013, 2016 for an element that was part of the network until 2013, then left and joined again in 2016). Example @controls { bottom { tagged-timeline { range: 2000..2016; target: element; } } } Supported properties range defines the years that should be included as clickable links. target defines whether the filter should apply to elements, connections, or loops. To apply the filter to elements and connections, use element,connection;. multiple: by default the timeline allows you to select multiple years. Use multiple: false to only allow a single year to be selected instead. default defines which values should be selected by default. Use select-all to select everything by default (or show-all for a similar effect without selecting). Check out our controls reference to see the full list of properties and values recognized by the tagged timeline control.
https://docs.kumu.io/guides/controls/tagged-timeline-control.html
2018-06-17T22:18:25
CC-MAIN-2018-26
1529267859817.15
[]
docs.kumu.io
set the bottomRight of scrollbar "Master" to 400,200 set the bottomRight of field "Follower" to the mouseLoc Use the bottomRight property to change the placement of a control or window. Value: The bottomRight of an object is any expression that evaluates to a point--two integers separated by a comma. The first item of the bottomRight is the distance in pixels from the left edge of the screen (for stacks) or card to the right edge of the object. The second item is the distance in pixels from the top edge of the screen (for stacks) or card to the bottom edge of the object. For cards, the bottomRight property is The bottomRight of a stack is in absolute (screen) coordinates. The first item (the right) of a card's bottomRight property is always the width of the stack window; the second item (the bottom) is always the height of the stack window. The bottomRight of a group or control is in relative (window) coordinates. In window coordinates, the point 0,0 is at the top left of the stack window. In screen coordinates, the point 0,0 is at the top left of the screen. Changing the bottomRight of an object moves it to the new position without resizing it. To change an object's size, set its height, width, or rectangle properties. Important! The order of the bottom and right parameters is reversed compared to the property name: right comes first, then bottom.
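The note that setting the bottomRight moves an object without resizing it can be illustrated with a small TypeScript sketch (a generic rectangle type, not LiveCode's object model):

```typescript
interface Rect { left: number; top: number; right: number; bottom: number; }

// Setting bottomRight moves the rectangle without resizing it:
// the width and height are preserved, only the position changes.
function setBottomRight(rect: Rect, right: number, bottom: number): Rect {
  const width = rect.right - rect.left;
  const height = rect.bottom - rect.top;
  return { right, bottom, left: right - width, top: bottom - height };
}

const field: Rect = { left: 10, top: 20, right: 110, bottom: 70 };
console.log(setBottomRight(field, 400, 200)); // moved, still 100 x 50
```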
http://docs.runrev.com/Property/bottomRight
2018-06-17T21:50:32
CC-MAIN-2018-26
1529267859817.15
[]
docs.runrev.com
California Public Utilities Commission 505 Van Ness Ave., San Francisco
______________________________________________________________________________
FOR IMMEDIATE RELEASE PRESS RELEASE
Contact: Terrie Prosper, 415.703.1366, [email protected]
Docket #: I.10-02-003
CPUC FINES TELCO COMPANY OVER CRAMMING VIOLATIONS
San Francisco, CA - May 5, 2011 - The California Public Utilities Commission (CPUC), following a thorough investigation of consumer complaints about unauthorized charges and incorrect rates on phone bills, today approved a Settlement Agreement between Americatel and the CPUC's Consumer Protection and Safety Division (CPSD) that protects consumers against fraudulent marketing practices and faulty billing systems. As a result of today's decision, Americatel must pay a substantial fine for billing customers without authorization and at incorrect rates, thereby violating Pub. Util. Code §2890(a) and §451.
Americatel terminated its marketing agreement with Bravo Marketing in June 2008. It did not bill customers signed by Bravo Marketing in July 2008 or thereafter, and gave credits to every customer Bravo had signed up, including some who had legitimately signed up and used the service.
On February 4, 2010, the CPUC opened a formal investigation into the operations and practices of Americatel to determine whether Americatel violated the laws, rules, and regulations governing the way in which consumers are billed for products or services, by billing customers for dial-around long distance monthly service without authorization, and by applying incorrect rates to customers' phone bills. However, the parties also explored settlement negotiations and decided in late September 2010 to submit the disputes to the CPUC's Alternative Dispute Resolution program for mediation. On January 11, 2011, CPSD and Americatel filed a Joint Motion for Approval of Settlement Agreement that represented a compromise of the litigation positions of the Settling Parties.
In the settlement, Americatel agreed to numerous operational changes, some of which it had already implemented. These include much better screening and more direct oversight of its telemarketers, enhanced training for its customer service and billing employees, bilingual customer service representatives, and regular trend analysis of customer inquiries to quickly identify problems. These changes will significantly reduce the likelihood of a rogue telemarketing agent defrauding Americatel's customers, minimize billing errors, and substantially improve customer protection.
"The substantial fine imposed on Americatel should serve as a warning to Americatel and all telecommunications companies that they must carefully look at the work of their marketing agents and the accuracy of their billing systems," said CPUC President Michael R. Peevey. "We encourage customers to contact us if they have unauthorized charges placed on their telephone bills that they cannot resolve with their carrier."
The proposal voted on today is available at. For more information on the CPUC, please visit. ##
http://docs.cpuc.ca.gov/PUBLISHED/NEWS_RELEASE/134754.htm
2017-04-23T09:59:18
CC-MAIN-2017-17
1492917118519.29
[]
docs.cpuc.ca.gov
Sharing Overview Sharing is a feature that allowes to share a chart made with AnyChart component to your page in a social network like Facebook, Pinterest and else. This article describes the settings and methods necessary for sharing activation, adjusting and using. There are two ways how to share a chart. First one is to use the context menu that provides four social networks for a chart sharing. A pop-up window of the chosen social network will show up and suggest to write a comment (if you've been logged in) or to log in. The chart will be transformed into a *.PNG image. Another way to share a chart in any social network is to use special sharing methods. There is a special sharing method for each network: When sharing is done using the context menu, the sharing function uses default settings. The image of the chart will be shared with no link on the sample, and the hostname of the link will be the same as the picture name. All sharing methods described above have settings that can be adjusted. Facebook sharing options: // initiate sharing with Facebook chart.shareWithFacebook("Sharing with Facebook sample", "anychart.com", "AnyChart Area Chart", "The sample of an Area Chart created with AnyChart"); All parameters can be defined as an object. This is useful when only some of the parameters are set. // initiate sharing with with Facebook chart.shareWithFacebook({caption:"Sharing with Facebook sample", link: "anychart.com", name: "AnyChart Area Chart", description: "The sample of an Area Chart created with AnyChart"}); There is one more way to change the sharing settings. The anychart.exports.facebook() method is used to set the defaults. Read more about this in the Defaults section. While sharing with Twitter there are no extra options to be adjusted: // share the chart with Twitter chart.shareWithTwitter(); It is possible to change some default settings of export by using the anychart.exports.twitter() method. Read more about this in the Defaults section. There are two options for the LinkedIn sharing: // share the sample with LinkedIn chart.shareWithLinkedIn("Sharing with LinkedIn", "This is a sample of an Area Chart created with AnyChart"); LinkedIn sharing settings can be defined as an object: // this method will share the sample with LinkedIn chart.shareWithLinkedIn({caption: "Sharing with LinkedIn", description: "This is a sample of an Area Chart created with AnyChart"}); It is possible to change default settings of export to LinkedIn using the anychart.exports.linkedin() method. Read more about this in the Defaults section. When sharing with Pinterest, it is possible to specify two settings: // share the sample with Pinterest chart.shareWithPinterest("", "This is a sample of an Area Chart created with AnyChart"); It is possible to adjust the Pinterest settings of an object: // share the sample with Pinterest chart.shareWithPinterest({link: "", description: "This is a sample of an Area Chart created with AnyChart"}); Use the anychart.exports.pinterest() method to adjust the export default settings. Read more about this in the Defaults section. Defaults The anychart.exports methods are responsible for export settings. These methods allow to set sharing defaults. Use anychart.exports.facebook() to set the following: If it is necessary, it is possible to change the sharing application. If you want to create your own sharing application, visit the Facebook application creating page and create your own application. 
Then, copy the ID of your application and set it as default for the Facebook export method. Use anychart.exports.twitter() to set the following: If it is necessary, it is possible to change the sharing application. If you want to create your own sharing application, visit the Twitter application creating page and create your own application. Then, copy the URL of your application and set it as default for the "url" parameter of the exporting method. Use anychart.exports.linkedin() to set the following: Use anychart.exports.pinterest() to set the following: The following sample demonstrates setting defaults for all networks available at the moment. The defaults for sharing with Facebook are set as an object. Note that when the parameters are set as object, it is not necessary to set all of them. // this method sets the Facebook export settings anychart.exports.facebook({caption: "A sample shared with Facebook", link: "", height: "600", appID: "1167712323282103"}); // this method sets the Twitter export settings anychart.exports.twitter("", "800", "600"); // this method sets the LinkedIn export settings anychart.exports.linkedin("AnyChart Area Chart sample shared with LinkedIn", undefined, undefined, "400"); // this method sets the Pinterest export settings anychart.exports.pinterest("", undefined, "800", undefined); // attach click listener chart.listen("pointClick", function(e){ switch (e.point.get("x")) { case "Facebook": chart.shareWithFacebook(); break; case "Twitter": chart.shareWithTwitter(); break; case "LinkedId": chart.shareWithLinkedIn(); break; case "Pinterest": chart.shareWithPinterest(); break; } }); Sharing Buttons Sample This sample shows how to share a chart using custom buttons. Explore it in the playground to see the code.
https://docs.anychart.com/Common_Settings/Sharing
2017-04-23T10:01:08
CC-MAIN-2017-17
1492917118519.29
[]
docs.anychart.com
Registering a New Domain When you want to register a new domain using the Amazon Route 53 console, perform the following procedure. Important When you register a domain with Amazon Route 53, we automatically create a hosted zone for the domain to make it easier for you to use Amazon Route 53 as the DNS service provider for your new domain. This hosted zone is where you store information about how to route traffic for your domain, for example, to an Amazon EC2 instance or a CloudFront distribution. We charge a small monthly fee for the hosted zone in addition to the annual charge for the domain registration. If you don't want to use your domain right now, you can delete the hosted zone; if you delete it within 12 hours of registering the domain, there won't be any charge for the hosted zone on your AWS bill. We also charge a small fee for the DNS queries that we receive for your domain. For more information, see Amazon Route 53 Pricing. Note that you can't use AWS credits to pay the fee for registering a new domain with Amazon Route 53. To register a new domain using Amazon Route 53 Sign in to the AWS Management Console and open the Amazon Route 53 console at. If you're new to Amazon Route 53, under Domain Registration, choose Get Started Now. If you're already using Amazon. For generic TLDs, we typically send an email to the registrant for the domain to verify that the registrant contact can be reached at the email address that you specified. (We don't send an email if we already have confirmation that the email address is valid.) The email comes from one of the following email addresses: [email protected] – for TLDs registered by Amazon Registrar. [email protected] – for TLDs registered by our registrar associate, Gandi.. For all TLDs, you'll receive an email when your domain registration has been approved. To determine the current status of your request, see Viewing the Status of a Domain Registration. When domain registration is complete, your next step depends on whether you want to use Amazon Route 53 or another DNS service as the DNS service for the domain: Amazon Route 53 – Create resource record sets to tell Amazon Route 53 how you want to route traffic for the domain. For more information, see Adding Resource Record Sets for a New Domain. Another DNS service – Configure your new domain to route DNS queries to the other DNS service. Perform the procedure To update the name servers for your domain when you want to use another DNS service. To update the name servers for your domain when you want to use another DNS service Use the process that is provided by your DNS service to get the name servers for the domain. Sign in to the AWS Management Console and open the Amazon Route 53 console at. In the navigation pane, choose Registered Domains. Choose the name of the domain that you want to configure to use another DNS service. Choose Add/Edit Name Servers. Change the names of the name servers to the name servers that you got from your DNS service in step 1. Choose Update. (Optional) Delete the hosted zone that Amazon Route 53 created automatically when you registered your domain. This prevents you from being charged for a hosted zone that you aren't using. In the navigation pane, choose Hosted Zones. Select the radio button for the hosted zone that has the same name as your domain. Choose Delete Hosted Zone. Choose Confirm to confirm that you want to delete the hosted zone.
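For readers who script these steps, the name-server update procedure above can typically also be performed with the AWS CLI. This is a hedged sketch only; the domain and name-server values are placeholders and the exact parameters should be confirmed against the AWS CLI reference (Route 53 Domains operations are served from the us-east-1 region):
aws route53domains update-domain-nameservers \
  --region us-east-1 \
  --domain-name example.com \
  --nameservers Name=ns-2048.awsdns-64.com Name=ns-2049.awsdns-65.net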
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-register.html
2017-04-23T09:52:03
CC-MAIN-2017-17
1492917118519.29
[]
docs.aws.amazon.com
Migrating Mule 2.x CE to EE
If you are a Mule Community user and want to use the Mule Enterprise Edition, follow these steps to start using the Enterprise Edition.
- Make a backup of your existing Mule Community directory.
- Install mule-<version>-EE into a new directory (see Installing Mule). Do not overwrite your existing Mule Community installation.
- Set the environment variable MULE_HOME to point to the mule-<version>-EE installation directory, such as export MULE_HOME=/usr/local/mule/mule-2.2.1-EE.
- If you use Maven to build your application, edit the pom.xml file and change the `muleVersion` property value to <version>-EE (see the sketch below for an example).
- If your application uses custom classes, copy your JAR files to the ${MULE_HOME}/lib/user directory.
- If you are migrating from Mule 1.x Community to Mule 2.x Enterprise, note that the API and configuration files have changed, so your configuration files and possibly your classes will require modifications. See Migrating Mule 1.x to 2.0 for more details. If you have purchased Mule Enterprise, you can use the Migration Tool on the customer portal to migrate your files.
- If you downloaded and installed any additional community plug-ins, reinstall them for the Mule Enterprise version. Note that plug-ins provided by the Mule community have not been tested against Mule Enterprise and are not officially supported by MuleSource.
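The pom.xml example referenced above appears to have been dropped from the page; the following is a minimal hedged sketch, assuming the muleVersion property is defined in the project's <properties> section:
<properties>
  <!-- switch the Community version to the matching Enterprise Edition version -->
  <muleVersion>2.2.1-EE</muleVersion>
</properties>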
https://docs.mulesoft.com/release-notes/migrating-mule-2.x-ce-to-ee
2017-04-23T09:53:16
CC-MAIN-2017-17
1492917118519.29
[]
docs.mulesoft.com
Text height determines the size in drawing units of the letters in the font you are using. The exception is TrueType fonts: the value usually represents the size of the uppercase letters. If you specify a fixed height as part of a text style, the Height prompt is bypassed when you create single-line text. When the height is set to 0 in the text style, you are prompted for the height each time you create single-line text. Set the value to 0 if you want to specify the height as you create text. For TrueType fonts, the value specified for text height represents the height of a capital letter plus an ascent area reserved for accent marks and other marks used in non-English languages. The relative portion of text height that is assigned to capital letters and ascent characters is determined by the font designer at the time the font is designed; consequently, it varies from font to font. In addition to the height of a capital letter and the ascent area that make up the text height specified by the user, TrueType fonts have a descent area for portions of characters that extend below the text insertion line, for example, y, j, p, g, and q. When you apply a text height override to all text in the editor, the entire multiline text object is scaled, including its width.
http://docs.autodesk.com/ACD/2010/ENU/AutoCAD%202010%20User%20Documentation/files/WS1a9193826455f5ffa23ce210c4a30acaf-642e.htm
2017-04-23T10:01:29
CC-MAIN-2017-17
1492917118519.29
[]
docs.autodesk.com
Exports Overview
AnyChart provides you with the ability to export charts to images (SVG, PNG, JPG), PDF or data files (CSV, Excel). These options are available both via the Context menu and the API. Every export has some fine-tune options, including the ability to change the file name. There is also a special option to save the chart configuration, which may be used to debug charts and report issues.
Export server
IMPORTANT, DO NOT SKIP THIS PART
AnyChart "save as" features work via the AnyChart Export Server, which is hosted at the AnyChart.com server. If you want to use your own server, set up the AnyChart Export Server and configure your charts to use the custom server. The custom server address is set like this: anychart.server(""); You can read all about AnyChart Export Server and server-side rendering in the AnyChart Export Server article.
File name
If you want to change the default file name for all exports at once you can use the anychart.exports.filename() method: anychart.exports.filename('custom_name'); After you do so, all files, images, pdf and data, will be saved under this name, unless you override it when calling specific methods as shown below.
Image
The AnyChart js charting library allows you to save charts in 3 different image formats: SVG, PNG and JPG, using the saveAsSvg(), saveAsPng() and saveAsJpg() methods. Each method has a common parameter, filename, and special parameters depending on the format.
SVG
saveAsSvg() can be launched in two modes, one with width and height passed: saveAsSvg({width: 360, height: 500, filename: 'custom_name'}) And another one with paper size and page orientation set: saveAsSvg({paperSize: "A4", "landscape": false, "filename": "custom_name"});
PNG
With saveAsPng() you can set width, height and quality in addition to the file name: saveAsPng({width: 360, height: 500, quality: 0.3, filename: "custom_name"});
JPG
With saveAsJpg() you can set width, height, quality and the forceTransparentWhite flag in addition to the file name: saveAsJpg({width: 360, height: 500, quality: 0.3, forceTransparentWhite: "false", filename: "custom_name"});
To launch the export you need to use these methods as shown:
// save the chart in SVG format
chart.saveAsSvg();
// save the chart in PNG format
chart.saveAsPng();
// save the chart in JPG format
chart.saveAsJpg();
And here is a sample where you can see all methods in action:
To save a chart in PDF format, use the saveAsPdf() method:
// initiate saving chart in PDF format
chart.saveAsPdf();
Data
AnyChart provides several methods for saving the current chart's data. Output formats are CSV (saveAsCsv()) and Excel (saveAsXlsx()). The ChartDataExportMode parameter defines what data is exported: only the data used by the chart (SPECIFIC), all data in the data set (RAW), or, in a special mode for stock charts, grouped data (GROUPED).
CSV With saveAsCsv() you can set how you export data and file name: saveAsCsv({chartDataExportMode: anychart.enums.ChartDataExportMode.RAW, csvSettings: {"rowsSeparator": "\n", "columnsSeparator": ",", "ignoreFirstRow": true}, filename: "csv_file"}); Excel With saveAsXlsx() you can set how you export data and file name: // initiate saving chart's data in Xlsx format chart.saveAsXlsx({chartDataExportMode: anychart.enums.ChartDataExportMode.SPECIFIC, filename: "excel"}); To launch the export you need to use these methods as shown: // initiate saving chart's data in Xlsx format chart.saveAsXlsx(); // initiate saving chart's data in CSV format chart.saveAsCsv(); Chart Configuration Chart config may be saved using XML and JSON methods, this feature may be very useful in debug process or if you want to create some kind of import/export functionality for chart themselves. These methods get output of toJson() and toXml() methods and allow to save it as file. First parameter is boolean flag that defines if the current Theme is included in output configuration file. XML // save chart data and configuration in XML format chart.saveAsXml({includeTheme: "false", filename: "chart_xml"}); JSON // save chart data and configuration in Json format chart.saveAsJson({includeTheme: "false", filename: "chart_json"}); Here is a sample of chart save as XML and save as JSON methods:
https://docs.anychart.com/Common_Settings/Exports
2017-04-23T09:56:40
CC-MAIN-2017-17
1492917118519.29
[]
docs.anychart.com
Checking for Annotations
To check for annotations for a single function or all functions in your IDB, you can use the right click menu in the IDA View window. Simply right click anywhere in the IDA View window and select Check FIRST for this function or Query FIRST for all function matches.
FIRST’s IDA Integration Right Click Menu
The dialog boxes associated with querying FIRST for one or all functions are similar in a lot of ways. Both use two colors to signify if the match is currently applied and what match is selected.
Query for Single Function
To check FIRST for a single function, right click within that function and select Check FIRST for this function. The Check Function dialog box will pop up and any matches (exact or similar) will be displayed in a tree structure.
Query FIRST for Single Function
Query for All Functions
To query FIRST for all functions in the IDB, right click anywhere within the IDA View window. Once the popup menu is shown, select Query FIRST for all function matches. The Check all Functions dialog box will pop up with a list of matches for functions within the IDB. While results are being populated, you can go through and look at each match. The tree view is used to display the data in a meaningful way. The topmost layer shows the address and current function name the matches correspond to. The next column displays the number of matches to that function. Expanding that node will show the matches and their details. Selecting to expand a match node will display the comment associated with that match.
Note: Selecting Show only “sub_” functions and Select Highest Ranked in either order will cause only the functions starting with sub_ to be selected.
FIRST will not query for all functions at once. Instead, all functions are grouped in sets of 20 and each set is sent to the server. This allows results to be returned and the researcher to start looking over the results. As new matches are returned the data will be added. To help understand if the match makes sense, right click anywhere within the function tree node or its matches, and click Go to Function. This will focus the last used IDA View window on the function in question.
After reviewing your selection, select the Apply button. If no errors occurred, the selected matches will be applied to your function(s) and the Output window will state how many functions were applied with FIRST annotations.
Important: If multiple functions attempt to apply the same match, then only the function first applied with the annotations will change. IDA Pro will prompt you with each additional function that the metadata cannot be applied to. The Output window will display the addresses of the functions that metadata couldn’t be applied to.
Danger: Selecting to apply annotations to your functions will overwrite any annotations currently applied. There is no UNDO for this operation. Also, when applying function metadata to your IDB’s function, this could result in incorrect function prototypes (for similar and possibly exact matches).
Query FIRST for All Functions
http://first-plugin-ida.readthedocs.io/en/latest/checking.html
2017-04-23T09:56:05
CC-MAIN-2017-17
1492917118519.29
[array(['_images/ida_view_right_click_popup.gif', "FIRST's IDA Integration Right Click Menu"], dtype=object) array(['_images/check_single_function.gif', 'Query FIRST for Single Function'], dtype=object) array(['_images/check_all_functions.gif', 'Query FIRST for All Functions'], dtype=object) ]
first-plugin-ida.readthedocs.io
xdmp.toJsonString( $item as ValueIterator ) as String Returns a string representing a JSON serialization of a given item sequence. XML nodes are serialized to JSON strings. JSON has no serialization for infinity, not a number, and negative 0, therefore if you try and serialize INF, -INF, NaN, or -0 as JSON, an exception is thrown. If you want to represent these values in some way in your serialized JSON, then you can catch the exception (with a try/catch, for example) and provide your own value for it. XQuery maps ( map:map types) serialize to JSON name-value pairs. xdmp.toJsonString(["a",false]); => ["a", false] xdmp.toJsonString( xdmp.fromJsonString('{ "a":111 }')); => {"a":111}
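As a brief illustration of the exception-handling suggestion above, here is a hedged Server-Side JavaScript sketch; the fallback string "INF" is an arbitrary choice, not something mandated by the API:
let result;
try {
  // Infinity has no JSON serialization, so this call throws
  result = xdmp.toJsonString([1, Infinity, 3]);
} catch (e) {
  // substitute your own representation for the unserializable value
  result = xdmp.toJsonString([1, "INF", 3]);
}
result;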
http://docs.marklogic.com/xdmp.toJsonString
2017-04-23T10:00:42
CC-MAIN-2017-17
1492917118519.29
[array(['/images/i_speechbubble.png', None], dtype=object)]
docs.marklogic.com
Package-Aware Print Drivers
Package-aware print drivers have entries in their INF files that support point and print with packages. These entries, which make it possible for point and print to accommodate print driver dependencies on other files, can be minor and depend on the nature of the driver package.
If the files in the driver package are unique and are not listed in other print driver packages, use the PackageAware keyword in the INF.
If the files in the driver package are shared with files in other print driver packages:
- Move the shared files into a separate core driver.
- Use the PackageAware keyword and the CoreDriverDependencies keyword to refer to this separate core driver.
This is necessary to avoid file version conflicts during various remote installation scenarios.
This section includes:
Package-Aware Print Drivers that Do Not Share Files
Package-Aware Print Drivers that Share Files
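For orientation only, here is a hedged sketch of what such INF entries might look like; the section name follows the pattern used for package-aware printer INFs, the GUID is purely an illustrative placeholder, and the exact syntax should be checked against the pages listed above:
[PrinterPackageInstallation.x86]
PackageAware=TRUE
; only needed when shared files have been moved into a separate core driver package
CoreDriverDependencies={D9ED0000-0000-0000-0000-000000000000}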
https://docs.microsoft.com/en-us/windows-hardware/drivers/print/package-aware-print-drivers?f=255
2019-04-18T16:58:26
CC-MAIN-2019-18
1555578517745.15
[]
docs.microsoft.com
Let’s assume you have a cluster installation and you want to include a component in your installation which doesn’t support the cluster mode yet. If you put it on all nodes as separate instances, they will work out of sync and the overall functionality might be useless. If you put it on one node only, it will work correctly but it will be visible only to users connected to that one node. Ideally you would like to have a mechanism to install it on one node and put some redirections on other nodes to forward all packets for this component to the node where the component is working. Redirection on its own is not enough because the component must be visible in the service discovery list and must be visible somehow to users connected to all nodes.
This is where virtual components are handy. They are visible to users as a normal local component; they seem to be a real local component, but in fact they just forward all requests/packets to the cluster node where the real component is working.
A virtual component is a very lightweight ServerComponent implementation in the Tigase server. It can pretend to be any kind of component and can redirect all packets to a given address. Virtual components can mimic native Tigase components as well as third-party components connected over the external component protocol (XEP-0114).
Configuration is very simple and straightforward; in fact, it is very similar to the configuration of any Tigase component. You set a real component name as the name of the component and a virtual component class name to load. Let’s say we want to deploy the MUC component this way. The MUC component is visible as muc.domain.our in the installation. Thus the name of the component is: muc
--comp-name-1=muc
--comp-class-1=tigase.cluster.VirtualComponent
This is pretty much all you need to load a virtual component. A few other options are needed to point to correct destination addresses for packet forwarding and to set correct service discovery parameters:
muc/[email protected]
muc/disco-name=Multi User Chat
muc/disco-node=
muc/disco-type=text
muc/disco-category=conference
muc/disco-features=
That’s it.
https://docs.tigase.net/tigase-server/7.1.5/Administration_Guide/webhelp/virtualComponents.html
2019-04-18T16:26:14
CC-MAIN-2019-18
1555578517745.15
[]
docs.tigase.net
CI/CD for microservices architectures Faster release cycles are one of the major advantages of microservices architectures. But without a good CI/CD process, you won't achieve the agility that microservices promise. This article describes the challenges and recommends some approaches to the problem. What is CI/CD? When we talk about CI/CD, we're really talking about several related processes: Continuous integration, continuous delivery, and continuous deployment. Continuous integration. Code changes are frequently merged into the main branch. Automated build and test processes ensure that code in the main branch is always production-quality. Continuous delivery. Any code changes that pass the CI process are automatically published to a production-like environment. Deployment into the live production environment may require manual approval, but is otherwise automated. The goal is that your code should always be ready to deploy into production. Continuous deployment. Code changes that pass the previous two steps are automatically deployed into production.. Why a robust CI/CD pipeline matters In a traditional monolithic application, there is a single build pipeline whose output is the application executable. All development work feeds into this pipeline. If a high-priority bug is found, a fix must be integrated, tested, and published, which can delay the release of new features. You can mitigate these problems by having well-factored modules and using feature branches to minimize the impact of code changes. But as the application grows more complex, and more features are added, the release process for a monolith tends to become more brittle and likely to break. Following the microservices philosophy, there should never be a long release train where every team has to get in line. The team that builds service "A" can release an update at any time, without waiting for changes in service "B" to be merged, tested, and deployed. To achieve a high release velocity, your release pipeline must be automated and highly reliable, to minimize risk. If you release to production daily or multiple times a day, regressions or service disruptions must be very rare. At the same time, if a bad update does get deployed, you must have a reliable way to quickly roll back or roll forward to a previous version of a service. Challenges Many small independent code bases. Each team is responsible for building its own service, with its own build pipeline. In some organizations, teams may use separate code repositories. Separate repositories can lead to a situation where the knowledge of how to build the system is spread across teams, and nobody in the organization knows how to deploy the entire application. For example, what happens in a disaster recovery scenario, if you need to quickly deploy to a new cluster? Mitigation: Have a unified and automated pipeline to build and deploy services, so that this knowledge is not "hidden" within each team. Multiple languages and frameworks. With each team using its own mix of technologies, it can be difficult to create a single build process that works across the organization. The build process must be flexible enough that every team can adapt it for their choice of language or framework. Mitigation: Containerize the build process for each service. That way, the build system just needs to be able to run the containers. Integration and load testing. 
With teams releasing updates at their own pace, it can be challenging to design robust end-to-end testing, especially when services have dependencies on other services. Moreover, running a full production cluster can be expensive, so it's unlikely that every team will run its own full cluster at production scales, just for testing. Release management. Every team should be able to deploy an update to production. That doesn't mean that every team member has permissions to do so. But having a centralized Release Manager role can reduce the velocity of deployments. Mitigation: The more that your CI/CD process is automated and reliable, the less there should be a need for a central authority. That said, you might have different policies for releasing major feature updates versus minor bug fixes. Being decentralized doesn't mean zero governance. Service updates. When you update a service to a new version, it shouldn't break other services that depend on it. Mitigation: Use deployment techniques such as blue-green or canary release for non-breaking changes. For breaking API changes, deploy the new version side by side with the previous version. That way, services that consume the previous API can be updated and tested for the new API. See Updating services, below. Monorepo vs multi-repo Before creating a CI/CD workflow, you must know how the code base will be structured and managed. - Do teams work in separate repositories or in a monorepo (single repository)? - What is your branching strategy? - Who can push code to production? Is there a release manager role? The monorepo approach has been gaining favor but there are advantages and disadvantages to both. Updating services There are various strategies for updating a service that's already in production. Here we discuss three common options: Rolling update, blue-green deployment, and canary release. Rolling updates In a rolling update, you deploy new instances of a service, and the new instances start receiving requests right away. As the new instances come up, the previous instances are removed. Example. In Kubernetes, rolling updates are the default behavior when you update the pod spec for a Deployment. The Deployment controller creates a new ReplicaSet for the updated pods. Then it scales up the new ReplicaSet while scaling down the old one, to maintain the desired replica count. It doesn't delete old pods until the new ones are ready. Kubernetes keeps a history of the update, so you can roll back an update if needed. One challenge of rolling updates is that during the update process, a mix of old and new versions are running and receiving traffic. During this period, any request could get routed to either of the two versions. For breaking API changes, a good practice is to support both versions side by side, until all clients of the previous version are updated. See API versioning. Blue-green deployment In a blue-green deployment, you deploy the new version alongside the previous version. After you validate the new version, you switch all traffic at once from the previous version to the new version. After the switch, you monitor the application for any problems. If something goes wrong, you can swap back to the old version. Assuming there are no problems, you can delete the old version. With a more traditional monolithic or N-tier application, blue-green deployment generally meant provisioning two identical environments. 
You would deploy the new version to a staging environment, then redirect client traffic to the staging environment — for example, by swapping VIP addresses. In a microservices architecture, updates happen at the microservice level, so you would typically deploy the update into the same environment and use a service discovery mechanism to swap.
Example. In Kubernetes, you don't need to provision a separate cluster to do blue-green deployments. Instead, you can take advantage of selectors. Create a new Deployment resource with a new pod spec and a different set of labels. Create this deployment, without deleting the previous deployment or modifying the service that points to it. Once the new pods are running, you can update the service's selector to match the new deployment.
One drawback of blue-green deployment is that during the update, you are running twice as many pods for the service (current and next). If the pods require a lot of CPU or memory resources, you may need to scale out the cluster temporarily to handle the resource consumption.
Canary release
In a canary release, you roll out an updated version to a small number of clients. Then you monitor the behavior of the new service before rolling it out to all clients. This lets you do a slow rollout in a controlled fashion, observe real data, and spot problems before all customers are affected. A canary release is more complex to manage than either blue-green or rolling update, because you must dynamically route requests to different versions of the service.
Example. In Kubernetes, you can configure a Service to span two replica sets (one for each version) and adjust the replica counts manually. However, this approach is rather coarse-grained, because of the way Kubernetes load balances across pods. For example, if you have a total of 10 replicas, you can only shift traffic in 10% increments. If you are using a service mesh, you can use the service mesh routing rules to implement a more sophisticated canary release strategy.
Next steps
Learn specific CI/CD practices for microservices running on Kubernetes.
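To make the selector-based blue-green cutover described above concrete, here is a hedged Kubernetes sketch; the service name, labels, and ports are hypothetical and not taken from the article:
apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical service name
spec:
  selector:
    app: my-app
    version: v2             # flip this from v1 to v2 once the new pods are ready
  ports:
  - port: 80
    targetPort: 8080
The new Deployment would carry the labels app: my-app and version: v2; editing the Service's selector in place performs the cutover, and changing it back to v1 rolls back.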
https://docs.microsoft.com/en-us/azure/architecture/microservices/ci-cd
2019-04-18T16:47:15
CC-MAIN-2019-18
1555578517745.15
[array(['images/cicd-monolith.png', 'Diagram of a CI/CD monolith'], dtype=object) ]
docs.microsoft.com
Suspend-MailboxDatabaseCopy
Use the Suspend-MailboxDatabaseCopy cmdlet to block replication and replay activities (log copying and replay) or activation for a database configured with two or more database copies. For information about the parameter sets in the Syntax section below, see Exchange cmdlet syntax ().
Syntax
Suspend-MailboxDatabaseCopy [-Identity] <DatabaseCopyIdParameter> [-EnableReplayLag] [-Confirm] [-DomainController <Fqdn>] [-WhatIf] [<CommonParameters>]
Suspend-MailboxDatabaseCopy [-Identity] <DatabaseCopyIdParameter> [-ActivationOnly] [-SuspendComment <String>] [-Confirm] [-DomainController <Fqdn>] [-WhatIf] [<CommonParameters>]
-------------------------- Example 1 --------------------------
Suspend-MailboxDatabaseCopy -Identity DB1\MBX3 -SuspendComment "Maintenance on MBX3"
This example suspends replication and replay activity for the copy of the database DB1 hosted on the Mailbox server MBX3. An optional administrative reason for the suspension is specified.
-------------------------- Example 2 --------------------------
Suspend-MailboxDatabaseCopy -Identity DB3\MBX2 -ActivationOnly
This example only suspends activation for the copy of the database DB3 hosted on the Mailbox server MBX2.
Parameters
The ActivationOnly switch specifies whether to suspend only activation for the mailbox database copy.
The Identity parameter specifies the name of the database copy being suspended.
The SuspendComment parameter specifies the reason that the database copy is being suspended. This parameter is limited to 512 characters.
https://docs.microsoft.com/en-us/powershell/module/exchange/database-availability-groups/Suspend-MailboxDatabaseCopy?view=exchange-ps
2019-04-18T16:56:13
CC-MAIN-2019-18
1555578517745.15
[]
docs.microsoft.com
Note: Input to an Input Method Editor (IME) is not supported for creating an action recording because an action recording cannot aggregate actions into a single action. (An IME is a program that allows computer users to enter complex characters and symbols, such as Japanese characters, using a standard keyboard.)
The following procedures describe how to create an action recording.
Load a Test into Test Runner
To load a test into Test Runner: Open Microsoft Test Manager.
Note: To display the Microsoft Test Manager window, choose Start, and then choose All Programs. Point to Microsoft Visual Studio 2012 and then choose Microsoft Test Manager.
Choose the test case and then choose Run. The Test Runner opens.
Note: Select Run with options to specify a build to run the test on, or to override the test settings and environment settings for the test plan. For more information, see How to: Override Settings in Your Test Plan for Test Runs.
Record an Action Recording
Note: If the test contains an existing action recording, you are prompted with the option Overwrite existing action recording. Select this option to create a new recording that replaces the previous action recording, and then choose Start Test. The action recording can be played by using the Play option in the toolbar. For more information, see How to: Play Back an Action Recording.
You can specify which applications to record in your test settings for the actions diagnostic data adapter. Each test step, including launching the application, is recorded after you choose Start Test.
Note: If your test setting includes collecting IntelliTrace data, you must start the application after the test is started. For more information, see How to: Collect IntelliTrace Data to Help Debug Difficult Issues.
Note: If you do not mark each test step as either passed or failed, then the action recording section can span several test steps. It includes all unmarked test steps since the last step that was marked as passed or failed.
Note: If an existing action recording already exists for the test, the Test Runner - Microsoft Test Manager dialog box appears. You have the option to either Overwrite existing recording or Discard new recording.
The action recording can now be replayed when you run this test case in the future. For more information, see How to: Play Back an Action Recording.
See Also
Tasks
How to: Play Back an Action Recording
How to: Include Recordings of the Screen and Voice During Tests Using Test Settings
How to: Create an Action Recording for Shared Steps
Concepts
Reviewing Test Results in Microsoft Test Manager
Recording and Playing Back Manual Tests
Supported Configurations and Platforms for Coded UI Tests and Action Recordings
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2012/dd286647(v=vs.110)
2019-04-18T17:00:18
CC-MAIN-2019-18
1555578517745.15
[array(['images/hh698492.runtest%28vs.110%29.png', 'Selecting test to run in Microsoft Test Manager Selecting test to run in Microsoft Test Manager'], dtype=object) array(['images/dd286647.how_actrecord%28vs.110%29.png', 'Creating an action recording in Test Runner Creating an action recording in Test Runner'], dtype=object) ]
docs.microsoft.com
Ticket #1620 (closed defect: community) Show times for SMS Description. Change History Note: See TracTickets for help on using tickets.
https://docs.openmoko.org/trac/ticket/1620
2019-04-18T16:55:39
CC-MAIN-2019-18
1555578517745.15
[]
docs.openmoko.org
Starting Up and Shutting Down Your System
Determine the proper startup and shutdown procedures, and write your startup and shutdown scripts.
The DISCONNECT_WAIT default is 10000 milliseconds. To change it, set this system property on the Java command line used for member startup. For example:
gfsh>start server --J=-DDistributionManager.DISCONNECT_WAIT=<milliseconds>
Each process can have different DISCONNECT_WAIT settings.
https://gemfire.docs.pivotal.io/95/geode/configuring/running/starting_up_shutting_down.html
2019-04-18T16:59:55
CC-MAIN-2019-18
1555578517745.15
[]
gemfire.docs.pivotal.io
This page gives information on the Render Time Render Element.
The Render Time Render Element shows a floating-point number for each pixel's render time measured in milliseconds. The colors in the render element show the rendering speed as a range from black to white. The whiter the pixel, the longer the render time and vice versa. This render element works only with the Bucket image sampler. The Render Time Render Element is useful mainly for troubleshooting purposes.
||Render Setup window|| > Render Elements tab > Add button
This render element is enabled through the Render Elements tab of the Render Setup window in 3ds Max and displays its parameters in a rollout at the bottom of the window:
vrayVFB – When enabled, the render element appears in the V-Ray Virtual Frame Buffer.
deep output – Specifies whether to include this render element in deep images.
Right-click any rendered pixel to see its render time in milliseconds in the Pixel Info window.
https://docs.chaosgroup.com/exportword?pageId=38573556
2019-04-18T17:12:08
CC-MAIN-2019-18
1555578517745.15
[]
docs.chaosgroup.com
When dbForge Event Profiler for SQL Server:
Tip: Use the Set Value To menu to quickly set an empty string, zero, or a current date.
To add a new record, select the Append option on the shortcut menu or click the Append button under the grid.
To delete a record from the grid, select the Delete option on the shortcut menu, click the - button under the grid, or press the CTRL+DEL keys.
To copy and paste cell values, use the corresponding options on the shortcut menu. You can easily select and copy the data just like cells in a spreadsheet. Do either of these actions:
- Move the mouse pointer across the grid holding the left mouse button.
- Click the first cell of the data range, press SHIFT, and, holding the SHIFT key, click the last cell.
A rectangular range of cells will be selected.
cannot be edited. It is also impossible to edit the result of executing a script with several select statements.
When working with the grid, you can see special indicators near the focused cell. These indicators reflect the current editing state:
- The row is focused.
- The row has been edited.
- Incorrect value was entered into a cell. You must either fix the value or press the ESCAPE key to cancel changes made to the cell.
https://docs.devart.com/event-profiler-for-sql-server/working-with-data-in-data-editor/editing-data-in-grid/editing-data-in-grid-overview.html
2019-04-18T16:30:02
CC-MAIN-2019-18
1555578517745.15
[]
docs.devart.com
News Search API v5 reference
Note: A new version of this API is available. See Bing News Search API v7. For information about upgrading, see the upgrade guide.
The News Search API lets you send a search query to Bing and get back a list of relevant news articles. This section provides technical details about the query parameters and headers that you use to request news articles and the JSON response objects that contain them. For examples that show how to make requests, see Searching the Web for News. For information about the headers that requests should include, see Request Headers. For information about the query parameters that requests should include, see Query Parameters. For information about the JSON objects that the response may include, see Response Objects. For information about permitted use and display of results, see Bing Search API Use and Display requirements.
Endpoints
To request news articles, send the request to one of the endpoints referenced below (/news/search, /news, or /news/trendingtopics). The request may include the headers and query parameters described in Request Headers and Query Parameters; see the Required column for required parameters. The query parameter values must be URL encoded.
Response objects
The following are the JSON objects that the response may include. If the request succeeds, the top-level object in the response is the News object if the endpoint is /news/search or /news, and TrendingTopicAnswer if the endpoint is /news/trendingtopics. If the request fails, the top-level object is the ErrorResponse object.
Error: Defines an error that occurred.
ErrorResponse: Defines the top-level object that the response includes when the request fails.
Image: Defines a thumbnail of a news-related image.
News: Defines the top-level object that the response includes when the news request succeeds. If the service suspects a denial of service attack, the request succeeds (HTTP status code is 200 OK), but the body of the response is empty.
NewsArticle: Defines a news article.
Organization: Defines the provider that ran the article.
Query: Defines the search query string.
RelatedTopic: Defines a list of news articles that are related to the search query.
Thumbnail: Defines a link to the related image.
Topic: Defines a trending news topic.
TrendingTopics: Defines the top-level object that the response includes when the trending topics request succeeds.
News Categories by Market
The following is a list of possible news categories that you may set the category query parameter to. You may set category to a parent category such as Entertainment or one of its subcategories such as Entertainment_MovieAndTV. If you set category to a parent category, it includes articles from one or more of its subcategories. If you set category to a subcategory, it includes articles only from the subcategory.
Error codes
The following are the possible HTTP status codes that a request may return. If the request fails, the body of the response will contain an ErrorResponse object. The response object will include an error code and a description of the error. If the error is related to a parameter, the parameter field will identify the parameter that is the issue. And if the error is related to a parameter value, the value field will identify the value that is not valid.
{ "_type": "ErrorResponse", "errors": [ { "code": "RequestParameterMissing", "message": "Required parameter is missing.", "parameter": "q" } ] }
{ "_type": "ErrorResponse", "errors": [ { "code": "AuthorizationMissing", "message": "Authorization is required." } ] }
The following are the possible error codes.
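As a hedged illustration of how such a request is typically issued from the command line (the host path shown is the conventional Bing v5 Cognitive Services endpoint and the key header is the standard Ocp-Apim-Subscription-Key; verify both against the Endpoints and Request Headers sections referenced above):
curl -H "Ocp-Apim-Subscription-Key: YOUR_ACCESS_KEY" \
  "https://api.cognitive.microsoft.com/bing/v5.0/news/search?q=microsoft&mkt=en-us"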
https://docs.microsoft.com/en-us/rest/api/cognitiveservices/bing-news-api-v5-reference
2019-04-18T17:03:38
CC-MAIN-2019-18
1555578517745.15
[]
docs.microsoft.com
All Files Displaying Xsheet Thumbnails When there are a large number of columns in the exposure sheet, it's not easy to quickly identify a particular column. Displaying the column thumbnails makes this easier. This option displays a small thumbnail picture of the current frame below the column header. How to display the thumbnails Do one of the following: In the Xsheet toolbar, click the Show Thumbnails button. In the Xsheet menu, select View > Show Thumbnails. The thumbnails appear.
https://docs.toonboom.com/help/harmony-15/advanced/timing/display-xsheet-thumbnail.html
2019-04-18T16:23:29
CC-MAIN-2019-18
1555578517745.15
[]
docs.toonboom.com
The host treats USB CD/DVD-ROM devices as SCSI devices. Hot adding and removing these devices is not supported.
Prerequisites
Verify that the current version of your ESXi host is 6.0 or later for adding eight virtual xHCI controllers to the ESXi host.
Procedure
What to do next
You can now add the device to the virtual machine. See Add USB Devices from an ESXi Host to a Virtual Machine.
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-507F65CD-E855-4527-B076-567F27C98A29.html
2019-04-18T16:21:52
CC-MAIN-2019-18
1555578517745.15
[]
docs.vmware.com
By using the MAC traffic qualifier in a rule, you can define matching criteria for the Layer 2 (Data Link Layer) properties of packets such as MAC address, VLAN ID, and next level protocol that consumes the frame payload.
Protocol Type
The Protocol type attribute of the MAC traffic qualifier corresponds to the EtherType field in Ethernet frames. EtherType represents the type of next level protocol that is going to consume the payload of the frame. You can select a protocol from the drop-down menu or type its hexadecimal number. For example, to capture traffic for the Link Layer Discovery Protocol (LLDP) protocol, type 88CC.
VLAN ID
You can use the VLAN ID attribute of the MAC traffic qualifier to mark or filter traffic in a particular VLAN. The VLAN ID qualifier on a distributed port group works with Virtual Guest Tagging (VGT). If a flow is tagged with a VLAN ID through Virtual Switch Tagging (VST), it cannot be located by using this ID in a rule on a distributed port group or distributed port. The reason is that the distributed switch checks the rule conditions, including the VLAN ID, after the switch has already untagged the traffic. In this case, to match traffic by VLAN ID successfully, you must use a rule on an uplink port group or uplink port.
Source Address
By using the Source Address group of attributes, you can match packets by the source MAC address or network. You can use a comparison operator to mark or filter packets that have or do not have the specified source address or network. You can match the traffic source in several ways. For example, for a MAC network with prefix 00:50:56 that is 23 bits long, set the address as 00:50:56:00:00:00 and mask as 00:00:01:ff:ff:ff.
Destination Address
By using the Destination Address group of attributes, you can match packets to their destination address. The MAC destination address options have the same format as those for the source address.
Comparison Operators
To match traffic in a MAC qualifier more closely to your needs, you can use affirmative comparison or negation. You can use operators such that all packets except the ones with certain attributes fall in the scope of a rule.
https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-B8B8769C-EE4E-4BB1-9A8D-3DD0D9776E4B.html
2019-04-18T16:45:59
CC-MAIN-2019-18
1555578517745.15
[]
docs.vmware.com
(cPanel >> Home >> Metrics >> Analog Stats) The Analog Stats interface allows you to access data from the Analog traffic statistics software. Analog compiles traffic statistics for your domain, and organizes the data by month so that it is easy for you to manage and interpret. The software also presents the data for each month in graphs to show additional categories. To view Analog's statistics for a domain, perform the following steps:
https://docs.cpanel.net/pages/viewpage.action?pageId=4789012
2019-05-19T13:13:58
CC-MAIN-2019-22
1558232254882.18
[]
docs.cpanel.net
Note This documentation is for the new OMERO 5.2 version. See the latest OMERO 5.1.x version or the previous versions page to find documentation for the OMERO version you are using if you have not upgraded yet. To unify the various components of OMERO, OMERO.grid was developed to monitor and control processes over numerous remote systems. Based on ZeroC‘s IceGrid framework, OMERO.grid provides management access, distributed background processing, log handling and several other features. Please notice that ZeroC uses a specific naming scheme for IceGrid elements and actors. A server in the context of this document is not a host computer - it is a process running inside an IceGrid node, servicing incoming requests. A host is a computer on which IceGrid elements get deployed. For more details, see Terminology. The normal OMERO installation actually makes use of OMERO.grid internally. Note that multi-node OMERO.grid configurations are only tested with Unix-like systems, multi-node configurations on Windows are untested and deprecated, and any remaining support will be dropped in OMERO 5.3. If you have followed the instructions under OMERO.server installation you will have everything you need to start working with OMERO.grid. The standard install should also be used to install other hosts in the grid, such as a computation-only host. Some elements can be omitted for a computation-only host such as PostgreSQL, Apache/nginx, etc. Running OMERO.web and/or starting up the full OMERO.server instance is not required in such a case (only the basic requirements to run omero node are needed, i.e. ZeroC Ice and Python modules for OMERO scripts). If you would like to explore your IceGrid configuration, use bin/omero admin ice It provides full access to the icegridadmin console described in the ZeroC manual. Specific commands can also be executed: bin/omero admin ice help bin/omero admin ice application list bin/omero admin ice application describe OMERO bin/omero admin ice server list Further, by running java -jar ice-gridgui.jar the GUI provided by ZeroC can be used to administer OMERO.grid. This JAR is provided in the OMERO source code under lib/repository. See also Unlike all other supported platforms, the bin\omero script and OMERO.grid are not directly responsible for starting and stopping the OMERO.blitz server and other processes. Instead, that job is delegated to the native Windows service system. A brief explanation is available under OMERO.server Windows Service. Multi-node deployment is not supported for Windows. IceGrid is a location and activation service, which functions as a central registry to manage all your OMERO server processes. OMERO.grid provides server components which use the registry to communicate with one another. Other than a minimal amount of configuration and starting a single daemon on each host machine, OMERO.grid manages the complexity of all your computing resources. All the resources for a single OMERO site are described by one application descriptor. OMERO ships with several example descriptors under etc/grid. These descriptors describe what processes will be started on what nodes, identified by simple names. For example the default descriptor, used if no other file is specified, defines the master node. As you will see, these files are critical both for the correct functioning of your server as well as its security. The deployment descriptors provided define which server instances are started on which nodes. 
The default descriptor configures the master node to start the OMERO.blitz server, the Glacier2 router for firewalling, as well as a single processor - Processor0. The master node is also configured via etc/master.cfg to host the registry, though this process can be started elsewhere. The master node must be started first to provide the registry. This is done via the omero admin start command which uses the default descriptor: bin/omero admin start The deploy command looks for any changes to the defined descriptor and restarts only those servers which have modifications: bin/omero admin deploy Both omero admin start and omero admin deploy can optionally take a path to an application descriptor which must be passed on every invocation: bin/omero admin deploy etc/grid/my-site.xml Two other nodes, then, each provide a single processor, Processor1 and Processor2. These are started via: To start a node identified by NAME, the following command can be used bin/omero node start NAME At this point the node will try and connect to the registry to announce its presence. If a node with the same name is already started, then registration will fail, which is important to prevent unauthorized users. The configuration of your grid, however, is very much up to you. Based on the example descriptor files (*.xml) and configuration files (*.cfg), it is possible to develop OMERO.grid installations completely tailored to your computing resources. The whole grid can be shutdown by stopping the master node via: omero admin stop. Each individual node can also be shutdown via: omero node NAME stop on that particular node. Two examples will be presented showing the flexibility of OMERO.grid deployment and identifying files whose modification is critical for the deployment to work. The first example will focus on changing the deployed nodes/servers on a single host. It should serve as an introduction to the concepts. Unless used for very specific requirements, this type of deployment doesn’t yield any performance gains. The first change that you will want to make to your application descriptor is to add additional processors. Take a look at etc/templates/grid/default.xml. There you can define two new nodes - node1 and node2 by simply adding a new XML element below the master node definition: <node name="node1"> <server-instance </node> <node name="node2"> <server-instance </node> Remember to change the node name and the index number for each subsequent node definition. The node name and the index number do not need to match. In fact, the index number can be completely ignored, except for the fact that it must be unique. The node name, however, is important for properly starting your new processor. You will need both a configuration file under etc/ with the same name, and unless the node name matches the name of your local host, you will need to specify it on the command line: bin/omero node node1 start or with the environment variable OMERO_NODE: OMERO_NODE=node1 bin/omero node start After starting up both nodes, you can verify that you now have three processors running by looking at the output of omero admin diagnostics. For more information on using scripts, see the OMERO.scripts advanced topics. Warning Before attempting this type of deployment, make sure that the hosts can ping each other and that required ports are open and not firewalled. Windows multi-node configurations are not supported. A more complex deployment example is running multiple nodes on networked hosts. 
Initially, the host's loopback IP address (127.0.0.1) is used in the grid configuration files. For this example, let's presume we have control over two hosts: omero-master (IP address 192.168.0.1/24) and omero-slave (IP address 192.168.0.2/24). The goal is to move the processor server onto another host (omero-slave) to reduce the load on the host running the master node (omero-master). The configuration changes required to achieve this are outlined below.

On host omero-master:

etc/grid/default.xml - remove or comment out from the master node the server-instance using the ProcessorTemplate. Below the master node add an XML element defining a new node:

<node name="omero-slave">
    <server-instance template="ProcessorTemplate" index="1"/>
</node>

etc/internal.cfg - change the value of Ice.Default.Locator from 127.0.0.1 to 192.168.0.1

etc/master.cfg - change all occurrences of 127.0.0.1 to 192.168.0.1

On host omero-slave: make the corresponding changes so that the node can reach the registry running on the master, i.e. replace 127.0.0.1 with 192.168.0.1 in the configuration files under etc/ (in particular the Ice.Default.Locator setting).

To apply the changes, start the OMERO instance on the omero-master node by using omero admin start. After that, start the omero-slave node by using omero node omero-slave start. Issuing omero admin diagnostics on the master node should show a running processor instance, and the omero-slave node should accept job requests from the master node.

More than just making sure no malicious code enters your grid, it is critical to prevent unauthorized access via the application descriptors (*.xml) and configuration (*.cfg) files mentioned above. The simplest and most effective way of preventing unauthorized access is to have all OMERO.grid resources behind a firewall. Only the Glacier2 router has a port visible to machines outside the firewall. If this is possible in your configuration, then you can leave the internal endpoints unsecured.

Though it is probably unnecessary to use transport encryption within a firewall, encryption from clients to the Glacier2 router will often be necessary. For more information on SSL, see SSL.

The IceSSL plugin can be used both for encrypting the channel as well as for authenticating users. SSL-based authentication, however, can be difficult to configure, especially for use within the firewall, and so instead you may want to configure a "permissions verifier" to prevent non-trusted users from accessing a system within your firewall. From master.cfg:

IceGrid.Registry.AdminPermissionsVerifier=IceGrid/NullPermissionsVerifier
#IceGrid.Registry.AdminCryptPasswords=etc/passwd

Here we have defined a "null" permissions verifier which allows anyone to connect to the registry's administrative endpoints. One simple way of securing these endpoints is to use the AdminCryptPasswords property, which expects a passwd-formatted file at the given relative or absolute path:

mrmypasswordisomero TN7CjkTVoDnb2
msmypasswordisome jkyZ3t9JXPRRU

where these values come from using openssl:

$ openssl
OpenSSL> passwd
Password:
Verifying - Password:
TN7CjkTVoDnb2
OpenSSL>

Another possibility is to use the OMERO.blitz permissions verifier, so that anyone with a proper OMERO account can access the server. See Controlling Access to IceGrid Sessions in the Ice manual for more information.

Only a limited number of node names are configured in an application descriptor. For an unauthorized user to fill a slot, they must know the name (which is discoverable with the right code) and be the first to contact the grid saying "I am Node029", for example. A system administrator need only then be certain that all the node slots are taken up by trusted machines and users.
It is also possible to allow “dynamic registration” in which servers are added to the registry after the fact. In some situations this may be quite useful, but is disabled by default. Before enabling it, be sure to have secured your endpoints via one of the methods outlined above. Except under Windows, the example application descriptors shipped with OMERO, all use relative paths to make installation easier. Once you are comfortable with configuring OMERO.grid, it would most likely be safer to configure absolute paths. For example, specifying that nodes execute under /usr/lib/omero requires that whoever starts the node have access to that directory. Therefore, as long as you control the boxes which can attach to your endpoints (see Firewall), then you can be relatively certain that no tampering can occur with the installed binaries. It is important to understand just what processes will be running on your servers. When you run omero admin start, icegridnode is executed which starts a controlling daemon and deploys the proper descriptor. This configuration is persisted under var/master and var/registry. Once the application is loaded, the icegridnode daemon process starts up all the servers which are configured in the descriptor. If one of the processes fails, it will be restarted. If restart fails, eventually the server will be “disabled”. On shutdown, the icegridnode process also shutdowns all the server processes. In application descriptors, it is possible to surround sections of the description with <target/> elements. For example, in templates.xml the section which defines the main OMERO.blitz server includes: <server id="Blitz-${index}" exe="${JAVA}" activation="always" pwd="${OMERO_HOME}"> <target name="debug"> <option>-Xdebug</option> <option>-Xrunjdwp:server=y,transport=dt_socket,address=8787,suspend=y</option> </target> ... When the application is deployed, if “debug” is added as a target, then the -Xdebug, etc. options will be passed to the Java runtime. This will allow remote connection to your server over the configured port. Multiple targets can be enabled at the same time: bin/omero admin deploy etc/grid/default.xml debug secure someothertarget Ice imposes an upper limit on all method invocations. This limit, Ice.MessageSizeMax, is configured in your application descriptor (e.g. templates.xml) and configuration files (e.g. ice.config). The setting must be applied to all servers which will be handling the invocation. For example, a call to InteractiveProcessor.execute(omero::RMap inputs) which passes the inputs all the way down to processor.py will need to have a sufficiently large Ice.MessageSizeMax for: the client, the Glacier2 router, the OMERO.blitz server, and the Processor. The default is currently set to 65536 kilobytes which is 64MB. Currently all output from OMERO.grid is stored in $OMERO_PREFIX/var/log/master.out with error messages going to $OMERO_PREFIX/var/log/master.err. Individual services may also create their own log files. If the bin/omero script is copied or symlinked to another name, then the script will separate the name on hyphens and execute bin/omero with the second and later parts prepended to the argument list. For example, ln -s bin/omero bin/omero-admin bin/omero-admin start works identically to: bin/omero admin start Shortcuts allow the bin/omero script to function as an init.d script when named omero-admin, and need only be copied to /etc/init.d/ to function properly. 
It will resolve its installation directory, and execute from there unless OMERO_HOME is set. For example:

ln -s $OMERO_PREFIX/bin/omero /usr/local/bin/omero
omero-admin start

The same works for putting bin/omero on your path:

PATH=$OMERO_PREFIX/bin:$PATH

This means that OMERO.grid can be unpacked anywhere, and as long as the user invoking the commands has the proper permissions on the $OMERO_PREFIX directory, it will function normally. One exception to this rule is that starting OMERO.grid as root may actually delegate to another user, if the "user" attribute is set on the <server/> elements in etc/grid/templates.xml (this holds only for Unix-based platforms, including Apple OS X; see OMERO.grid on Windows for information on changing the server user under Windows).
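The grid exists largely to run OMERO.scripts jobs on the processors described earlier. As a quick end-to-end check that a deployed processor accepts work, a minimal script along the following lines can be uploaded (for example with bin/omero script upload) and then launched from any OMERO client. This is only a sketch: the script name and the output message are illustrative, not part of the standard distribution.

import omero.scripts as scripts
from omero.rtypes import rstring

# Registering the script creates a session and advertises the job to the grid;
# a free processor picks it up and runs this file.
client = scripts.client(
    "Ping.py",
    "Minimal script used to check that a deployed processor accepts jobs.",
)
try:
    # Return a simple message so the result is visible in the calling client.
    client.setOutput("Message", rstring("pong"))
finally:
    client.closeSession()

If the script completes and returns its message, the registry, the OMERO.blitz server, and at least one processor node are wired together correctly.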
https://docs.openmicroscopy.org/omero/5.2.2/sysadmins/grid.html
2019-05-19T13:16:19
CC-MAIN-2019-22
1558232254882.18
[]
docs.openmicroscopy.org
The JWST ETC uses point spread functions (PSFs) from a precomputed library of PSFs produced by WebbPSF. (For more details on WebbPSF's optical model, check out the WebbPSF documentation pages.)

Each instrument aperture is represented by a precomputed set of roughly 30 PSFs, generated at log-normal wavelength intervals that span the entire wavelength range of all filters and dispersers used with that aperture. With the exception of coronagraphy, all PSFs are generated on-axis, centered on the detector, and are thus devoid of any optical aberrations that may impact real PSFs at locations far from the detector center. Given that the ETC models sources close to the center, the effect is likely to be minimal.

Coronagraphic observations have multiple sets of roughly 30 PSFs each, generated at multiple spatial locations at log-normal wavelength intervals, to account for the effects of coronagraphic spots.

The PSFs are generated to relatively small sizes: the largest are the MIRI 4QPM PSFs (used in MIRI coronagraphic imaging), which cover 8.9" on a side. Nevertheless, the PSFs cover more than 99.9% of the expected flux. ETC version 1.4 uses WebbPSF version 0.8.0, WebbPSF Data version 0.8.0, and POPPY version 0.8.0 to construct the PSF library for all science instruments (MIRI, NIRCam, NIRISS, and NIRSpec).

For a given observation, the entire set of PSFs for a given aperture is loaded into the ETC. The scene generation process creates a spatial and wavelength cube for each source in the scene. The next step is to convolve each wavelength plane with a PSF; this PSF is produced by interpolating the set of PSFs at the specific wavelength of the cube slice. Thus, the apparent PSF in the output 2-D images is the sum of multiple wavelengths.

The coronagraphic PSF sets are laid out at spatial positions chosen to suit the shape of the occulting elements. Though they are generated for different spatial positions, they are not interpolated spatially; instead, the ETC's code selects the closest PSF to the target location. This can result in step-function behavior with various sources.

The MIRI 4-quadrant phase masks (4QPM) used for coronagraphic imaging are assumed to have eight-fold symmetry: they can be reflected across each of the quadrant axes and across the primary axes of the detector (which is approximately correct). PSFs were generated in a triangular shape, covering the 0%, 33%, 66%, and 99% unobscured positions. The positions take into account the roughly 5° clockwise rotation of the MIRI masks.

The MIRI LYOT2300 mask is assumed to have radial symmetry. PSFs were generated along the Y-axis, at points that are 0%, 25%, 50%, 75%, and 99% unobscured.

The NIRCam MASKSWB and MASKLWB masks are assumed to have vertical symmetry. They are tapered wedges (where MASKSWB tapers toward negative X, and MASKLWB tapers toward positive X), such that each filter has an optimal position along the wedge where a point source is just barely fully obscured. Sets of five PSFs were generated at the optimal position, positions 1" and 2" on either side of the optimal position, and positions above the bar that are 33%, 66%, and 99% unobscured.

The NIRCam MASK210R, MASK335R, and MASK430R masks are assumed to have radial symmetry. PSFs were generated along the Y-axis, at points that are 0%, 25%, 50%, 75%, and 99% unobscured.

Related links: PSF Simulation Tool webpage; Exposure Time Calculator. Version 1.3, published November 13, 2018.
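To make the per-wavelength convolution step described above concrete, here is a schematic Python sketch of the approach, assuming a library of PSFs tabulated at a set of wavelengths. It is not the ETC's actual implementation, and all variable names are illustrative.

import numpy as np
from scipy.signal import fftconvolve

def interpolated_psf(wavelength, psf_waves, psf_stack):
    # psf_waves: 1-D array of library wavelengths, sorted ascending
    # psf_stack: array of shape (n_waves, ny, nx) holding the library PSFs
    i = np.clip(np.searchsorted(psf_waves, wavelength), 1, len(psf_waves) - 1)
    w0, w1 = psf_waves[i - 1], psf_waves[i]
    frac = np.clip((wavelength - w0) / (w1 - w0), 0.0, 1.0)
    psf = (1.0 - frac) * psf_stack[i - 1] + frac * psf_stack[i]
    return psf / psf.sum()  # keep the interpolated PSF normalized

def convolve_cube(cube, cube_waves, psf_waves, psf_stack):
    # Convolve each wavelength plane of a source cube with its interpolated PSF,
    # so the apparent PSF in the collapsed 2-D image is a wavelength-weighted sum.
    out = np.empty_like(cube)
    for k, wl in enumerate(cube_waves):
        psf = interpolated_psf(wl, psf_waves, psf_stack)
        out[k] = fftconvolve(cube[k], psf, mode="same")
    return out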
https://jwst-docs.stsci.edu/exportword?pageId=43552790
2019-05-19T13:07:07
CC-MAIN-2019-22
1558232254882.18
[]
jwst-docs.stsci.edu
Cocos Creator v2.0 Upgrade Guide 1 Overview Cocos Creator v2.0 is the result of a large-scale under the hood refactoring plus two months of stability testing. This article will assist v1.x users in upgrading to v2.0. In general, the core goals of the Cocos Creator v2.0 design were twofold: - Significantly improve the engine performance - Provide more advanced rendering capabilities and richer rendering customization options In order to accomplish this goal, we completely rewrote the underlying renderer, which structurally guarantees performance improvements of web and mini games platforms. And rendering capabilities. At the same time, in order to ensure that users project can be upgraded more smoothly, we have almost no changes to the API of the component layer. Of course, these changes are not completely transparent to the user, such as the engine loading process, the event system, the streamlining and reorganization of the engine's overall API, which will have an impact on the user-level API. If you want to have a quick understanding, you can watch the Cocos Creator v2.0 introduction video first. Of course, the upgrade is just the beginning, Cocos Creator has prepared more in-depth updates and features coming in updates to the v2.x version. 2.0.0 List of known issues Since many users have feedback on the problems encountered in upgrading from 1.x, we also need to highlight the risks of the current upgrade, the problems, and the plan to fix these issues. List of known issues: - The performance of Spine & DragonBones in 2.x native platform is not as good as 1.x. ETC texture compression are not supported (1.x can be implemented by hack). - Particle resources with built-in base64 texture may fail during 1.x upgrade. We will roll back the upgrade of the Particle resource in 2.0.1 and return to the 1.x state to avoid errors. If you encounter a similar problem, you can bypass it by using an external map file. - 1.x RichText upgrade may cause the scene to continue to report error: can not read property _worldMatrixof null. Will be fixed in 2.0.1. Temporarily you can remove RichText in the old version and then add it again in 2.0 to bypass it. - The remote avatar loaded in the WeChat open data field cannot be displayed, and the camera background color cannot be set. Fixed in 2.0.1. - Playing a release version may be blacked out because the script file name case under libs is overwritten during the release process. Fixed in 2.0.1. If you encounter problems, please use the 1.x version to play. - Some Spine animations are rendered incorrectly after the upgrade. Fixed in 2.0.1. - Using Tilemap with Camera zoom, there will be problems with the map being oversized. Fixed in 2.0.1. - RichText does not support color modification by node color. - Native platform does not support VideoPlayer and WebView components at this time - IE 11 is not supported. Fixed in 2.0.1. - The current v2.0 has the possibility that the rendering performance of the engine may decline on the native platform, which may have a significant impact on specific games. It is recommended that the native platform project under development be carefully upgraded. We will optimize afterwards. 2. Editor upgrade Let's take a look at the changes at the editor level. Since the focus of v2.0 is focused on the engine level, there are actually not many changes in this area. They are mainly texture resources, platform release, and the use of some components. In future versions of v2.x, editor level upgrades will be released. 
2.1 Texture Resource Configuration Maybe developers have noticed the configuration of texture resources in Creator 1.x, such as Wrap Mode and Filter Mode, but in fact, no matter how you set it in 1.x, it will not affect the runtime texture resources. So in 2.0, we made these configurations take effect at runtime, and we also added an option to prefetch textures: - Wrap Mode: Loop mode, which determines how the texture is sampled when uv exceeds 1. - Clamp: the value of uv is automatically limited to 0, 1 and exceeds 0 or 1 directly. - Repeat: When over, the value of uv is modulo, so that the texture can be rendered in a loop. - Filter Mode: Filter mode, which determines whether to blend the surrounding pixels with the surrounding pixels when floating point samples are used to achieve the smoothing effect of texture scaling. In effect, Trilinear smoothness is higher than Bilinear, higher than Point, but Point is very suitable for pixel-style games. When scaling textures, the pixel boundaries will not be blurred, maintaining the original pixel painting style. - Point (nearest point sampling): directly use the nearest pixel on the uv value - Bilinear (secondary linear filtering): take the average of the pixel corresponding to uv and the surrounding four pixels - Trilinear (triangular linear filtering): Based on the quadratic linear filtering, the quadratic linear filtering results of two adjacent mipmaps are taken for the mean calculation. - Premultiply Alpha: This is a new parameter in 2.0. When checked, the engine will enable the GL pre-multiply option during the upload of the GPU map. This is very helpful for some textures that need to be pre-multiplied. Often there are some users who can't understand the inexplicable white edges around the texture or around the text, which is caused by the semi-transparent pixels around the texture: This can be eliminated by using code in 1.x, and in 2.0 you only need to turn on the pre-multiply option of the texture. It's also worth noting that if you find that this makes the texture darker, you can change the blending mode of the corresponding rendering component to ONE, ONE_MINUS_SRC_ALPHA. 2.2 RenderComponent component settings In 2.0, we abstracted a new base component class: RenderComponent, and all direct rendering components are inherited from this component. These include: Sprite, Label, Graphics, and so on. The most intuitive change for the user is that the rendering component that inherits from it will include the Src Blend Factor & Dst Blend Factor in the Properties: Because of the transformation of the underlying renderer in 2.0, we abstracted the functionality of many render phases for user access and setup. Many of the interfaces to these interfaces are in the RenderComponent. In addition to the blend mode, we also plan to introduce the material system (the engine is built-in, and only the script interface is temporarily available). 2.3 Camera component use The camera may be the most changed component from 1.x to 2.0. In order for developers to update smoothly, we tried to maintain the consistency of the component layer API. Here are details of the changes: - The Canvascomponent adds a default Main Camera node and mounts the Cameracomponent, which will default to the center of the Canvasnode, showing the rendered elements in the scene. NodeGroup corresponds to Camera's culling mask, only the Group contained in Camera culling mask will be rendered. 
- You can render different groups through multiple cameras, and let them have a global hierarchical relationship. Scene rendering is based on the Camera list, which is rendered in turn (multi-camera can also render the same object with different perspectives) If you need a more advanced Camera component, it will be necessary to upgrade to v2.0. It is not possible to directly specify the target corresponding to Camera. Instead, set the node and camera matching relationship by setting the culling mask of node Group and Camera. For specific changes, developers can refer to 2.0 Camera Using Documentation. 2.4 Build Panel Updates The biggest change in Build panels is the release of WeChat games open data domain. In 1.x, developers choose to publish the platform as Wechat Game and check the open data domain project. In 2.0, we separate the WeChat open data domain into a platform: Wechat Game Open Data Context. As you can see, the build options are much simpler than other platforms because the open data domain has a special environment that removes unnecessary options. At the same time, since the open data domain does not support WebGL rendering, the WebGL renderer will be rejected on the engine module clipping, regardless of the user's settings, and all modules that rely on WebGL rendering will be rejected. Other modules still need the user's own choice to try to get the smallest package in the open data domain. For the same reason, when building other platforms, please don't check the Canvas Renderer, because the Canvas renderer supports a small number of rendering components, meaning little. Starting with v2.0.1, we updated the open data domain solution. For details, please refer to Access Small Game Open Data Domain. 2.5 Module Settings In addition to the special module settings in the WeChat open data domain, there are several points to note in the module settings of other platform projects: - Currently we have deprecated the Canvas rendering mode on other platforms in the non-WeChat open data domain, so the Canvas Renderer module can be culled, but the WebGL Renderer module must be retained. - The native platform cannot currently remove the Native Network module (which will be adjusted in the future). 2.6 Custom Engine Quick Compile In 2.0, we provided a more convenient way for developers who needed a custom engine. 1.x After modifying the custom engine, you also need to build the gulp build to take effect, and the build time is very long. The root cause of this problem is that any minor changes require repackaging and confusing all engine files, which can take a long time. So in 2.0, we instead refer to the separated source files in the custom engine. When the user changes, only the modified file will be updated, and the developer can also manually trigger the update. When using a custom JS engine: - Check Automatically compile JS engine: scan engine and automatically recompile modified engine code when loading or refreshing editor - Uncheck the automatic compilation of the JS engine: the developer needs to manually use the menu item: developer > compilation engine to trigger the compilation after modifying the engine code. After the compilation is complete, the preview will use the new engine code directly. When the project is built, it will also be compiled and built with the new engine code. Of course, this will bring two side effects: the build time needs to be compiled when the engine is compiled; There are a lot of load engine scripts, so the preview load time will also grow. 3. 
Engine upgrades We have completely upgraded the engine framework in 2.0. Here are the most important pieces: - Even more modular - Remove the underlying cocos2d-html5 rendering engine and now share the underlying renderer with the 3D engine - Discard the render tree and assemble the rendered data directly using nodes and render component data. - Logic layer and render layer are isolated, interacting through limited data types - Rendering process zero garbage The specific updates are described below. 3.1 Underlay Renderer Upgrade In general, users control rendering by rendering component levels. For this type of usage, there is almost no difference between 2.0 and 1.x. The code of the component layer after upgrading still works the same. However, if the user touches the sgNode level in their project code due to optimization or other requirements, then it should be noted that the _ccsg module as the underlying renderer in 1.x has been completely removed, and the component layer can no longer access any of them. Here are the differences between 2.0 and 1.x at the node tree level: Another key point is that in addition to retaining limited Canvas rendering capabilities in the WeChat open data domain, other platforms have removed Canvas rendering and only support WebGL rendering. Due to space limitations, we do not delve into the update of the underlying framework of the engine. For details, please pay attention to our subsequent v2.0 rendering framework documentation. 3.2 Startup process changes In 1.x, the order in which the engine and user scripts are loaded is: - Load the engine - load main.js - Initialize the engine - Initialize the renderer - Load project plugin script - Load project main script - Call cc.game.onStart In 2.0, the user script can intervene into the initialization logic, such as setting cc.macro.ENABLE_TRANSPARENT_CANVAS (whether the Canvas background is transparent), cc.macro.ENABLE_WEBGL_ANTIALIAS (whether to enable WebGL anti-aliasing), or applying some pre-customization to the engine. Code. Previously these jobs had to be customized with main.js, added in the cc.game.onStart callback, mixed with the engine's default initialization logic, users often confused, and not friendly to version upgrades. So in 2.0 we preloaded the loading of user scripts: - Load the engine - load main.js - Load project plugin script - Load project main script - Initialization Engine (Animation Manager, Collision Manager, Physics Manager, Widget Manager) - Initialize the renderer - Call cc.game.onStart 3.3 Platform code separation and customization In 1.x, main.js hosts the initialization logic for all platforms, but as the platform grows more and more different, we decided to separate the startup logic of these platforms as much as possible. - Web & Facebook Instant Game - Entry file: index.html - Adaptation file: none - WeChat Mini Games - Entry file: game.js - Adaptation file: `libs/`` - Native platform - Entry file: main.js - Adaptation file: `jsb-adapter/`` - QQ light game - Entry file: main.js - Adaptation file: `libs/`` Developers who need to add their own custom code can refer to Custom Project Documentation for use in projects. Your own version overrides the original version, and try not to overwrite main.js. 3.4 Event System Upgrade Event systems are widely used in both engine and user code, but in order to be compatible with the need to dispatch touch events (capture and bubbling), its design in 1.x is too complex, and performance is somewhat slow for ordinary simple events. 
In order to solve this problem in 2.0, we implemented the event model containing the capture and bubbling phases in the tree structure only in cc.Node, which completely simplifies the design of EventTarget. Here are the key API comparisons: Node: - on (type, callback, target, useCapture): Register the event listener, you can choose to register the bubbling phase or the capture phase. - off (type, callback, target, useCapture): unregister the listener - emit (type, arg1, arg2, arg3, arg4, arg5): dispatch simple events - dispatchEvent (event): dispatches events on the node tree with capture and bubbling event models (the capture phase triggers the order from the root node to the target node, and the bubbling phase then uploads from the target node to the root node) EventTarget: - on (type, callback, target): register event listener - off (type, callback, target): unregister the listener - emit (type, arg1, arg2, arg3, arg4, arg5): dispatch simple events - dispatchEvent (event): compatible API, dispatching a simple event object You can see that only Node's on/ off supports event capture and event bubbling on the parent chain. By default, only system events support such a dispatch mode. Users can use node.dispatchEvent on the node tree. The same process distributes events. This is consistent with 1.x. However, the use of emit dispatch on Node and all event dispatch on EventTarget are simple event dispatch methods. The dispatch event is different from 1.x in the event callback parameters: // v1.x eventTarget.on(type, function (event) { // Get the argument passed when emit via event.detail }); eventTarget.emit(type, message); // message will be saved on the detail property of the event parameter of the callback function // v2.0 eventTarget.on(type, function (message, target) { // Get the event argument passed when emit directly through the callback parameter }); eventTarget.emit(type, message, eventTarget); // emits up to five extra arguments, which are passed flat to the callback function It is also worth mentioning that the event monitoring mechanism of the Hot Update Manager has also been upgraded. In the old version, AssetsManager needs to listen for callbacks through cc.eventManager. In 2.0, we provide an easier way: / / Set the event callback assetsManager.setEventCallback(this.updateCallback.bind(this)); // cancel event callback assetsManager.setEventCallback(null); 3.5 Adaptation mode upgrade Cocos Creator supports a variety of adaptation modes, which developers can manage through the settings in the Canvas component. One of the adaptation modes has some adjustments in 2.0, which is to check the Fit Width and Fit Height modes. In this adaptation mode, the developer's design resolution ratio will be faithfully preserved, and the scene will be zoomed until all content is visible. At this time, the aspect ratio of the scene and the aspect ratio of the device screen are generally different. Leave a black border on the left or right or up and down. In 1.x, we set the size of the DOM Canvas directly to the size of the scene, so content beyond the scene range will be clipped, and the background is the web page. However, this method has encountered problems in WeChat games. WeChat will force the size of the main Canvas to be stretched to the full screen range, resulting in 1.x using this adaptation mode often causes serious distortion in small games. 
2.0 changed the implementation of the adaptation strategy, keeping the DOM Canvas full screen, and setting the GL Viewport to center the scene content and be in the correct position. The change brought about by this is that the proportions in the WeChat game are completely correct, but the content outside the scene is still visible. 3.6 RenderTexture Screenshot In 1.x, developers generally use cc.RenderTexture to complete the screenshot function, but this is a feature in the old version of the rendering tree. After we remove the rendering tree, the screenshot function is used in a completely different way. In simple terms, cc.RenderTexture in 2.0 becomes a resource type that inherits from the cc.Texture resource. The developer completes the screenshot by rendering a camera content to the cc.RenderTexture resource. For details, please refer to Camera Document Screenshots. 3.7 TiledMap function simplification Tile maps have been redesigned in 2.0. To improve rendering performance, we have simplified the capabilities of TiledLayer. Here are the TiledLayer features that have been modified or removed: getTiles setTiles - getTileAt: getTiledTileAt removeTileAt - setTileGID: setTileGIDAt setMapTileSize setLayerSize setLayerOrientation setContentSize setTileOpacity releaseMap We removed the ability to get and set Tiles and set the size and orientation of the map or layer. This is because we want this information to be stable after getting it from the tmx file. Developers can tmx to adjust the map instead of these interfaces. In 1.x, getTileAt and setTileAt are implemented by instantiating a map block into a sprite. The rendering of this sprite will create a lot of special processing logic in the rendering process of the map, which will also make the tile map rendering performance suffer. Biger impact. So in 2.0, we provide the getTiledTileAt interface to allow developers to get a node that mounts the TiledTile component. Through this node, developers can modify the position, rotation, scaling, transparency, color, etc. of the Tile, and also through the TiledTile component. To control the map position and tile ID, this replaces the original independent interface such as setTileOpacity. Of course, we are not simplifying for simplification. On the one hand, this brings about an improvement in performance. On the other hand, this simple framework also lays a good foundation for the upgrade of future tile maps. We plan to support multiple tilesets and nodes. Occlusion control and other capabilities. 3.8 Physical Engine Upgrade For the physics engine, we upgraded the old box2d library to box2d.ts, mainly to improve the performance of web and mini games platforms. And ensure the stability of the physical game. However, the interface inside box2d.ts and the previous interface will have some differences, developers need to pay attention to the use of these interfaces. 3.9 Other important updates In addition to the updates to the full modules above, there are some more important updates in other aspects of the engine: - Node - Removed tag related APIs - Update the transform get API to the matrix related API, and get the object that the developer needs to pass the stored result when fetching - Retain the attribute style API and remove the getter setter API that duplicates the attribute - Due to the change of the traversal process, the rendering order of the nodes is different from before. All child nodes in 2.0 will be rendered after the parent node, including nodes with zIndex less than 0. 
- Director - Removed APIs related to views and rendering, such as getWinSize, getVisibleSize, setDepthTest, setClearColor, setProjection, etc. - Discard the EVENT_BEFORE_VISIT and EVENT_AFTER_VISIT event types - Scheduler: In addition to the component object, you need to use the Scheduler to dispatch the target object, you need to execute scheduler.enableForTarget(target) - value types - Previously, the AffineTransform calculation API under the cc namespace was moved to AffineTransform, such as cc.affineTransformConcatto cc.AffineTransform.concat - The Rect and Point related calculation APIs are changed to the object API, such as cc.pAdd(p1, p2)to p1.add(p2) - Removed the API provided by JS directly from cc.rand, cc.randomMinus1To1, etc. - debug: Added cc.debug module, temporarily including setDisplayStats, isDisplayStats methods - Some important APIs removed - All APIs under the _ccsg namespace - cc.textureCache - cc.pool - Spine: Skeleton.setAnimationListener In addition to the above upgrades, for the engine core module, we recorded all API changes in deprecated.js, in the preview or debug mode, the developer will see the relevant API update prompts. Just follow the prompts to upgrade, and then combine this document. 4. Follow-up version plan Although the update of the underlying renderer has been completed, we have not officially opened the advanced rendering capabilities to developers. In follow-up versions of 2.x, we will gradually introduce new rendering capabilities so that developers can create a 2D game with Cocos Creator. The approximate roadmap is planned as follows:
https://docs.cocos.com/creator/manual/en/release-notes/upgrade-guide-v2.0.html
2019-05-19T13:27:06
CC-MAIN-2019-22
1558232254882.18
[array(['upgrade-guide-v2.0/texture.png', 'Texture Inspector'], dtype=object) array(['upgrade-guide-v2.0/spine-border.png', "Spine's strange white edges at the bone seams"], dtype=object) array(['upgrade-guide-v2.0/render-component.png', "TiledLayer's Mixed Mode Settings"], dtype=object) array(['upgrade-guide-v2.0/wechat-open-data.png', '2.0 WeChat game open data domain publishing panel'], dtype=object) array(['upgrade-guide-v2.0/quick-compile.png', 'Custom Engine Configuration'], dtype=object) array(['upgrade-guide-v2.0/tree-v1.jpg', 'v1.x node tree'], dtype=object) array(['upgrade-guide-v2.0/tree-v2.jpg', 'v2.0 node tree'], dtype=object) array(['upgrade-guide-v2.0/show-all.png', 'v1.x Fit Width & Fit Height'], dtype=object) array(['upgrade-guide-v2.0/roadmap.png', 'Cocos Creator 2.0 Planning Roadmap'], dtype=object)]
docs.cocos.com
Using Offline Maps¶ Collect's Location widgets can be configured to display different maps. To use online maps, set the mapping engine and basemap in User Interface Settings. Users will need to be online to load questions with maps. Offline maps are useful for low-connectivity environments or to present custom geospatial data. Use them to display high-resolution imagery, annotated maps, heatmaps, and more. ODK Collect can display any map layer saved as a set of tiles in the MBTiles format. Tiles are images that represent a subset of a map. The only limitation is that tile data in Mapbox's pbf format are not supported. Offline maps quick start¶ - From Collect's settings, change the Mapping SDK to Google Maps SDK - Get or create your MBTiles file with TileMill or other software. - Transfer tiles to devices. The MBTiles file must be placed in a sub-folder on your device in the /sdcard/odk/layersfolder. - Open a question that displays a map. - Tap the layers icon and select your map Getting map tiles¶ Open Map Tiles hosts many free map tile files that can be used in ODK Collect. To create MBTiles files, use one of the compatible applications that can write tiles. Commonly used free software packages are TileMill and QGIS with the QTiles plugin. Transferring offline tiles to devices¶ To make MBTiles files available for use in ODK Collect they must be manually transferred to Android devices. The tile files need to be inside a folder in the /sdcard/odk/layers folder. The folder name will be used to identify your offline map in the user interface so choose a user-friendly name for it. For example, if your MBTiles file is my_tiles.mbtiles and it includes information about settlements in the mapped area, you may want to call your folder /sdcard/odk/layers/All settlements/. Note MBTiles files placed directly in the /sdcard/odk/layers folder will not be detected! Placing it in a subdirectory with a friendly name is required. To transfer files, you can upload them to an online service such as Google Drive, connect your device to a computer and transfer them via USB or use adb.
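An MBTiles file is simply an SQLite database, so if you want to sanity-check a layer before copying it to a device you can inspect it with a few lines of Python (a sketch; the file name is illustrative). In particular, the metadata table's format field lets you spot unsupported pbf tile data before transferring the file.

import sqlite3

conn = sqlite3.connect("my_tiles.mbtiles")
# The MBTiles spec stores descriptive key/value pairs in a "metadata" table.
metadata = dict(conn.execute("SELECT name, value FROM metadata"))
print("name:  ", metadata.get("name"))
print("format:", metadata.get("format"))   # ODK Collect cannot display 'pbf' tiles
print("bounds:", metadata.get("bounds"))
# Tiles themselves live in the "tiles" table (or view).
(tile_count,) = conn.execute("SELECT COUNT(*) FROM tiles").fetchone()
print("tiles: ", tile_count)
conn.close()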
https://docs.opendatakit.org/collect-offline-maps/
2019-05-19T12:24:23
CC-MAIN-2019-22
1558232254882.18
[]
docs.opendatakit.org
Managing Email Issues¶ Sometimes emails can become blocked in the mail system. There are a number of reasons why this may happen. It could be a full mailbox, incorrect email address, or other issues. If the member's email is on the blocked list ClubRunner will not send emails or bulletins to that member. How do I find out which emails are being blocked?¶ You can view the blocked emails by viewing each individual emails & bulletin's stats, and we also store a historic list of blocked emails under the Manage Blocked Emails Page. To review and remove emails, please follow the steps as outlined below. Step 1¶ Click on the Communication's tab, then on the blue bar below click on Manage Blocked Emails. Step 2¶ Now that you are on the blocked emails page, you can view who is blocked and the reasons why. Step 3¶ Clicking on the Unblock link will remove an email from the block list. This only removes the current block, and will not stop the member from showing back up on this list. You can repeat this step as often as required until all of your blocked emails have been dealt with. How do I stop user emails from being blocked?¶ If emails to a given address are still being blocked, the problem may exist at the member's end. They may need to add [email protected] to their email software's contacts list, white list, or approved senders list. The process required for this varies according to the email program they are using. As such, the user may need to contact their relevant software vendor for support. Alternatively if the member is using a professional email address, they may need to contact their IT or Email Provider for further assistance. Members Emails are still being Blocked.¶ If the email address is still blocked, it may be a result of overly strict email filters on the part of the member's Internet Service Provider. Alternatively the members email address may be incorrect or no longer valid. In this case, the member can make use of an alternate email account. Webmail services (such as Gmail, Live Mail or Yahoo! Mail) can provide your member with a free or low-cost email account.
http://docs.clubrunnersupport.com/bulletin/send-and-archive/managing-email-issues/
2017-05-22T23:11:49
CC-MAIN-2017-22
1495463607242.32
[array(['../../img/Bulletin/send-and-archive/communications-tab_blocked-emails.png', 'communications tab'], dtype=object) array(['../../img/Bulletin/send-and-archive/communications-tab_blocked-emails.png', 'Manage Blocked emails link'], dtype=object) array(['../../img/Bulletin/send-and-archive/email-issues_blocked-list.png', 'List of Blocked Emails'], dtype=object) ]
docs.clubrunnersupport.com
Contains the name and value of a parameter used by the Execute method. Syntax <Parameters> ... <Parameter> <Name>...</Name> <Value>...</Value> </Parameter> ... </Parameters> Element Characteristics Element Relationships Remarks Some XML for Analysis (XMLA) commands, such as the Process command, can require additional information. The Parameter element provides a mechanism for providing additional information, including chunked information, for an XMLA command.
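As an illustration of the element's shape, the following Python sketch builds a Parameters block of the kind shown in the syntax above. The parameter name and value are made up for the example, and how the block is embedded in a full Execute request depends on your XMLA client library.

import xml.etree.ElementTree as ET

def build_parameters(params):
    # Build a <Parameters> element from a {name: value} mapping.
    parameters = ET.Element("Parameters")
    for name, value in params.items():
        parameter = ET.SubElement(parameters, "Parameter")
        ET.SubElement(parameter, "Name").text = name
        ET.SubElement(parameter, "Value").text = value
    return parameters

# Example: a single hypothetical parameter passed alongside an XMLA command.
block = build_parameters({"TimeoutSeconds": "300"})
print(ET.tostring(block, encoding="unicode"))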
https://docs.microsoft.com/en-us/sql/analysis-services/xmla/xml-elements-properties/parameter-element-xmla
2017-05-23T00:16:48
CC-MAIN-2017-22
1495463607242.32
[]
docs.microsoft.com
Durability refers to the ability of a database to withstand — or recover from — unexpected events. VoltDB has several features that increase the durability of the database, including K-Safety, snapshots, command logging, and database replication K-Safety replicates partitions to provide redundancy as a protection against server failure. Note that when you enable K-Safety, you are replicating the unique partitions across the available hardware. So the hardware resources — particularly servers and memory — for any one copy are being reduced. The easiest way to size hardware for a K-Safe cluster is to size the initial instance of the database, based on projected throughput and capacity, then multiply the number of servers by the number of replicas you desire (that is, the K-Safety value plus one). When using K-Safety, configure the number of cluster nodes as a whole multiple of the number of copies of the database (that is, K+1). K-Safety has no real performance impact under normal conditions. However, the cluster configuration can affect performance when recovering from a failure. In a K-Safe cluster, when a failed server rejoins, it gets copies of all of its partitions from the other members of the cluster. The larger (in size of memory) the partitions are, the longer they can take to be restored. Since it is possible for the restore action to block database transactions, it is important to consider the trade off of a few large servers that are easier to manage against more small servers that can recover in less time. Two of the other durability features — snapshots and command logs — have only a minimal impact on memory and processing power. However, these features do require persistent storage on disk. Most VoltDB disk-based features, such as snapshots, export overflow, network partitions, and so on, can be supported on standard disk technology, such as SATA drives. They can also share space on a single disk, assuming the disk has sufficient capacity, since disk I/O is interleaved with other work. Command logging, on the other hand, is time dependent and must keep up with the transactions on the server. The chapter on command logging in Using VoltDB discusses in detail the trade offs between asynchronous and synchronous logging and the appropriate hardware to use for each. But to summarize: Use fast disks (such as battery-backed cache drives) for synchronous logging Use SATA or other commodity drives for asynchronous logging. However, it is still a good idea to use a dedicated drive for the command logs to avoid concurrency issues between the logs and other disk activity. When using command logging, whether synchronous or asynchronous, use a dedicated drive for the command logs. Other disk activity (including command log snapshots) can share a separate drive. Finally, database replication (DR) does not impact the sizing for memory or processing power of the initial, master cluster. But it does require a duplicate of the original cluster hardware to use as a replica. In other words, when using database replication, you should double the estimated number of servers — one copy for the master cluster and one for the replica. In addition, one extra server is recommended to act as the DR agent. When using database replication, double the number of servers, to account for both the master and replica clusters. Also add one server to be used as the DR agent.
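The sizing arithmetic described above can be summarized in a few lines of Python. This is only a rough sketch of the guidance in this section, not a VoltDB tool; adjust it to your own capacity planning.

import math

def ha_cluster_size(servers_for_one_copy, k_safety=0, use_dr=False):
    # One full set of servers per copy of the unique partitions (K+1 copies).
    copies = k_safety + 1
    cluster_nodes = servers_for_one_copy * copies
    total = cluster_nodes
    if use_dr:
        # Database replication needs an identical replica cluster plus a DR agent host.
        total = cluster_nodes * 2 + 1
    return cluster_nodes, total

def round_to_whole_multiple(desired_nodes, k_safety):
    # A K-safe cluster should have a node count that is a whole multiple of K+1.
    copies = k_safety + 1
    return math.ceil(desired_nodes / copies) * copies

print(ha_cluster_size(3, k_safety=1))               # (6, 6)
print(ha_cluster_size(3, k_safety=1, use_dr=True))  # (6, 13): two clusters plus the agent
print(round_to_whole_multiple(7, k_safety=1))       # 8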
https://docs.voltdb.com/PlanningGuide/HwDurability.php
2017-05-22T23:34:53
CC-MAIN-2017-22
1495463607242.32
[]
docs.voltdb.com
Installation and setup using Puppet¶ The whole installation and setup process of Kallithea can be simplified by using Puppet and the rauch/kallithea Puppet module. This is especially useful for getting started quickly, without having to deal with all the Python specialities. Note The following instructions assume you are not familiar with Puppet at all. If this is not the case, you should probably skip directly to the Kallithea Puppet module documentation. Installing Puppet¶ This installation variant requires a Unix/Linux type server with Puppet 3.0+ installed. Many major distributions have Puppet in their standard repositories. Thus, you will probably be ready to go by running, e.g. apt-get install puppet or yum install puppet, depending on your distro’s favoured package manager. Afterwards, check the Puppet version by running puppet --version and ensure you have at least 3.0. If your distribution does not provide Puppet packages or you need a newer version, please see the Puppet Reference Manual for instructions on how to install Puppet on your target platform. Installing the Puppet module¶ To install the latest version of the Kallithea Puppet module from the Puppet Forge, run the following as root: puppet module install rauch/kallithea This will install both the Kallithea Puppet module and its dependency modules. Warning Be aware that Puppet can do all kinds of things to your systems. Third-party modules (like the kallithea module) may run arbitrary commands on your system (most of the time as the root user), so do not apply them on production machines if you don’t know what you are doing. Instead, use a test system (e.g. a virtual machine) for evaluation purposes. Applying the module¶ To trigger the actual installation process, we have to apply the kallithea Puppet class, which is provided by the module we have just installed, to our system. For this, create a file named e.g. kallithea.pp, a Puppet manifest, with the following content: class { 'kallithea': seed_db => true, manage_git => true, } To apply the manifest, simply run the following (preferably as root): puppet apply kallithea.pp This will basically run through the usual Kallithea Installation on Unix/Linux and Setup steps, as documented. Consult the module documentation for details on what the module affects. You can also do a dry run by adding the --noop option to the command. Using parameters for customizing the setup process¶ The kallithea Puppet class provides a number of parameters for customizing the setup process. You have seen the usage of the seed_db parameter in the example above, but there are more. For example, you can specify the installation directory, the name of the user under which Kallithea gets installed, the initial admin password, etc. Notably, you can provide arbitrary modifications to Kallitheas configuration file by means of the config_hash parameter. Parameters, which have not been set explicitly, will be set to default values, which are defined inside the kallithea Puppet module. For example, if you just stick to the defaults as in the example above, you will end up with a Kallithea instance, which - is installed in /srv/kallithea, owned by the user kallithea - uses the Kallithea default configuration - uses the admin user adminwith password adminpw - is started automatically and enabled on boot As of Kallithea 0.3.0, this in particular means that Kallithea will use an SQLite database and listen on. See also the full parameters list for more information. 
Making your Kallithea instance publicly available¶ If you followed the instructions above, the Kallithea instance will be listening on and therefore not publicly available. There are several ways to change this. The direct way¶ The simplest setup is to instruct Kallithea to listen on another IP address and/or port by using the config_hash parameter of the Kallithea Puppet class. For example, assume we want to listen on all interfaces on port 80: class { 'kallithea': seed_db => true, config_hash => { "server:main" => { 'host' => '0.0.0.0', 'port' => '80', } } } Using Apache as reverse proxy¶ In a more advanced setup, you might instead want use a full-blown web server like Apache HTTP Server as the public web server, configured such that requests are internally forwarded to the local Kallithea instance (a so called reverse proxy setup). This can be easily done with Puppet as well: First, install the puppetlabs/apache Puppet module as above by running the following as root: puppet module install puppetlabs/apache Then, append the following to your manifest: include apache apache::vhost { 'kallithea.example.com': docroot => '/var/www/html', manage_docroot => false, port => 80, proxy_preserve_host => true, proxy_pass => [ { path => '/', url => '', }, ], } Applying the resulting manifest will install the Apache web server and setup a virtual host acting as a reverse proxy for your local Kallithea instance.
http://kallithea.readthedocs.io/en/stable/installation_puppet.html
2017-05-22T23:12:41
CC-MAIN-2017-22
1495463607242.32
[]
kallithea.readthedocs.io
Translation¶

Overview¶

In order to make a Django project translatable, you have to add a minimal number of hooks to your Python code and templates. These hooks are called translation strings. They tell Django: "This text should be translated into the end user's language, if a translation for this text is available in that language." It's your responsibility to mark translatable strings; the system can only translate strings it knows about. Django then provides utilities to extract the translation strings into a message file. This file is a convenient way for translators to provide the equivalent of the translation strings in the target language. Once the translators have filled in the message file, it must be compiled. This process relies on the GNU gettext toolset. Once this is done, Django takes care of translating Web apps on the fly in each available language, according to users' language preferences. If you don't use internationalization, you can set USE_I18N = False in your settings file; Django will then make some optimizations so as not to load the internationalization machinery.

In some languages a translation needs its placeholders in a different order, for example with the month and the day placeholders swapped. For this reason, you should use named-string interpolation (e.g., %(day)s) instead of positional interpolation (e.g., %s or %d) whenever you have more than a single parameter. If you used positional interpolation, translations wouldn't be able to reorder placeholder text.

A comment prefixed with the Translators keyword, placed on the line preceding a translatable string, will then appear in the resulting .po file associated with the translatable construct located below it and should also be displayed by most translation tools.

Note: Just for completeness, this is the corresponding fragment of the resulting .po file:

#. Translators: This message appears on the home page only
# path/to/python/file.py:123
msgid "Welcome to my site."
msgstr ""

This also works in templates. See Comments for translators in templates for more details.

Pluralization¶

Use the function django.utils.translation.ungettext() to specify pluralized messages. ungettext takes three arguments: the singular translation string, the plural translation string and the number of objects. This function is useful when you need your Django application to be localizable to languages where the number and complexity of plural forms is greater than the two forms used in English ('object' for the singular and 'objects' for all the cases where count is different from one, irrespective of its value). For example:

from django.utils.translation import ungettext
from django.http import HttpResponse

def hello_world(request, count):
    page = ungettext(
        'there is %(count)d object',
        'there are %(count)d objects', count) % {
        'count': count,
    }
    return HttpResponse(page)

In this example the number of objects is passed to the translation languages as the count variable. Note that pluralization is complicated and works differently in each language. Comparing count to 1 isn't always the correct rule, so avoid implementing your own singular-or-plural branching; such code may look sophisticated, but it will produce incorrect results for some languages. Also make sure every placeholder appears in both the singular and the plural string; otherwise you may see an error like the following when running django-admin compilemessages: a format specification for argument 'name', as in 'msgstr[0]', doesn't exist in 'msgid'.

Note: Plural form and po files. Django does not support custom plural equations in po files. As all translation catalogs are merged, only the plural form for the main Django po file (in django/conf/locale/<lang_code>/LC_MESSAGES/django.po) is considered. Plural forms in all other po files are ignored. Therefore, you should not use different plural equations in your project or application po files.

Contextual markers¶

Sometimes words have several meanings, such as "May" in English, which refers to a month name and to a verb.
To enable translators to translate these words correctly in different contexts, you can use the django.utils.translation.pgettext() function, or the django.utils.translation.npgettext() function if the string needs pluralization. Both take a context string as the first variable. In the resulting .po file, the string will then appear as often as there are different contextual markers for the same string (the context will appear on the msgctxt line), allowing the translator to give a different translation for each of them. For example:

from django.utils.translation import pgettext

month = pgettext("month name", "May")

or:

from django.db import models
from django.utils.translation import pgettext_lazy

class MyThing(models.Model):
    name = models.CharField(help_text=pgettext_lazy(
        'help text for MyThing model', 'This is the help text'))

will appear in the .po file as:

msgctxt "month name"
msgid "May"
msgstr ""

Contextual markers are also supported by the trans and blocktrans template tags.

Lazy translation¶

Use the lazy versions of translation functions in django.utils.translation (easily recognizable by the lazy suffix in their names) to translate strings lazily – when the value is accessed rather than when they're called. These functions store a lazy reference to the string – not the actual translation. The translation itself will be done when the string is used in a string context, such as in template rendering. This is essential when calls to these functions are located in code paths that are executed at module load time. This is something that can easily happen when defining models, forms and model forms, because Django implements these such that their fields are actually class-level attributes. For that reason, make sure to use lazy translations in the following cases:

Model fields and relationships verbose_name and help_text option values¶

For example, to translate the help text of the name field in the following model, do the following:

from django.db import models
from django.utils.translation import ugettext_lazy as _

class MyThing(models.Model):
    name = models.CharField(help_text=_('This is the help text'))

You can mark names of ForeignKey, ManyToManyField or OneToOneField relationships as translatable by using their verbose_name options:

class MyThing(models.Model):
    kind = models.ForeignKey(
        ThingKind,
        on_delete=models.CASCADE,
        related_name='kinds',
        verbose_name=_('kind'),
    )

Model verbose names values¶

It is recommended to always provide explicit verbose_name and verbose_name_plural options rather than relying on the verbose name Django derives by looking at the model's class name:

from django.db import models
from django.utils.translation import ugettext_lazy as _

class MyThing(models.Model):
    name = models.CharField(_('name'), help_text=_('This is the help text'))

    class Meta:
        verbose_name = _('my thing')
        verbose_name_plural = _('my things')

Model methods short_description attribute values¶

For model methods, you can provide translations to Django and the admin site with the short_description attribute:

from django.db import models
from django.utils.translation import ugettext_lazy as _

class MyThing(models.Model):
    kind = models.ForeignKey(
        ThingKind,
        on_delete=models.CASCADE,
        related_name='kinds',
        verbose_name=_('kind'),
    )

    def is_mouse(self):
        return self.kind.type == MOUSE_TYPE
    is_mouse.short_description = _('Is it a mouse?')

Working with lazy translation objects¶

The result of a ugettext_lazy() call can be used wherever you would use a unicode string, but it cannot be used where a bytestring is expected, since a lazy translation object does not know how to convert itself to a bytestring. For example:

"Hello %s" % ugettext_lazy("people")   # fine: a unicode proxy inside a unicode string

# This will not work, since you cannot insert a unicode object
# into a bytestring (nor can you insert our unicode proxy there)
b"Hello %s" % ugettext_lazy("people")

If you don't like the long ugettext_lazy name, you can just alias it as _ (underscore), like so:

from django.db import models
from django.utils.translation import ugettext_lazy as _

class MyThing(models.Model):
    name = models.CharField(help_text=_('This is the help text'))

Lazy translations and plural¶

When using lazy translation for a plural string ([u]n[p]gettext_lazy), you generally don't know the number argument at the time of the string definition. Therefore, you are authorized to pass a key name instead of an integer as the number argument. Then number will be looked up in the dictionary under that key during string interpolation. Here's an example:

from django import forms
from django.utils.translation import ungettext_lazy

class MyForm(forms.Form):
    error_message = ungettext_lazy("You only provided %(num)d argument",
                                   "You only provided %(num)d arguments", 'num')

    def clean(self):
        # ...
        if error:
            raise forms.ValidationError(self.error_message % {'num': number})

If the string contains exactly one unnamed placeholder, you can interpolate directly with the number argument:

class MyForm(forms.Form):
    error_message = ungettext_lazy(
        "You provided %d argument",
        "You provided %d arguments",
    )

    def clean(self):
        # ...
        if error:
            raise forms.ValidationError(self.error_message % number)

Joining strings: string_concat()¶

Standard Python string joining does not work on lazy translation objects; use django.utils.translation.string_concat() instead, which creates a lazy object that concatenates its contents and converts them to strings only when the result is used in a string:

from django.utils.translation import string_concat
from django.utils.translation import ugettext_lazy
...
name = ugettext_lazy('John Lennon')
instrument = ugettext_lazy('guitar')
result = string_concat(name, ': ', instrument)

In this case, the lazy translations in result will only be converted to strings when result itself is used in a string (usually at template rendering time).

Other uses of lazy in delayed translations¶

For any other case where you would like to delay the translation, but have to pass the translatable string as argument to another function, you can wrap this function inside a lazy call yourself (see django.utils.functional.lazy()).

Localized names of languages¶

The get_language_info() function provides detailed information about languages:

>>> from django.utils.translation import activate, get_language_info
>>> activate('fr')
>>> li = get_language_info('de')
>>> print(li['name'], li['name_local'], li['name_translated'], li['bidi'])
German Deutsch Allemand False

The name, name_local, and name_translated attributes of the dictionary contain the name of the language in English, in the language itself, and in your current active language respectively. The bidi attribute is True only for bi-directional languages. The source of the language information is the django.conf.locale module. Similar access to this information is available for template code. See below. The 'name_translated' attribute was added.

Internationalization: in template code¶

Translations in Django templates use two template tags and a slightly different syntax than in Python code. To give your template access to these tags, put {% load i18n %} toward the top of your template. As with all template tags, this tag needs to be loaded in all templates which use translations, even those templates that extend from other templates which have already loaded the i18n tag.

trans template tag¶

The {% trans %} template tag translates either a constant string (enclosed in single or double quotes) or variable content:

<title>{% trans "This is the title." %}</title>
<title>{% trans myvar %}</title>

Internally, inline translations use an ugettext() call. In case a template var (myvar above) is passed to the tag, the tag will first resolve such variable to a string at run-time and then look up that string in the message catalogs.
It’s not possible to mix a template variable inside a string within {% a string you can use in multiple places in a template or so you can use the output as an argument %} Other block tags (for example {% for %} or {% if %}) are not allowed inside a blocktrans tag. {% plural %}tag within the {% %} If you’d like to retrieve a translated string without displaying it, you can use the following syntax: {% blocktrans asvar the_title %}The title is {{ title }}.{% endblocktrans %} <title>{{ the_title }}</title> <meta name="description" content="{{ the_title }}"> In practice you’ll use this to get a string you can use in multiple places in a template or so you can use the output as an argument for other template tags or filters. The asvar syntax was added. {% blocktrans %} also supports contextual markers using the context keyword: {% blocktrans with name=user.username context "greeting" %}Hi {{ name }}{% endblocktrans %}. For instance, the following {% blocktrans %} tag: {% blocktrans trimmed %} First sentence. Second paragraph. {% endblocktrans %} will result in the entry "First sentence. Second paragraph." in the PO file, compared to "\n First sentence.\n Second sentence.\n", if the trimmed option had not been specified. %} Note Just for completeness, these are the corresponding fragments of the resulting .po file: #. Translators: View verb # path/to/template/file.html:10 msgid "View" msgstr "" #. Translators: Short intro blurb # path/to/template/file.html:13 msgid "" "A multiline translatable" "literal." msgstr "" # ... #. Translators: Label of a button that triggers search # path/to/template/file.html:100 msgid "Go" msgstr "" #. Translators: This is a text of the base template # path/to/template/file.html:103 msgid "Ambiguous translatable block of text" msgstr "" Switching language in templates¶ If you want to select a language within a template, you can use the language template tag: {% load i18n %} {% get_current_language as LANGUAGE_CODE %} <!-- Current language: {{ LANGUAGE_CODE }} --> <p>{% trans "Welcome to our page" %}</p> {% language 'en' %} {% get_current_language as LANGUAGE_CODE %} <!-- Current language: {{ LANGUAGE_CODE }} --> <p>{% trans "Welcome to our page" %}</p> {% endlanguage %} While the first occurrence of “Welcome to our page” uses the current language, the second will always be in English. Internationalization: in JavaScript code¶ Adding translations to JavaScript poses some problems: - JavaScript code doesn’t have access to a gettextimplementation. - JavaScript code doesn’t have access to .poor .mofiles;. You hook it up like this: from django.views.i18n import javascript_catalog js_info_dict = { 'packages': ('your.app.package',), } urlpatterns = [ url(r'^jsi18n/$', javascript_catalog, js_info_dict, name='javascript-catalog'), ].. You can make the view dynamic by putting the packages into the URL pattern: urlpatterns = [ url(r'^jsi18n/(?P<packages>\S+?)/$', javascript_catalog, name=. You can also split the catalogs in multiple URLs and load them as you need in your sites: js_info_dict_app = { 'packages': ('your.app.package',), } js_info_dict_other_app = { 'packages': ('your.other.app.package',), } urlpatterns = [ url(r'^jsi18n/app/$', javascript_catalog, js_info_dict_app), url(r'^jsi18n/other_app/$', javascript_catalog, js_info_dict_other_app), ] If you use more than one javascript_catalog on a site and some of them define the same strings, the strings in the catalog that was loaded last take precedence. 
Before Django 1.9, the catalogs completely overwrote each other and you could only use one at a time. 'javascript-catalog' %}"></script> This uses reverse URL lookup to find the URL of the JavaScript catalog view. When the catalog is loaded, your JavaScript code can use the following methods: gettext ngettext interpolate get_format gettext_noop pgettext npgettext pluralidx gettext¶ The gettext function behaves similarly to the standard gettext interface within your Python code: document.write(gettext('this is to be translated')); ngettext¶ The ngettext function provides an interface to pluralize words and phrases: var object_count = 1 // or 0, or 2, or 3, ... s = ngettext('literal for the singular case', 'literal for the plural case', object_count); interpolate¶ The interpolate function supports dynamically populating a format string. The interpolation syntax is borrowed from Python, so the interpolate function supports both positional and named interpolation: Positional interpolation: objcontains a JavaScript Array object whose elements values are then sequentially interpolated in their corresponding fmtplaceparameter as true. objcontains). get_format¶ The get_format function has access to the configured i18n formatting settings and can retrieve the format string for a given setting name: document.write(get_format('DATE_FORMAT')); // 'N j, Y' It has access to the following settings: DATE_FORMAT DATE_INPUT_FORMATS DATETIME_FORMAT DATETIME_INPUT_FORMATS DECIMAL_SEPARATOR FIRST_DAY_OF_WEEK MONTH_DAY_FORMAT NUMBER_GROUPING SHORT_DATE_FORMAT SHORT_DATETIME_FORMAT THOUSAND_SEPARATOR TIME_FORMAT TIME_INPUT_FORMATS YEAR_MONTH_FORMAT This is useful for maintaining formatting consistency with the Python-rendered values. gettext_noop¶ This emulates the gettext function but does nothing, returning whatever is passed to it: document.write(gettext_noop('this will not be translated')); This is useful for stubbing out portions of the code that will need translation in the future. pgettext¶ The pgettext function behaves like the Python variant ( pgettext()), providing a contextually translated word: document.write(pgettext('month name', 'May')); npgettext¶ The npgettext function also behaves like the Python variant ( npgettext()), providing a pluralized contextually translated word: document.write(npgettext('group', 'party', 1)); // party document.write(npgettext('group', 'party', 2)); // parties pluralidx¶ The pluralidx function works in a similar way to the pluralize template filter, determining if a given count should use a plural form of a word or not: document.write(pluralidx(0)); // true document.write(pluralidx(1)); // false document.write(pluralidx(2)); // true In the simplest case, if no custom pluralization is needed, this returns false for the integer 1 and true for all other numbers. However, pluralization is not this simple in all languages. If the language does not support pluralization, an empty value is provided. Additionally, if there are complex rules around pluralization, the catalog view will render a conditional expression. This will evaluate to either a true (should pluralize) or false (should not pluralize) value. The json_catalog view¶ In order to use another client-side library to handle translations, you may want to take advantage of the json_catalog() view. It’s similar to javascript_catalog() but returns a JSON response. 
The JSON object contains i18n formatting settings (those available for get_format), a plural rule (as a plural part of a GNU gettext Plural-Forms expression), and translation strings. The translation strings are taken from applications or Django’s own translations, according to what is specified either via urlpatterns arguments or as request parameters. Paths listed in LOCALE_PATHS are also included. The view is hooked up to your application and configured in the same fashion as javascript_catalog() (namely, the domain and packages arguments behave identically): from django.views.i18n import json_catalog js_info_dict = { 'packages': ('your.app.package',), } urlpatterns = [ url(r'^jsoni18n/$', json_catalog, js_info_dict), ] The response format is as follows: { "catalog": { # Translations catalog }, "formats": { # Language formats for date, time, etc. }, "plural": "..." # Expression for plural forms, or null. } Note on performance¶ The javascript_catalog() view generates the catalog from .mo files on every request. Since its output is constant — at least for a given version of a site — it’s a good candidate for caching. Server-side caching will reduce CPU load. It’s easily implemented with the cache_page() decorator. To trigger cache invalidation when your translations change, provide a version-dependent key prefix, as shown in the example below, or map the view at a version-dependent URL: from django.views.decorators.cache import cache_page from django.views.i18n import javascript), you’re already covered. Otherwise, you can apply conditional decorators. In the following example, the cache is invalidated whenever you restart your application server: from django.utils import timezone from django.views.decorators.http import last_modified from django.views.i18n import javascript_catalog last_modified_date = timezone.now() @last_modified(lambda req, **kw: last_modified_date) def cached_javascript_catalog(request, domain='djangojs', packages=None): return javascript_catalog(request, domain, packages) You can even pre-generate the JavaScript catalog as part of your deployment procedure and serve it as a static file. This radical technique is implemented in django-statici18n. Internationalization: in URL patterns¶ Django provides two mechanisms to internationalize URL patterns: - Adding the language prefix to the root of the URL patterns to make it possible for LocaleMiddlewareto Deprecated since version 1.8: The prefix argument to i18n_patterns() has been deprecated and will not be supported in Django 1.10. Simply pass a list of django.conf.urls.url() instances instead. This function can be used in your root URLconf and Django will automatically prepend the current active language code to all URL patterns defined within i18n_patterns(). Example URL patterns: from django.conf.urls import include, url from django.conf.urls.i18n import i18n_patterns from about import views as about_views from news import views as news_views from sitemap include, url from django.conf.urls.i18n import i18n_patterns from django.utils.translation import ugettext_lazy as _ from about import views as about_views from news import views as news_views from sitemaps you’ve created the translations, the reverse() function will return the URL in the active language. 
Example: >>> from django.core.urlresolvers import reverse >>> from django.utils.translation import activate >>> activate('en') >>> reverse('news:category', kwargs={'slug': 'recent'}) '/en/news/category/recent/' >>> activate('nl') >>> reverse('news:category', kwargs={'slug': 'recent'}) '/nl/nieuws/categorie/recent/' Warning In most cases, it’s best to use translated URLs only within a language-code-prefixed block of patterns (using i18n_patterns()), to avoid the possibility that a carelessly translated URL causes a collision with a non-translated URL pattern. Reversing in templates¶ If localized URLs get reversed in templates they always use the current language. To link to a URL in another language use the language template tag. It enables the given language in the enclosed template section: {% load i18n %} {% get_available_languages as languages %} {% trans "View this category in:" %} {% for lang_code, lang_name in languages %} {% language lang_code %} <a href="{% url 'category' slug=category.slug %}">{{ lang_name }}</a> {% endlanguage %} {% endfor %} The language tag expects the language code as the only argument. Localization: how to create language files¶ Once the string literals of an application have been tagged for later translation, the translation themselves need to be written (or obtained). Here’s how that works. makemessages, that automates the creation and upkeep of these files. Gettext utilities The makemessages command (and compilemessages discussed later) use commands from the GNU gettext toolset: xgettext, msgfmt, msgmerge and msguniq. The minimum version of the gettext utilities supported is 0.15. To create or update a message file, run this command: django-admin makemessages -l de ...where de is the locale name for the message file you want to create. For example, pt_BR for Brazilian Portuguese, de_AT for Austrian German or id for Indonesian. The script should be run from one of two places: - The root directory of your Django project (the one that contains manage.py). - The root directory of one of your Django apps. The script runs over your project source tree or your application source tree and pulls out all strings marked for translation (see How Django discovers translations and be sure LOCALE_PATHS is configured correctly). It creates (or updates) a message file in the directory locale/LANG/LC_MESSAGES. In the de example, the file will be locale/de/LC_MESSAGES/django.po. When you run makemessages from the root directory of your project, the extracted strings will be automatically distributed to the proper message files. That is, a string extracted from a file of an app containing a locale directory will go in a message file under that directory. A string extracted from a file of an app without any locale directory will either go in a message file under the directory listed first in LOCALE_PATHS or will generate an error if LOCALE_PATHS is empty. By default django-admin makemessages examines every file that has the .html or .txt file extension. In case you want to override that default, use the --extension or -e option to specify the file extensions to examine: django-admin makemessages -l de -e txt Separate multiple extensions with commas and/or use -e or --extension multiple times: django-admin makemessages -l de -e html,txt -e xml Warning When creating message files from JavaScript source code you need to use the special ‘djangojs’ domain, not -e js. Using Jinja2 templates? makemessages doesn’t understand the syntax of Jinja2 templates. 
To extract strings from a project containing Jinja2 templates, use Message Extracting from Babel instead. Here’s an example babel.cfg configuration file: # Extraction from Python source files [python: **.py] # Extraction from Jinja2 templates [jinja2: **.jinja] extensions = jinja2.ext.with_ Make sure you list all extensions you’re using! Otherwise Babel won’t recognize the tags defined by these extensions and will ignore Jinja2 templates containing them entirely. Babel provides similar features to makemessages, can replace it in general, and doesn’t depend on gettext. For more information, read its documentation about working with message catalogs. No gettext? If you don’t have the gettext utilities installed, makemessages will have created a .po file containing the following snippet – a message: #: path/to/python/module.py:23 msgid "Welcome to my site." msgstr "" A quick explanation: msgidis the translation string, which appears in the source. Don’t change it. msgstrisline, Due to the way the gettext tools work internally and because we want to allow non-ASCII source strings in Django’s core and your applications, you must use UTF-8 as the encoding for your PO files (the default when PO files are created). This means that everybody will be using the same encoding, which is important when Django processes the PO files. To reexamine all source code and templates for new translation strings and update all message files for all languages, run this: django-admin makemessages -a Compiling message files¶ After you create your message file – and each time you make changes to it – you’ll need to compile it into a more efficient form, for use by gettext. Do this with the django-admin compilemessages utility. This tool runs over all available .po files and creates .mo files, which are binary files optimized for use by gettext. In the same directory from which you ran django-admin makemessages, run django-admin compilemessages like this: django-admin compilemessages That’s it. Your translations are ready for use. compilemessages now matches the operation of makemessages, scanning the project tree for .po files to compile. Working on Windows? If you’re using Windows and need to install the GNU gettext utilities so django-admin compilemessages works see gettext on Windows for more information. .po files: Encoding and BOM usage. Django only supports .po files encoded in UTF-8 and without any BOM (Byte Order Mark) so if your text editor adds such marks to the beginning of files by default then you will need to reconfigure it. Creating message files from JavaScript source code¶ You create and update the message files the same way as the other Django message files – with the django-admin makemessages tool. The only difference is you need to explicitly specify what in gettext parlance is known as a domain in this case the djangojs domain, by providing a -d djangojs parameter, like this: django-admin makemessages -d djangojs -l de This would create or update the message file for JavaScript for German. After updating message files, just run django-admin compilemessages the same way as you do with normal Django message files., download a precompiled binary installer. You may also use gettext binaries you have obtained elsewhere, so long as the xgettext --version command works properly. 
Do not attempt to use Django translation utilities with a gettext package if the command xgettext --version entered at a Windows command prompt causes a popup window saying “xgettext.exe has generated errors and will be closed by Windows”. Customizing the makemessages command¶ If you want to pass additional parameters to xgettext, you need to create a custom makemessages command and override its xgettext_options attribute: from django.core.management.commands import makemessages class Command(makemessages.Command): xgettext_options = makemessages.Command.xgettext_options + ['--keyword=mytrans'] If you need more flexibility, you could also add a new argument to your custom makemessages command: from django.core.management.commands import makemessages class Command(makemessages.Command): def add_arguments(self, parser): super(Command, self).add_arguments(parser) parser.add_argument( '--extra-keyword', dest='xgettext_keywords', action='append', ) def handle(self, *args, **options): xgettext_keywords = options.pop('xgettext_keywords') if xgettext_keywords: self.xgettext_options = ( makemessages.Command.xgettext_options[:] + ['--keyword=%s' % kwd for kwd in xgettext_keywords] ) super(Command, self).handle(*args, **options) Miscellaneous¶ The set_language redirect view¶ As a convenience, Django comes with a view, django.views.i18n.set_language(), that sets a user’s language preference and redirects to a given URL or, by default, back to the previous page. Activate this view by adding the following line to your URLconf: url(r'^i18n/', include('django.conf.urls.i18n')), (Note that this example makes the view available at /i18n/setlang/.) Warning Make sure that you don’t include the above URL within i18n_patterns() - it needs to be language-independent itself to work correctly.parameter in the POSTdata. - If that doesn’t exist, or is empty, Django tries the URL in the Referrerheader. - If that’s empty – say, if a user’s browser suppresses that header – then the user will be redirected to /(the site root) as a fallback. Here’s example HTML template code: {% load i18n %} <form action="{% url 'set_language' %}" method="post">{% csrf_token %} <input name="next" type="hidden" value="{{ redirect_to }}" /> <select name="language"> {% get_current_language as LANGUAGE_CODE %} {% get_available_languages as LANGUAGES %} {% get_language_info_list for LANGUAGES as languages %} {% for language in languages %} <option value="{{ language.code }}"{% if language.code == LANGUAGE_CODE %} </form> In this example, Django looks up the URL of the page to which the user will be redirected in the redirect_to context variable. Explicitly setting the active language¶ You may want to set the active language for the current session explicitly. Perhaps a user’s language preference is retrieved from another system, for example. You’ve already been introduced to django.utils.translation.activate(). That applies to the current thread only. To persist the language for the entire session, also modify LANGUAGE_SESSION_KEY in the session: from django.utils import translation user_language = 'fr' translation.activate(user_language) request.session[translation.LANGUAGE_SESSION_KEY] = user_language You would typically want to use both: django.utils.translation.activate() will change the language for this thread, and modifying the session makes this preference persist in future requests. If you are not using sessions, the language will persist in a cookie, whose name is configured in LANGUAGE_COOKIE_NAME. 
For example: from django.utils import translation from django import http from django.conf import settings user_language = 'fr' translation.activate(user_language) response = http.HttpResponse(...) response.set_cookie(settings.LANGUAGE_COOKIE_NAME, user_language) Using translations outside views and templates¶ While Django provides a rich set of i18n tools for use in views and templates, it does not restrict the usage to Django-specific code. The Django translation mechanisms can be used to translate arbitrary texts to any language that is supported by Django (as long as an appropriate translation catalog exists, of course). You can load a translation catalog, activate it and translate text to language of your choice, but remember to switch back to original language, as activating a translation catalog is done on per-thread basis and such change will affect code running in the same thread. For example: from django.utils import translation def welcome_translated(language): cur_language = translation.get_language() try: translation.activate(language) text = translation.ugettext('welcome') finally: translation.activate(cur_language) return text Calling this function with the value ‘de’ will give you "Willkommen", regardless of LANGUAGE_CODE and language set by middleware. Functions of particular interest are django.utils.translation.get_language() which returns the language used in the current thread, django.utils.translation.activate() which activates a translation catalog for the current thread, and django.utils.translation.check_for_language() which checks if the given language is supported by Django. To help write more concise code, there is also a context manager django.utils.translation.override() that stores the current language on enter and restores it on exit. With it, the above example becomes: from django.utils import translation def welcome_translated(language): with translation.override(language): return translation.ugettext('welcome') Implementation notes¶ Specialties of Django translation¶ Django’s translation machinery uses the standard gettext module that comes with Python. If you know gettext, you might note these specialties in the way Django does translation: - The string domain is djangoor djangojs. This string domain is used to differentiate between different programs that store their data in a common message-file library (usually /usr/share/locale/). The djangodomain is used for Python and template translation strings and is loaded into the global translation catalogs. The djangojsdomain is only used for JavaScript translation catalogs to make sure that those are as small as possible. - Django doesn’t use xgettextalone. It uses Python wrappers around xgettextand msgfmt. This is mostly for convenience. better matching translation is found through one of the methods employed by the locale middleware (see below). If all you want is to run Django with your native language all you need to do is set LANGUAGE_CODE and make sure the corresponding message files and their compiled versions ( .mo) exist. If you want to let each individual user specify which language they prefer, then you also need to use themakes use of session data. And it should come before CommonMiddlewarebecause CommonMiddlewareneeds an activated language in order to resolve the requested URL. - If you use CacheMiddleware, put LocaleMiddlewareafter the language prefix in the requested URL. This is only performed when you are using the i18n_patternsfunction in your root URLconf. 
See Internationalization: in URL patterns for more information about the language prefix and how to internationalize URL patterns. Failing that, it looks for the LANGUAGE_SESSION_KEYkey in the current user’s session. Failing that, it looks for a cookie. The name of the cookie used is set by the LANGUAGE_COOKIE_NAMEsetting. (The default name is django_language.) Failing that, it looks at the Accept-LanguageHTTP header. This header is sent by your browser and tells the server which language(s) you prefer, in order by priority. Django tries each language in the header until it finds one with available translations. Failing that, it uses the global LANGUAGE_CODEsetting.available, Django uses de. Only languages listed in the LANGUAGESsetting can be selected. If you want to restrict the language selection to a subset of provided languages (because your application doesn’t provide all those languages), set LANGUAGESto a list of languages. For example: LANGUAGES = [ ('de', _('German')), ('en', _('English')), ] This example restricts languages that are available for automatic selection to German and English (and any sublanguage, like de-ch or en-us). If you define a custom LANGUAGESsetting, as explained in the previous bullet, example: from django.http import HttpResponse At runtime, Django builds an in-memory unified catalog of literals-translations. To achieve this it looks for translations by following this algorithm regarding the order in which it examines the different file paths to load the compiled message files ( .mo) and the precedence of multiple translations for the same literal: - The directories listed in LOCALE_PATHShave the highest precedence, with the ones appearing first having higher precedence than the ones appearing later. - Then, it looks for and uses if it exists a localedirectory in each of the installed apps listed in INSTALLED_APPS. The ones appearing first have higher precedence than the ones appearing later. - Finally, the Django-provided base translation in django/conf/localein your settings file are searched for <language>/LC_MESSAGES/django.(po|mo) $APPPATH/locale/<language>/LC_MESSAGES/django.(po|mo) $PYTHONPATH/django/conf/locale/<language>/LC_MESSAGES/django.(po|mo) To create message files, you use the django-admin makemessages tool. And you use django-admin compilemessages to produce the binary .mo files that are used by gettext. You can also run django-admin compilemessages --settings=path.to.settings to make the compiler process all the directories in your LOCALE_PATHS setting.
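As a small illustration that is not part of the original page, a settings fragment that puts a project-wide locale directory at the top of this lookup order might look like the following; the BASE_DIR definition and the directory name are assumptions about a typical project layout.

import os

# Assumed project layout: a "locale" directory next to the settings module's package.
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

LOCALE_PATHS = [
    os.path.join(BASE_DIR, 'locale'),
]

With this in place, django-admin makemessages and compilemessages will read and write the project-wide catalogs under that directory, and those catalogs take precedence over per-app locale directories as described above.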
https://docs.djangoproject.com/en/1.9/topics/i18n/translation/
2017-05-22T23:17:14
CC-MAIN-2017-22
1495463607242.32
[]
docs.djangoproject.com
Article Title
Kevin Phillips. Wealth and democracy: a political history of the American rich. New York: Broadway Books, 2002
Abstract
Mr. Phillips's book, Wealth and Democracy, stands in many ways as a continuation of his previous works considering the increasing divide between rich and poor. The works that make up Kevin Phillips's impressive list of publications mostly center upon the recurring theme of increasing inequality within American society, and what that widening divide is doing to us economically and politically. While the social implications of such a divide are many, these implications are less a focus of Wealth and Democracy than are the economic and political ramifications.
Recommended Citation
Engvall, Robert (2008) "Kevin Phillips. Wealth and democracy: a political history of the American rich. New York: Broadway Books, 2002," Reason and Respect: Vol. 1: Iss. 1, Article 7. Available at:
http://docs.rwu.edu/rr/vol1/iss1/7/
2017-05-22T23:20:23
CC-MAIN-2017-22
1495463607242.32
[]
docs.rwu.edu
Apple Push Notification Service
Production vs Development Apps in Urban Airship
When you create or edit an application record on our server, you must select whether your app system is "In development, connecting to test servers," or "In production, connecting to real push servers." Apple treats the two servers separately, so a device token for development/sandbox will not work on production/distribution. When building your app using a development provisioning profile, set your Urban Airship application to In development, and upload the development Push SSL certificate. To push to an app built with a distribution provisioning profile (either with a release build in Xcode, ad hoc distribution, or the iTunes App Store), use an application that is In production, and upload the production Push SSL certificate. Because Apple treats development and production/distribution as completely separate instances, we suggest making two applications in the Urban Airship web interface. That way you can continue to build and develop your application even after releasing it without interrupting your users. Do not a) submit to the App Store or b) test notifications on an ad hoc build while your app's code is pointing to an Urban Airship app key that is set as In Development. Development apps use different tokens that, when included in a push to a production app, will fail and in many cases cause all other pushes to fail. Always create a Production Status Urban Airship app first and make sure your application code is pointing to the production app key. For more tips on what to check before you release your app, see the iOS Production Launch Checklist.
Get Your Certificate from Apple
Before you can integrate Urban Airship into your iOS apps, there are a handful of steps you must take on the Apple end of things, which require membership in the iOS Developer Program. You will complete the following steps in the Apple Developer Member Center before your app is ready to communicate with Urban Airship:
Find Your Application
In the Apple Developer Member Center, click on App IDs in the Identifiers section of the Certificates, Identifiers & Profiles menu pane. If you haven't registered an App ID yet, click the + symbol and fill out the form, making sure to check the Push Notifications checkbox. When you expand the application, you will see two settings for push notifications with yellow or green status icons: Click Settings or Edit to continue. The Settings button may be titled Edit if push notifications have been previously configured. If the Settings/Edit button is not available, you may not be the team agent or an admin. The person who originally created the developer account is your team agent and they will have to carry out the remainder of the steps in this section.
Generate Your Certificate
Choose either the Development or Production certificate option and click Create Certificate.
Export the .p12 File
You're almost there. The final step before heading back over to the Urban Airship application is to export the certificate and its private key as a .p12 file.
Configure APNS with Urban Airship
Never use the same push certificate across multiple Urban Airship apps. Find your app in the Urban Airship web application and in the settings drop-down menu click Services. From the list of available services to configure, navigate to Apple Push Notification Service (APNs) and click Configure. If your certificate has a password, enter it here. Then, click the Choose File button. Select the file that was saved in step 2 of Export the .p12 File. After uploading the file, make sure to click Save.
Your push certificate is now uploaded and ready for use.
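As an optional aside that is not part of the original guide, the .p12 export can also be approximated on the command line with OpenSSL, assuming you already have the .cer file downloaded from Apple and the matching private key available in PEM form; all file names below are placeholders.

# Convert Apple's DER-encoded certificate to PEM, then bundle it with the private key as a .p12
openssl x509 -in aps_development.cer -inform DER -out aps_development.pem -outform PEM
openssl pkcs12 -export -inkey push_private_key.pem -in aps_development.pem -out aps_development.p12

The resulting aps_development.p12 is what you upload in the APNs service configuration step described above.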
https://docs.urbanairship.com/platform/push-providers/apns/
2017-05-22T23:18:02
CC-MAIN-2017-22
1495463607242.32
[array(['https://docs.urbanairship.com/images/app-ids.png', None], dtype=object) array(['https://docs.urbanairship.com/images/creating-app.png', None], dtype=object) array(['https://docs.urbanairship.com/images/app-id.png', None], dtype=object) ]
docs.urbanairship.com
Exec Transformation Service
Executes an external program, substituting the placeholder %s in the given command line with the input value, and returning the output of the external program. The external program must either be in the executable search path of the server process, or be given as an absolute path.
Example
With the command line /bin/date -v1d -v+1m -v-1d -v-%s, the transformation returns a string showing the last occurrence in the current month of the weekday passed in as the input value.
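Purely as an illustration of the substitution, if the input value handed to the transformation is fri, the command that actually gets executed becomes:

/bin/date -v1d -v+1m -v-1d -v-fri

which prints the last Friday of the current month (first the day is set to 1, then one month is added, one day subtracted to reach the last day of the month, and finally the date is moved back to the given weekday).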
http://docs.openhab.org/addons/transformations/exec/readme.html
2017-05-22T23:32:57
CC-MAIN-2017-22
1495463607242.32
[]
docs.openhab.org
Supported picture file extensions Your BlackBerry device is designed to support the following picture file formats. - BMP - JPG - GIF - PNG - TIF - WBMP For information about media file extensions and codecs for your device, visit and click Smartphones > BlackBerry Smartphones > Supported Media.
http://docs.blackberry.com/en/smartphone_users/deliverables/50757/mba1336581039661.jsp
2015-07-28T08:34:38
CC-MAIN-2015-32
1438042981753.21
[]
docs.blackberry.com
If you can't remember your password, the only way to change your password or regain access to your BlackBerry device is to delete all of your data by completing a security wipe. Take care to remember your work space password, as it can't be reset or recovered. When you exceed the number of allowed attempts to enter your work space password, your work space and all of its contents are deleted.
http://docs.blackberry.com/en/smartphone_users/deliverables/55418/als1341501033686.html
2015-07-28T08:41:53
CC-MAIN-2015-32
1438042981753.21
[]
docs.blackberry.com
Information for "JModel"
Basic information
- Display title: API15:JModel
- Default sort key: JModel
- Page length (in bytes): 1,600
- Page ID: 6842
- Page content language: English (en)
- Page content model: wikitext
- Indexing by robots: Allowed
- Number of redirects to this page: 0
- Counted as a content page: Yes
- Number of subpages of this page: 12 (0 redirects; 12 non-redirects)
Page protection
- Edit: Allow all users
- Move: Allow all users
Edit history
- Page creator: Doxiki (Talk | contribs)
- Date of page creation: 16:52, 22 March 2010
- Latest editor: Doxiki (Talk | contribs)
- Date of latest edit: 12:27, 25 March 2010
- Total number of edits: 3
- Total number of distinct authors: 1
- Recent number of edits (within past 30 days): 0
- Recent number of distinct authors: 0
Page properties
- Transcluded templates (3), templates used on this page: Template:Extension DPL (view source), SeeAlso:JModel (view source), Description:JModel (view source)
https://docs.joomla.org/index.php?title=API15:JModel&action=info
2015-07-28T09:50:00
CC-MAIN-2015-32
1438042981753.21
[]
docs.joomla.org
Information for "PatTemplate Function Translate/call"
Basic information
- Display title: API15:PatTemplate Function Translate/call
- Default sort key: PatTemplate Function Translate/call
- Page length (in bytes): 1,660
- Page ID80:59, 12 May 2013
Edit history
- Total number of edits: 2
- Total number of distinct authors: 2
- Recent number of edits (within past 30 days): 0
- Recent number of distinct authors: 0
https://docs.joomla.org/index.php?title=API15:PatTemplate_Function_Translate/call&action=info
2015-07-28T09:28:21
CC-MAIN-2015-32
1438042981753.21
[]
docs.joomla.org
Allow your subscribers to chat with you - In your My Channels list, highlight a channel that you own. - Press the key > View Channel Info. - Scroll left to the Settings section. - In the Communication Options section, do one of the following: - To allow your subscribers to contact you any time, click Always. - To set up specific times for chats, click Scheduled. Select the checkbox next to each day you want to be available. Change the time as needed. Click OK. - To prevent your subscribers from contacting you, click Never.
http://docs.blackberry.com/en/smartphone_users/deliverables/60444/mba1374759024917.jsp
2015-07-28T08:26:17
CC-MAIN-2015-32
1438042981753.21
[]
docs.blackberry.com
Enhanced Email Attachments We have made the following enhancements to the email attachments in Local CRM: - Increased limitation on the size of attachments from 12 MB to 20 MB. - Ability to add email attachments already linked to the case or linked to the follow-ups while replying to or forwarding the email. - Removed limitation on the number of attachments as long as they do not exceed 20 MB. Earlier, we were limited to five attachments. To send an email with attachments: - Accept an incoming email. - Review the email and click Reply. - Select from Reply, Reply All, or Forward options and craft your response. - In the Attachment section: - Select the desired attachments. Note: The combined size of the attachments must not exceed 20 MB. - Click Send.
https://docs.8x8.com/8x8WebHelp/VCC/release-notes/Content/8-1-6-release/EmailAttachmentsForwarding.htm
2022-09-25T02:42:25
CC-MAIN-2022-40
1664030334332.96
[]
docs.8x8.com
Automatically generated integration/unit tests
The assumptions component allows you to easily sanity check your system, providing you with a diagnostic tool ensuring that your system is functioning as it should. An assumption is basically an integration test, with one or more "assumptions" about your system, which, if not true, imply that your system is not functioning as it should. An example of an assumption could be: "If you try to login without a password, the system should not authenticate you". Such assumptions allow you to ensure your system is functioning, by providing you with "automated unit tests" that are easily executed all at once as you modify your system's code. Magic is delivered out of the box with a whole range of integrated assumptions, but you can also create your own assumptions from the "Endpoints" menu item. Below is a screenshot of the assumptions component. The assumptions component allows you to run all your assumptions automatically by clicking the "play" button in the top right corner. This will sequentially execute all assumptions in the system, and if one of your assumptions fails for some reason, it will show you which assumption failed, easily allowing you to determine which part of your system is not functioning as it should. This gives you a safety mechanism similar to the one unit tests provide, except that assumptions are technically integration tests and not really unit tests, and as such they test your system from a much higher level.
Create assumptions automatically
Magic creates such assumptions automatically for you, eliminating the need to manually write test code. To create your own assumption, open up your "Endpoints" menu item, find an endpoint, invoke it somehow, and create a new assumption after having invoked your endpoint. Below is a screenshot of this process. If you click the above "Match response" checkbox, the response returned from the server needs to be exactly the same in the future, which, depending upon your endpoint's semantics, might or might not be a correct assumption. Obviously, for assumptions reading data from database tables that could somehow change their content over time, you should not match the response. If you don't click the checkbox, it will only verify the status code your server returns. You can also create assumptions for error status codes, such as for instance: "If I invoke a non-existing endpoint, a 404 status code should be returned from the server". When you create an assumption, Magic "records" your HTTP invocation, persists its response, and allows you to automatically "replay" the invocation later, assuming its HTTP response will be similar when the assumption is replayed.
Manually create assumptions
You can also manually create assumptions that contain any amount of Hyperlambda you need to verify that your system is functioning optimally. This is done by creating a Hyperlambda file, storing it in your "/etc/tests/" folder, and putting content into it resembling the following.

description:Verifies that 2+2 equals 4.
.lambda
   .no1:int:2
   .no2:int:2
   if
      not
         eq
            math.add:x:@.no1
               get-value:x:@.no2
            .:int:4
      .lambda
         throw:Math error!

The idea is to have your assumption throw an exception if something is wrong. This allows you to create "unit tests" where you verify that your system is functioning correctly, for instance by invoking dynamically created slots with assumptions about their return values, etc.
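As a further sketch, and assuming a dynamically created slot named my-company.calculate-answer that returns the integer 42 (both the slot name and the expected value are made up for this example), a hand-written assumption verifying a slot's return value could resemble the following.

description:Verifies that the my-company.calculate-answer slot returns 42
.lambda
   signal:my-company.calculate-answer
   if
      not
         eq
            get-value:x:@signal
            .:int:42
      .lambda
         throw:Wrong return value from my-company.calculate-answer!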
https://docs.aista.com/documentation/magic/components/assumptions/
2022-09-25T01:23:26
CC-MAIN-2022-40
1664030334332.96
[array(['https://raw.githubusercontent.com/polterguy/polterguy.github.io/master/images/assumptions.jpg', 'Assumptions'], dtype=object) array(['https://raw.githubusercontent.com/polterguy/polterguy.github.io/master/images/new-assumption.jpg', 'Creating an assumption'], dtype=object) ]
docs.aista.com
How to perform an alternate emergency key rollover. This is an alternate method for performing an emergency key rollover. For complete details on this task, refer to Managing DNSSEC key rollover and generation. To perform an emergency rollover: - From the configuration drop-down menu, select a configuration. - From the DNS or IP Space tab, navigate to a DNS zone or reverse zone. - Select the DNSSEC tab. - Under Zone Signing Keys or Key Signing Keys, click the number of a key. - From the DNSSEC Key Properties page, click the key name menu and select Emergency Rollover. - Click Yes.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Alternate-emergency-key-rollover-method/9.2.0
2022-09-25T02:27:32
CC-MAIN-2022-40
1664030334332.96
[]
docs.bluecatnetworks.com
As part of the discovery process, Address Manager performs FQDN and DNS reverse lookups on discovered hosts in an attempt to associate a DNS name with an IP address. For the IP reconciliation and discovery engine to perform these lookups without issues, you need to specify DNS servers. You specify DNS servers at the global configuration level and for specific IP reconciliation policies. - Global configuration level—DNS servers set at this level can be used by all IP address reconciliation policies within the current configuration. - IP reconciliation policies—DNS servers set in IP reconciliation policies will override DNS servers set at the global configuration level. If you want to add DNS servers in an IP reconciliation policy and have multiple IP reconciliation policies, you need to add DNS servers for each IP reconciliation policy. If you don't use a DNS server, you don't need to specify the server. However, you must select the Skip FQDN/Reverse DNS Resolution option when adding an IP reconciliation policy. When multiple DNS servers are specified, each will be queried in turn until a positive response, in the form of one or more PTR records, is received. If all servers provide an error or negative response, no host name will be associated with the discovered IP address. This behavior enables the scenario where multiple DNS servers must be queried to resolve PTR records for a single IP reconciliation policy. For example, if a policy is created for 192.168.0.0/23 and two distinct DNS resolvers must be queried for PTR records within 192.168.0.0/24 and 192.168.1.0/24, configuring both of those DNS servers on the policy will enable resolution for both networks.
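Outside of Address Manager, you can sanity check such a setup by hand. The following is only an illustration, and the addresses and resolver IPs are hypothetical:

dig -x 192.168.0.25 @10.0.0.53 +short
dig -x 192.168.1.25 @10.0.1.53 +short

Each command asks a specific resolver for the PTR record of one discovered address, which is essentially the lookup the reconciliation engine performs; if neither resolver returns a PTR record, no host name will be associated with that address.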
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Specifying-DNS-servers-for-IP-address-discovery-and-reconciliation/9.2.0
2022-09-25T02:15:54
CC-MAIN-2022-40
1664030334332.96
[]
docs.bluecatnetworks.com
Google Cloud Storage (GCS)
You can load data from files in your GCS bucket into a Destination database or data warehouse using Hevo Pipelines. The format of the data file in the Source: Hevo supports the AVRO, CSV, JSON, and XML formats. CSV: if the files do not contain column headers, Hevo can automatically create these during ingestion. Default setting: Enabled. Refer to the section Example: Automatic Column Header Creation for CSV Tables. JSON: XML: Enable the Create Events from child nodes option to load each node under the root node in the XML file as a separate Event. Click TEST & CONTINUE. Proceed to configuring the data ingestion and setting up the Destination.
Data Replication
Note: The custom frequency must be set in hours, as an integer value. For example, 1, 2, 3 but not 1.5 or 1.75.
The record in the Destination appears as follows:
See Also
Revision History
Refer to the following table for the list of key updates made to this page:
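To make the column-header behaviour concrete, here is a purely hypothetical illustration; the sample rows are not taken from Hevo's documentation, and the exact auto-generated column names are Hevo's own.

A CSV file without a header row, such as:
   Jane,34,Boston
   John,41,Chicago
would still be ingested, with Hevo generating placeholder column names for the three fields in the Destination table.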
https://docs.hevodata.com/sources/dbfs/file-storage/google-cloud-storage-(gcs)/
2022-09-25T02:50:42
CC-MAIN-2022-40
1664030334332.96
[array(['https://res.cloudinary.com/hevo/image/upload/hevo-docs/CSVHeaders2982/csv-schema.png', 'Column headers generated by Hevo for CSV data'], dtype=object) array(['https://res.cloudinary.com/hevo/image/upload/v1600844357/hevo-docs/CSVHeaders2982/csv-dest-record.png', 'Destination record with auto-generated column headers'], dtype=object) ]
docs.hevodata.com
Collect app usage data from users in JUCE applications. Send analytics events to Google Analytics using the analytics module. Level: Intermediate Platforms: Windows, macOS, Linux, iOS, Android Classes: ThreadedAnalyticsDestination, ButtonTracker, WebInputStream, CriticalSection, CriticalSection::ScopedLockType Download the demo project for this tutorial here: PIP | ZIP. Unzip the project and open the first header file in the Projucer. If you need help with this step, see Tutorial: Projucer Part 1: Getting started with the Projucer. Please make sure you have your Google Analytics API key written down and ready for this tutorial to fully work. The demo project shows a very simple UI with two buttons sending analytics events when pressed. Since the API key has not been set up yet, Google will not receive any events before implementation. Events describe how the user has interacted with the content in applications and are sent to the analytics tracking system. To better categorise and filter the interactions, events are structured using the following keywords: All the events are sent with a unique user ID and a timestamp along with the keywords mentioned above. Additionally, users can be grouped into categories to better describe their capacity such as beta testers or developers. The first step for the project to work properly is to set up the Google Analytics API key. You can find the Tracking ID in your Google Analytics dashboard here: Copy this ID, and replace the apiKey placeholder variable in the GoogleAnalyticsDestination class: Let's first start by tracking user-independent information such as app launches and define constant user information that will be used by the analytics system. In the constructor of the MainContentComponent class, we start by getting a reference to the Analytics singleton by calling Analytics::getInstance(). We can then set the user ID with setUserID() by choosing a unique identifier for this user [1]. Make sure not to include any sensitive personal information in this identifier. We can also set a user group on this user by calling setUserProperties() using a StringPairArray [2]. For the events to be received, we need to specify at least one destination to our Analytics instance. We can optionally add multiple destinations if we wish. In this case we add an instance of the GoogleAnalyticsDestination class to the singleton [3]. Since the MainContentComponent constructor gets called when the MainWindow is instantiated, we can log this event using the function logEvent() right when the component gets owned by the MainWindow [4]. Likewise, we can log the shutdown event in the MainContentComponent destructor right when the MainWindow gets deleted [5]. In order to add tracking to specific user actions, we need to define which user interactions we want recorded and sent. Fortunately to record button behaviour, we can use a handy class included in the JUCE analytics module called ButtonTracker that will automatically handle this for us. Let's first declare a ButtonTracker as a member variable in the MainContentComponent class [1]. Now in the MainContentComponent constructor, we can link the specific TextButton object we want to track by passing it as an argument to the ButtonTracker constructor. We also set the event category and action properties to send when the event is fired [2]. The JUCE analytics module handles the logging of events on a dedicated thread and sends the analytics data in batches periodically. 
Therefore, we need to temporarily store the events on local storage until the data is sent. In the rest of this tutorial, we will be working in the GoogleAnalyticsDestination class. We first need to specify a location to store our analytics event data in the application data directory. For this we use the special location File::userApplicationDataDirectory to find the correct location and navigate to the corresponding application folder for our app [1]. If the location does not exist we create the folder [2] and save the file path as an XML file name extension [3]. We can now start the thread by using the startAnalyticsThread() function and specifying the waiting time between batches of events in milliseconds [4]. In the class destructor, we have to ensure that the last batch of events can be sent without the application being killed by the operating system. To allow this, we provide one last batch period while sleeping the thread before stopping it forcibly after 1 second. This provides enough time for one last sending attempt without elongating too much the application shutdown time. We can supply the maximum number of events to send in batches by overriding the getMaximumBatchSize() function like so: Now we need to format the correct HTTP request to log these events to the analytics server. The URL we are trying to construct with its corresponding POST data in the case of a button press behaviour for example looks something like this: In a typical app lifecycle, the batched logger will first process the appStarted event when the application is fired up. Then when the user clicks on the button we log the button_press event and finally log the appStopped event when the application quits. In order to account for these 3 logging scenarios, we need to construct different requests in the logBatchedEvents() function: Now that we have our URL ready we need to send the request to the server by creating a WebInputStream. We first have to lock the CriticalSection mutex declared as a member variable called webStreamCreation. Using a ScopedLock object allows us to automatically lock and unlock the mutex for the piece of code delimited by the curly brackets [1]. If the stopLoggingEvents() function was previously called due to the application terminating, we return immediately without attempting to initialise the WebInputStream [2]. Otherwise, we can create it in a std::unique_ptr by passing the previously constructed URL as an argument and using POST as the method [3]. We can then connect to the specified URL and perform the request using the connect() function on the WebInputStream [4]. If the response is successful, we just return positively from the function. Otherwise, we set an exponential decay on the batch period by multiplying the previous rate by 2 and return negatively from the function [5]. When the application shuts down, we need to cancel connections to the WebInputStream if there are any that are concurrently running. By first acquiring the lock from the same CriticalSection object using a ScopedLock, we ensure that the previously encountered critical section of the code in the logBatchedEvents() function will have terminated before [1]. Setting the shouldExit boolean to true prevents any new connections from being created subsequently [2]. Then we can finally cancel any WebInputStream connections using the cancel() function if there are any [3]. This completes the part of the tutorial dealing with logging events. 
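Before moving on, here is a minimal sketch of the request-building part of logBatchedEvents() discussed in this section. It is not the tutorial's exact implementation: only a handful of Measurement Protocol parameters are filled in, the event category is hard-coded, apiKey is the member introduced earlier, and a real version would inspect the server response instead of assuming success.

bool logBatchedEvents (const juce::Array<AnalyticsEvent>& events) override
{
    juce::String postData;

    for (auto& event : events)
    {
        // Minimal set of Google Analytics Measurement Protocol parameters.
        juce::StringPairArray parameters;
        parameters.set ("v",   "1");              // protocol version
        parameters.set ("tid", apiKey);           // tracking ID set earlier
        parameters.set ("cid", event.userID);     // client/user identifier
        parameters.set ("t",   "event");          // hit type
        parameters.set ("ec",  "info");           // event category (hard-coded in this sketch)
        parameters.set ("ea",  event.name);       // event action, e.g. "button_press"

        juce::StringArray keyValuePairs;

        for (auto& key : parameters.getAllKeys())
            keyValuePairs.add (key + "=" + juce::URL::addEscapeChars (parameters[key], true));

        postData << keyValuePairs.joinIntoString ("&") << "\n";
    }

    // The batch is then POSTed to the analytics endpoint through the
    // WebInputStream logic described above; this sketch assumes success.
    return true;
}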
However, if the transmission of event data fails and the application terminates, we currently have no way of keeping track of unlogged events. This section will cover the use of XML files to store any unlogged events to disk in the case of a lost connection. The XML document storing unlogged event information will look something like this for a single button press: We will look at the saveUnloggedEvents() and restoreUnloggedEvents() functions that deal with saving and restoring events respectively. The saveUnloggedEvents() function will build an XML structure based on the format shown above and save the content in an XML file: On the other hand, the restoreUnloggedEvents() function will in turn read an XML structure based on the same format shown previously and fill up the event queue: In this tutorial, we have learnt how to track usage data with Google Analytics and the JUCE analytics module. In particular, we have:
https://docs.juce.com/master/tutorial_analytics_collection.html
2022-09-25T02:46:07
CC-MAIN-2022-40
1664030334332.96
[]
docs.juce.com
How do hours in activities from an offer show in Moment's capacity overview?
Moment's powerful offer module gives you the possibility to send offers to customers that include custom "packages" of working hours attached to either tasks or activities (phases). Understanding the way in which those packages of activities can potentially influence the company's capacity is an important part of keeping good track of your company's resources. In this article I will show one way to keep track of available hours capacity when you have made an offer to a customer.
An offer that includes activities with included hours
As an example we can start with an offer in which certain activities are included. A certain amount of hours, with a given unit price, have been included as part of the offer: Here we can see that a pack of 32,27 hours is included in the activity. Let's assume that this offer is accepted by the customer. How do these hours affect the company's capacity? Let's assume that a project has been made based on this offer, either manually or by using Moment's built-in functionality (in which case the activities will be transferred to the new project automatically). As we can see from the screenshot, the hours from the offer have been transferred to the activity as estimated hours. However, these hours are still not planned or "reserved" in any way. To be able to show them as part of the company's capacity overview you will need to use the project plan.
Using the project plan to reserve hours
By using the project plan to plan the hours, we are effectively reserving these hours as well and hence changing the capacity overview. Let's have a look at what the co-worker's reservations look like. As you can read in the project plan article, entries in the project plan will automatically be transferred to the reservations page. In this case we have planned 30 hours across three weeks for one co-worker in one activity. How will this look on the reservations page? (Projects > Reservations) These are the same hours that we registered in the project plan. How does this affect the capacity view? As we can see from the GIF above, if we go to the capacity overview, after making sure our project has the status "offer" and using the appropriate filter settings, we can see that 30 hours across 3 weeks have been "taken" by our project. This means that in order for Moment to properly show the effects of a project based on an offer on the capacity of the company, some steps need to be taken before this shows correctly. However, this process is simple if the project plan is used effectively as an integral part of your project management strategy.
https://docs.moment.team/help/offers/how-do-hours-in-activities-from-an-offer-show-in-moments-capacity-overview
2022-09-25T02:40:37
CC-MAIN-2022-40
1664030334332.96
[]
docs.moment.team
This guide walks through integration with react-native-video to collect video performance metrics with Mux Data. Features The features supported by Mux Data for this player. Install @mux/mux-data-react-native-video Install @mux/mux-data-react-native-video from NPM. Wrap your Video component Wrap your react-native-video component in the muxReactNativeVideo SDK Make your data actionable Use metadata fields to make the data collected by Mux actionable and useful. Release Notes This SDK is currently beta. See the Known Issues and Caveats in the README on GitHub. The following data can be collected by the Mux Data SDK when you use the React Native Video SDK, as described below. Supported Features: Notes: Video Quality metrics are not available. Include the Mux JavaScript SDK on every page of your web app that includes video. npm install --save @mux/mux-data-react-native-video. Wrap your Video component with the muxReactNativeVideo higher-order-component. import app from './package.json' // this is your application's package.json import Video from 'react-native-video'; // import Video from react-native-video like your normally would import muxReactNativeVideo from '@mux/mux-data-react-native-video'; // wrap the `Video` component with Mux functionality const MuxVideo = muxReactNativeVideo(Video); // Pass the same props to `MuxVideo` that you would pass to the // `Video` element. All of these props will be passed through to your underlying react-native-video component // Include a new prop for `muxOptions` <MuxVideo style={styles.video} source={{ uri: '', }} controls muted muxOptions={{ application_name: app.name, // (required) the name of your application application_version: app.version, // the version of your application (optional, but encouraged) data: { env_key: 'YOUR_ENVIRONMENT_KEY', // (required) video_id: 'My Video Id', // (required) video_title: 'My awesome video', player_software_version: '5.0.2' // (optional, but encouraged) the version of react-native-video that you are using player_name: 'React Native Player', // See metadata docs for available metadata fields }, }} /> The required fields in the muxOptions that you pass into the MuxVideo component are application_name, data.env_key and data.video_id. However, without some metadata the metrics in your dashboard will lack the necessary information to take meaningful actions. Metadata allows you to search and filter on important fields in order to diagnose issues and optimize the playback experience for your end users. Pass in metadata under the data on initialization. muxOptions={{ application_name: app.name, // (required) the name of your application application_version: app.version, // the version of your application (optional, but encouraged) data: { env_key: 'ENV_KEY', // playerIDis nullwhen wrapping the component with react-native-video-controls. mux-embedto v4.2.0 programchangemay not have been tracked correctly destroywas called multiple times, it would raise an exception
https://docs.mux.com/guides/data/monitor-react-native-video
2022-09-25T01:19:18
CC-MAIN-2022-40
1664030334332.96
[]
docs.mux.com
Cash Tournaments As mentioned before, wagering is completely legal in Stumble upon Rumble as the player remains in full control over the outcome of a 100% skill-based game. We offer several options to earn through wagering in-game. Player vs Player Players who are confident in their own abilities can participate in wager matches to earn $GLOVE tokens. Players can indicate how much they would like to wager against their opponent before the fight starts. In order to simplify and speed up the process, we will offer four wager levels: high, medium, low, and zero. The lowest stake will be used for both players. The exact numbers for each wagering tier are TBD and will be adjusted if the $GLOVE token price fluctuates dramatically. Player vs the AI The player can choose between different AI difficulty levels with varying payouts. For example, a 'normal' AI with a payout of 2X your wager, a 'hard' AI with a payout of 4X your wager, and an 'extreme' AI with a payout of 10X your wager. AI matches are a good option for the times when there might not be enough players online to guarantee a quick matchup for the player. The beauty of these matches is that the AI is self-adjusting, meaning that it is a zero-sum game among players. For the normal AI with a 2X payout, the accompanying AI program would self-adjust its difficulty so that the player's win rate always stays around 50%. When the win rate is higher, the difficulty is increased so that fewer players win. If the win rate drops too low, the AI match becomes easier. The same applies to harder AIs, so a 10X payout would regulate around a 10% win rate. Effectively, players are betting that they are better than a threshold of the entire player base, rather than better than a particular player. For example, betting on yourself against a 2X return AI basically means you are betting that you are better than 50% of the players wagering against it. Backing Top Players We understand that betting only on yourself might not be very attractive for a casual player or investor. Hence, we offer an option of wagering by staking your $GLOVE tokens to back one of the top players. You can choose current fighters in the ring or the ones at the top of the leaderboard. If two players with stakers behind them fight and wager on themselves, the stakers from both sides automatically place a bet against each other as well. This will be a fixed % of the lowest stake between the two. The proceeds or losses will be added to or subtracted from the total stakes. Top players have a high incentive to secure a large backing because part of the wagering proceeds goes back to the winning player. We are only up for a fair fight, so there will be restrictions and close monitoring of fighters to ensure that top players are not colluding to take tokens away from their stakers. Stakers' wagering will automatically pause if a player has lost X% within a specified time period. A top player's stakers can only wager against another top player's stakers X times per day.
https://docs.stumbleuponrumble.com/cash-tournaments
2022-09-25T01:37:26
CC-MAIN-2022-40
1664030334332.96
[]
docs.stumbleuponrumble.com
Baka (East Region, Cameroon) Facts - Language: Baka (East Region, Cameroon) - Alternate names: Bayaka, Bayaga, Bibaya, "Babinga" - Language code: bkc - Language family: Niger-Congo, Atlantic-Congo, Volta-Congo, North, Adamawa-Ubangi, Ubangi, Sere-Ngbaka-Mba, Ngbaka-Mba, Ngbaka, Western, Baka-Gundi (SIL classification) - Number of speakers: 30,000-50,000 - Vulnerability: Vulnerable - Script: Latin script, not used in Gabon. More information: Semi-nomadic but encouraged by the government to settle along roadways. Different from the Baka of the Democratic Republic of the Congo and Sudan and from the Aka (see Aka) (Baaka, Bayaka, Biyaka). Although similar in culture, they are different in language. Traditional religion, Christian. Baka (East Region, Cameroon) is spoken in Gabon, Cameroon, Africa.
https://docs.verbix.com/EndangeredLanguages/BakaEastRegionCameroon
2022-09-25T01:01:17
CC-MAIN-2022-40
1664030334332.96
[]
docs.verbix.com
When adding Kubernetes components to a vRealize Automation Cloud Assembly blueprint, you can choose to add clusters or enable users to create namespaces in various configurations. Typically, this choice depends on your access control requirements, how you have configured your Kubernetes components, and your deployment requirements. To add a Kubernetes component to a blueprint in vRealize Automation Cloud Assembly, click Blueprints, select New, and then locate and expand the Kubernetes option on the left menu. Then, make the desired selection, either Cluster or KBS Namespace, by dragging it to the canvas. Adding a Kubernetes cluster that is associated with a project to a blueprint is the most straightforward method of making Kubernetes resources available to valid users. You can use tags on clusters to control where they are deployed, just as you do with other Cloud Assembly resources. You can use tags to select a zone and a PKS plan during the allocation phase of cluster deployment. Once you add a cluster in this way, it is automatically available to all valid users. Blueprint Examples The first blueprint example shows a blueprint for a simple Kubernetes deployment that is controlled by tagging. A Kubernetes zone was created with two deployment plans, configured on the New Kubernetes Zone page. In this case, a tag called placement:tag was added as a capability on the zone, and it was used to match the analogous constraint on the blueprint. If there were more than one zone configured with the tag, the one with the lowest priority number would be selected.
formatVersion: 1
inputs: {}
resources:
  Cluster_provisioned_from_tag:
    type: Cloud.K8S.Cluster
    properties:
      hostname: 109.129.209.125
      constraints:
        - tag: 'placement tag'
      port: 7003
      workers: 1
      connectBy: hostname
The second blueprint example shows how to set up a blueprint with a variable called $(input.hostname) so that users can input the desired cluster hostname when requesting a deployment. Tags can also be used to select a zone and a PKS plan during the resource allocation phase of cluster deployment.
formatVersion: 1
inputs:
  hostname:
    type: string
    title: Cluster hostname
resources:
  Cloud_K8S_Cluster_1:
    type: Cloud.K8S.Cluster
    properties:
      hostname: ${input.hostname}
      port: 8443
      connectBy: hostname
      workers: 1
If you want to use namespaces to manage cluster usage, you can set up a variable in the blueprint called name: ${input.name} to substitute for the namespace name, which a user enters when requesting a deployment. For this sort of deployment, you would create a blueprint something like the following example:
formatVersion: 1
inputs:
  name:
    type: string
    title: "Namespace name"
resources:
  Cloud_KBS_Namespace_1:
    type: Cloud.KBS.Namespace
    properties:
      name: ${input.name}
Users can manage deployed clusters via kubeconfig files that are accessible from the Kubeconfig page. Locate the card on the page for the desired cluster and click
https://docs.vmware.com/en/vRealize-Automation/8.0/Using-and-Managing-Cloud-Assembly/GUID-7BD71D53-A67B-4E3A-9E1C-7AA71C4F6B70.html
2022-09-25T02:10:05
CC-MAIN-2022-40
1664030334332.96
[]
docs.vmware.com
6. Transport Layer The transport layer provides communication services between DDS entities, being responsible for actually sending and receiving messages over a physical transport. The DDS layer uses this service for both user data and discovery traffic communication. However, the DDS layer itself is transport independent; it defines a transport API and can run over any transport plugin that implements this API. This way, it is not restricted to a specific transport, and applications can choose the one that best suits their requirements, or create their own. eProsima Fast DDS comes with five transports already implemented: UDPv4: UDP Datagram communication over IPv4. This transport is created by default on a new DomainParticipant if no specific transport configuration is given (see UDP Transport). UDPv6: UDP Datagram communication over IPv6 (see UDP Transport). TCPv4: TCP communication over IPv4 (see TCP Transport). TCPv6: TCP communication over IPv6 (see TCP Transport). SHM: Shared memory communication among entities running on the same host. This transport is created by default on a new DomainParticipant if no specific transport configuration is given (see Shared Memory Transport). Although they are not part of the transport module, intra-process data delivery and data-sharing delivery are also available to send messages between entities under some settings. The figure below shows a comparison between the different transports available in Fast DDS. - 6.1. Transport API - 6.2. UDP Transport - 6.3. TCP Transport - 6.4. Shared Memory Transport - 6.5. Data-sharing delivery - 6.6. Intra-process delivery - 6.7. TLS over TCP - 6.8. Listening Locators - 6.9. Interface Whitelist - 6.10. Disabling all Multicast Traffic
https://fast-dds.docs.eprosima.com/en/v2.7.1/fastdds/transport/transport.html
2022-09-25T01:43:55
CC-MAIN-2022-40
1664030334332.96
[]
fast-dds.docs.eprosima.com
You can review the diagnostics output for a particular service point participating in Anycast in two ways: check the Address Manager service point configuration page corresponding to the DNS/DHCP server enabled with the given service point, or call the diagnostics endpoint using the Primary IP address of the DNS/DHCP server.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Viewing-the-Anycast-DNS-Edge-service-point-health-and-configuration/9.2.0
2022-09-25T01:43:44
CC-MAIN-2022-40
1664030334332.96
[]
docs.bluecatnetworks.com
Understanding Room Type and Rate Plan Allocations A Room Type/Rate Plan allocation controls the availability of Rooms and the Rates you offer in each channel. Room Type/Rate Plan allocations are also used to sell packages, discounted rates and for Promo Codes. The Room Types and Rate Plans available to allocate in the channels are already set up in Room Types and Default Rates or Packages. When you add a Room Type/Rate Plan allocation to a channel, all of the rooms assigned to that Room Type will be available at the Rate Plan you have assigned to it. You can use the current Room Types you have set up or create new Room Types and Rate Plans and Packages for use in a specific channel. The functions you decide to use depend on how you want to manage your online distribution and the complexity of your Room Type and Rate Plan structure. Understanding how you can use combinations of Room Type and Rate Plan allocations and control the restrictions through Manage Rates will help you in managing availability. Managing Room Type and Rate Plan Allocations Allocating a Room Type and Rate Plan involves the same function for all channels in Agent Relationships. To see which Room Types and Rate Plans are allocated to an Agent Channel, or to ADD or DELETE a Room Type allocation, click on the blue link that displays the number of rooms allocated in the "Rooms Allocated" column for the corresponding Agent. For example, 15 of 17. When you click on this number a new screen will open with a list of the Room Types and corresponding Rate Plans you have allocated to that Agent. At this point you can Add or Delete a Room Type Allocation. See Add or Delete Allocation. Overall, the Room Type Allocation function can be used in the following ways: However, managing Availability and Rates for each channel can be further customized by strategically using the Room Type and Rate Plan allocation functions. To quickly block out a room or block of rooms for a short time period, see Blocking out rooms for a specific time period
https://docs.bookingcenter.com/display/MYPMS/Room+Type+and+Rate+Plan+Allocations
2022-09-25T00:58:09
CC-MAIN-2022-40
1664030334332.96
[]
docs.bookingcenter.com
Reserved Instance (RI) and Savings Plan (SP) portfolio management set-up Reserved Instance and Saving Plan Portfolio Management Integration CloudZero selected ProsperOps as the best in breed to automate Reserved Instances (RI) and Saving Plan (SP) portfolio management. Once connected to CloudZero you will be able to see you Effective Savings Rate (ESR). This rate is the ROI of cloud discount instruments and the single most important ROI metric when it comes to cloud pricing optimization. Setup Process Below the outline for account connection - Let CloudZero Customer Success team know your interest in Reserved Instance or Savings Plans Optimizations. We will set up an account with ProsperOps. - CloudZero will arrange for free saving analysis. - Onboard with ProsperOps. Account Creation CloudZero Customer Success representative or sales-engineer to create the account with Prosper Ops. Savings Analysis Configure your Prosper Ops and AWS account to enable Prosper Ops to execute a savings analysis. This 10-minutes setup will benchmark your Effective Savings Rate vs. peers and forecast future savings with Prosper Ops. Overview can found here: The CloudZero dashboard give visibility into the state of your platform Onboard with ProsperOps A ProsperOps and CloudZero will work with to set the configuration particular to your environment. Updated 7 months ago
https://docs.cloudzero.com/docs/reserved-instance-ri-and-savings-plan-sp-portfolio-management-set-up
2022-09-25T01:09:20
CC-MAIN-2022-40
1664030334332.96
[]
docs.cloudzero.com
Posting tickets to a DutyCalls channel requires a special email address. This email address is unique and can only be used for posting to this channel. You can find this email address by: - Going to the Channels page in your workspace. - Clicking on the Settings button of the relevant channel. - Navigating to the Email tab. - Verifying that you are authorized to access this email address. Copying the email address from the settings dialog. The only thing left to do, is sending an email! Warning You should store this email address in a secure location. It is important to keep your credentials confidential to protect your account.
https://docs.dutycalls.me/email/email-address/
2022-09-25T02:10:00
CC-MAIN-2022-40
1664030334332.96
[]
docs.dutycalls.me
Dimensions, custom properties, and tags in Splunk Observability Cloud Data comes into Splunk Observability Cloud as data points associated with a metric name and additional metadata. Observability Cloud has three types of metadata: Comparing dimensions, custom properties, and tags The following table shows the main differences between the three types of metadata. Dimensions Dimensions are metadata in the form of key-value pairs that monitoring software sends in along with the metrics. Dimensions provide additional information about the metric, such as the name of the host that sent the metric. For example, "hostname": "host1". Note Two key-value pairs with different keys are different dimensions, regardless of value. Two key-value pairs that have the same key but different values are different dimensions. Two key-value pairs with the same key and value are the same dimension. Dimensions criteria You can define up to 36 unique dimensions per MTS. Dimension name criteria: UTF-8 string, maximum length of 128 characters (512 bytes). Must start with an uppercase or lowercase letter. Must not start with an underscore (_). After the first character, the name can contain letters, numbers, underscores (_), hyphens (-), and periods (.). Must not start with the prefix sf_, except for dimensions defined by Observability Cloud such as sf_hires. Must not start with the prefix aws_, gcp_, or azure_. Custom properties Custom properties are key-value pairs you can assign to dimensions of existing metrics. For example, you can add the custom property use: QA to the host dimension of your metrics to indicate that the host that is sending the data is used for QA. The custom property use: QA then propagates to all MTS with that dimension. To learn more about adding custom properties to existing metric dimensions, see Search and edit metadata using the Metadata Catalog. When Splunk Observability Cloud assigns a different name to a dimension coming from an integration or monitor, the dimension also becomes a custom property as it is assigned to the metric after ingest. For example, the AWS EC2 integration sends the instance-id dimension, and Observability Cloud renames the dimension to aws_instance_id. This renamed dimension is a custom property. For more information on how Observability Cloud uses custom properties to rename dimensions generated by monitoring software, see Guidance for metric and dimension names. You can also apply custom properties to tags. When you do this, anything that has that tag inherits the properties associated with the tag. For example, if you associate the "tier:web" custom property with the "apps-team" tag, Observability Cloud attaches the "tier:web" custom property to any metric or dimension that has the "apps-team" tag. Custom properties criteria You can define up to 75 custom properties per dimension. Custom property name and value criteria: Names must be UTF-8 strings with a maximum length of 128 characters (512 bytes). Values must be UTF-8 strings with a maximum length of 256 characters (1024 bytes). The optional description property lets you provide a detailed description of a metric, dimension, or tag. You can use up to 1024 UTF-8 characters for a description. In custom property values, Observability Cloud stores numbers as numeric strings.
Note Metadata tags in Splunk Infrastructure Monitoring are distinct from span tags in Splunk APM, which are key-value pairs added to spans through instrumentation to provide information and context about the operations that the spans represent. To learn more about span tags, see Manage services, spans, and traces in Splunk APM. Tags are labels or keywords that you can assign to dimensions and custom properties. A tag is a string rather than a key-value pair. Use tags when you want to give the same searchable value to multiple dimensions. To learn more about adding tags to existing metrics, see Search and edit metadata using the Metadata Catalog. When to use each type of metadata Each type of metadata has its own function in Observability Cloud. The following sections discuss several considerations for you to choose the most appropriate type of metadata for your metrics. Dimensions versus custom properties Note Dimensions and custom properties are not distinguishable from one another in the UI, but they behave in different ways and serve different purposes. Dimensions and custom properties are similar in that they are both key-value pairs that add context to your metrics and offer you the tools to effectively group and aggregate your metrics. The key differences between dimensions and custom properties are: You send in dimensions at the time of ingest, and you add custom properties after ingest. You can't make changes to dimensions, but you can make changes to custom properties. Due to these differences, use dimensions in the following situations: - When you need the metadata to define a unique MTS. Example: You send in a metric called cpu.utilization from two data centers. Within each data center, you have 10 servers with unique names represented by these key-value pairs: host:server1, host:server2, …, host:server10. However, your server names are only unique within a data center and not within your whole environment. You want to add more metadata for your data centers, dc:west and dc:east, to help with the distinction. In this case, you need to send metadata about the hosts and the data centers as dimensions because you know before ingesting that you want a separate MTS for every host in your environment. - When you want to keep track of historical values for your metadata. Example: You collect a metric called latency to measure the latency of requests made to your application. You already have a dimension for customers, but you also want to track the improvement between versions 1.0 and 2.0 of your application. In this case, you need to make version:1.0 and version:2.0 dimensions. If you make version:1.0 a custom property, then change it to version:2.0 when you release a new version of your application, you lose all the historical values for the latency MTS defined by version:1.0. Use custom properties in the following situations: - When you have metadata that provides additional context for your metrics, but you don't want that metadata to create another uniquely identifiable MTS. - When you have metadata you know you want to make changes to in the future. Example: You collect a metric called service.errors to know when your customers are running into issues with your services. The MTS for this metric are already uniquely identifiable by the customer and service dimensions. You want to attach the escalation contacts for each service for every customer to your metrics. In this case, you assign the escalation contacts as custom properties to the specific service dimension or customer dimensions.
As your team grows and goes through reorganization, you want to be able to change this metadata. You also don’t need the escalation contacts as dimensions as the customer and service dimensions already yield separate MTS. Use tags when there is a one-to-many relationship between the tag and the objects you are assigning it to. Example 1: You do canary testing in your environment. When you do a canary deployment, you use the canary tag to mark the hosts that received the new code, so you can identify their metrics and compare their performance to those hosts that didn’t receive the new code. You don’t need a key-value pair as there’s only a single value, canary. Example 2: You have hosts that run multiple apps in your environment. To identify the apps that a particular host is running, you create a tag for each app, then apply one or more of these tags to the host:<name> dimension to specify the apps that are running on each host.
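As an illustrative sketch of how dimensions travel with a data point at ingest time, the following shows one gauge value for cpu.utilization carrying host and data-center dimensions via the standard /v2/datapoint ingest API; the realm placeholder, the token environment variable, and the dimension values are assumptions to replace with your own:
// Hypothetical sketch: send one gauge data point with dimensions set at ingest.
// Replace REALM and the access token with your organization's values.
const payload = {
  gauge: [
    {
      metric: 'cpu.utilization',
      value: 42.5,
      dimensions: { hostname: 'server1', dc: 'west' }, // dimensions are fixed at ingest and cannot be changed later
    },
  ],
};

fetch('https://ingest.REALM.signalfx.com/v2/datapoint', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-SF-TOKEN': process.env.SFX_ACCESS_TOKEN, // org access token (assumed to be stored in an environment variable)
  },
  body: JSON.stringify(payload),
}).then((res) => console.log('ingest status:', res.status));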
https://docs.signalfx.com/en/latest/metrics-and-metadata/metrics-dimensions-mts.html
2022-09-25T01:22:28
CC-MAIN-2022-40
1664030334332.96
[]
docs.signalfx.com
Fleet management You can organize your devices by renaming them so they share an identical prefix. For example, devices with names Temp1, Temp2 and Temp3 are seen as a fleet of devices in the Toit CLI by using the prefix -p temp as in: Prefixes are case-insensitive. Update firmware on multiple devices Update the Toit firmware on several devices in one command with the Toit CLI command: where prefix is replaced with the device name prefix. Deploy apps on multiple devices Deploy a Toit app on several devices in one command: or deploy two Toit apps on a fleet of devices in one command with: where prefix is replaced with the device name prefix. When updating the Toit firmware of a fleet of devices, or when deploying apps, the devices in the fleet do not need to be online. The firmware update, or app deployment, will automatically begin the next time the devices go online.
https://docs.toit.io/platform/devices/fleet
2022-09-25T01:54:26
CC-MAIN-2022-40
1664030334332.96
[]
docs.toit.io
Overview CDP One provides two SQL engines for querying your data: Hive and Impala. For running simple queries, these SQL engines are very similar. You can quickly master either engine if you are familiar with SQL. You can see the minor differences in query examples and easily choose the one that suits your needs. You can start Hive or Impala from the command line of the cluster, or from Hue, to run queries.
https://docs.cloudera.com/cdp-one/saas/cdp-one-using-sql/topics/cdp-one-using-sql-overview.html
2022-09-25T02:05:53
CC-MAIN-2022-40
1664030334332.96
[]
docs.cloudera.com
Enable server-server mutual authentication You can enable mutual authentication between multiple ZooKeeper servers. - In Cloudera Manager, select the ZooKeeper service. - Click the Configuration tab. - Find the Enable Server to Server SASL Authentication property and select it. - Click Save Changes. - Click the Actions button. - Select Restart.
https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/zookeeper-security/topics/zookeeper-enable-server-server-mutual-authentication.html
2022-09-25T02:09:41
CC-MAIN-2022-40
1664030334332.96
[array(['../images/zookeeper-enable_sasl.png', None], dtype=object) array(['../images/zookeeper-enable_sasl.png', None], dtype=object)]
docs.cloudera.com
Source NAT (many-to-one) The following figure shows an example of SNAT in which many different “inside” addresses are dynamically translated to a single “outside” address. In this example, all hosts on the 10.0.0.0/24 subnet show the same source address externally. To configure NAT in this way, perform the following steps in configuration mode.
https://docs.vyatta.com/en/supported-platforms/vnf/configuration-vnf/system-and-services/nat/nat-configuration-examples/source-nat-many-to-one
2022-09-25T01:17:47
CC-MAIN-2022-40
1664030334332.96
[]
docs.vyatta.com
What is Email Subject Line Generator? Create unique subject lines to grab your audience's attention with Writesonic! Writesonic's email subject line generator is a unique tool that can help you craft creative and effective subject lines, which can help you to boost open rates. How to use Writesonic's Email Subject Line Generator? Get ready to generate a perfect email subject line: STEP 1 - Go to the dashboard and click on "Email Subject Line." STEP 2 - Add the product name and description and click the "Generate" button. STEP 3 - Writesonic generates five unique email subject lines for you. STEP 4 - If you want more options or something different, click on the "Re-generate" button. Here are a few examples: Example 1 Input: Content marketing is a marketing strategy used to attract, engage, and retain an audience by creating and sharing relevant articles, videos, podcasts, and other media. This approach establishes expertise, promotes brand awareness, and keeps your business top of mind when it's time to buy what you sell. Language: English Output 1: Using engaging content to build brand awareness and keep your customers engaged and interested is known as content marketing. Articles, videos, podcasts, and similar media are used to accomplish this. Output 2: Creating and sharing relevant articles, videos, podcasts, and other media is how content marketing attracts, engages, and retains an audience. This approach establishes expertise, promotes brand awareness, and keeps your business top of mind when it's time to buy what you sell. Example 2 Input:. Wikipedia Language: English Output 1: There's a wide variety of sushi styles and presentations; however, the fundamental ingredient is sushi rice, also known as sumeshi or shari. Sushi is a Japanese dish made with vinegar-treated rice, usually with salt and sugar, along with seafood and vegetables. Sushi rice is the key element. Because it includes a wide variety of ingredients, sushi can be served in many different ways. For example, nigiri sushi is a popular kind of sushi that features a ball of sushi rice topped with a bite of fresh fish, while maki sushi is sushi that is rolled up into a cylindrical shape. Output 2: Sushi is a Japanese meal consisting of cooked rice seasoned with vinegar, salt, and sugar and accompanied by a range of ingredients such as seafood, which is often raw, or vegetables. Sushi is served in a variety of styles, but the key ingredient is 'sushi rice,' also known as 'sushi rice' or 'sushi rice.' It is a Wikipedia article. Example 3 Input:, which historians often regard as the origin of England and the English people. Language: English Output 1: Between AD 43 and the early 5th century, the Romans established Britain as a province. The Anglo-Saxon settlement that marked the beginning of England and the English people is often credited with being the end of Roman rule. Output 2: The Anglo-Saxon settlement of Britain, which historians consider the birth of England and the English people, occurred after the end of Roman rule in Britain in the early 5th century. Rome controlled Britannia from AD 43 to the early 5th century. Tips for using the Email Subject Line Generator - Enter the description properly. - Don't forget to mention the key points that you want to be included in the subject line. Get ready to write amazing and click-worthy email subject lines with Writesonic's Email Subject Line generator. Ready to increase your email open rates?
https://docs.writesonic.com/docs/email-subject-lines
2022-09-25T02:13:22
CC-MAIN-2022-40
1664030334332.96
[array(['https://files.readme.io/181e8bd-OnPaste.20220714-233635.png', None], dtype=object) array(['https://files.readme.io/181e8bd-OnPaste.20220714-233635.png', 'Click to close...'], dtype=object) ]
docs.writesonic.com
Dial Plan Usability Improvements In this release, we have introduced some usability enhancements to dial plan configuration. - Ability to copy or duplicate: You can copy or duplicate system plans and existing custom plans. Click the copy icon to instantly copy a plan and customize it. - Ability to edit dial plans: You can edit custom dial plans, but you are not allowed to edit system dial plans. The edit icon that was previously shown next to system dial plans is now replaced with a view icon. - Label change: The dial plan type is renamed from Pre-Configured to System Pre-Configured. In the editor/view screen of dial plans: - For testing, enter a string in the test input box and press the Enter key to execute the test. Earlier, you had to click the test button. - You can add comments to rules to identify their purpose. - Ability to re-order: For custom plans, you can change the order of the rules using the up/down icons. Drag and drop is NOT supported. - Usability improvement: Adding a new rule auto-fills basic regular expression info. Prior to this enhancement, adding a new rule required you to fill in the expression manually before you could proceed. - Plans must now translate to a value that is E164 compliant, including the + sign. The + is no longer implicitly added. Inheritance of changes in the tenant default plan: You can now change the tenant default dial plan, and agents assigned to the plan will automatically inherit the changed dial plan. Note: The change will not be inherited by agents assigned to a specific dial plan. Previously, once the Agent > Phone tab settings were saved for an agent, the agent acquired the tenant's default plan at that time as a specific agent setting. Changes to the dial plan were not inherited by this agent thereafter. - Visibility into an agent's dial plan: Administrators can now easily locate agents assigned to a specific dial plan, and those assigned to the tenant default plan, from the agent settings.
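Dial plan rules themselves are configured in the 8x8 rule editor rather than in code, but as a stand-alone illustration of the kind of regular-expression translation described above (digits in, E164-compliant value with an explicit + out), a sketch like the following shows the idea; the pattern and the +1 prefix are assumptions, not 8x8 defaults:
// Illustrative only: translate a bare 10-digit North American number to an E164 value with a leading +.
function toE164(dialed) {
  const tenDigit = /^(\d{10})$/;   // e.g. 4085551234
  const match = dialed.match(tenDigit);
  if (match) {
    return `+1${match[1]}`;        // the + must now be part of the translated value
  }
  return dialed;                   // leave anything else untouched in this sketch
}

console.log(toE164('4085551234')); // "+14085551234"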
https://docs.8x8.com/8x8WebHelp/VCC/release-notes/Content/8-1-7-release/DialPlanImprovements.htm
2022-09-25T02:06:25
CC-MAIN-2022-40
1664030334332.96
[]
docs.8x8.com
Create an SQL web API from your phone The following screenshot was literally taken from my iPhone, and yes, I am not only writing SQL on my phone, but I am also creating an HTTP API in the same process. When you go "all in" on compatibility and cross-platform support, some interesting axioms and use cases emerge as a natural consequence. A couple of weeks ago I showed the above feature in our CRUD generator to some colleagues attending a networking event where I live, and one of them even laughed out loud and said: "I don't think I have ever seen somebody writing SQL on their phone." It might be easy to dismiss the above feature as "overkill" or "marketing", but that's probably because your brain isn't geared to understanding its usefulness. For instance, imagine you're a DB administrator, you're on vacation, and you need to create a report for your manager, extracting SQL for some frontend guy creating a chart of some kind. In the "old world" this would imply having to go back to your hotel room, find the laptop you had to drag with you to the other side of the world for this sole purpose, boot it, and sit by your bedside for an hour messing with your SQL. All in all, we're probably talking about 2 to 3 hours of destroyed "vacation time". If you had Magic on your server though, you could probably use your phone from beneath the dinner table and create the HTTP endpoint in a few minutes, without anybody even noticing you were actually working. To illustrate the use case slightly better, let me show you how the above looks on a computer; realise that you've got access on your phone to every single feature in Magic that you can access from your computer. - Adding arguments to your endpoint? Check! - Joins and any amount of complexity in your SQL? Check! - Authorisation and authentication to prevent unauthorised users from accessing your endpoint? Check! - Configuring the exact URL you need? Check! - Scalar values and record sets? Check! - Having the ability to do all of the above from your phone? Check! If you want to try it out, feel free to download Magic. The entire code base is 100% open source, and you can use it as you see fit. It even comes with docker containers for easy deployments to a VPS of your choosing.
https://docs.aista.com/blog/sql-web-api-from-your-phone
2022-09-25T01:40:09
CC-MAIN-2022-40
1664030334332.96
[array(['https://raw.githubusercontent.com/polterguy/polterguy.github.io/master/images/blogs/phone-sql-api.jpeg', 'SQL web API from your phone'], dtype=object) array(['https://raw.githubusercontent.com/polterguy/polterguy.github.io/master/images/blogs/count-actors-blog.jpg', 'SQL web API from your phone'], dtype=object) ]
docs.aista.com
ajSortArray function Description The ajSortArray function sorts the contents of a range or array. Syntax ajSortArray(in_array, [sort_index], [sort_order], [by_col], [convert_to_text]) The function will return: 1) Content type: The sorted range of cells 2) Method: Cell array Examples Here are a few examples of ajSortArray.
https://docs.alchemyj.io/docs/4.0/Reference/AJExtendedFunctions/CommonFunction/ajSortArray
2022-09-25T02:55:42
CC-MAIN-2022-40
1664030334332.96
[array(['ajSortArrayImg/image4.png', None], dtype=object) array(['ajSortArrayImg/image5.png', None], dtype=object)]
docs.alchemyj.io
An Owner must first be created in the software under the owners' section. Once added to the software, the owner is then assigned to a specific unit or units in order to become active. Now that the owner is set up, you are able to track bookings to the unit(s), create transactions and generate reports. Add as many Owners as needed. The owner can access the 'Owners Area' with a Login ID and password to view their bookings, track transactions, generate reports for their units, and make and edit bookings. The online 'Owners Area' available at: enables owners to log in with their Owner ID and password entered in the Owner Information. See Owner Units See below for a description of the Owner Login area and the information that will be available to the owner. Owners can log in if they go to. They will need the Owner ID and password entered in the Owner Information section, with the Site ID appended in front of the Owner ID. For example, if your BookingCenter Site ID is 'DEMO' and the 'Owner ID' is 'John', then the Owner ID to log in with will be: DEMOJohn. *note - the ID and password are case sensitive. The Owner login should be received from the Property directly; BookingCenter cannot give these credentials to Owner(s) of your unit. See Owner Units. Owners can view the bookings made for their Unit with Booking Information and Status. They can also Edit or Cancel a Booking by clicking on the Booking ID to open the Booking Details. From the 'Booking Details' page (booking_details.phtml), an Owner can click a LINK to "Edit Booking" or "Cancel Booking". The Edit feature enables: If the Owner is enrolled in the Channel Manager product, then the ability to fully edit the booking (dates, Unit(s), rates, numbers of guests, names of guests, etc) exists when they are logged into the booking. Additional features of the Owner Channel Manager include SMS messaging to Guests, using Auto Letters to automate daily activities (Self Checkin instructions, Registration, eSign requests, survey/review requests, etc), and running reports on activities such as Arrivals, Departures, Receipts, etc. In addition, any booking that is edited or canceled from an OTA (Expedia, Booking.com, Airbnb, etc) or a GDS Travel Agency is automatically modified and/or canceled, with cancellation information included. If an Owner wants greater control over editing their bookings, upgrade to the Channel Manager product to get complete editing features (as well as a host of others, as detailed here). If Owners wish to EDIT their bookings beyond what is included in the Owner's Area, they must: Commissions The owner can view any commissions earned from their Unit(s) being booked and in status: COMPLETE. The idea behind the Commission is that the Owner has an amount that is commissionable at an agreed-upon rate (always a %). This is set for each Owner and can be unique to each Owner. The Total Commission is then viewed by the Owner on the non-taxed portion of the RENT (rate total for the booking) and excludes any extra Items that might have been added, such as a bottle of wine or transportation fee. This area allows Owners, if they have a commissionable relationship with the Property Management Company, to see what commissions are payable. If the commission relationship is 0%, then the Total Commission will always be $0. The owner can view their contact details on file and their units. Owners can generate Reports for Expenses and Payments. These reports can be sorted by date range to create statements.
Expenses are associated to a Unit by the Property manager who has access to the Setup Area of MyPMS. Payments are records of payments the Property managers made to the Owner. A running balance is kept of each Owner's commissions (revenue) less Expenses (debits), to make a Total owed and then paid. Owners can make bookings in order to block out availability and deliver detailed booking information to the Property Management System. For the feature to work with the correct Unit(s), it is imperative that the setup of the 'Owner' use the Agent Relationships and Agent Allocations feature to allocate the correct Unit(s) to the right Owner/Agent ID. Note, doing this requires two dependencies: To use this feature for your Owners, consider the following Booking Engine Settings: An Owner can be notified when an online booking occurs for a Unit that they are assigned to. To do this, one must: The Owners Area allows an Owner to cancel a booking. This allows an Owner to cancel a booking, then book again with new dates or guest details. Each cancellation allows a manager to place a cancellation number for the cancellation to record the cancellation. If more edit functionality is needed, such as rates, rooms, or additional guest names, have the Owner enroll in the Channel Manager product. The Owner Channel Manager features include SMS to Guests, using Auto Letters to automate daily communications (Self Checkin instructions, Registration, eSign docs, survey/review requests, etc), and running reports on activities such as Arrivals, Departures, Receipts, etc. In addition, any booking that is modified or canceled from an OTA (Expedia, Booking.com, Airbnb, etc) or a GDS Travel Agency is automatically modified and/or canceled, with cancellation information included.
https://docs.bookingcenter.com/display/MYPMS/Owner+Login+and+Booking+Management
2022-09-25T03:05:28
CC-MAIN-2022-40
1664030334332.96
[]
docs.bookingcenter.com
This guide provides an overview of the concepts that drive Operator Lifecycle Manager (OLM) in OpenShift Container Platform 4.10, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data s
https://docs.openshift.com/container-platform/4.10/operators/understanding/olm/olm-understanding-olm.html
2022-09-25T02:55:45
CC-MAIN-2022-40
1664030334332.96
[]
docs.openshift.com
Draws triangles using each set of 3 vertices passed. If you pass 3 vertices, one triangle is drawn, where each vertex becomes one corner of the triangle. If you pass 6 vertices, 2 triangles will be drawn.
// Draws a triangle that covers the middle of the screen
using UnityEngine;

public class ExampleClass : MonoBehaviour
{
    Material mat;

    void OnPostRender()
    {
        if (!mat)
        {
            Debug.LogError("Please Assign a material on the inspector");
            return;
        }

        GL.PushMatrix();
        mat.SetPass(0);
        GL.LoadOrtho();
        GL.Begin(GL.TRIANGLES);
        GL.Vertex3(0, 0, 0);
        GL.Vertex3(1, 1, 0);
        GL.Vertex3(0, 1, 0);
        GL.End();
        GL.PopMatrix();
    }
}
https://docs.unity3d.com/2018.2/Documentation/ScriptReference/GL.TRIANGLES.html
2022-09-25T02:24:12
CC-MAIN-2022-40
1664030334332.96
[]
docs.unity3d.com
Content in 24 languages Almost anyone around the world can use Writesonic. When we have so many languages around the globe, why should content generation be in just English?! Be it Article Writer or Email Generator, you can use all our writing features to generate content in 24 different languages. 👇 Here's a list of languages Writesonic supports: Going forward, we'll be adding many more languages. ✅ Want to create content in a specific language that we don't support yet? Send your suggestions to us!
https://docs.writesonic.com/docs/we-support-25-languages
2022-09-25T01:52:19
CC-MAIN-2022-40
1664030334332.96
[array(['https://files.readme.io/ffc33db-Screenshot_2022-08-16_at_4.06.07_PM.png', None], dtype=object) array(['https://files.readme.io/ffc33db-Screenshot_2022-08-16_at_4.06.07_PM.png', 'Click to close...'], dtype=object) ]
docs.writesonic.com